
Arvist Stack Installation Guide

1 Prerequisites

  • Linux host (Arch recommended) with Docker & Docker Compose
  • ≈ 500 GB free disk for recordings (attach NAS for larger/longer retention)
  • Internet access to pull container images (and optionally push to Cloudflare R2)
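
A quick way to confirm the host meets these requirements before continuing (a minimal check; adjust the path you inspect to wherever recordings will be stored):

# Verify Docker and the Compose plugin are installed
docker --version
docker compose version

# Check free disk space on the volume that will hold recordings
# (adjust the path if your storage is mounted elsewhere, e.g. a NAS mount)
df -h /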

2 Repository Setup

Set up the folder structure and permissions needed to manage the configuration. The script below creates the required directories and copies each *.example.* file to its live counterpart so you can fill in your settings.

#!/bin/bash

# Create the required folders and copy .example. config files.

# Repository root, relative to where this script is run from (adjust if needed)
BASEDIR=../..

# List of directories to create
dirs=(
  "$BASEDIR/docker/postgres"
  "$BASEDIR/docker/plugins/path_obstruction/logs"
  "$BASEDIR/docker/arvist-notifications/logs"
  "$BASEDIR/docker/mqtt/config"
  "$BASEDIR/docker/mqtt/log"
  "$BASEDIR/docker/mqtt/data"
  "$BASEDIR/docker/arvist_collision/forklift_forklift"
  "$BASEDIR/docker/arvist_collision/forklift_person"
  "$BASEDIR/docker/palletscan/logs"
  "$BASEDIR/docker/arvist/shipment_count"
  "$BASEDIR/docker/arvist/shipment_inspection"
  "$BASEDIR/docker/arvist/incidents"
  "$BASEDIR/docker/arvist/pallets"
  "$BASEDIR/docker/arvist/latest"
  "$BASEDIR/docker/arvist/videos"
  "$BASEDIR/docker/event-service/logs"
)

# Create all directories
for dir in "${dirs[@]}"; do
  mkdir -p "$dir"
done

# Find all *.example.* files
example_files=$(find "$BASEDIR" -type f -name "*.example.*")

# Process each example file
for file in $example_files; do
  # Skip this special-case file; it is intentionally ignored
  if [[ "$file" == "$BASEDIR/docker/nvr/config/coral_config.example.yml" ]]; then
    continue
  fi

  # Special case: cpu_config.example.yml becomes the NVR's config.yaml
  if [[ "$file" == "$BASEDIR/docker/nvr/config/cpu_config.example.yml" ]]; then
    cp "$file" "$BASEDIR/docker/nvr/config/config.yaml"
    continue
  fi

  # Create a non-example file by removing ".example." from the file name
  new_file=$(echo "$file" | sed 's/\.example\.//')
  if [[ -e "$new_file" ]]; then
    # File already exists, do not overwrite
    continue
  fi
  cp "$file" "$new_file"
done

# List all the .example. files that were found
if [[ -n "$example_files" ]]; then
  echo "The following .example. files were found and processed:"
  echo "$example_files"
  echo "Please ensure these files are properly filled."
else
  echo "No .example. files were found."
fi
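
The script creates the directory tree but does not change ownership or modes. If a container running as a non-root user (for example, the path-obstruction service runs as appuser) cannot write to its bind mounts, you may need to relax permissions. A minimal sketch, assuming the docker/ tree sits next to docker-compose.yml and that group read/write is acceptable for your site:

# Run from the directory containing docker-compose.yml; adjust the mode to your policy
chmod -R u+rwX,g+rwX ./docker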

If you are self-hosting, we will grant you access to our private Docker repositories; the images can be pulled and deployed with a configuration similar to the following:

version: '3.9'
services:
  arvist_central:
    container_name: arvist_central
    # image: arvist/arvist-web:latest
    build: ./arvist
    ports:
      - 80:3000
    # environment:
    #   - NODE_ENV=production # uncomment if you want to run in dev mode for backend development
    env_file:
      - ./docker/.env
      - ./docker/arvist/.env
    volumes:
      - ./docker/arvist/:/.storage
      # include below for backend development:
      # - ./arvist/api/src:/app/src # uncomment for backend development
      # - /app/node_modules # uncomment for backend development
    restart: always
    depends_on:
      mqtt:
        condition: service_started
      nvr:
        condition: service_healthy
      postgres:
        condition: service_started
      event-service:
        condition: service_started
    networks:
      - central_net
      - db_net
  mqtt:
    container_name: mqtt-arvist
    image: eclipse-mosquitto:1.6
    volumes:
      - ./docker/mqtt/config:/mosquitto/config
      - ./docker/mqtt/log:/mosquitto/log
      - ./docker/mqtt/data:/mosquitto/data
    restart: always
    networks:
      - central_net
  primary_detector_provider:
    image: arvist/arvist-model-primary-detector:latest
    volumes:
      - exported-model:/arvist/exported_model:rw
    devices:
      - /dev/dri
    networks:
      - central_net
  nvr:
    # Wait for the model provider to export the model to OpenVINO format (optimized for the device)
    depends_on:
      primary_detector_provider:
        condition: service_completed_successfully
    container_name: nvr
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:0.15.0
    shm_size: '512mb' # update for your cameras based on calculation above
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:5000/api/version']
      interval: 10s
      timeout: 5s
      retries: 5
    devices:
      # - /dev/bus/usb:/dev/bus/usb # passes the USB Coral, needs to be modified for other versions
      # - /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
      - /dev/dri/renderD128 # for Intel hwaccel, needs to be updated for your hardware
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./docker/nvr/config:/config
      - ./docker/nvr/storage:/media/frigate
      - exported-model:/model:ro
      - type: tmpfs # Optional: 1 GB of memory, reduces SSD/SD card wear
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    ports:
      # - '8971:8971' # for auth -- KEEP COMMENTED FOR NOW
      - '5000:5000'
      - '8554:8554' # RTSP feeds
      - '8555:8555/tcp' # WebRTC over tcp
      - '8555:8555/udp' # WebRTC over udp
    cap_add:
      - CAP_PERFMON
      - CAP_NET_ADMIN
      - CAP_NET_RAW
    networks:
      - central_net
    # labels:
    #   - "traefik.enable=true"
    #   - "traefik.http.routers.nvr.rule=Host(`hostname.tailnet-name.ts.net`)"
    #   - "traefik.http.routers.nvr.entrypoints=websecure"
    #   - "traefik.http.services.nvr.loadbalancer.server.port=5000"
    #   - "traefik.docker.network=central_net"
  arvist_collision:
    container_name: arvist_collision
    # build: ./plugins/safety/arvist_collision
    image: arvist/arvist-collision:latest
    restart: unless-stopped
    volumes:
      - ./docker/arvist_collision/forklift_forklift:/arvist_collision/.storage/forklift_forklift
      - ./docker/arvist_collision/forklift_person:/arvist_collision/.storage/forklift_person
      - ./docker/arvist_collision/config:/arvist_collision/.storage/config
      - ./docker/arvist/incidents:/arvist_collision/incidents
    depends_on:
      - mqtt
      - arvist_central
      - nvr
      - postgres
    env_file:
      - ./docker/.env
      - ./docker/arvist_collision/config/.env
    networks:
      - central_net
      - db_net

  arvist-model-provider-palletscan:
    # Service that exports the model to OpenVINO format, optimized for the device.
    # All models can be accessed under /palletscan/model/ from any service that mounts the "palletscan-models" volume.
    #
    # Available tags: latest, dev, experimental
    #   latest: for production (from the master branch, if available)
    #   dev: for development (from the dev branch, if available)
    #   experimental: for testing (from the experimental branch, if available)
    image: arvist/arvist-model-provider-palletscan:latest
    # build: ./palletscan/model
    devices:
      - '/dev/dri'
    volumes:
      - /dev/dri:/dev/dri
      - palletscan-models:/palletscan/model/optimized_models:rw # (read-write) models provided by arvist-model-provider-palletscan
      - ./docker/palletscan/config:/palletscan/config # to save the log files or read any config file

  # FOR PRODUCTION USE ONLY
  training-data-collector:
    image: arvist/arvist-ml-training-data-collector:latest
    build:
      context: ./ml/training_data_collector
      dockerfile: Dockerfile
    env_file:
      - ./docker/.env
      - ./docker/arvist/.env
    volumes:
      - ./docker/nvr/storage/clips:/data/input
      - ./docker/arvist/data_uploader:/var/log # persistence of logs if desired
    restart: unless-stopped

  palletscan:
    container_name: palletscan
    build: ./palletscan/service
    image: arvist/arvist-palletscan:latest
    restart: unless-stopped
    volumes:
      - ./docker/palletscan/config:/palletscan/config
      - ./docker/arvist/shipment_inspection:/palletscan/.storage/shipment_inspection
      - ./docker/arvist/pallets:/palletscan/.storage/pallets
      - palletscan-models:/palletscan/model/:ro # (read-only) models provided by arvist-model-provider-palletscan
      # - /dev/dri:/dev/dri # can help force GPU sharing with the container; use it if the GPU is not detected otherwise
    devices:
      - /dev/dri/renderD128
    depends_on:
      mqtt:
        condition: service_started
      arvist_central:
        condition: service_started
      nvr:
        condition: service_started
      arvist-model-provider-palletscan:
        condition: service_completed_successfully
    env_file:
      - ./docker/.env
      - ./docker/palletscan/config/.env
    networks:
      - central_net
      - db_net
  event-service:
    build: event-service/
    container_name: arvist-event-service
    volumes:
      - ./docker/arvist/incidents:/.storage/incidents
      - ./docker/arvist/latest:/.storage/latest
      - ./docker/arvist/videos:/.storage/videos
      - ./docker/event-service/logs:/app/logs
    env_file:
      - ./docker/.env
      - ./docker/event-service/.env
    restart: always
    depends_on:
      nvr:
        condition: service_healthy
      postgres:
        condition: service_started
      mqtt:
        condition: service_started
    networks:
      - central_net
      - db_net
  postgres:
    image: postgres:latest
    container_name: postgres
    environment:
      POSTGRES_USER: admin
      POSTGRES_PASSWORD: password
      POSTGRES_DB: arvist
    volumes:
      - ./docker/postgres:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    restart: unless-stopped
    networks:
      - db_net
  arvist_notifications:
    # image: arvist/notifications:latest
    build: ./notifications
    container_name: arvist_notifications
    restart: unless-stopped
    env_file:
      - ./docker/.env
      - ./docker/arvist-notifications/.env
    volumes:
      - ./docker/arvist-notifications/logs:/usr/src/app/logs
    networks:
      - central_net
    depends_on:
      mqtt:
        condition: service_started
  arvist_path_obstruction:
    image: arvist/arvist-path-obstruction:dev
    build: ./plugins/safety/path_obstruction
    restart: unless-stopped
    user: appuser
    volumes:
      - ./docker/plugins/path_obstruction/config:/path_obstruction/.storage/config
      - ./docker/arvist:/.storage
      # (optional) debug folder to save the debug files
      # make sure to create this folder first
      # - ./docker/plugins/path_obstruction/debug/:/path_obstruction/debug # (optional)
    depends_on:
      - mqtt
      - arvist_central
      - nvr
      - postgres
    env_file:
      - ./docker/.env
      - ./docker/plugins/path_obstruction/config/.env
    networks:
      - central_net
      - db_net
    security_opt:
      - no-new-privileges:false
  asset_tracking:
    build: plugins/productivity/asset_tracking
    # image: arvist/arvist-asset_tracking:latest
    container_name: asset_tracking
    restart: unless-stopped
    privileged: true
    volumes:
      - ./docker/asset_tracking/config:/asset_tracking/config
    depends_on:
      - mqtt
      - arvist_central
      - nvr
      - postgres
    env_file:
      - ./docker/.env
      - ./docker/asset_tracking/config/.env
    networks:
      - central_net
      - db_net

networks:
  central_net:
    driver: bridge
  db_net:
    driver: bridge

volumes:
  exported-model:
  palletscan-models:
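
Before starting the stack, it is worth validating the assembled compose file; docker compose config parses the file and reports errors without starting any containers:

docker compose -f docker-compose.yml config --quiet && echo "compose file OK"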

3 Start the Stack

# rename or reference the compose file
docker compose -f docker-compose.yml up -d
  • Health-checks ensure correct start-order.
  • Browse to http://<host-ip>/ after containers show healthy.
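
To confirm the health checks and start order resolved as expected, check container status from the host:

# Show every service with its state and (where defined) health
docker compose ps

# Inspect the NVR's health check result directly
docker inspect --format '{{.State.Health.Status}}' nvr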

4 Environment Variables

4.1 Global (docker/.env)

Var            Default          Use
TIMEZONE       America/Chicago  local tz
MQTT_HOST      mqtt             broker host
MQTT_PORT      8883             broker port
MQTT_USERNAME  (blank)          MQTT user
MQTT_PASSWORD  (blank)          MQTT pass
CLOUDFLARE_*   –                R2 creds

Notes

  • CLOUDFLARE_REGION is one of { enam | wnam | weur | eeur | apac }.
  • Leave MQTT_USERNAME/PASSWORD blank for anonymous broker access.
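
A minimal docker/.env sketch based on the table above (the remaining CLOUDFLARE_* variable names come from the generated .example file and are not repeated here):

# docker/.env -- global settings shared by all services
TIMEZONE=America/Chicago
MQTT_HOST=mqtt
MQTT_PORT=8883
# Leave the next two blank for anonymous broker access
MQTT_USERNAME=
MQTT_PASSWORD=
# Optional: Cloudflare R2 credentials for off-site uploads
# CLOUDFLARE_REGION=enam
# ...remaining CLOUDFLARE_* values as listed in the .example file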

4.2 Path-Obstruction Plug-in (docker/plugins/path_obstruction/config/.env)

Var                  Example                   Use
DB_*                 arvist_plugins / 5432     Postgres
MODULE_NAME          Object Obscuring Pathway  label
GENERATE_REPORT      true                      save PDF
LOGURU_LEVEL         INFO                      log level
DEBUG_MODE           (off)                     extra dumps
SAMPLING_PROCESSING  8                         frames/loop
MIN_CONTOUR_AREA     250                       px² cutoff

DEBUG_MODE=true dumps frames into ./debug/.
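
A sketch of the plug-in's .env using the values above (the individual DB_* variable names are not spelled out in this guide; take them from the generated file):

# docker/plugins/path_obstruction/config/.env
# DB_* entries point at the shared Postgres instance (database arvist_plugins, port 5432)
MODULE_NAME=Object Obscuring Pathway
GENERATE_REPORT=true
LOGURU_LEVEL=INFO
SAMPLING_PROCESSING=8
MIN_CONTOUR_AREA=250
# DEBUG_MODE=true   # uncomment to dump frames into ./debug/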


4.3 Collision Plug-in (docker/arvist_collision/config/.env)

Var              Default        Use
DB_*             arvist / 5432  Postgres
GENERATE_REPORT  true           PDF
DEV              false          continuous mode
LOGGER_LEVEL     INFO           logs

Extra thresholds (e.g. DTW_THRESHOLD, FORKLIFT_DIST_WIDTH_THRESHOLD_RATIO) tune collision risk logic.
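
The corresponding .env follows the same pattern; threshold values are site-specific, so only the structure is sketched here:

# docker/arvist_collision/config/.env
# DB_* entries point at the shared "arvist" database on port 5432 (names in the generated file)
GENERATE_REPORT=true
DEV=false
LOGGER_LEVEL=INFO
# DTW_THRESHOLD=...                        # collision-risk tuning
# FORKLIFT_DIST_WIDTH_THRESHOLD_RATIO=...  # collision-risk tuning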


4.4 PalletScan Service (docker/palletscan/config/.env)

Var                           Default            Use
LOGGER_LEVEL                  INFO               logs
ML_API_URL                    http://arvist-gpu  OCR/ML
DEV                           false              dev mode
DEV_CAMERAS                   –                  limit cams
POSTGRES_*                    arvist / postgres  DB
PRODUCT_CONFIDENCE_THRESHOLD  0.6                detection
…                             –                  more spinner / detector tuning

4.5 Notifications Service (docker/arvist-notifications)

Var                Default/Example       Use
MQTT_HOST          mqtt                  subscribe
MQTT_TOPIC_PREFIX  arvist/notifications  topic root
SLACK_BOT_TOKEN    –                     Slack alerts
TWILIO_*           –                     SMS / WhatsApp
SMTP_*             smtp.mailgun.org      email
LOG_LEVEL          info                  logs

At least one channel (Slack, Twilio, SMTP) must be configured.
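
For example, a Slack-only setup needs just the broker settings and the bot token; the Twilio and SMTP variables can stay unset (their exact names are in the generated .env):

# docker/arvist-notifications/.env
MQTT_HOST=mqtt
MQTT_TOPIC_PREFIX=arvist/notifications
# Placeholder -- substitute your real Slack bot token
SLACK_BOT_TOKEN=xoxb-your-bot-token
LOG_LEVEL=info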


4.6 Event Service (docker/event-service/.env)

Var             Default               Use
BASE_URL        –                     API base
TIMEZONE        UTC                   tz
OPENAI_API_KEY  –                     GPT
DB_*, MQTT_*    –                     infra
INCIDENTS_PATH  ./.storage/incidents  storage

5 Operational Tips

  • Volumes persist models and configs; don't delete them casually.
  • Cameras: RTSP main stream 2688×1520 @ 15 fps, sub stream 1280×720 @ 5 fps (see the probe example after this list).
  • Storage: attach a NAS if more than 70 camera-days of retention are needed.
  • VPN: connect via a VPN such as Cloudflare Zero Trust for remote support.
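
To verify that a camera's main and sub streams actually deliver the recommended resolution and frame rate, probe the RTSP URLs from the host (the URL below is a placeholder for your camera):

# Requires ffmpeg/ffprobe; prints codec, resolution, and frame rate
ffprobe -rtsp_transport tcp -v error \
  -show_entries stream=codec_name,width,height,avg_frame_rate \
  -of default rtsp://user:pass@<camera-ip>:554/main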

6 Basic Commands

Action         Command
Start all      docker compose up -d
Stop all       docker compose down
Logs           docker compose logs -f <service>
Update images  docker compose pull && docker compose up -d
DB backup      pg_dump -Fc -f backup.pg arvist
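
If pg_dump is not installed on the host, the backup can be run inside the Postgres container instead (credentials match the compose example above):

# Dump the arvist database inside the container, then copy it to the host
docker compose exec postgres pg_dump -U admin -Fc -f /tmp/backup.pg arvist
docker compose cp postgres:/tmp/backup.pg ./backup.pg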

Deploy with the settings above, then refine thresholds and credentials to match your warehouse environment.