๐ŸŽฅ Frigate NVR Configuration Generator

Professional-grade configuration tool for Frigate NVR

Compatible with Frigate 0.14.x, 0.15.x, 0.16.x, and 0.17.x (stable releases)

๐Ÿ’™ Support the Frigate Team

Frigate is a free, open-source project built by Blake Blackshear and dedicated contributors who work tirelessly to provide the best NVR solution. Their incredible effort makes this software possible for everyone.

If Frigate has helped you secure your home or business, please consider supporting the project:

Your support helps maintain development, add new features, and keep Frigate free for everyone. Thank you! ๐Ÿ™


๐Ÿ“– Complete Documentation

๐Ÿš€ Quick Start Guide

  1. Fill in your system settings (hardware acceleration, detector type, server IP)
  2. Configure storage paths and retention periods based on your needs
  3. If using MQTT for Home Assistant integration, enable and configure broker details
  4. Add your cameras with RTSP URLs (see "Finding Camera RTSP URLs" below)
  5. For each camera, configure both main stream (recording) and sub stream (detection)
  6. Set appropriate bitrates and encoding (CBR recommended)
  7. Copy the generated docker-compose.yml
  8. Copy the generated config/config.yml
  9. Deploy on your server (see deployment steps below)
  10. Get your initial admin password from logs (CRITICAL - see below)

๐Ÿ” CRITICAL: Getting Your Frigate Password

On first startup, Frigate generates a random admin password. You MUST retrieve it from logs immediately:

Method 1: View logs immediately after starting Frigate

docker logs frigate 2>&1 | grep -i password

Look for a line like:
Frigate admin user password: Xh9kL2mP5qR8tY

Method 2: If you missed it, reset the password

  1. Edit your config/config.yml
  2. Add this at the VERY TOP of the file (before any other configuration):
auth:
  reset_admin_password: true
  3. Restart Frigate: docker restart frigate
  4. Check logs again: docker logs frigate 2>&1 | grep -i password
  5. Copy the new password and log in to the UI
  6. IMPORTANT: Remove the reset_admin_password: true line from config immediately
  7. Change password in UI: Settings → Users → Update Password
  8. Choose a strong password and save it securely

โš ๏ธ SECURITY NOTES:

  • Username is always: admin (lowercase)
  • Always change the default password immediately after first login
  • Never leave reset_admin_password: true in your config permanently
  • Store your password in a password manager
  • Enable two-factor authentication if available in future versions

๐ŸŽฅ Finding Your Camera RTSP URLs

Method 1: Camera Web Interface (Most Reliable)

  1. Find your camera's IP address (check router DHCP table or use IP scanner like Advanced IP Scanner)
  2. Open web browser and navigate to http://CAMERA_IP
  3. Log in with camera credentials (common defaults: admin/admin, admin/12345, admin/password)
  4. Navigate to Settings โ†’ Network โ†’ Stream or Video Settings
  5. Look for RTSP settings or Stream URLs
  6. Copy both "Main Stream" (high resolution) and "Sub Stream" (low resolution) URLs
  7. Note the resolution, bitrate, and encoding settings for each stream

Method 2: Test with VLC Media Player

  1. Download and install VLC from videolan.org
  2. Open VLC โ†’ Media โ†’ Open Network Stream
  3. Try common RTSP URL formats (see below)
  4. If video plays successfully, that's your working RTSP URL!
  5. Right-click video โ†’ Codec Information to see resolution and bitrate

Method 3: ONVIF Device Manager (Advanced)

  1. Download ONVIF Device Manager
  2. Scan your network for ONVIF-compatible cameras
  3. Device manager will display all available streams with exact URLs
  4. This works for most modern IP cameras

Common RTSP URL Formats by Camera Brand:

Hikvision:

rtsp://admin:password@192.168.1.64:554/Streaming/Channels/101 (main stream)
rtsp://admin:password@192.168.1.64:554/Streaming/Channels/102 (sub stream)

Dahua / Loryta / Amcrest:

rtsp://admin:password@192.168.1.64:554/cam/realmonitor?channel=1&subtype=0 (main)
rtsp://admin:password@192.168.1.64:554/cam/realmonitor?channel=1&subtype=1 (sub)

Reolink:

rtsp://admin:password@192.168.1.64:554/h264Preview_01_main (main stream)
rtsp://admin:password@192.168.1.64:554/h264Preview_01_sub (sub stream)

Amcrest (Alternative Format):

rtsp://admin:password@192.168.1.64:554/cam/realmonitor?channel=1&subtype=0

Generic / ONVIF Cameras:

rtsp://admin:password@192.168.1.64:554/stream1 (main)
rtsp://admin:password@192.168.1.64:554/stream2 (sub)

Also try: /live, /h264, /cam1, /video1, /live/main, /live/sub

UniFi Protect:

rtsp://username:password@192.168.1.64:7447/CAMERA_ID

Note: UniFi cameras require a specific URL format; check the UniFi Protect documentation

Troubleshooting RTSP Connection:

  • Camera unreachable: Ensure camera and server are on same network or properly routed subnet
  • Authentication failed: Double-check username and password (URL encoding needed for special characters)
  • Port blocked: Default RTSP port is 554, check firewall rules on server and camera
  • Stream not found: Try different stream paths, check camera documentation
  • Connection timeout: Some cameras need ?tcp at end of URL to force TCP mode
  • H.265 codec issues: Ensure camera is set to H.264 for maximum compatibility
  • Multiple channels: For multi-channel cameras, change channel=1 to channel=2, channel=3, etc.
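
If camera credentials contain special characters (@, /, !, etc.), they must be percent-encoded before going into the RTSP URL. One quick way to generate the encoded form (the password shown is a made-up example, not a real credential):

```shell
# Percent-encode an RTSP password containing special characters
# ('p@ss/word!' is an illustrative example only)
python3 -c "from urllib.parse import quote; print(quote('p@ss/word!', safe=''))"
# → p%40ss%2Fword%21
```

The encoded result is what goes between the colon and the @ in the RTSP URL.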

Best Practices for Camera Configuration:

  • Main stream: 1920ร—1080 (1080p) @ 15fps, 4096-6144 kbps, H.264 encoding
  • Sub stream: 1280ร—720 (720p) @ 5fps, 1024-2048 kbps, H.264 encoding
  • Frame rate: 15fps for main, 5fps for sub (higher FPS = more storage/bandwidth)
  • Bitrate mode: CBR (Constant Bitrate) strongly recommended over VBR
  • I-frame interval: Set to match framerate (e.g., 15 for 15fps)
  • Disable: WDR, HDR, 3D-NR on sub stream (keeps detection fast)

โš™๏ธ Hardware Acceleration Complete Guide

Intel GPU (VAAPI) - RECOMMENDED for Most Users

  • Best for: Intel CPUs with integrated graphics (6th gen+), Intel Arc GPUs
  • Performance: Excellent H.264/H.265 encoding and decoding, low power consumption
  • Cost: Free (built into CPU), no additional hardware needed
  • Compatibility: Works with 10-15+ cameras simultaneously
  • Setup: Requires /dev/dri device passthrough (automatically configured by this tool)
  • Detector: Pair with OpenVINO detector for GPU-accelerated AI inference

Setup Commands (on host machine):

# Add current user to render and video groups
sudo usermod -aG render,video $(whoami)

# Set permissions for DRI devices
sudo chmod -R 777 /dev/dri

# IMPORTANT: Logout and login OR reboot for changes to take effect
sudo reboot

Verify GPU access after reboot:

# Check DRI devices exist
ls -la /dev/dri
# Should see: card0, renderD128, etc.

# Test hardware acceleration (install vainfo if needed)
vainfo
# Should display GPU model and supported codecs

Google Coral Edge TPU - BEST Performance/Watt

  • Best for: Dedicated AI acceleration, maximum efficiency, low power setups
  • Performance: 4 TOPS (trillion operations per second), ~2W power consumption
  • Capacity: Handles 10-15 cameras per Coral device at 5fps detection
  • Cost: ~$25-60 USD depending on model (USB vs PCIe/M.2)
  • Models: USB Accelerator (plug-and-play), M.2/PCIe (faster, requires compatible slot)
  • Limitation: Only works with EdgeTPU-compatible models (pre-compiled)

Setup for USB Coral:

# Install EdgeTPU runtime on host
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt update
sudo apt install libedgetpu1-std

# Verify Coral is detected
lsusb | grep "Google"
# Should show: "Global Unichip Corp. Google Coral"

Setup for PCIe/M.2 Coral:

# Install drivers
sudo apt install gasket-dkms libedgetpu1-std

# Load kernel module
sudo modprobe apex

# Verify device
ls /dev/apex_0
# Should exist if detected correctly

โš ๏ธ Note: Coral TPU only supports specific pre-compiled models. Cannot use custom YOLO models without conversion.

NVIDIA GPU - For Dedicated Graphics Cards

  • Best for: Existing NVIDIA GTX/RTX card owners, TensorRT acceleration
  • Performance: Best AI inference speed with TensorRT, excellent encoding with NVENC
  • Capacity: 20-30+ cameras depending on GPU model
  • Cost: $200-2000+ USD (if buying new)
  • Power: Higher power consumption (75-350W depending on model)
  • Requirements: NVIDIA Container Toolkit must be installed on host

Install NVIDIA Container Toolkit (Debian/Ubuntu):

# Add NVIDIA repository
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# Install toolkit
sudo apt update
sudo apt install -y nvidia-container-toolkit

# Configure Docker
sudo nvidia-ctk runtime configure --runtime=docker

# Restart Docker
sudo systemctl restart docker

Verify GPU access in container:

docker exec frigate nvidia-smi
# Should display GPU stats and memory usage

CPU Only - Universal Compatibility

  • Best for: Testing, temporary setups, very low camera count (1-2 cameras)
  • Performance: Slowest option, high CPU usage, not recommended for production
  • Capacity: 1-2 cameras at 720p, 5fps detection maximum
  • Cost: Free, no additional hardware
  • Setup: No special configuration needed
  • Recommendation: Only use if no other option available, upgrade to GPU/Coral ASAP

Optimization tips for CPU-only:

  • Lower detection FPS to 3fps instead of 5fps
  • Use lowest resolution sub stream possible (640ร—480 if available)
  • Limit to 1-2 cameras maximum
  • Use motion masks to exclude busy areas (trees, roads, timestamps) so fewer frames reach the detector
  • Use SSDLite MobileNet v2 model (fastest)
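
As a sketch, the first two tips map onto the per-camera detect section of config.yml like this (values are illustrative):

```yaml
detect:
  fps: 3         # lower detection rate for CPU-only setups
  width: 640     # match the lowest-resolution sub stream available
  height: 480
```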

AMD GPU (ROCm) - Experimental

  • Status: Experimental support, limited testing
  • Best for: AMD GPU owners willing to troubleshoot
  • Performance: Varies by model, generally good encoding but limited AI acceleration
  • Requirements: ROCm drivers and runtime installed
  • Recommendation: Only use if you already own AMD GPU, otherwise choose Intel/NVIDIA/Coral

๐Ÿ–ฅ๏ธ Proxmox GPU Sharing Guide

Intel integrated GPUs can be shared between Proxmox host and LXC containers, but NOT with VMs (requires full passthrough for VMs).

Share Intel iGPU with LXC Container (Recommended):

  1. On Proxmox host, identify GPU devices:
    ls -la /dev/dri
    # Note the major:minor numbers (usually 226:0 and 226:128)
  2. Edit LXC container configuration (replace 103 with your container ID):
    nano /etc/pve/lxc/103.conf
  3. Add these lines at the bottom of the file:
    # GPU passthrough for Frigate
    lxc.cgroup2.devices.allow: c 226:0 rwm
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
    lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
  4. Save file and start/restart the container:
    pct start 103
  5. Inside container, verify GPU access:
    ls -la /dev/dri
    # Should see renderD128 and card0
    
    # Install vainfo to test (optional)
    apt install -y vainfo
    vainfo
    # Should display Intel GPU information
  6. Add user to required groups (inside container):
    usermod -aG render,video root

โš ๏ธ IMPORTANT: VM GPU Passthrough Limitations

You CANNOT share a GPU between Proxmox host + VM + LXC simultaneously.

  • LXC containers: Can share GPU with host (host retains control, multiple containers can use same GPU)
  • VMs: Require full PCI passthrough (GPU exclusively assigned, no sharing possible)
  • Solution for multi-use: Use separate GPUs, OR run Frigate in LXC instead of VM
  • Recommendation: Use LXC for Frigate (lighter, better GPU sharing, easier management)

Full PCI Passthrough to VM (Intel iGPU - Advanced):

  1. Enable IOMMU in BIOS:
    • Intel: Enable VT-d
    • AMD: Enable AMD-Vi
  2. Edit GRUB configuration:
    nano /etc/default/grub
    
    # Find line: GRUB_CMDLINE_LINUX_DEFAULT="quiet"
    # Change to:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
    
    # For AMD CPU, use:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
    
    # Save and update GRUB
    update-grub
    reboot
  3. Find GPU PCI address:
    lspci -nn | grep VGA
    # Example output: 00:02.0 VGA compatible controller [0300]: Intel Corporation [8086:9a49]
    # Note the address: 00:02.0
  4. In Proxmox UI:
    • Select VM โ†’ Hardware โ†’ Add โ†’ PCI Device
    • Select your GPU (00:02.0)
    • Check "All Functions"
    • Check "Primary GPU" if using for VM display
    • Click Add
  5. Start VM and verify:
    # Inside VM
    lspci | grep VGA
    # Should show GPU passed through

Troubleshooting Proxmox GPU Issues:

  • Permission denied in LXC: Verify cgroup2 rules and mount entries in config
  • Device not found: Check major:minor numbers with ls -la /dev/dri on host
  • VM passthrough fails: Ensure IOMMU enabled in BIOS and GRUB
  • Black screen after passthrough: Try disabling "Primary GPU" option
  • Conflicting usage: Cannot passthrough GPU being used by Proxmox console

๐ŸŽฏ Detection Models Complete Guide

Models included by default in Frigate:

  • SSDLite MobileNet v2 (OpenVINO/CPU) - 300ร—300 - Default, balanced speed/accuracy, no setup required
  • SSD MobileNet v2 (EdgeTPU) - Optimized for Google Coral, excellent efficiency

Additional models (require manual download and configuration):

YOLOv8 Nano - Recommended Upgrade

Advantages: Better accuracy than default, modern architecture, 640ร—640 input

  1. Download YOLOv8n ONNX model:
    mkdir -p /opt/frigate/config/model_cache
    cd /opt/frigate/config/model_cache
    
    # Download from Ultralytics GitHub
    wget https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.onnx
    
    # Verify file downloaded
    ls -lh yolov8n.onnx
  2. Update config.yml model section:
    model:
      path: /config/model_cache/yolov8n.onnx
      input_tensor: nchw
      input_pixel_format: bgr
      width: 640
      height: 640
      labelmap_path: /openvino-model/coco_80cl.txt
  3. Restart Frigate and test:
    docker restart frigate
    docker logs frigate -f | grep model

YOLOv9 Compact - Better Accuracy

Advantages: Improved detection, especially for small objects, newer architecture

  1. Download YOLOv9c from official repository
  2. Convert to ONNX format if necessary:
    # If you have .pt file, convert to ONNX
    pip install ultralytics
    python -c "from ultralytics import YOLO; YOLO('yolov9c.pt').export(format='onnx')"
  3. Place yolov9c.onnx in /opt/frigate/config/model_cache/
  4. Update config.yml:
    model:
      path: /config/model_cache/yolov9c.onnx
      input_tensor: nchw
      input_pixel_format: bgr
      width: 640
      height: 640

YOLOv10 Nano - Newest Generation

Advantages: Latest YOLO architecture, improved speed/accuracy balance

  1. Download from Ultralytics repository
  2. Convert to ONNX if needed
  3. Configure with 640ร—640 input size and nchw tensor format

YOLOv11 Nano - Latest Model (Experimental)

Advantages: Cutting-edge performance, may require testing

  1. Download YOLOv11n ONNX:
    cd /opt/frigate/config/model_cache
    wget [URL_to_yolov11n.onnx]  # Check Ultralytics releases
  2. Configure with 640ร—640 input, nchw tensor format
  3. Test thoroughly before production use

YOLOX Tiny - Lightweight Option

Advantages: Faster than default on some hardware, 416ร—416 input

  1. Download from Intel OpenVINO Model Zoo:
    cd /opt/frigate/config/model_cache
    wget https://github.com/openvinotoolkit/open_model_zoo/raw/master/models/public/yolox-tiny/FP16/yolox-tiny.xml
    wget https://github.com/openvinotoolkit/open_model_zoo/raw/master/models/public/yolox-tiny/FP16/yolox-tiny.bin
  2. Update config.yml:
    model:
      path: /config/model_cache/yolox-tiny.xml
      input_tensor: nchw
      input_pixel_format: bgr
      width: 416
      height: 416

โš ๏ธ Important Model Considerations:

  • Model size vs. speed: Larger models (YOLOv9, YOLOv11) = better accuracy but slower detection
  • Input resolution: Higher input resolution = better small object detection but slower
  • Testing required: Always test new models with your cameras before production deployment
  • Default is excellent: SSDLite MobileNet v2 works great for most use cases, don't fix what isn't broken
  • Hardware dependency: Model performance varies significantly by detector type (OpenVINO vs Coral vs CPU)
  • Coral limitation: Coral Edge TPU only supports pre-compiled EdgeTPU models, cannot use YOLO without complex conversion

Model Performance Comparison:

| Model                | Input Size | Speed | Accuracy | Best For             |
|----------------------|------------|-------|----------|----------------------|
| SSDLite MobileNet v2 | 300×300    | ⚡⚡⚡⚡  | ⭐⭐⭐      | Default, balanced    |
| YOLOv8 Nano          | 640×640    | ⚡⚡⚡   | ⭐⭐⭐⭐     | Best overall upgrade |
| YOLOv9 Compact       | 640×640    | ⚡⚡    | ⭐⭐⭐⭐⭐    | High accuracy needs  |
| YOLOX Tiny           | 416×416    | ⚡⚡⚡⚡  | ⭐⭐⭐      | Speed priority       |
| EdgeTPU (Coral)      | 320×320    | ⚡⚡⚡⚡⚡ | ⭐⭐⭐      | Max efficiency       |

๐Ÿ“Š Stream Encoding: CBR vs VBR Explained

Understanding bitrate encoding modes is CRITICAL for Frigate stability.

โœ… CBR (Constant Bitrate) - STRONGLY RECOMMENDED for Frigate

How it works:

  • Maintains steady bitrate regardless of scene complexity
  • Simple scenes get more quality than needed (slight "waste")
  • Complex scenes get exactly enough bitrate (no drops)
  • Network bandwidth is predictable and constant

Pros:

  • Predictable bandwidth usage (easy to calculate storage requirements)
  • Stable stream quality at all times
  • Better for AI detection (consistent frame quality)
  • No bitrate spikes causing network congestion
  • Easier to troubleshoot issues

Cons:

  • Slightly larger file sizes during static scenes (5-10% more storage)
  • Potentially "wastes" bitrate when nothing is happening

Recommended bitrates:

  • Sub stream (720p @ 5fps): 1024-2048 kbps
  • Main stream (1080p @ 15fps): 4096-6144 kbps
  • Main stream (4K @ 15fps): 8192-16384 kbps

โœ… USE CBR for all Frigate cameras - main and sub streams!

โš ๏ธ VBR (Variable Bitrate) - Use with EXTREME Caution

How it works:

  • Adjusts bitrate dynamically based on scene complexity
  • Static scenes use very low bitrate (saves bandwidth/storage)
  • Complex scenes spike to higher bitrate (better quality)
  • Unpredictable bandwidth consumption

Pros:

  • Smaller file sizes for static scenes (20-30% storage savings)
  • Potentially better quality during action scenes
  • Efficient bandwidth usage when nothing happening

Cons (MAJOR ISSUES FOR FRIGATE):

  • Unpredictable bandwidth spikes can overload network
  • Bitrate can drop TOO LOW during motion, causing blocky detection frames
  • AI detection suffers from inconsistent frame quality
  • False negatives (missed detections) when bitrate drops during critical moments
  • Network buffer issues causing frame drops
  • Difficult to troubleshoot ("sometimes it works, sometimes it doesn't")

โ›” Problem scenario: Person walks across frame โ†’ camera lowers bitrate to save bandwidth โ†’ blocky frames โ†’ Frigate misses detection!

Only use VBR if:

  • Severely bandwidth-constrained network (e.g., remote location with cellular)
  • Camera supports "minimum quality floor" setting (prevents bitrate from dropping too low)
  • You're willing to sacrifice detection accuracy for storage savings
  • Extensive testing done with your specific cameras and scenarios

How to configure encoding in camera:

  1. Log into camera web interface
  2. Navigate to Video Settings or Stream Configuration
  3. For EACH stream (main and sub):
    • Set "Rate Control" or "Bitrate Mode" to CBR
    • Set bitrate to recommended values above
    • Set I-frame interval (GOP) to match framerate (e.g., 15 for 15fps stream)
    • Set encoding to H.264 (not H.265/HEVC unless specific requirement)
    • Disable smart encoding, WDR, 3D-NR on sub stream (keeps detection fast)
  4. Save settings and test stream in VLC before deploying

Verifying your stream configuration:

# Test with FFmpeg
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,bit_rate,r_frame_rate,width,height rtsp://admin:pass@192.168.1.64/stream1

# Check bitrate stability over time
ffmpeg -i rtsp://admin:pass@192.168.1.64/stream1 -t 60 -f null - 2>&1 | grep bitrate

๐Ÿ’พ Storage Requirements Calculator & Best Practices

Formula for storage calculation:

Storage (GB) = Cameras ร— Bitrate (Mbps) ร— Hours ร— 0.45

The 0.45 factor converts Mbps to GB of footage per hour (1 Mbps ≈ 450 MB/hour ≈ 0.45 GB/hour)

Example Calculation: 4 cameras, 24/7 recording

Configuration:

  • Sub stream: 1280ร—720 @ 5fps, 2 Mbps (for detection - NOT recorded)
  • Main stream: 1920ร—1080 @ 15fps, 5 Mbps (for recording)
  • 4 cameras total
  • 24/7 continuous recording

Calculation:

Per camera per day:
5 Mbps ร— 24 hours ร— 0.45 = 54 GB/day

All 4 cameras per day:
54 GB ร— 4 cameras = 216 GB/day

Weekly storage:
216 GB ร— 7 days = 1,512 GB = 1.5 TB/week

Monthly storage:
216 GB ร— 30 days = 6,480 GB = 6.5 TB/month

Yearly storage:
216 GB ร— 365 days = 78,840 GB = 79 TB/year

Storage Recommendations by Camera Count:

  • 1-2 cameras: 1TB minimum, 2TB recommended (30 days retention)
  • 3-4 cameras: 2TB minimum, 4TB recommended (14-30 days retention)
  • 5-8 cameras: 4TB minimum, 8TB recommended (10-20 days retention)
  • 8-12 cameras: 8TB minimum, 12-16TB recommended (7-14 days retention)
  • 12+ cameras: 12TB+, consider NAS or RAID array
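
To go the other direction, i.e. how long a given disk will last, the same formula can be inverted (values are illustrative):

```shell
# Estimate retention for a 4TB disk with 4 cameras at 5 Mbps each
disk_gb=4000; cameras=4; bitrate_mbps=5
gb_per_day=$(( cameras * bitrate_mbps * 24 * 45 / 100 ))
echo "$(( disk_gb / gb_per_day )) days of 24/7 recording"
# → 18 days of 24/7 recording
```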

Storage Best Practices:

  • Separate storage types:
    • SSD: Frigate database, clips, snapshots (fast access needed)
    • HDD: Continuous recordings (large capacity, sequential writes)
  • Two-tier retention:
    • Continuous recordings: 3-7 days (fills storage quickly)
    • Event recordings: 30-90 days (smaller size, important footage)
  • Motion-based recording: Saves 60-90% storage vs 24/7 for low-activity areas
  • Monitor disk health: Use smartmontools to check drive status weekly
  • Plan for growth: Buy 50% more storage than calculated (future cameras, longer retention)
  • Backup strategy: Config + database weekly, important clips monthly
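
A two-tier retention policy can be sketched in config.yml roughly like this (keys follow the alerts/detections schema used in recent Frigate releases; older 0.14-style configs use record.events.retain instead, so check the docs for your version):

```yaml
record:
  enabled: true
  retain:
    days: 7          # continuous recordings: short window
    mode: all
  alerts:
    retain:
      days: 30       # event footage kept longer
  detections:
    retain:
      days: 30
```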

Storage Hardware Recommendations:

For Small Setup (1-4 cameras):

  • Single 4TB HDD (WD Purple or Seagate SkyHawk surveillance-rated)
  • Or: 2TB SSD (Samsung 870 QVO) if space-constrained

For Medium Setup (5-12 cameras):

  • Primary: 250GB SSD (database/clips)
  • Secondary: 8-12TB surveillance HDD (recordings)
  • Consider 2ร—HDD in RAID1 for redundancy

For Large Setup (12+ cameras):

  • NAS with 4-8 bay: Synology, QNAP, or TrueNAS
  • RAID5 or RAID6 configuration
  • Dedicated 500GB SSD for Frigate database
  • Multiple HDDs for recording pool (16-32TB total)

Monitoring storage usage:

# Check overall storage
df -h /mnt/frigate

# Check per-camera usage
du -sh /mnt/frigate/recordings/*

# Find oldest recordings
find /mnt/frigate/recordings -type f -printf '%T+ %p\n' | sort | head -20

# Check database size
du -sh /opt/frigate/config/frigate.db

๐Ÿ“ฆ Deployment Steps (Complete Walkthrough)

On your Linux server (Proxmox/Ubuntu/Debian/Any Linux):

Step 1: Create directory structure

# Create Frigate directories
sudo mkdir -p /opt/frigate/config
sudo mkdir -p /mnt/frigate

# Set permissions (replace 'youruser' with your username)
sudo chown -R youruser:youruser /opt/frigate
sudo chown -R youruser:youruser /mnt/frigate

# Navigate to Frigate directory
cd /opt/frigate

Step 2: Save docker-compose.yml

# Open nano editor
nano docker-compose.yml

# Paste the generated docker-compose.yml from this tool
# (Copy from the green "Copy to Clipboard" button above)

# Save and exit:
# Press Ctrl+O (save)
# Press Enter (confirm filename)
# Press Ctrl+X (exit)

Step 3: Save config.yml

# Open nano editor for config
nano config/config.yml

# Paste the generated config.yml from this tool
# (Copy from the green "Copy to Clipboard" button above)

# Save and exit: Ctrl+O, Enter, Ctrl+X

Step 4: Verify configuration files

# Check docker-compose.yml syntax
cat docker-compose.yml

# Check config.yml syntax
cat config/config.yml

# Verify directory structure
ls -la /opt/frigate/
ls -la /opt/frigate/config/

Step 5: Start Frigate

# Start Frigate container
docker compose up -d

# Verify container is running
docker ps | grep frigate

# Should show: frigate container with status "Up"

Step 6: Get admin password from logs

# Wait 10-15 seconds for Frigate to fully start, then:
docker logs frigate 2>&1 | grep -i password

# Look for line like:
# "Frigate admin user password: Xh9kL2mP5qR8tY"

# COPY THIS PASSWORD IMMEDIATELY!

Step 7: Access Frigate Web UI

  1. Open web browser on any device on your network
  2. Navigate to: http://YOUR_SERVER_IP:8971
  3. Example: http://192.168.1.100:8971
  4. Login with:
    • Username: admin
    • Password: (from logs above)

Step 8: Change password immediately

  1. After login, click Settings (gear icon) in top right
  2. Navigate to Users tab
  3. Click "Update Password" for admin user
  4. Enter new strong password (use password manager!)
  5. Click Save
  6. Test login with new password

Step 9: Verify everything works

# Check logs for errors
docker logs frigate --tail 100 | grep -i error

# Verify go2rtc streams are working
docker exec frigate curl -s http://127.0.0.1:1984/api/streams

# Should show all your cameras with "producers" field populated

# Check GPU access (if using Intel VAAPI)
docker exec frigate ls -la /dev/dri

# Check GPU access (if using NVIDIA)
docker exec frigate nvidia-smi

# Check Coral TPU (if using)
docker exec frigate ls -la /dev/bus/usb  # for USB Coral
docker exec frigate ls -la /dev/apex_0    # for PCIe Coral

Step 10: Monitor Frigate

# Watch logs in real-time
docker logs frigate -f

# Check for any warnings or errors
# Ctrl+C to stop watching

# Check detection FPS (should be close to your configured FPS)
docker logs frigate 2>&1 | grep "detection_fps"

# Check storage usage
df -h /mnt/frigate

โš ๏ธ Common First-Startup Issues:

  • "No frames received": Check camera RTSP URLs, test in VLC first
  • GPU not detected: Verify device passthrough, check permissions
  • High CPU usage: Ensure hardware acceleration working, check detector config
  • Can't access UI: Check firewall, verify port 8971 not blocked
  • Coral not found: Install libedgetpu runtime, check USB permissions

๐Ÿ—๏ธ How How Frigate Works - Complete Architecture

System Architecture Overview

โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
โ”‚                           FRIGATE NVR ARCHITECTURE                          โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜

    ๐Ÿ“น IP Cameras (RTSP Streams)
         โ”‚
         โ”œโ”€โ”€โ”€ Main Stream (1080p/4K @ 15fps, 4-8 Mbps) โ”€โ”€โ”
         โ”‚                                                 โ”‚
         โ””โ”€โ”€โ”€ Sub Stream (720p @ 5fps, 1-2 Mbps) โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”โ”‚
                                                          โ”‚โ”‚
         โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜โ”‚
         โ†“                                                  โ”‚
    โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”                                   โ”‚
    โ”‚    go2rtc       โ”‚ (Restreaming Engine)              โ”‚
    โ”‚  - RTSP โ†’ WebRTCโ”‚                                   โ”‚
    โ”‚  - Low latency  โ”‚                                   โ”‚
    โ”‚  - Browser view โ”‚                                   โ”‚
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜                                   โ”‚
         โ”‚                                                  โ”‚
         โ†“                                                  โ†“
    โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
    โ”‚                     FFmpeg                              โ”‚
    โ”‚  - Decode H.264/H.265                                  โ”‚
    โ”‚  - Hardware acceleration (VAAPI/NVENC)                 โ”‚
    โ”‚  - Frame extraction                                    โ”‚
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
         โ”‚                                    โ”‚
         โ”‚ (Sub stream frames)                โ”‚ (Main stream)
         โ†“                                    โ†“
    โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”              โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
    โ”‚  Motion Detectionโ”‚              โ”‚   Recording      โ”‚
    โ”‚  - Frame diff    โ”‚              โ”‚   - Continuous   โ”‚
    โ”‚  - Threshold     โ”‚              โ”‚   - Events only  โ”‚
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜              โ”‚   - Segments     โ”‚
         โ”‚                            โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
         โ†“ (motion detected)                    โ”‚
    โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”                       โ†“
    โ”‚  Object Detector โ”‚                  ๐Ÿ’พ Disk Storage
    โ”‚  - OpenVINO GPU  โ”‚                  /mnt/frigate/
    โ”‚  - Coral TPU     โ”‚                  โ”œโ”€โ”€ recordings/
    โ”‚  - CPU/NVIDIA    โ”‚                  โ”œโ”€โ”€ clips/
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜                  โ””โ”€โ”€ snapshots/
         โ”‚
         โ†“ (objects found)
    โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
    โ”‚  Object Tracking โ”‚
    โ”‚  - Kalman filter โ”‚
    โ”‚  - Multi-frame   โ”‚
    โ”‚  - Zones         โ”‚
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
         โ”‚
         โ†“ (confirmed objects)
    โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
    โ”‚    Events        โ”‚
    โ”‚  - Snapshots     โ”‚
    โ”‚  - Clips         โ”‚
    โ”‚  - Metadata      โ”‚
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
         โ”‚
         โ”œโ”€โ†’ ๐Ÿ—„๏ธ SQLite Database (events, stats)
         โ”‚
         โ””โ”€โ†’ ๐Ÿ“ก MQTT Broker
              โ”œโ”€โ†’ Home Assistant
              โ”œโ”€โ†’ Notifications
              โ””โ”€โ†’ Automations

Detailed Component Breakdown

๐Ÿ“ก go2rtc - The Restreaming Engine

  • Purpose: Converts RTSP streams to modern web-compatible formats (WebRTC, MSE)
  • Why needed: RTSP doesn't play in browsers, and traditional restreaming approaches add 5-10 seconds of latency
  • Benefits: Sub-second latency for live view, universal browser compatibility
  • Technology: WebRTC for bidirectional (two-way audio), MSE for viewing only
  • Resource usage: Minimal CPU (~1% per stream), no transcoding needed
  • Port: Internal port 1984, external WebRTC port 8555
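
In config.yml, go2rtc streams are declared once and then referenced by the cameras; a minimal sketch with a placeholder camera name and Hikvision-style URLs:

```yaml
go2rtc:
  streams:
    front_door:          # main stream: live view and recording
      - rtsp://admin:password@192.168.1.64:554/Streaming/Channels/101
    front_door_sub:      # sub stream: detection
      - rtsp://admin:password@192.168.1.64:554/Streaming/Channels/102
```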

๐ŸŽฌ FFmpeg - The Video Processor

  • Purpose: Decode camera streams, extract frames for analysis, encode recordings
  • Main stream processing: Decode โ†’ Segment โ†’ Write to disk (recordings)
  • Sub stream processing: Decode โ†’ Extract frames โ†’ Send to detector
  • Hardware acceleration: Offloads decoding/encoding to GPU (VAAPI/NVENC)
  • Why important: Without HW accel, CPU usage is 10-20% per camera
  • Configuration: Automatically configured based on your hardware selection

๐Ÿ‘๏ธ Motion Detection - The First Filter

  • Purpose: Reduce AI detector load by only processing frames with motion
  • How it works: Compare consecutive frames, calculate pixel differences
  • Threshold: Configurable sensitivity (default: 30, the per-pixel change required to count as motion)
  • Performance impact: Saves 70-90% of detector cycles on static cameras
  • Best practice: Always enable unless the camera is constantly moving (e.g., a patrolling PTZ)
  • Masks: Exclude areas like trees, flags, busy roads from motion detection

๐Ÿค– Object Detector - The AI Brain

  • Purpose: Identify objects (person, car, dog, etc.) in frames with motion
  • Input: 300ร—300 to 640ร—640 pixel frame (from sub stream)
  • Output: Bounding boxes with confidence scores (0-100%)
  • Frequency: Runs at detection FPS (typically 5fps, configurable)
  • Hardware options:
    • OpenVINO (Intel GPU): 5-10ms per frame, 10-15 cameras
    • Coral TPU: 10-15ms per frame, 10-15 cameras, 2W power
    • TensorRT (NVIDIA): 3-8ms per frame, 20+ cameras
    • CPU: 50-200ms per frame, 1-2 cameras only
  • Models: Default SSDLite MobileNet v2, upgradable to YOLO variants
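
The camera counts above follow from simple throughput arithmetic: a detector that takes N ms per frame can run at most 1000/N detections per second. A back-of-the-envelope helper (our own sketch, not Frigate code; the headroom factor is an assumption to leave spare capacity):

```python
def max_cameras(inference_ms, detection_fps=5, headroom=0.8):
    """Theoretical upper bound on cameras one detector can serve.

    inference_ms: time per frame (see hardware options above)
    headroom: keep some capacity spare; motion gating usually
              keeps real load well below this bound anyway.
    """
    detections_per_sec = 1000 / inference_ms
    return int(detections_per_sec * headroom // detection_fps)

# Coral TPU at ~10 ms/frame, 5 fps detection:
print(max_cameras(10))    # 16
# CPU at ~100 ms/frame:
print(max_cameras(100))   # 1
```

Real-world limits are lower than this bound because frames arrive in bursts, which is why the table above quotes 10-15 cameras rather than the theoretical maximum.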

๐ŸŽฏ Object Tracking - The Smart Filter

  • Purpose: Track objects across multiple frames, reduce false positives
  • How it works: Uses Kalman filter to predict object movement between frames
  • Benefits: Single person walking = 1 event (not 50 separate detections)
  • Minimum frames: Object must appear in 3+ consecutive frames to trigger event
  • Zones: Only create events when object enters defined zones
  • Filtering: Min/max area, confidence threshold, object type filters

๐Ÿ“ผ Recording Engine - The Storage Manager

  • Continuous recording: 24/7 recording in 60-second segments
  • Event recording: Only record when objects detected (saves storage)
  • Retention: Separate policies for continuous vs event recordings
  • Segments: Files split every 60 seconds for easy seeking
  • Storage path: /mnt/frigate/recordings/CAMERA/DATE/HOUR/
  • Cleanup: Automatic deletion based on retention policies
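
Storage needs for continuous recording follow directly from the stream bitrate. A quick estimator (our own helper, illustrative; uses decimal GB):

```python
def daily_storage_gb(bitrate_mbps, hours=24):
    # megabits/second -> gigabytes per recording period
    return bitrate_mbps * hours * 3600 / 8 / 1000

# 4 Mbps continuous recording:
print(daily_storage_gb(4))               # 43.2 GB/day
# 7-day retention for one such camera:
print(round(daily_storage_gb(4) * 7, 1))  # 302.4 GB
```

Multiply by camera count to size your disk; the storage calculator in this tool performs the same arithmetic.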

๐Ÿ“ธ Snapshots & Clips - The Evidence

  • Snapshots: Best frame of each detected object (JPEG image)
  • Clips: Video segment with object present (MP4, usually 10-30 seconds)
  • Storage: /mnt/frigate/clips/ and /mnt/frigate/snapshots/
  • Thumbnails: Automatically generated for UI gallery view
  • Retention: Separate from recordings, typically 30+ days

๐Ÿ—„๏ธ Database - The Event Logger

  • Type: SQLite (single file, no server needed)
  • Location: /config/frigate.db
  • Contents: Events, object metadata, statistics, timelines
  • Size: Grows ~1-5 MB per day per camera
  • Backup critical: Contains all event history and search indexes
  • Performance: Store on SSD for fast queries

๐Ÿ“ก MQTT - The Integration Layer

  • Purpose: Real-time event notifications and two-way control
  • Publishes: Object detections, camera status, statistics
  • Subscribes: Commands (enable/disable detection, snapshots, etc.)
  • Home Assistant: Auto-discovered, creates entities automatically
  • Automations: Trigger lights, sirens, notifications on detections
  • Topics: frigate/CAMERA/person, frigate/CAMERA/car, etc.
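
A consumer subscribing to these topics only needs to split the camera and label back out. A tiny sketch of parsing the per-object topic layout shown above (the function name is ours, not part of any Frigate API):

```python
def parse_frigate_topic(topic):
    """Split a 'frigate/<camera>/<label>' topic into its parts.

    Returns (camera, label), or None if the topic doesn't match
    the simple per-object layout described above.
    """
    parts = topic.split("/")
    if len(parts) == 3 and parts[0] == "frigate":
        return parts[1], parts[2]
    return None

print(parse_frigate_topic("frigate/front_door/person"))  # ('front_door', 'person')
print(parse_frigate_topic("frigate/stats"))              # None
```

In practice you would call this from an MQTT client's message callback to route payloads ("ON"/"OFF") per camera and object type.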

Data Flow Example: Person Detection

  1. T=0ms: Camera sends RTSP stream โ†’ go2rtc receives both main and sub streams
  2. T=10ms: FFmpeg decodes sub stream frame (720p) using GPU hardware acceleration
  3. T=15ms: Motion detection compares frame to previous โ†’ detects 15% pixel change โ†’ triggers object detection
  4. T=20ms: Frame resized to 300ร—300 pixels โ†’ sent to OpenVINO detector on Intel GPU
  5. T=28ms: Detector identifies: "person" at 87% confidence, bounding box coordinates (x:450, y:200, w:80, h:180)
  6. T=30ms: Object tracker receives detection โ†’ checks history โ†’ person not seen before โ†’ creates new tracked object ID
  7. T=200ms: Person detected in frame 2 โ†’ tracker confirms same person (ID matches) โ†’ increments frame counter
  8. T=400ms: Person detected in frame 3 โ†’ tracker confirms โ†’ threshold reached (3+ frames) โ†’ EVENT CREATED
  9. T=405ms:
    • Snapshot saved: /mnt/frigate/snapshots/front_door/person-20251228-181234.jpg
    • MQTT message published: frigate/front_door/person โ†’ payload: "ON"
    • Database entry created: event ID, timestamp, camera, object type
    • Home Assistant notified โ†’ automation triggered โ†’ phone notification sent
  10. T=410ms onwards: Tracker continues following person across frames โ†’ updates bounding box โ†’ records movement path
  11. T=5000ms (5 sec): Person exits frame โ†’ tracker holds object for 3 seconds (configurable timeout)
  12. T=8000ms (8 sec): Timeout reached โ†’ person object marked as "left frame" โ†’ event ended
  13. T=8100ms:
    • Clip created: 10-second video from main stream (5 sec before + 5 sec after)
    • Best snapshot selected: frame with highest confidence score (87%)
    • MQTT message: frigate/front_door/person โ†’ payload: "OFF"
    • Timeline updated in database with complete event duration

Total time: 8.1 seconds from first detection to complete event logged

Performance varies based on hardware, model, and configuration. Example based on Intel 11th gen CPU with iGPU + OpenVINO detector.

Performance Bottlenecks & Optimization

โŒ Common Bottlenecks:

  • CPU decoding without GPU: 20-40% CPU per camera โ†’ use VAAPI/NVENC
  • CPU detection: 100-400ms per frame โ†’ upgrade to GPU/Coral
  • High resolution detection: decoding and resizing 1080p for detection costs several times more than 720p → use sub streams
  • VBR encoding: Variable bitrate causes frame drops โ†’ switch to CBR
  • Database on HDD: Slow queries, UI lag โ†’ move to SSD
  • No motion detection: Detector runs 100% of time โ†’ enable motion masks
  • High detection FPS: 10fps = 2x load vs 5fps โ†’ lower FPS unless needed

โœ… Optimization Checklist:

  • โœ“ Enable hardware acceleration (VAAPI/NVENC) for FFmpeg
  • โœ“ Use dedicated detector (OpenVINO GPU or Coral TPU, not CPU)
  • โœ“ Configure sub streams at 720p or lower for detection
  • โœ“ Set cameras to CBR encoding mode
  • โœ“ Enable motion detection on all cameras
  • โœ“ Use motion masks to exclude high-motion areas (trees, roads)
  • โœ“ Lower detection FPS to 5fps (or even 3fps for static cameras)
  • โœ“ Store database on SSD, recordings on HDD
  • โœ“ Set proper retention policies (don't keep 90 days if unnecessary)
  • โœ“ Use zones to limit detection to areas of interest

๐Ÿ’ป Useful Scripts & Commands

๐Ÿ” Authentication & Access

Get initial admin password:

docker logs frigate 2>&1 | grep -i password

Reset admin password:

# Add to config.yml:
auth:
  reset_admin_password: true

# Restart and check logs:
docker restart frigate
docker logs frigate 2>&1 | grep -i password

# REMOVE the line from config after getting new password!

๐Ÿ” Diagnostics & Monitoring

Check for errors in logs:

docker logs frigate 2>&1 | grep -i error

Watch logs in real-time:

docker logs frigate -f
# Press Ctrl+C to stop

Check detection FPS for all cameras:

docker logs frigate 2>&1 | grep "detection_fps"

Check inference speed (detector performance):

docker logs frigate 2>&1 | grep "inference_speed"

# Or query the stats API directly (internal, unauthenticated port):
docker exec frigate curl -s http://127.0.0.1:5000/api/stats | jq '.detectors'

View camera stream status:

docker exec frigate curl -s http://127.0.0.1:1984/api/streams | jq

๐Ÿ–ฅ๏ธ Hardware Verification

Check Intel GPU access (VAAPI):

# Inside container:
docker exec frigate ls -la /dev/dri

# Should show: card0, renderD128, etc.

# Check GPU info:
docker exec frigate vainfo
# Requires intel-media-va-driver installed in container

Check NVIDIA GPU access:

docker exec frigate nvidia-smi

# Should display GPU name, memory, utilization

Check Google Coral TPU:

# USB Coral:
docker exec frigate ls -la /dev/bus/usb

# PCIe/M.2 Coral:
docker exec frigate ls -la /dev/apex_0

# If not found, check host:
lsusb | grep "Google"
ls /dev/apex_0

๐Ÿ’พ Storage Management

Check overall storage usage:

df -h /mnt/frigate

Check storage per camera:

du -sh /mnt/frigate/recordings/*

Find oldest recordings:

find /mnt/frigate/recordings -type f -printf '%T+ %p\n' | sort | head -20

Check database size:

du -sh /opt/frigate/config/frigate.db

Manually clean old recordings (careful!):

# Preview what would be deleted (dry run):
find /mnt/frigate/recordings -type f -mtime +7 -print

# Delete recordings older than 7 days:
find /mnt/frigate/recordings -type f -mtime +7 -delete

# Delete specific camera recordings:
rm -rf /mnt/frigate/recordings/front_door/2025-12-01/

๐Ÿ”„ Container Management

Restart Frigate:

docker restart frigate

Stop Frigate:

docker stop frigate

Start Frigate:

docker start frigate

Update Frigate to latest stable:

cd /opt/frigate
docker compose pull
docker compose up -d

Check Frigate version:

docker exec frigate cat /VERSION

# Or via the API (internal, unauthenticated port):
docker exec frigate curl -s http://127.0.0.1:5000/api/version

๐Ÿ”ง Configuration Testing

Test camera RTSP stream with FFmpeg:

# Test if stream is accessible:
ffprobe -v error -show_entries stream=codec_name,width,height,r_frame_rate \
  rtsp://admin:password@192.168.1.64/stream1

# Test recording 10 seconds:
ffmpeg -i rtsp://admin:password@192.168.1.64/stream1 -t 10 -c copy test.mp4

Validate config.yml syntax:

# Check for YAML syntax errors:
docker run --rm -v /opt/frigate/config:/config \
  ghcr.io/blakeblackshear/frigate:stable \
  python3 -c "import yaml; yaml.safe_load(open('/config/config.yml'))"

# If no output, syntax is valid!

Check go2rtc stream status:

docker exec frigate curl -s http://127.0.0.1:1984/api/streams

๐Ÿ“ฆ Backup & Restore

Backup configuration:

tar -czf frigate-config-$(date +%Y%m%d).tar.gz /opt/frigate/config/

Backup database:

# Stop Frigate first for clean backup:
docker stop frigate

# Copy database:
cp /opt/frigate/config/frigate.db /backup/frigate-db-$(date +%Y%m%d).db

# Start Frigate:
docker start frigate

Automated backup script:

#!/bin/bash
# Save as /usr/local/bin/backup-frigate.sh

BACKUP_DIR="/mnt/backups/frigate"
DATE=$(date +%Y%m%d_%H%M%S)
mkdir -p "$BACKUP_DIR"

echo "Backing up Frigate config and database..."

# Backup config
tar -czf "$BACKUP_DIR/config-$DATE.tar.gz" /opt/frigate/config/

# Backup database (with Frigate stopped)
docker stop frigate
sleep 5
cp /opt/frigate/config/frigate.db "$BACKUP_DIR/frigate-$DATE.db"
docker start frigate

# Backup recent clips (last 30 days)
# Backup recent clips (last 30 days); -print0/--null handles
# filenames containing spaces
find /mnt/frigate/clips -mtime -30 -type f -print0 | \
  tar -czf "$BACKUP_DIR/clips-$DATE.tar.gz" --null -T -

# Delete backup files older than 90 days
find "$BACKUP_DIR" -type f -mtime +90 -delete

echo "Backup complete: $BACKUP_DIR"

Schedule daily backups with cron:

crontab -e

# Add this line (runs daily at 3 AM):
0 3 * * * /usr/local/bin/backup-frigate.sh >> /var/log/frigate-backup.log 2>&1

Restore from backup:

# Stop Frigate:
docker stop frigate

# Restore config:
tar -xzf /backup/frigate-config-20251228.tar.gz -C /

# Restore database:
cp /backup/frigate-db-20251228.db /opt/frigate/config/frigate.db

# Start Frigate:
docker start frigate

๐ŸŽฌ Video Processing

Create timelapse from recordings:

# 24-hour timelapse at 100x speed. Recordings are 60-second MP4
# segments, so build a concat list first (ffmpeg's glob pattern only
# works for image sequences, not video files):
printf "file '%s'\n" /mnt/frigate/recordings/front_door/2025-12-27/*/*.mp4 > day.txt
ffmpeg -f concat -safe 0 -i day.txt \
  -vf "setpts=0.01*PTS,fps=30" -an \
  -c:v libx264 -preset fast -crf 23 \
  timelapse-front-door-2025-12-27.mp4

# Result: 24 hours compressed to ~14 minutes
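
The setpts factor comes from simple speed arithmetic. A small helper (ours, illustrative) to pick values for other speeds:

```python
def timelapse_params(input_hours, speed):
    """Return (setpts multiplier, output length in minutes) for a speedup."""
    return 1 / speed, input_hours * 60 / speed

pts, minutes = timelapse_params(24, 100)
print(pts, minutes)   # 0.01 14.4
```

At 100x, 24 hours of footage becomes roughly 14 minutes, matching the setpts=0.01*PTS filter above.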

Extract specific time range:

# Recordings are stored as 60-second segments, so concatenate the hour,
# then cut the range you need (here: 10 minutes starting at 14:30):
printf "file '%s'\n" /mnt/frigate/recordings/front_door/2025-12-27/14/*.mp4 > hour.txt
ffmpeg -f concat -safe 0 -i hour.txt -ss 00:30:00 -t 00:10:00 -c copy output.mp4

Convert clip to GIF:

ffmpeg -i /mnt/frigate/clips/front_door-person-20251227.mp4 \
  -vf "fps=10,scale=480:-1:flags=lanczos" \
  -c:v gif output.gif

๐ŸŒ Network Troubleshooting

Test camera connectivity:

# Ping camera:
ping -c 4 192.168.1.64

# Check if RTSP port is open:
telnet 192.168.1.64 554

# Or with netcat:
nc -zv 192.168.1.64 554

Check Frigate port accessibility:

# From server:
curl http://localhost:8971

# From another computer:
curl http://192.168.1.100:8971

Monitor network bandwidth:

# Install iftop:
sudo apt install iftop

# Monitor traffic:
sudo iftop -i eth0

# Or use docker stats:
docker stats frigate

๐Ÿš€ Advanced Tips & Best Practices

โšก Performance Optimization

  • Lower detection FPS: 5fps is plenty for most scenarios. Reducing to 3fps saves GPU while maintaining good detection. Use 10fps only for very fast-moving objects.
  • Use sub streams for detection: 720p is plenty for detection, don't use 1080p/4K. Higher resolution = slower detection with minimal accuracy gain.
  • Enable motion detection: Only runs object detection when motion present. Saves 70-90% GPU/CPU cycles on static cameras.
  • Optimize camera settings: Set cameras to CBR encoding, disable WDR/HDR on sub stream, lower bitrate on sub stream to minimum acceptable quality.
  • Use SSD for clips/snapshots: Faster UI response, less wear on spinning drives. Keep recordings on HDD (sequential writes are fine).
  • Reduce snapshot quality: Default 70% JPEG quality is good, can lower to 50-60% for storage savings with minimal visual impact.
  • Disable unused features: If not using audio detection, disable it. If not using all object types, filter to only what you need.
  • Limit recording retention: 7 days continuous + 30 days events is usually enough. Adjust based on your actual needs.
  • Use zones effectively: Only detect in areas that matter. Reduces false positives and unnecessary processing.
  • Motion masks: Exclude trees, flags, busy roads from motion detection to reduce detector load.

๐ŸŒ Secure Remote Access Methods

โš ๏ธ NEVER expose Frigate directly to the internet!

Port forwarding Frigate (port 8971) is a major security risk. Use these methods instead:

Method 1: Tailscale VPN (Recommended - Easiest)

Why it's best: Zero configuration NAT traversal, works behind CGNAT, free for personal use, WireGuard-based encryption

  1. Install Tailscale on Frigate server:
    curl -fsSL https://tailscale.com/install.sh | sh
    sudo tailscale up
  2. Install Tailscale app on your phone/laptop
  3. Sign in with same account on all devices
  4. Access Frigate from anywhere: http://TAILSCALE_IP:8971
  5. Find Tailscale IP: tailscale ip -4

โœ… Fully encrypted, zero port forwarding, works everywhere, even on cellular/hotel WiFi

Method 2: WireGuard VPN (Traditional VPN)

When to use: You want full control, already have VPN server, or need to access other services too

  1. Install WireGuard on your router or Proxmox/server:
    sudo apt install wireguard
  2. Generate keys and configure WireGuard (many tutorials available)
  3. Forward only WireGuard port (e.g., 51820) through router
  4. Connect via WireGuard app on phone/laptop
  5. Access Frigate on LAN IP: http://192.168.1.X:8971

โš ๏ธ Requires port forwarding (51820) and manual configuration

Method 3: Cloudflare Tunnel (Advanced - No Port Forwarding)

When to use: Behind CGNAT, no port forwarding possible, want HTTPS with custom domain

  1. Sign up for free Cloudflare account
  2. Add your domain to Cloudflare (or use Cloudflare registrar)
  3. Install cloudflared on Frigate server:
    curl -L --output cloudflared.deb https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
    sudo dpkg -i cloudflared.deb
  4. Authenticate: cloudflared tunnel login
  5. Create tunnel: cloudflared tunnel create frigate
  6. Configure tunnel to point to localhost:8971
  7. Access via custom domain: https://frigate.yourdomain.com

โœ… Free, automatic HTTPS, no port forwarding, works behind CGNAT

โš ๏ธ Requires domain name and Cloudflare account, traffic passes through Cloudflare servers

Method 4: Nginx Reverse Proxy + Let's Encrypt (Self-Hosted HTTPS)

When to use: Want full control, already have domain, comfortable with web server administration

  1. Install Nginx and Certbot:
    sudo apt install nginx certbot python3-certbot-nginx
  2. Configure Nginx virtual host for Frigate
  3. Obtain Let's Encrypt SSL certificate:
    sudo certbot --nginx -d frigate.yourdomain.com
  4. Add HTTP basic authentication for extra security
  5. Forward ONLY port 443 (HTTPS) through router to Nginx server
  6. Access: https://frigate.yourdomain.com

โœ… Full control, self-hosted, professional HTTPS setup

โš ๏ธ Requires domain, port forwarding, and web server knowledge

Method 5: Nebula Mesh VPN (For Tech Enthusiasts)

When to use: Want peer-to-peer mesh network, maximum security, connecting multiple sites

Nebula is a scalable overlay network created by Slack. Similar to Tailscale but self-managed.

  • More complex setup than Tailscale
  • Better for large deployments (connecting multiple homes, offices)
  • Fully open source and self-hosted
  • Requires lighthouse server (can be small VPS)

Security Best Practices (All Methods):

  • โœ… Use strong, unique password for Frigate admin account
  • โœ… Enable two-factor authentication when available
  • โœ… Keep Frigate updated to latest stable version
  • โœ… Regularly review access logs
  • โœ… Use separate VLAN for cameras (isolate from main network)
  • โœ… Disable UPnP on router (prevents unauthorized port forwards)
  • โœ… Set camera passwords to strong values (not default admin/admin)
  • โŒ Never use port forwarding directly to Frigate (port 8971)
  • โŒ Never use default credentials on any component
  • โŒ Never trust "security by obscurity" (changing port doesn't help)

๐Ÿ  Home Assistant Integration Deep Dive

Frigate integrates seamlessly with Home Assistant via MQTT for powerful automations.

Setup Steps:

  1. Ensure MQTT configured in Frigate (this tool does it for you)
  2. In Home Assistant: Configuration โ†’ Integrations โ†’ Add Integration
  3. Search for "Frigate" โ†’ Install
  4. Enter Frigate URL: http://192.168.1.100:8971
  5. Frigate auto-discovers all cameras and creates entities

Example Automation: Alert on Person Detection

automation:
  - alias: "Notify on Person at Front Door"
    trigger:
      - platform: mqtt
        topic: "frigate/front_door/person"
        payload: "ON"
    condition:
      - condition: state
        entity_id: alarm_control_panel.home
        state: "armed_away"
    action:
      - service: notify.mobile_app
        data:
          title: "Person Detected"
          message: "Someone is at the front door"
          data:
            image: "https://frigate.local:8971/api/front_door/person/snapshot.jpg"
            actions:
              - action: "VIEW_CLIP"
                title: "View Clip"
              - action: "DISMISS"
                title: "Dismiss"

Example: Turn on Lights When Person Detected at Night

automation:
  - alias: "Front Porch Light on Person Detection"
    trigger:
      - platform: mqtt
        topic: "frigate/front_door/person"
        payload: "ON"
    condition:
      - condition: sun
        after: sunset
        before: sunrise
    action:
      - service: light.turn_on
        target:
          entity_id: light.front_porch
        data:
          brightness: 255
      - delay: "00:05:00"
      - service: light.turn_off
        target:
          entity_id: light.front_porch

Example: Disable Detection When Home

automation:
  - alias: "Disable Frigate When Home"
    trigger:
      - platform: state
        entity_id: person.john
        to: "home"
    action:
      - service: switch.turn_off
        target:
          entity_id: switch.frigate_front_door_detect
  
  - alias: "Enable Frigate When Away"
    trigger:
      - platform: state
        entity_id: person.john
        to: "not_home"
    action:
      - service: switch.turn_on
        target:
          entity_id: switch.frigate_front_door_detect

Available Entities in Home Assistant:

  • Cameras: camera.frigate_CAMERA_NAME
  • Sensors: sensor.frigate_CAMERA_NAME_person_count (and car, dog, etc.)
  • Binary Sensors: binary_sensor.frigate_CAMERA_NAME_person (on/off state)
  • Switches: switch.frigate_CAMERA_NAME_detect (enable/disable detection)
  • Switches: switch.frigate_CAMERA_NAME_recordings (enable/disable recording)
  • Switches: switch.frigate_CAMERA_NAME_snapshots (enable/disable snapshots)

๐ŸŽฏ Zone Detection & Filtering

Zones allow you to only trigger events when objects enter specific areas.

Example: Only alert when person enters "porch" zone

cameras:
  front_door:
    zones:
      porch:
        coordinates: 100,200,300,200,300,400,100,400
        objects:
          - person
      driveway:
        coordinates: 400,300,800,300,800,600,400,600
        objects:
          - person
          - car
    objects:
      filters:
        person:
          mask: 50,50,150,50,150,150,50,150  # Exclude neighbor's yard
          min_area: 2000  # Minimum 2000 pixels (filters out distant people)
          max_area: 100000  # Maximum size (filters out vehicles misclassified as person)
          min_score: 0.6  # 60% confidence minimum
          threshold: 0.7  # 70% for tracking
          min_ratio: 0.2  # Aspect ratio filter (prevents horizontal blobs)
          max_ratio: 5.0

How to create zones:

  1. Open Frigate UI โ†’ Camera โ†’ Debug view
  2. Click on the frame to get pixel coordinates
  3. Draw your zone on paper/screenshot
  4. Add coordinates to config (clockwise from top-left)
  5. Coordinates are x,y pairs: x1,y1,x2,y2,x3,y3,x4,y4
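
Flattening corner points into that coordinate string is easy to get wrong by hand. A trivial helper (our own, illustrative) that produces the format used in the zone example above:

```python
def zone_coordinates(points):
    """Flatten (x, y) corner points into Frigate's 'x1,y1,x2,y2,...' string."""
    return ",".join(f"{x},{y}" for x, y in points)

# The 'porch' zone from the example above, clockwise from top-left:
print(zone_coordinates([(100, 200), (300, 200), (300, 400), (100, 400)]))
# 100,200,300,200,300,400,100,400
```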

Filtering Best Practices:

  • min_area: Use to filter distant objects (person = 2000-5000 pixels, car = 8000-15000 pixels)
  • max_area: Prevent misclassifications (person shouldn't be bigger than 100000 pixels)
  • min_score: Detector confidence (0.5-0.7 good for person, 0.6-0.8 for vehicles)
  • min_ratio/max_ratio: Aspect ratio to filter weird shapes (person ~0.3-2.0, car ~1.5-3.5)
  • Zones: Only create events when object enters specific zone (ignore street traffic)
  • Masks: Permanently exclude areas from detection (trees, neighbor's yard, sky)
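
To see how these filters combine, here is an illustrative check mimicking the area/score/ratio logic described above (our own sketch, not Frigate's internal implementation; defaults taken from the config example):

```python
def passes_filters(w, h, score, min_area=2000, max_area=100000,
                   min_score=0.6, min_ratio=0.2, max_ratio=5.0):
    """True if a detection's bounding box (w x h pixels) and confidence
    survive the area, score, and aspect-ratio filters."""
    area = w * h
    ratio = w / h  # width/height aspect ratio
    return (min_area <= area <= max_area
            and score >= min_score
            and min_ratio <= ratio <= max_ratio)

# Distant person: 30x60 px = 1800 px^2 -> rejected by min_area
print(passes_filters(30, 60, 0.8))    # False
# Nearby person: 80x180 px = 14400 px^2, ratio ~0.44 -> accepted
print(passes_filters(80, 180, 0.8))   # True
```

Tuning min_area against real detections in the Debug view is usually the fastest way to eliminate distant false positives.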

๐Ÿ’ก Pro Tips Collection

  • Stationary object detection: Detect abandoned packages/bags by tracking objects that don't move for X minutes. Use stationary object filter with threshold parameter.
  • Pet vs Person differentiation: Use min_area to separate cats/dogs from people. People are typically 5000+ pixels, pets 1000-3000 pixels at typical camera distance.
  • Shadow reduction: Enable improve_contrast in camera config. Lower detection sensitivity near walls where shadows frequently appear.
  • Night mode optimization: Enable camera IR mode, use lower confidence thresholds for detection at night (0.5 instead of 0.7). Consider separate day/night zones if needed.
  • Multi-camera tracking: Track person across multiple cameras using MQTT + Home Assistant automation. Create "person entering house" event by chaining camera detections.
  • Custom labels: Train custom models using Frigate+ to detect specific objects (packages, specific vehicles, tools, specific animals).
  • Geofencing integration: Auto-disable detection when family is home via Home Assistant presence detection. Saves processing and prevents false alerts.
  • Smart retention: Keep recordings with events forever (or very long), delete no-event recordings after 3-7 days. Saves storage while keeping important footage.
  • Alert fatigue fix: Use cooldown periods in Home Assistant automations (don't alert on same person for 5-10 minutes). Prevents notification spam.
  • Privacy zones: Mask neighbor's windows/yards in detection config using motion mask coordinates. Respects privacy while maintaining your security.
  • False positive reduction: Increase min_frames from 3 to 5 or 10 for objects that must be present longer before alerting (reduces birds, reflections).
  • Audio detection: Frigate supports audio event detection (glass breaking, barking). Enable in config for additional alert triggers.
  • PTZ camera support: Frigate supports PTZ (pan-tilt-zoom) cameras with ONVIF protocol. Can auto-track detected objects.
  • License plate recognition: Use Frigate+ or third-party integration for LPR (License Plate Recognition). Requires dedicated model.
  • Facial recognition: Integrate with Double Take or CompreFace for facial recognition via the MQTT integration.

๐Ÿ“ฑ Mobile App Access

Official Mobile Access Methods:

  • Web UI: Just open http://YOUR_IP:8971 in mobile browser. Works great, fully responsive!
  • Home Assistant App: Integrate Frigate โ†’ access via HA mobile app with full functionality
  • Third-party apps (iOS/Android): Community-built apps such as "Frigate Viewer" are available in app stores (not official)

Progressive Web App (PWA) - Best Mobile Experience:

  1. Open Frigate UI in mobile browser (Chrome/Safari)
  2. Android Chrome: Menu (โ‹ฎ) โ†’ "Add to Home Screen"
  3. iOS Safari: Share button (โฌ†) โ†’ "Add to Home Screen"
  4. Frigate icon appears on home screen like native app
  5. Opens full-screen without browser UI, acts like real app!

โœ… PWA gives app-like experience without app store, updates automatically

๐ŸŒŸ Community Resources

  • Official Documentation: docs.frigate.video - Complete documentation with examples
  • GitHub Repository: github.com/blakeblackshear/frigate - Source code, issue tracker, feature requests
  • Reddit Community: r/frigate_nvr - Active community, setup help, show-off your configs
  • Home Assistant Forum: Search "Frigate" for hundreds of integration examples and automation ideas
  • YouTube Tutorials: Search "Frigate NVR tutorial" for video guides (Everything Smart Home, BeardedTinker, etc.)
  • Frigate+ Service: frigate.video/plus - Cloud AI model training, supports development

๐Ÿ™ Remember to support the project:

Frigate is free and open source thanks to Blake Blackshear and contributors. If it saves you money or time, please consider donating or subscribing to Frigate+ to ensure continued development!