A Docker container that runs Duplicacy CLI backups on a cron schedule
to an S3-compatible storage with encryption, pruning, and notifications.
- Features
- Architecture
- Quick Start
- Configuration Reference
- Scripts
- Backup Verification
- Notification Format
- Troubleshooting
- Guides
- Contributing
- S3-compatible storage -- Garage, MinIO, AWS S3, Backblaze B2, and any S3-compatible provider
- Multi-repo from a single container -- one tiny daily wrapper per repository
- AES-256-GCM encryption -- per-repo encryption passwords
- Parallel uploads -- configurable thread count via DUPLICACY_THREADS
- Staggered cron schedules -- avoid storage contention across servers
- Per-repo lock files -- automatic timeout kills stuck backups after MAX_RUNTIME_HOURS
- Weekly exhaustive prune -- reclaims actual storage space by scanning all chunks
- Monthly integrity check -- verifies all backup chunks and triggers Garage data scrubs
- Filter files -- exclude caches, thumbnails, and temporary data
- Telegram notifications -- via Shoutrrr (supports 70+ services)
- Multi-architecture Docker image -- amd64, arm64, armv7
- Alpine-based -- minimal image footprint
- UnRAID and Linux support -- back up shares, boot USB, /etc, /home, crontabs, Tailscale state
Each server backs up to an S3 endpoint. When using Garage with replication factor 3, data is automatically replicated across all cluster nodes -- no secondary Duplicacy storage needed:
Server A ──backup──> Garage S3 cluster (RF=3)
Server B ──backup──> ├─ Node 1
Server C ──backup──> ├─ Node 2
└─ Node 3
The daily wrapper scripts are only a few lines each and source the shared dual-executor.sh, which handles the locking, backup, prune, and notification logic.
Copy docker-compose.yml and fill in your values:
services:
  duplicacy-cli-cron:
    image: drumsergio/duplicacy-cli-cron:3.2.5.2
    container_name: duplicacy-cli-cron
    restart: unless-stopped
    volumes:
      - /mnt/user/appdata/duplicacy/config:/config
      - /mnt/user/appdata/duplicacy/cron:/etc/periodic
      - /mnt/user:/local_shares
      - /boot:/boot_usb
    environment:
      CRON_DAILY: "0 2 * * *"
      CRON_WEEKLY: "0 4 * * 6"
      DUPLICACY_THREADS: "8"
      HOST: MyServer
      TZ: Europe/Madrid
      SHOUTRRR_URL: telegram://TOKEN@telegram?chats=CHAT_ID&notification=no&parseMode=markdown
      ENDPOINT_1: "192.168.1.100:3900"
      BUCKET: duplicacy
      REGION: garage
      MAX_RUNTIME_HOURS: "71"
      DUPLICACY_APPDATA_S3_ID: YOUR_S3_ACCESS_KEY
      DUPLICACY_APPDATA_S3_SECRET: YOUR_S3_SECRET_KEY
      DUPLICACY_APPDATA_PASSWORD: YOUR_ENCRYPTION_PASS

See docker-compose.yml in this repo for the full example with comments.
Edit config/config-s3.sh with your storage name, snapshot ID, and repo path. Then run it inside the container:
docker exec duplicacy-cli-cron sh /config/config-s3.sh

Repeat for each folder you want to back up (e.g., appdata, Multimedia, system, boot).
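Under the hood, config-s3.sh amounts to a `duplicacy init` against the S3 endpoint. A sketch of the storage URL it would construct from the compose values above (the exact variable handling in the real script may differ):

```shell
#!/usr/bin/env sh
# Build the S3 storage URL Duplicacy expects (s3://region@host/bucket).
# Values are the example placeholders from the compose file, not real ones.
REGION="garage"
ENDPOINT_1="192.168.1.100:3900"
BUCKET="duplicacy"
STORAGE_URL="s3://${REGION}@${ENDPOINT_1}/${BUCKET}"

# Inside a repo directory, initialization with encryption would look like:
#   cd /local_shares/appdata
#   duplicacy init -e -storage-name appdata appdata "$STORAGE_URL"
echo "$STORAGE_URL"
```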
Each backup location gets a tiny wrapper script placed in the daily cron directory. The wrapper sets per-repo constants and sources the shared dual-executor.sh:
#!/usr/bin/env sh
STORAGENAME="appdata"
SNAPSHOTID="appdata"
REPO_DIR="/local_shares/appdata"
THREADS_OVERRIDE="8"
. /config/dual-executor.sh

Place dual-executor.sh in the config volume, then create one wrapper per repo:
# Copy the executor to the config volume
cp scripts/dual-executor.sh /mnt/user/appdata/duplicacy/config/
# Create wrapper scripts in the cron directory
cat > /mnt/user/appdata/duplicacy/cron/daily/00-boot.sh << 'EOF'
#!/usr/bin/env sh
STORAGENAME="boot"
SNAPSHOTID="boot"
REPO_DIR="/boot_usb"
THREADS_OVERRIDE="8"
. /config/dual-executor.sh
EOF
chmod +x /mnt/user/appdata/duplicacy/cron/daily/00-boot.sh

Scripts are executed alphabetically by run-parts, so prefix with numbers to control order (e.g., 00-boot.sh, 01-Multimedia.sh, 02-appdata.sh).
Tip: Use THREADS_OVERRIDE per repo to tune performance. For HDD-backed repos with large files (media), lower thread counts (4-8) reduce disk seek contention. For SSD/NVMe or small-file repos, higher thread counts (8-16) improve throughput.
Copy scripts/exhaustive-prune.sh to the weekly cron directory:
cp scripts/exhaustive-prune.sh /mnt/user/appdata/duplicacy/cron/weekly/01-exhaustive-prune.sh
chmod +x /mnt/user/appdata/duplicacy/cron/weekly/01-exhaustive-prune.sh

The exhaustive prune auto-discovers all repos under /local_shares/*/ and prunes them. It also handles extra repos (/boot_usb for UnRAID) and respects daily backup lock files to avoid conflicts.
Copy scripts/monthly-integrity-check.sh to the monthly cron directory:
cp scripts/monthly-integrity-check.sh /mnt/user/appdata/duplicacy/cron/monthly/01-integrity-check.sh
chmod +x /mnt/user/appdata/duplicacy/cron/monthly/01-integrity-check.sh

This script verifies all backup chunks across every repo and, if you use Garage, triggers a data scrub on the target storage node.
| Variable | Default | Description |
|---|---|---|
| `CRON_DAILY` | `0 2 * * *` | When daily backup scripts run |
| `CRON_WEEKLY` | `0 4 * * 6` | When the weekly exhaustive prune runs (Saturday by default) |
| `CRON_MONTHLY` | `0 5 1 * *` | When the monthly integrity check runs (1st of the month) |
| `DUPLICACY_THREADS` | `4` | Default parallel upload/download threads |
| `HOST` | `$(hostname)` | Machine name shown in notifications |
| `TZ` | `Etc/UTC` | Timezone |
| `SHOUTRRR_URL` | (empty) | Notification URL (Shoutrrr format) |
| `ENDPOINT_1` | (required) | S3 endpoint for storage |
| `BUCKET` | (required) | S3 bucket name |
| `REGION` | (required) | S3 region (use `garage` for Garage) |
| `MAX_RUNTIME_HOURS` | `71` | Kill stuck backups after this many hours |
| `GARAGE_ADMIN_TOKEN` | (empty) | Garage admin API token (for the monthly scrub trigger) |
Duplicacy resolves credentials from environment variables by storage name:
DUPLICACY_<STORAGENAME>_S3_ID -> S3 access key ID
DUPLICACY_<STORAGENAME>_S3_SECRET -> S3 secret access key
DUPLICACY_<STORAGENAME>_PASSWORD -> repository encryption password
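The naming rule is simply: upper-case the storage name and splice it into the variable template. A sketch (`UPPER` and `ID_VAR` are hypothetical helper names; the lookup itself happens inside the Duplicacy CLI):

```shell
#!/usr/bin/env sh
# Derive the credential variable name for a given storage.
STORAGENAME="appdata"
UPPER=$(printf '%s' "$STORAGENAME" | tr '[:lower:]' '[:upper:]')
ID_VAR="DUPLICACY_${UPPER}_S3_ID"
echo "$ID_VAR"   # DUPLICACY_APPDATA_S3_ID
```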
Example for a storage named appdata:
DUPLICACY_APPDATA_S3_ID: GKabc123...
DUPLICACY_APPDATA_S3_SECRET: f42b4be...
DUPLICACY_APPDATA_PASSWORD: mySecretPassword

When multiple servers share the same S3 backend, stagger CRON_DAILY to avoid contention:
| Server | CRON_DAILY | Description |
|---|---|---|
| Server A | `0 2 * * *` | Runs at 2:00 AM |
| Server B | `0 3 * * *` | Runs at 3:00 AM |
| Server C | `0 4 * * *` | Runs at 4:00 AM |
Create .duplicacy/filters inside a repo to exclude paths from backup. This reduces backup time and storage for regenerable data:
# Exclude cache and generated content
-Cache/
-EncodedVideo/
-Thumbs/
-.DS_Store
-Thumbs.db
-*.tmp
See the Duplicacy wiki on filters for the full syntax.
Each daily wrapper creates a lock file at /tmp/duplicacy-<SNAPSHOTID>.lock. If a previous run is still active:
- Lock age within MAX_RUNTIME_HOURS: the new run is skipped with a Telegram notification.
- Lock age exceeds MAX_RUNTIME_HOURS: the stuck process is killed and a fresh backup starts.
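The timeout decision can be sketched as a small shell function (hypothetical names; the real dual-executor.sh also verifies that the recorded PID is still alive before killing it):

```shell
#!/usr/bin/env sh
# Sketch of the lock-age decision.
MAX_RUNTIME_HOURS=71

decide() {
  lock_age_hours="$1"            # hours since /tmp/duplicacy-<id>.lock appeared
  if [ "$lock_age_hours" -lt "$MAX_RUNTIME_HOURS" ]; then
    echo "skip"                  # previous run still within its budget
  else
    echo "kill"                  # stuck: kill the old PID, start fresh
  fi
}

decide 5    # -> skip
decide 72   # -> kill
```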
Daily prune (skipped on Saturdays when the weekly exhaustive prune runs):
-keep 0:180 # Delete all snapshots older than 180 days
-keep 30:90 # Keep one snapshot every 30 days if older than 90 days
-keep 7:30 # Keep one snapshot every 7 days if older than 30 days
-keep 1:7 # Keep one snapshot every day if older than 7 days
Weekly exhaustive prune runs with the -exhaustive flag to scan all chunks and reclaim actual storage space.
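The retention flags above combine into a single prune invocation; a sketch of the resulting command (the storage name `appdata` is illustrative, and the command is run from inside the repo directory):

```shell
#!/usr/bin/env sh
# Assemble the retention policy into prune commands (sketch only).
KEEP_FLAGS="-keep 0:180 -keep 30:90 -keep 7:30 -keep 1:7"

# Daily prune (no -exhaustive):
DAILY_CMD="duplicacy prune -storage appdata $KEEP_FLAGS"
# Weekly prune adds -exhaustive to scan all chunks:
WEEKLY_CMD="duplicacy prune -storage appdata -exhaustive $KEEP_FLAGS"

echo "$WEEKLY_CMD"
```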
| Script | Schedule | Purpose |
|---|---|---|
| `scripts/dual-executor.sh` | Daily (sourced) | Shared backup + prune logic |
| `scripts/exhaustive-prune.sh` | Weekly | Full chunk scan across all repos to reclaim space |
| `scripts/monthly-integrity-check.sh` | Monthly | Chunk verification and Garage scrub trigger |
#!/usr/bin/env sh
STORAGENAME="Multimedia"
SNAPSHOTID="Multimedia"
REPO_DIR="/local_shares/Multimedia"
THREADS_OVERRIDE="8"
. /config/dual-executor.sh

List snapshots for a repo:

docker exec duplicacy-cli-cron sh -c \
  'cd /local_shares/appdata && duplicacy list -storage appdata'

Run an on-demand integrity check for a specific repo:
docker exec duplicacy-cli-cron sh -c \
'cd /local_shares/appdata && duplicacy check -storage appdata -threads 4'To restore from a specific revision to a target path:
docker exec duplicacy-cli-cron sh -c \
  'cd /local_shares/appdata && duplicacy restore -r 42 -storage appdata -stats'

Add -overwrite to replace existing files, or use -delete to remove files not present in the snapshot. See the Duplicacy CLI restore docs for full options.
For Garage S3 storage, check bucket sizes to confirm data is being stored:
# Using the Garage admin API
curl -s -H "Authorization: Bearer ${GARAGE_ADMIN_TOKEN}" \
  "http://192.168.1.100:3903/v2/GetBucketInfo?id=YOUR_BUCKET_ID" | jq .bytes

Successful backup:
[green] MyServer -- appdata
[ok] [sync] Pruned
Skipped (previous run still in progress):
[skip] MyServer -- Multimedia
Skipped -- previous run still in progress (PID: 325)
Failed backup:
[red] MyServer -- appdata
[fail] [sync] Pruned
Stuck job killed:
[warn] MyServer -- appdata
Killed after 71h timeout (PID: 1234)
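Messages like these can be assembled and dispatched with the Shoutrrr CLI. A sketch using the placeholder markers shown above (the send line is commented out; SHOUTRRR_URL and the exact markup in the real script may differ):

```shell
#!/usr/bin/env sh
# Assemble a success notification in the format shown above.
HOST="MyServer"
SNAPSHOTID="appdata"
MSG=$(printf '%s\n%s' "[green] ${HOST} -- ${SNAPSHOTID}" "[ok] [sync] Pruned")

printf '%s\n' "$MSG"
# shoutrrr send --url "$SHOUTRRR_URL" --message "$MSG"
```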
A stale lock file may be left behind if the container was restarted mid-backup. Remove it manually:
docker exec duplicacy-cli-cron rm -f /tmp/duplicacy-<SNAPSHOTID>.lock

If the problem recurs, your backup may genuinely need more time. Increase MAX_RUNTIME_HOURS or reduce the data volume being backed up.
Each repo directory must be initialized with duplicacy init before backups can run. Verify the .duplicacy directory exists:
docker exec duplicacy-cli-cron ls -la /local_shares/appdata/.duplicacy/

If missing, re-run the initialization script:
docker exec duplicacy-cli-cron sh /config/config-s3.sh

Cron job output is redirected to PID 1 stdout so Docker can capture it. Check with:
docker logs --tail 100 duplicacy-cli-cron

If logs are empty, verify the cron scripts are executable:
docker exec duplicacy-cli-cron ls -la /etc/periodic/daily/

All wrapper scripts must have the execute bit set (chmod +x).
The weekly exhaustive prune scans all chunks across all repos. For large repositories, this is expected. If it overlaps with daily backups, it will wait up to 1 hour for locks to clear. Options:
- Stagger the weekly schedule earlier (e.g., CRON_WEEKLY: "0 0 * * 6")
- Ensure daily backups finish well before the weekly prune starts
Duplicacy's memory usage scales with thread count. If the container is being OOM-killed:
- Lower DUPLICACY_THREADS or THREADS_OVERRIDE
- Add a memory limit in your docker-compose.yml: mem_limit: 512m
- Deploying Garage S3 (v2.x) and Hooking It Up to Duplicacy -- S3 approach (recommended)
- Backup Bliss: A Dockerized Duplicacy Setup for Your Home Servers -- NFS approach (legacy)
| Project | Description |
|---|---|
| duplicacy-exporter | Prometheus exporter for real-time backup metrics |
| duplicacy-ha | Home Assistant integration for backup monitoring |
Contributions are welcome. Open an issue or submit a pull request.
This project follows the Contributor Covenant Code of Conduct.
- duplicacy-container -- Runtime image and Helm chart for the Kubernetes Duplicacy stack
- duplicacy-exporter -- Prometheus exporter for Duplicacy backup metrics
- Duplicacy -- Lock-free deduplication cloud backup tool