S3 Offsite Sync
The S3 Offsite Sync plugin mirrors BorgBackup repositories to S3-compatible object storage for disaster recovery, geographic redundancy, and long-term archival.
What It Does:
- Automatically syncs Borg repositories to cloud object storage after each prune operation
- Uses `rclone` for efficient, incremental transfers
- Supports any S3-compatible storage provider
- Optional server backup sync for complete disaster recovery
Why Use It:
- Disaster Recovery: Protect against server hardware failure, data center disasters
- Geographic Redundancy: Store backups in different physical locations
- Compliance: Meet regulatory requirements for offsite backup storage
- Cloud Economics: Leverage inexpensive cloud storage (Wasabi, Backblaze B2)
How It Works:
- A backup plan's prune job completes successfully
- BBS automatically queues an `s3_sync` job for that repository
- The agent runs `rclone sync` to mirror the repository to S3
- Only changed files are transferred (incremental sync)
- Old files removed from the repository during pruning are also removed from S3
- Full Mirror: The entire Borg repository directory is synced, including:
- Archive data chunks
- Repository metadata
- Config files
- Lock files (if present)
- Incremental: Only new or changed files are uploaded
- Deletions: Files removed from the repo (during pruning) are removed from S3
- Bandwidth Limiting: Configurable to prevent saturating network connections
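In practice this maps onto a plain `rclone sync` of the repository directory. A minimal sketch of an equivalent hand-run command, assuming an rclone remote named `s3` is already configured (the path, bucket, and flags are illustrative — BBS builds the real invocation internally):

```bash
# Mirror a Borg repository to S3 roughly the way the agent does:
# incremental transfer, pruned files deleted, optional bandwidth cap.
rclone sync /var/backups/borg/client1 \
  s3:bbs-backups/backups/borg/client1 \
  --bwlimit 10M \
  --transfers 4 \
  --delete-during
```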
The S3 sync plugin works with any S3-compatible object storage:
| Provider | Endpoint Example | Notes |
|---|---|---|
| AWS S3 | `s3.amazonaws.com` | Most widely used, multiple regions |
| Wasabi | `s3.wasabisys.com` | Cost-effective, fast, no egress fees |
| Backblaze B2 | `s3.us-west-002.backblazeb2.com` | Very inexpensive, pay-as-you-go |
| DigitalOcean Spaces | `nyc3.digitaloceanspaces.com` | Integrated with DO infrastructure |
| MinIO | `minio.example.com` | Self-hosted S3-compatible storage |
| Cloudflare R2 | `<account>.r2.cloudflarestorage.com` | No egress fees |
| Any S3 API | Custom endpoint | Must support S3 API v4 signatures |
There are three ways to configure S3 sync:
Option 1: Global Settings
Use one set of S3 credentials for all clients and backup plans.
Best For: Single organization, all backups in one S3 bucket
Setup:
- Navigate to Settings → Offsite Storage tab
- Fill in global S3 settings (see configuration section below)
- Click Test Connection to verify
- Save settings
- Enable S3 sync on individual backup plans (no additional config needed)
Screenshot: Settings → Offsite Storage tab with S3 configuration form
Option 2: Per-Client Configurations
Create per-client S3 configurations with different credentials or buckets.
Best For: Multiple clients, different S3 accounts per client, MSPs
Setup:
- Navigate to client detail → Plugins tab
- Enable S3 Offsite Sync plugin
- Click Add Configuration
- Fill in S3 settings specific to this client
- Save configuration
- Attach configuration to backup plans
Screenshot: S3 plugin configuration form on Plugins tab
Option 3: Per-Plan Configuration
Configure unique S3 settings for individual backup plans.
Best For: Different retention or storage classes per backup plan
Setup:
- Edit a backup plan
- In the S3 Sync section, select "Custom configuration for this plan"
- Fill in S3 settings
- Save plan
Required Settings:
| Parameter | Description | Example |
|---|---|---|
| Endpoint | S3 API endpoint URL | `s3.wasabisys.com` |
| Region | Storage region | `us-east-1`, `eu-central-1` |
| Bucket Name | S3 bucket (must already exist) | `bbs-backups` |
| Access Key | S3 access key ID | `AKIAIOSFODNN7EXAMPLE` |
| Secret Key | S3 secret access key | `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY` |
Optional Settings:
| Parameter | Description | Default | Example |
|---|---|---|---|
| Path Prefix | Prefix for all objects in bucket | (none) | `backups/borg/` |
| Bandwidth Limit | Upload speed limit | Unlimited | `10M` (10 MB/s) |
| Storage Class | S3 storage class | STANDARD | `GLACIER`, `INTELLIGENT_TIERING` |
| Server-Side Encryption | Enable SSE-S3 encryption | Disabled | Enabled |
AWS S3:
- Endpoint: `s3.amazonaws.com` (or region-specific: `s3.us-west-2.amazonaws.com`)
- Region: `us-east-1`, `us-west-2`, `eu-west-1`, etc.
- Bucket: your-bucket-name

Wasabi:
- Endpoint: `s3.wasabisys.com` (or region-specific: `s3.us-east-2.wasabisys.com`)
- Region: `us-east-1`, `us-east-2`, `us-west-1`, `eu-central-1`
- Bucket: your-bucket-name

Backblaze B2:
- Endpoint: `s3.us-west-002.backblazeb2.com` (check your account for exact endpoint)
- Region: `us-west-002` (varies by bucket)
- Bucket: your-bucket-name

DigitalOcean Spaces:
- Endpoint: `nyc3.digitaloceanspaces.com` (or your region)
- Region: `nyc3`, `sfo3`, `ams3`, `sgp1`
- Bucket: your-space-name

MinIO:
- Endpoint: `minio.example.com:9000`
- Region: `us-east-1` (MinIO default, or custom)
- Bucket: `backups`
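If you want to sanity-check any of these settings outside BBS, they map one-to-one onto an rclone remote. A sketch using the Wasabi values above (BBS manages its own rclone configuration; the remote name and keys here are placeholders):

```bash
# Create a throwaway remote with the same parameters BBS asks for,
# then list the bucket root to confirm endpoint/region/keys line up.
rclone config create s3test s3 \
  provider=Wasabi \
  endpoint=s3.wasabisys.com \
  region=us-east-1 \
  access_key_id=AKIAIOSFODNN7EXAMPLE \
  secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
rclone lsd s3test:
```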
Create a bucket in your chosen S3 provider:
AWS S3 Example:

```bash
aws s3 mb s3://bbs-backups --region us-east-1
```

Wasabi Example (via web console):
- Log in to Wasabi console
- Create Bucket → Name: `bbs-backups`, Region: `us-east-1`
- Note the endpoint: `s3.wasabisys.com`
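Whichever provider you use, it's worth confirming the bucket is reachable before pointing BBS at it. One quick check with the AWS CLI (the `--endpoint-url` flag is only needed for non-AWS providers; values are illustrative):

```bash
# Lists the bucket contents; fails fast if the name, region,
# or credentials are wrong.
aws s3 ls s3://bbs-backups --endpoint-url https://s3.wasabisys.com
```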
Create an S3 access key with appropriate permissions:
AWS S3 IAM Policy:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::bbs-backups",
        "arn:aws:s3:::bbs-backups/*"
      ]
    }
  ]
}
```
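On AWS, one way to create a dedicated sync user, attach this policy, and generate keys from the CLI (user and policy names are illustrative; other providers have their own key-management consoles):

```bash
# Create a least-privilege IAM user for BBS and attach the policy
# above (saved locally as bbs-policy.json), then issue access keys.
aws iam create-user --user-name bbs-s3-sync
aws iam put-user-policy \
  --user-name bbs-s3-sync \
  --policy-name bbs-backups-access \
  --policy-document file://bbs-policy.json
aws iam create-access-key --user-name bbs-s3-sync
```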
- Navigate to Settings → Offsite Storage
- Fill in the form:
  - S3 Endpoint: `s3.wasabisys.com`
  - Region: `us-east-1`
  - Bucket Name: `bbs-backups`
  - Access Key: Your access key
  - Secret Key: Your secret key
  - Path Prefix (optional): `borgbackups/`
  - Bandwidth Limit (optional): `20M`
- Click Test Connection
- Verify success message
- Click Save
Screenshot: S3 settings page with Test Connection button highlighted
- Edit a backup plan
- Scroll to S3 Offsite Sync section
- Check Enable S3 sync for this plan
- Select Use global S3 settings
- Save plan
Screenshot: Backup plan editor showing S3 sync enable checkbox
BBS can also sync its own server backups (created by `bin/bbs-backup`) to S3 for complete disaster recovery.
- Navigate to Settings → Offsite Storage
- Configure global S3 settings (required)
- Check Sync server backups to S3
- Save
- Daily server backups (from `bin/bbs-backup`) are synced to S3
- Synced to `{bucket}/{prefix}/_server-backups/`
- Includes:
  - MySQL database dump
  - `/var/www/bbs/config/.env` (with APP_KEY)
  - VERSION file
- Retention: keeps 7 most recent server backups in S3
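You can spot-check what's offsite at any time, for example with a configured rclone remote (remote and bucket names are illustrative):

```bash
# List the server backups currently mirrored to S3; with the
# 7-backup retention you should see at most seven archives here.
rclone ls s3:bbs-backups/_server-backups/
```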
To restore BBS after complete server loss:
- Install BBS on new server: `sudo bash bbs-install`
- Download server backup from S3: `rclone copy s3:bbs-backups/_server-backups/ /tmp/restore/`
- Restore: `sudo /var/www/bbs/bin/bbs-restore /tmp/restore/bbs-backup-latest.tar.gz`
- Server is restored with all clients, backup plans, and configurations
See Server-Backup-and-Restore for detailed recovery procedures.
BBS can restore Borg repositories directly from S3 storage when local data is lost, corrupted, or when you need to create a copy of an existing repository.
From the Repository Detail Page, you have two restore options:
Restore (Replace)
Overwrites the local repository with data from S3.
Use When:
- Local repository is corrupted or damaged
- Server disk was replaced or reformatted
- The local copy has fallen out of sync with S3 and you need to re-download
How It Works:
- Clears local repository data
- Downloads entire repository from S3
- Imports manifest to restore file catalog (if available)
- Repository is ready to use
Restore (Copy)
Creates a new repository populated with data from S3.
Use When:
- Testing restore procedures without affecting the original
- Creating a standby copy on a different server
- Migrating backups to a new location
- Verifying S3 backup integrity
How It Works:
- Creates a new repository record with a custom name (default: `{original}-copy`)
- Creates the repository directory structure
- Downloads repository data from S3
- Imports manifest to populate file catalog
- New repository appears alongside the original
- Navigate to Clients → Select a client
- Click on the repository you want to restore
- Scroll to the S3 Offsite section
- Choose your restore mode:
- Replace: Click "Restore (Replace)" and confirm
- Copy: Enter a name for the new repository, click "Restore (Copy)"
- The restore job is queued and runs via the scheduler
- Monitor progress on the Queue page
BBS uses a manifest file (`.bbs-manifest.json`) to enable fast recovery of file catalogs after S3 restore.
- Complete list of archives in the repository
- Archive metadata (name, timestamp, size)
- File catalog data (paths, sizes, modification times)
- Repository configuration
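The manifest is stored alongside the repository data in S3, so you can inspect it directly if you're curious (remote, bucket, and prefix are illustrative; the exact schema is internal to BBS):

```bash
# Pretty-print a repository's manifest straight from S3.
rclone cat s3:bbs-backups/backups/borg/client1/.bbs-manifest.json | jq .
```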
- During S3 Sync: After each successful sync, BBS uploads a manifest alongside the repository data
- During S3 Restore: BBS downloads the manifest first
- Fast Catalog Recovery: If a manifest exists, the file catalog is populated instantly from manifest data
- Fallback: If no manifest exists (legacy backups or external repositories), BBS queues a `catalog_sync` job to rebuild the catalog by reading from borg directly
- Instant File Browser: Restored repositories have a working file browser immediately
- No Slow Scans: Avoids time-consuming `borg list` operations on large repositories
- Complete Metadata: Preserves all archive information including file sizes and timestamps
Orphaned repositories are backups that exist in S3 but have been deleted from the local server. BBS automatically detects these and offers one-click recovery.
- Repository deleted locally but S3 data retained
- Server rebuilt without restoring database
- Accidental deletion of repository record
- Migration between BBS installations
- Navigate to Clients → Select a client with S3 sync configured
- Scroll to the S3 Offsite Backups (Orphaned) section
- BBS lists all repositories found in S3 that don't exist locally
- Locate the orphaned repository in the list
- Click Restore from S3
- BBS creates a new repository record
- Downloads the repository data from S3
- Imports manifest to restore file catalog
- Repository is fully restored and operational
Requirements:
- The client must have at least one backup plan with S3 sync enabled
- S3 credentials must be valid and have ListBucket permission
- The S3 bucket must be accessible from the BBS server
S3 restore jobs appear on the Queue page with type `s3_restore`.
- Queued: Waiting for scheduler to pick up
- Running: Actively downloading from S3
- Completed: Restore finished successfully
- Failed: Error occurred (check error log)
- Manifest Import: If a manifest was found, file catalog is immediately available
- Catalog Sync: If no manifest, a `catalog_sync` job is automatically queued
- Cache Clear: Borg cache is cleared to prevent "repository relocated" errors
- Ready to Use: Repository is available for browsing and restoring files
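You can also verify a restored repository from the shell (the path is illustrative; `borg` may prompt for the repository passphrase):

```bash
# Confirm the restored repository is intact and its archives are visible.
borg check --repository-only /var/backups/borg/client1
borg list /var/backups/borg/client1
```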
Complete workflow to recover from total server loss:
- Install BBS on new server:

  ```bash
  sudo bash bbs-install
  ```

- Restore server backup (if available in S3):

  ```bash
  rclone copy s3:bbs-backups/_server-backups/ /tmp/restore/
  sudo /var/www/bbs/bin/bbs-restore /tmp/restore/bbs-backup-latest.tar.gz
  ```

  This restores your database with all clients, plans, and configurations.

- Restore repositories from S3:
  - Navigate to each client's detail page
  - Orphaned repositories appear automatically
  - Click "Restore from S3" for each repository
  - Wait for restore jobs to complete

- Verify restored data:
  - Browse files in each repository
  - Perform a test restore of critical files
  - Verify archive counts match expectations
- Test Regularly: Periodically restore a repository to verify S3 backups are valid
- Monitor Manifests: Manifests are uploaded after each S3 sync — ensure syncs complete successfully
- Keep Server Backups: S3 server backup sync preserves your BBS database, making full disaster recovery possible
- Document Credentials: Keep S3 credentials accessible (but secure) for disaster recovery scenarios
- Copy Mode First: When unsure, use Copy mode to avoid overwriting potentially recoverable local data
- Navigate to Queue
- Look for job type `s3_sync`
- Click on a job to view details
Screenshot: Queue page showing s3_sync job
The `s3_sync` job detail page shows:
- Status: queued, running, completed, failed
- Progress: percentage complete, bytes transferred
- Duration: elapsed time
- Bandwidth: current transfer speed
- Error Log: rclone output, error messages
Screenshot: s3_sync job detail page with progress bar
| Message | Meaning |
|---|---|
| Transferred: 1.5 GB / 10 GB (15%) | Upload progress |
| Bandwidth limit: 10M | Speed limit applied |
| Deleted 3 files | Pruned files removed from S3 |
| Completed successfully | Sync finished |
| Error: Access Denied | S3 credentials invalid |
Control upload speed to prevent saturating network connections.
Set the bandwidth limit in S3 configuration:
- Format: `<number><unit>` where unit is `K`, `M`, or `G`
- Examples:
  - `10M` = 10 megabytes per second
  - `512K` = 512 kilobytes per second
  - `1G` = 1 gigabyte per second
- Unlimited: Leave blank or set to `0`
| Limit | Use Case |
|---|---|
| `1M` | Slow connection, avoid disrupting other services |
| `10M` | Typical business internet (100 Mbps) |
| `50M` | Fast connection (500 Mbps+) |
| `100M` | Gigabit connection, high-priority backups |
| Unlimited | Dedicated backup network, maximum speed |
- Limits apply per-job (concurrent syncs each get the full limit)
- rclone uses a token bucket algorithm for smooth rate limiting
- Actual speed may be slightly lower due to protocol overhead
Different storage classes offer cost vs. access speed tradeoffs:
| Storage Class | Cost | Retrieval | Best For |
|---|---|---|---|
| STANDARD | Highest | Instant | Frequent access, disaster recovery |
| STANDARD_IA | Medium | Instant | Infrequent access (monthly) |
| GLACIER | Low | Hours | Long-term archival, compliance |
| GLACIER_DEEP_ARCHIVE | Lowest | 12+ hours | Rarely accessed archives |
| INTELLIGENT_TIERING | Variable | Instant | Automatic cost optimization |
For Active Backups (fast disaster recovery):
- Use `STANDARD` or `INTELLIGENT_TIERING`
- Instant access for emergency restores
For Long-Term Archival (compliance, historical):
- Use `GLACIER` or `GLACIER_DEEP_ARCHIVE`
- Acceptable retrieval delays for rare access
Configure S3 bucket lifecycle rules to automatically transition objects:
Example Lifecycle Rule:
- Keep in STANDARD for 30 days
- Transition to STANDARD_IA after 30 days
- Transition to GLACIER after 90 days
- Delete after 365 days
This is configured in your S3 provider's web console, not in BBS.
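That said, on AWS the same rule can be expressed through the CLI if you prefer. A sketch (bucket, prefix, and rule ID are illustrative):

```bash
# Apply the example rule: STANDARD_IA at 30 days, GLACIER at 90 days,
# delete at 365 days. Only objects under the given prefix are affected.
aws s3api put-bucket-lifecycle-configuration \
  --bucket bbs-backups \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "bbs-archive-tiering",
      "Status": "Enabled",
      "Filter": {"Prefix": "backups/borg/"},
      "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 90, "StorageClass": "GLACIER"}
      ],
      "Expiration": {"Days": 365}
    }]
  }'
```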
Error: "Connection refused" or "Could not resolve host"
Solutions:
- Verify endpoint URL is correct (no `https://` prefix)
- Check DNS resolution on BBS server
- Verify firewall allows outbound HTTPS (port 443)
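These checks can be run from the BBS server's shell (endpoint is illustrative):

```bash
# Confirm DNS resolves and the endpoint answers over HTTPS.
dig +short s3.wasabisys.com
curl -sI https://s3.wasabisys.com | head -n 1
```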
Error: "Access Denied" or "InvalidAccessKeyId"
Solutions:
- Verify access key and secret key are correct
- Check S3 user/policy has `s3:ListBucket`, `s3:PutObject`, `s3:GetObject`, `s3:DeleteObject` permissions
- Ensure bucket exists and is in the specified region
Error: "NoSuchBucket"
Solutions:
- Create the bucket in your S3 provider
- Verify bucket name spelling
- Ensure bucket is in the correct region
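A quick existence check with the AWS CLI (the `--endpoint-url` flag is only needed for non-AWS providers; names are illustrative):

```bash
# Returns an error mentioning 404 if the bucket does not exist,
# or 403 if the credentials cannot see it.
aws s3api head-bucket --bucket bbs-backups --endpoint-url https://s3.wasabisys.com
```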
Error: "Error: error reading source directory"
Solutions:
- Verify Borg repository path exists on client
- Check agent user has read permissions on repository
- Ensure repository is not corrupted (`borg check`)
Error: "Upload failed: RequestTimeout"
Solutions:
- Check network connectivity between agent and S3 endpoint
- Reduce bandwidth limit (network may be unstable)
- Verify S3 endpoint is reachable (ping, curl test)
Error: "Quota exceeded" or "StorageLimitExceeded"
Solutions:
- Check S3 account storage quota
- Review bucket storage usage
- Implement lifecycle policies to delete old archives
Possible Causes:
- Bandwidth limit set too low
- Network congestion or slow uplink
- S3 provider throttling
- Large initial sync (first run transfers entire repository)
Solutions:
- Increase or remove bandwidth limit
- Schedule syncs during off-peak hours
- Check S3 provider for rate limits or throttling
- Be patient on first sync (subsequent syncs are incremental)
Possible Causes:
- Path prefix misconfiguration
- Files in different bucket or region
- S3 console showing cached data
Solutions:
- Verify path prefix in S3 config (e.g., `backups/borg/`)
- List objects via CLI: `rclone ls s3:bucket/prefix`
- Refresh S3 console
Cause: Multiple backup plans syncing the same repository
Solution:
- Use one S3 sync config per unique repository
- If multiple plans share a repository, only enable S3 sync on one plan
- Consider syncing at the repository level, not per-plan
- Encrypt Credentials: BBS encrypts S3 keys using the `APP_KEY`
- Least Privilege: Grant S3 users only necessary permissions (ListBucket, GetObject, PutObject, DeleteObject)
- Server-Side Encryption: Enable SSE-S3 for encryption at rest
- Private Buckets: Never make backup buckets public
- Rotate Keys: Periodically rotate S3 access keys
- First Sync: Initial sync is slow (uploads entire repo). Run during off-hours.
- Incremental Syncs: Subsequent syncs are fast (only changes uploaded)
- Bandwidth Limits: Set reasonable limits to avoid network saturation
- Concurrent Jobs: Multiple syncs run in parallel (up to max concurrent jobs limit)
- Storage Class: Use STANDARD for active backups, GLACIER for archival
- Lifecycle Policies: Automatically transition old archives to cheaper storage
- Retention: Align S3 retention with Borg prune retention
- Monitor Usage: Check S3 bill monthly, optimize if costs are high
- Monitor Sync Jobs: Check Queue regularly for failed s3_sync jobs
- Test Restores: Periodically verify you can restore from S3 (download and extract — see the sketch after this list)
- Alerts: Enable notifications for failed sync jobs
- Redundancy: Consider multiple S3 regions or providers for critical data
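One way to run the periodic restore test by hand, assuming a configured rclone remote (paths and names are illustrative):

```bash
# Pull a repository from S3 into a scratch directory and let Borg verify it.
rclone copy s3:bbs-backups/backups/borg/client1 /tmp/s3-verify/client1
borg check --repository-only /tmp/s3-verify/client1

# Print the newest archive name as a quick smoke test.
borg list /tmp/s3-verify/client1 --short | tail -n 1
```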
- Automate: S3 sync runs automatically after prune (no manual intervention)
- Prune First: Sync happens after prune to avoid syncing data that will be deleted
- Schedule Prunes: Regular prune jobs ensure S3 stays in sync and doesn't accumulate deleted data
- Document Configs: Keep notes on which buckets are used for which clients
- Plugins — Overview of all BBS plugins
- Queue-and-Jobs — Monitoring S3 sync jobs
- Settings — Configuring global S3 settings
- Server-Backup-and-Restore — Complete disaster recovery procedures
- Troubleshooting — General BBS troubleshooting