
S3 Offsite Sync

The S3 Offsite Sync plugin mirrors BorgBackup repositories to S3-compatible object storage for disaster recovery, geographic redundancy, and long-term archival.

Overview

What It Does:

  • Automatically syncs Borg repositories to cloud object storage after each prune operation
  • Uses rclone for efficient, incremental transfers
  • Supports any S3-compatible storage provider
  • Optional server backup sync for complete disaster recovery

Why Use It:

  • Disaster Recovery: Protect against server hardware failure and data center disasters
  • Geographic Redundancy: Store backups in different physical locations
  • Compliance: Meet regulatory requirements for offsite backup storage
  • Cloud Economics: Leverage inexpensive cloud storage (Wasabi, Backblaze B2)

How S3 Sync Works

Automatic Sync After Prune

  1. A backup plan's prune job completes successfully
  2. BBS automatically queues an s3_sync job for that repository
  3. The agent runs rclone sync to mirror the repository to S3
  4. Only changed files are transferred (incremental sync)
  5. Old files removed from the repository during pruning are also removed from S3
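
Conceptually, the transfer step is a plain rclone sync of the repository directory to the bucket. A minimal sketch of an equivalent manual command, assuming a hypothetical rclone remote named s3, a repository at /srv/borg/repo1, and the backups/borg/ path prefix (adjust all three to your setup):

# "sync" makes the destination match the source: new and changed files are
# uploaded, and files deleted locally (e.g. by pruning) are deleted in S3
rclone sync /srv/borg/repo1 s3:bbs-backups/backups/borg/repo1 --bwlimit 10M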

Sync Behavior

  • Full Mirror: The entire Borg repository directory is synced, including:
    • Archive data chunks
    • Repository metadata
    • Config files
    • Lock files (if present)
  • Incremental: Only new or changed files are uploaded
  • Deletions: Files removed from the repo (during pruning) are removed from S3
  • Bandwidth Limiting: Configurable to prevent saturating network connections

Compatible Storage Providers

The S3 sync plugin works with any S3-compatible object storage:

| Provider | Endpoint Example | Notes |
| --- | --- | --- |
| AWS S3 | s3.amazonaws.com | Most widely used, multiple regions |
| Wasabi | s3.wasabisys.com | Cost-effective, fast, no egress fees |
| Backblaze B2 | s3.us-west-002.backblazeb2.com | Very inexpensive, pay-as-you-go |
| DigitalOcean Spaces | nyc3.digitaloceanspaces.com | Integrated with DO infrastructure |
| MinIO | minio.example.com | Self-hosted S3-compatible storage |
| Cloudflare R2 | <account>.r2.cloudflarestorage.com | No egress fees |
| Any S3 API | Custom endpoint | Must support S3 API v4 signatures |
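
To sanity-check a provider from the command line before configuring BBS, an rclone remote for any of these looks roughly like the following entry in ~/.config/rclone/rclone.conf (Wasabi shown; the keys are documentation placeholders, not real credentials):

# verify afterwards with: rclone lsd s3:
[s3]
type = s3
provider = Wasabi
access_key_id = AKIAIOSFODNN7EXAMPLE
secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
endpoint = s3.wasabisys.com
region = us-east-1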

Setup Methods

There are three ways to configure S3 sync:

Method A: Global S3 Settings (Recommended for Single-Tenant)

Use one set of S3 credentials for all clients and backup plans.

Best For: Single organization, all backups in one S3 bucket

Setup:

  1. Navigate to Settings → Offsite Storage tab
  2. Fill in global S3 settings (see configuration section below)
  3. Click Test Connection to verify
  4. Save settings
  5. Enable S3 sync on individual backup plans (no additional config needed)

Screenshot: Settings → Offsite Storage tab with S3 configuration form

Method B: Named S3 Plugin Configs (Recommended for Multi-Tenant)

Create per-client S3 configurations with different credentials or buckets.

Best For: Multiple clients, different S3 accounts per client, MSPs

Setup:

  1. Navigate to client detail → Plugins tab
  2. Enable S3 Offsite Sync plugin
  3. Click Add Configuration
  4. Fill in S3 settings specific to this client
  5. Save configuration
  6. Attach configuration to backup plans

Screenshot: S3 plugin configuration form on Plugins tab

Method C: Inline Per-Plan S3 Config

Configure unique S3 settings for individual backup plans.

Best For: Different retention or storage classes per backup plan

Setup:

  1. Edit a backup plan
  2. In the S3 Sync section, select "Custom configuration for this plan"
  3. Fill in S3 settings
  4. Save plan

Configuration Parameters

Required Settings

| Parameter | Description | Example |
| --- | --- | --- |
| Endpoint | S3 API endpoint URL | s3.wasabisys.com |
| Region | Storage region | us-east-1, eu-central-1 |
| Bucket Name | S3 bucket (must already exist) | bbs-backups |
| Access Key | S3 access key ID | AKIAIOSFODNN7EXAMPLE |
| Secret Key | S3 secret access key | wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY |

Optional Settings

| Parameter | Description | Default | Example |
| --- | --- | --- | --- |
| Path Prefix | Prefix for all objects in bucket | (none) | backups/borg/ |
| Bandwidth Limit | Upload speed limit | Unlimited | 10M (10 MB/s) |
| Storage Class | S3 storage class | STANDARD | GLACIER, INTELLIGENT_TIERING |
| Server-Side Encryption | Enable SSE-S3 encryption | Disabled | Enabled |

Endpoint Configuration by Provider

AWS S3

Endpoint: s3.amazonaws.com (or region-specific: s3.us-west-2.amazonaws.com)
Region: us-east-1, us-west-2, eu-west-1, etc.
Bucket: your-bucket-name

Wasabi

Endpoint: s3.wasabisys.com (or region-specific: s3.us-east-2.wasabisys.com)
Region: us-east-1, us-east-2, us-west-1, eu-central-1
Bucket: your-bucket-name

Backblaze B2

Endpoint: s3.us-west-002.backblazeb2.com (check your account for exact endpoint)
Region: us-west-002 (varies by bucket)
Bucket: your-bucket-name

DigitalOcean Spaces

Endpoint: nyc3.digitaloceanspaces.com (or your region)
Region: nyc3, sfo3, ams3, sgp1
Bucket: your-space-name

MinIO (Self-Hosted)

Endpoint: minio.example.com:9000
Region: us-east-1 (MinIO default, or custom)
Bucket: backups

Setting Up Global S3 Sync

Step 1: Create S3 Bucket

Create a bucket in your chosen S3 provider:

AWS S3 Example:

aws s3 mb s3://bbs-backups --region us-east-1

Wasabi Example (via web console):

  1. Log in to Wasabi console
  2. Create Bucket → Name: bbs-backups, Region: us-east-1
  3. Note the endpoint: s3.wasabisys.com
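
The same bucket can also be created with the AWS CLI pointed at Wasabi's endpoint (a sketch, assuming the CLI is configured with your Wasabi keys):

# --endpoint-url redirects the AWS CLI from AWS to Wasabi
aws s3 mb s3://bbs-backups --endpoint-url https://s3.wasabisys.com --region us-east-1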

Step 2: Create Access Keys

Create an S3 access key with appropriate permissions:

AWS S3 IAM Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::bbs-backups",
        "arn:aws:s3:::bbs-backups/*"
      ]
    }
  ]
}
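
One way to create a dedicated user with this policy and generate a key pair, sketched with the hypothetical names bbs-sync and policy.json:

# Create an IAM user scoped to backup syncing
aws iam create-user --user-name bbs-sync

# Attach the policy above (saved locally as policy.json)
aws iam put-user-policy --user-name bbs-sync \
  --policy-name bbs-s3-sync --policy-document file://policy.json

# Generate the access key / secret key pair to paste into BBS
aws iam create-access-key --user-name bbs-sync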

Step 3: Configure BBS

  1. Navigate to Settings → Offsite Storage
  2. Fill in the form:
    • S3 Endpoint: s3.wasabisys.com
    • Region: us-east-1
    • Bucket Name: bbs-backups
    • Access Key: Your access key
    • Secret Key: Your secret key
    • Path Prefix (optional): borgbackups/
    • Bandwidth Limit (optional): 20M
  3. Click Test Connection
  4. Verify success message
  5. Click Save

Screenshot: S3 settings page with Test Connection button highlighted
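
If Test Connection fails and you want to rule out problems outside of BBS, roughly the same check can be run from the server's shell with rclone (assuming a remote named s3 as sketched earlier):

# Listing the bucket's top level exercises the endpoint, the keys, and
# the ListBucket permission in one call
rclone lsd s3:bbs-backups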

Step 4: Enable S3 Sync on Backup Plans

  1. Edit a backup plan
  2. Scroll to S3 Offsite Sync section
  3. Check Enable S3 sync for this plan
  4. Select Use global S3 settings
  5. Save plan

Screenshot: Backup plan editor showing S3 sync enable checkbox

Server Backup Sync

BBS can also sync its own server backups (created by bin/bbs-backup) to S3 for complete disaster recovery.

Enabling Server Backup Sync

  1. Navigate to Settings → Offsite Storage
  2. Configure global S3 settings (required)
  3. Check Sync server backups to S3
  4. Save

How It Works

  • Daily server backups (from bin/bbs-backup) are synced to S3
  • Synced to {bucket}/{prefix}/_server-backups/
  • Includes:
    • MySQL database dump
    • /var/www/bbs/config/.env (with APP_KEY)
    • VERSION file
  • Retention: keeps 7 most recent server backups in S3
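
To spot-check the retained server backups from the shell (assuming the hypothetical s3 remote and the borgbackups/ prefix used in the configuration example above):

# Shows the server backup files currently held in the bucket
rclone ls s3:bbs-backups/borgbackups/_server-backups/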

Disaster Recovery with Server Backups

To restore BBS after complete server loss:

  1. Install BBS on new server: sudo bash bbs-install
  2. Download server backup from S3:
    rclone copy s3:bbs-backups/_server-backups/ /tmp/restore/
  3. Restore: sudo /var/www/bbs/bin/bbs-restore /tmp/restore/bbs-backup-latest.tar.gz
  4. Server is restored with all clients, backup plans, and configurations

See Server-Backup-and-Restore for detailed recovery procedures.

Restoring Repositories from S3

BBS can restore Borg repositories directly from S3 storage when local data is lost, corrupted, or when you need to create a copy of an existing repository.

Repository Restore Options

From the Repository Detail Page, you have two restore options:

Replace Mode

Overwrites the local repository with data from S3.

Use When:

  • Local repository is corrupted or damaged
  • Server disk was replaced or reformatted
  • Local and S3 copies have drifted apart and you need to re-download

How It Works:

  1. Clears local repository data
  2. Downloads entire repository from S3
  3. Imports manifest to restore file catalog (if available)
  4. Repository is ready to use

Copy Mode

Creates a new repository populated with data from S3.

Use When:

  • Testing restore procedures without affecting the original
  • Creating a standby copy on a different server
  • Migrating backups to a new location
  • Verifying S3 backup integrity

How It Works:

  1. Creates a new repository record with a custom name (default: {original}-copy)
  2. Creates the repository directory structure
  3. Downloads repository data from S3
  4. Imports manifest to populate file catalog
  5. New repository appears alongside the original

Performing an S3 Restore

  1. Navigate to Clients → Select a client
  2. Click on the repository you want to restore
  3. Scroll to the S3 Offsite section
  4. Choose your restore mode:
    • Replace: Click "Restore (Replace)" and confirm
    • Copy: Enter a name for the new repository, click "Restore (Copy)"
  5. The restore job is queued and runs via the scheduler
  6. Monitor progress on the Queue page

The Manifest System

BBS uses a manifest file (.bbs-manifest.json) to enable fast recovery of file catalogs after S3 restore.

What the Manifest Contains

  • Complete list of archives in the repository
  • Archive metadata (name, timestamp, size)
  • File catalog data (paths, sizes, modification times)
  • Repository configuration

How It Works

  1. During S3 Sync: After each successful sync, BBS uploads a manifest alongside the repository data
  2. During S3 Restore: BBS downloads the manifest first
  3. Fast Catalog Recovery: If a manifest exists, the file catalog is populated instantly from manifest data
  4. Fallback: If no manifest exists (legacy backups or external repositories), BBS queues a catalog_sync job to rebuild the catalog by reading from borg directly
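
The fallback in step 4 is roughly equivalent to reading the metadata back out of Borg itself, which is why it is much slower than importing one JSON file. Illustrative commands with a hypothetical repository path and archive name:

# Archive names and timestamps for the whole repository
borg list --json /srv/borg/repo1

# Per-file entries for one archive (this is the expensive part)
borg list --json-lines /srv/borg/repo1::daily-2026-02-01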

Benefits

  • Instant File Browser: Restored repositories have a working file browser immediately
  • No Slow Scans: Avoids time-consuming borg list operations on large repositories
  • Complete Metadata: Preserves all archive information including file sizes and timestamps

Orphaned S3 Backups

Orphaned repositories are backups that exist in S3 but have been deleted from the local server. BBS automatically detects these and offers one-click recovery.

When Orphans Occur

  • Repository deleted locally but S3 data retained
  • Server rebuilt without restoring database
  • Accidental deletion of repository record
  • Migration between BBS installations

Finding Orphaned Repositories

  1. Navigate to Clients → Select a client with S3 sync configured
  2. Scroll to the S3 Offsite Backups (Orphaned) section
  3. BBS lists all repositories found in S3 that don't exist locally

Recovering an Orphaned Repository

  1. Locate the orphaned repository in the list
  2. Click Restore from S3
  3. BBS creates a new repository record
  4. Downloads the repository data from S3
  5. Imports manifest to restore file catalog
  6. Repository is fully restored and operational

Orphan Detection Requirements

  • The client must have at least one backup plan with S3 sync enabled
  • S3 credentials must be valid and have ListBucket permission
  • The S3 bucket must be accessible from the BBS server

Restore Job Monitoring

S3 restore jobs appear on the Queue page with type s3_restore.

Progress Indicators

  • Queued: Waiting for scheduler to pick up
  • Running: Actively downloading from S3
  • Completed: Restore finished successfully
  • Failed: Error occurred (check error log)

After Restore Completes

  1. Manifest Import: If a manifest was found, file catalog is immediately available
  2. Catalog Sync: If no manifest, a catalog_sync job is automatically queued
  3. Cache Clear: Borg cache is cleared to prevent "repository relocated" errors
  4. Ready to Use: Repository is available for browsing and restoring files

Disaster Recovery Workflow

Complete workflow to recover from total server loss:

  1. Install BBS on new server

    sudo bash bbs-install
  2. Restore server backup (if available in S3)

    rclone copy s3:bbs-backups/_server-backups/ /tmp/restore/
    sudo /var/www/bbs/bin/bbs-restore /tmp/restore/bbs-backup-latest.tar.gz

    This restores your database with all clients, plans, and configurations.

  3. Restore repositories from S3

    • Navigate to each client's detail page
    • Orphaned repositories appear automatically
    • Click "Restore from S3" for each repository
    • Wait for restore jobs to complete
  4. Verify restored data

    • Browse files in each repository
    • Perform a test restore of critical files
    • Verify archive counts match expectations
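
Archive counts can be checked from the shell as well, with a hypothetical repository path:

# Count the archives in the restored repository and compare with expectations
borg list /srv/borg/repo1 | wc -l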

Best Practices for S3 Restore

  • Test Regularly: Periodically restore a repository to verify S3 backups are valid
  • Monitor Manifests: Manifests are uploaded after each S3 sync — ensure syncs complete successfully
  • Keep Server Backups: S3 server backup sync preserves your BBS database, making full disaster recovery possible
  • Document Credentials: Keep S3 credentials accessible (but secure) for disaster recovery scenarios
  • Copy Mode First: When unsure, use Copy mode to avoid overwriting potentially recoverable local data

Monitoring S3 Sync Jobs

Viewing Sync Jobs in Queue

  1. Navigate to Queue
  2. Look for job type s3_sync
  3. Click on a job to view details

Screenshot: Queue page showing s3_sync job

Job Details

The s3_sync job detail page shows:

  • Status: queued, running, completed, failed
  • Progress: percentage complete, bytes transferred
  • Duration: elapsed time
  • Bandwidth: current transfer speed
  • Error Log: rclone output, error messages

Screenshot: s3_sync job detail page with progress bar

Common Status Messages

| Message | Meaning |
| --- | --- |
| Transferred: 1.5 GB / 10 GB (15%) | Upload progress |
| Bandwidth limit: 10M | Speed limit applied |
| Deleted 3 files | Pruned files removed from S3 |
| Completed successfully | Sync finished |
| Error: Access Denied | S3 credentials invalid |

Bandwidth Limiting

Control upload speed to prevent saturating network connections.

Configuring Bandwidth Limits

Set the bandwidth limit in S3 configuration:

  • Format: <number><unit> where unit is K, M, or G
  • Examples:
    • 10M = 10 megabytes per second
    • 512K = 512 kilobytes per second
    • 1G = 1 gigabyte per second
  • Unlimited: Leave blank or set to 0

Bandwidth Limit Examples

| Limit | Use Case |
| --- | --- |
| 1M | Slow connection, avoid disrupting other services |
| 10M | Typical business internet (100 Mbps) |
| 50M | Fast connection (500 Mbps+) |
| 100M | Gigabit connection, high-priority backups |
| Unlimited | Dedicated backup network, maximum speed |

Dynamic Bandwidth Control

  • Limits apply per-job (concurrent syncs each get the full limit)
  • rclone uses token bucket algorithm for smooth rate limiting
  • Actual speed may be slightly lower due to protocol overhead
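
If you ever run rclone by hand against the same bucket, --bwlimit accepts the same values and additionally a day/time schedule. A sketch (whether BBS passes through the schedule form is not documented, so treat it as manual-use only):

# Flat 10 MB/s cap
rclone sync /srv/borg/repo1 s3:bbs-backups/repo1 --bwlimit 10M

# 512 KB/s during business hours, 10 MB/s from 7 PM onward
rclone sync /srv/borg/repo1 s3:bbs-backups/repo1 --bwlimit "08:00,512k 19:00,10M"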

Storage Classes and Cost Optimization

S3 Storage Classes

Different storage classes offer cost vs. access speed tradeoffs:

| Storage Class | Cost | Retrieval | Best For |
| --- | --- | --- | --- |
| STANDARD | Highest | Instant | Frequent access, disaster recovery |
| STANDARD_IA | Medium | Instant | Infrequent access (monthly) |
| GLACIER | Low | Hours | Long-term archival, compliance |
| DEEP_ARCHIVE | Lowest | 12+ hours | Rarely accessed archives |
| INTELLIGENT_TIERING | Variable | Instant | Automatic cost optimization |

Choosing a Storage Class

For Active Backups (fast disaster recovery):

  • Use STANDARD or INTELLIGENT_TIERING
  • Instant access for emergency restores

For Long-Term Archival (compliance, historical):

  • Use GLACIER or DEEP_ARCHIVE
  • Acceptable retrieval delays for rare access

Lifecycle Policies

Configure S3 bucket lifecycle rules to automatically transition objects:

Example Lifecycle Rule:

  1. Keep in STANDARD for 30 days
  2. Transition to STANDARD_IA after 30 days
  3. Transition to GLACIER after 90 days
  4. Delete after 365 days

This is configured in your S3 provider's web console, not in BBS.
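
On AWS the same rule can also be applied from the CLI. A sketch using a hypothetical lifecycle.json, with the prefix from the earlier examples:

{
  "Rules": [{
    "ID": "bbs-archive-tiering",
    "Status": "Enabled",
    "Filter": { "Prefix": "backups/borg/" },
    "Transitions": [
      { "Days": 30, "StorageClass": "STANDARD_IA" },
      { "Days": 90, "StorageClass": "GLACIER" }
    ],
    "Expiration": { "Days": 365 }
  }]
}

Apply it with:

aws s3api put-bucket-lifecycle-configuration --bucket bbs-backups --lifecycle-configuration file://lifecycle.json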

Troubleshooting

Test Connection Fails

Error: "Connection refused" or "Could not resolve host"

Solutions:

  • Verify endpoint URL is correct (no https:// prefix)
  • Check DNS resolution on BBS server
  • Verify firewall allows outbound HTTPS (port 443)

Error: "Access Denied" or "InvalidAccessKeyId"

Solutions:

  • Verify access key and secret key are correct
  • Check S3 user/policy has s3:ListBucket, s3:PutObject, s3:GetObject, s3:DeleteObject permissions
  • Ensure bucket exists and is in the specified region

Error: "NoSuchBucket"

Solutions:

  • Create the bucket in your S3 provider
  • Verify bucket name spelling
  • Ensure bucket is in the correct region

Sync Job Fails

Error: "Error: error reading source directory"

Solutions:

  • Verify Borg repository path exists on client
  • Check agent user has read permissions on repository
  • Ensure repository is not corrupted (borg check)

Error: "Upload failed: RequestTimeout"

Solutions:

  • Check network connectivity between agent and S3 endpoint
  • Reduce bandwidth limit (network may be unstable)
  • Verify S3 endpoint is reachable (ping, curl test)
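
A quick reachability check from the BBS server (endpoint is a placeholder):

# Any HTTP status line back from the endpoint rules out DNS and firewall issues
curl -sI https://s3.wasabisys.com | head -n 1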

Error: "Quota exceeded" or "StorageLimitExceeded"

Solutions:

  • Check S3 account storage quota
  • Review bucket storage usage
  • Implement lifecycle policies to delete old archives

Sync is Slow

Possible Causes:

  • Bandwidth limit set too low
  • Network congestion or slow uplink
  • S3 provider throttling
  • Large initial sync (first run transfers entire repository)

Solutions:

  • Increase or remove bandwidth limit
  • Schedule syncs during off-peak hours
  • Check S3 provider for rate limits or throttling
  • Be patient on first sync (subsequent syncs are incremental)

Sync Completes but Files Missing in S3

Possible Causes:

  • Path prefix misconfiguration
  • Files in different bucket or region
  • S3 console showing cached data

Solutions:

  • Verify path prefix in S3 config (e.g., backups/borg/)
  • List objects via CLI: rclone ls s3:bucket/prefix
  • Refresh S3 console

Duplicate Files or Wasted Space

Cause: Multiple backup plans syncing the same repository

Solution:

  • Use one S3 sync config per unique repository
  • If multiple plans share a repository, only enable S3 sync on one plan
  • Consider syncing at the repository level, not per-plan

Best Practices

Security

  • Encrypt Credentials: BBS encrypts S3 keys using APP_KEY
  • Least Privilege: Grant S3 users only necessary permissions (ListBucket, GetObject, PutObject, DeleteObject)
  • Server-Side Encryption: Enable SSE-S3 for encryption at rest
  • Private Buckets: Never make backup buckets public
  • Rotate Keys: Periodically rotate S3 access keys

Performance

  • First Sync: Initial sync is slow (uploads entire repo). Run during off-hours.
  • Incremental Syncs: Subsequent syncs are fast (only changes uploaded)
  • Bandwidth Limits: Set reasonable limits to avoid network saturation
  • Concurrent Jobs: Multiple syncs run in parallel (up to max concurrent jobs limit)

Cost Management

  • Storage Class: Use STANDARD for active backups, GLACIER for archival
  • Lifecycle Policies: Automatically transition old archives to cheaper storage
  • Retention: Align S3 retention with Borg prune retention
  • Monitor Usage: Check S3 bill monthly, optimize if costs are high

Reliability

  • Monitor Sync Jobs: Check Queue regularly for failed s3_sync jobs
  • Test Restores: Periodically verify you can restore from S3 (download and extract)
  • Alerts: Enable notifications for failed sync jobs
  • Redundancy: Consider multiple S3 regions or providers for critical data

Operational

  • Automate: S3 sync runs automatically after prune (no manual intervention)
  • Prune First: Sync happens after prune to avoid syncing data that will be deleted
  • Schedule Prunes: Regular prune jobs ensure S3 stays in sync and doesn't accumulate deleted data
  • Document Configs: Keep notes on which buckets are used for which clients

Related Documentation

  • Server-Backup-and-Restore: detailed server backup and restore procedures