Back up your Home Assistant installation to any S3-compatible storage provider.
This integration extends Home Assistant's built-in backup functionality to support any S3-compatible storage, not just AWS S3. Works with:
- ☁️ AWS S3
- 🗄️ MinIO
- 💾 Wasabi
- 🔒 Backblaze B2
- 🌊 DigitalOcean Spaces
- ☁️ Cloudflare R2
- 🏢 Synology C2 Object Storage
- 🚗 Garage (distributed self-hosted)
- 🖥️ Self-hosted S3-compatible storage
- And any other S3-compatible provider!
- Install via HACS or manually (see Installation)
- Restart Home Assistant
- Settings → Devices & Services → Add Integration
- Search for "BAUERGROUP - S3 Compatible Backup"
- Enter your credentials:
- Access Key ID
- Secret Access Key
- Bucket Name (must already exist!)
- Endpoint URL
- Region
- Settings → System → Backups → Select your S3 storage as backup location
- 📦 Full backup support - Upload, download, list, and delete backups
- 🔄 Multipart upload - Efficient handling of large backups (>20MB)
- 🌍 Region support - Configure any region for your S3 endpoint
- 🔗 Custom endpoints - Works with any S3-compatible API
- 🔐 Secure - Uses access credentials (Access Key ID + Secret Access Key)
- 🚀 Async - Non-blocking operations using aiobotocore
- 💾 Caching - Efficient backup listing with 5-minute cache
- 🔑 Re-Authentication - Automatic prompt when credentials expire
- ⚙️ Reconfigure - Change settings without removing the integration
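The 5-minute listing cache mentioned in the feature list works roughly like the sketch below (illustrative only, not the integration's actual code; `fake_fetch` stands in for the real S3 listing call):

```python
import time

CACHE_TTL = 300  # seconds (5 minutes), matching the feature list above

class CachedBackupList:
    """Cache a backup listing so repeated UI refreshes don't hit S3."""

    def __init__(self, fetch):
        self._fetch = fetch      # callable that lists backups from S3
        self._cached = None
        self._fetched_at = None  # monotonic timestamp of last fetch

    def get(self):
        now = time.monotonic()
        if self._cached is None or now - self._fetched_at > CACHE_TTL:
            self._cached = self._fetch()
            self._fetched_at = now
        return self._cached

calls = []
def fake_fetch():
    calls.append(1)
    return ["Home_Assistant_2025-12-02.tar"]

cache = CachedBackupList(fake_fetch)
cache.get()
cache.get()        # served from cache; fake_fetch is not called again
print(len(calls))  # → 1
```

This is why a freshly uploaded backup can take up to five minutes to appear in the list (see Troubleshooting below).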
- Open HACS in Home Assistant
- Go to Integrations
- Click the ⋮ menu → Custom repositories
- Add repository:
  - URL: https://github.com/bauer-group/IP-HomeassistantS3CompatibleBackup
  - Category: Integration
- Click Install
- Restart Home Assistant
```sh
cd /config/custom_components
git clone https://github.com/bauer-group/IP-HomeassistantS3CompatibleBackup.git bauergroup_s3compatiblebackup
```

Then restart Home Assistant.
- Settings → Devices & Services → Add Integration
- Search for "S3 Compatible Backup"
- Configure:
| Field | Description | Example |
|---|---|---|
| Access Key ID | Your S3 access key | AKIAIOSFODNN7EXAMPLE |
| Secret Access Key | Your S3 secret key | wJalrXUtnFEMI/K7MDENG/... |
| Bucket Name | Target bucket (must exist) | my-ha-backups |
| Endpoint URL | S3-compatible endpoint | https://s3.eu-central-1.amazonaws.com |
| Region | Storage region | eu-central-1 |
| Storage Prefix | Root folder for backups (optional) | homeassistant |
Once configured, the integration automatically appears as a backup location in Home Assistant:
- Go to Settings → System → Backups
- Create a new backup
- Select your S3 storage as the backup location
- Your backup will be uploaded to your S3 bucket
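As noted in the feature list, backups larger than 20 MB go through a multipart upload rather than a single PUT. A minimal sketch of that decision (illustrative; the function name and exact threshold handling are assumptions, not the integration's code):

```python
MULTIPART_THRESHOLD = 20 * 1024 * 1024  # 20 MB, per the feature list above

def choose_upload_strategy(size_bytes: int) -> str:
    """Pick which upload path a backup of this size would take."""
    return "multipart" if size_bytes > MULTIPART_THRESHOLD else "single_put"

print(choose_upload_strategy(5 * 1024 * 1024))    # small backup → single_put
print(choose_upload_strategy(500 * 1024 * 1024))  # large full backup → multipart
```

Multipart uploads split large archives into parts, which is why the required permissions below include `s3:AbortMultipartUpload` and `s3:ListMultipartUploadParts`.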
Backups are organized in a folder structure within your bucket:
```
my-bucket/
└── homeassistant/                          # Storage Prefix (configurable)
    └── backups/                            # Fixed subfolder
        ├── Home_Assistant_2025-12-02.tar
        ├── Home_Assistant_2025-12-02.metadata.json
        ├── Home_Assistant_2025-12-01.tar
        ├── Home_Assistant_2025-12-01.metadata.json
        └── ...
```
Each backup consists of two files:
- `{backup-name}.tar` - The actual backup archive
- `{backup-name}.metadata.json` - Backup metadata for Home Assistant
The Storage Prefix allows you to:
- Keep backups separate from other data in the same bucket
- Run multiple Home Assistant instances with different prefixes
- Share a bucket across different applications
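With this layout, every object key is simply `{prefix}/backups/{filename}`. A quick sketch of how those keys compose (illustrative; not the integration's actual code):

```python
def backup_keys(prefix: str, backup_name: str) -> tuple:
    """Build the object keys for a backup archive and its metadata file."""
    base = f"{prefix}/backups/{backup_name}" if prefix else f"backups/{backup_name}"
    return f"{base}.tar", f"{base}.metadata.json"

tar_key, meta_key = backup_keys("homeassistant", "Home_Assistant_2025-12-02")
print(tar_key)   # → homeassistant/backups/Home_Assistant_2025-12-02.tar
print(meta_key)  # → homeassistant/backups/Home_Assistant_2025-12-02.metadata.json
```

Two instances with prefixes `ha-main` and `ha-test` would therefore never collide, even in the same bucket.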
If you need to change your S3 configuration (bucket, endpoint, credentials, etc.):
- Go to Settings → Devices & Services
- Find BAUERGROUP - S3 Compatible Backup
- Click the ⋮ menu → Reconfigure
- Update your settings and save
If your credentials expire or become invalid, Home Assistant will automatically prompt you to re-authenticate:
- You'll see a notification that authentication failed
- Click the notification or go to Settings → Devices & Services
- Click Reconfigure on the integration
- Enter your new Access Key ID and Secret Access Key
- Go to AWS S3 Console
- Click Create bucket
- Enter a unique bucket name (e.g., `my-homeassistant-backups`)
- Select your preferred region (e.g., `eu-central-1`)
- Keep Block all public access enabled (recommended)
- Click Create bucket
- Go to IAM Console
- Click Users → Create user
- Enter a username (e.g., `homeassistant-backup`)
- Click Next
- Select Attach policies directly
- Click Create policy and use the JSON below
- Attach the created policy to the user
- Click Create user
- Go to the user → Security credentials → Create access key
- Select Application running outside AWS
- Save the Access Key ID and Secret Access Key
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "HomeAssistantBackupPermissions",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR-BUCKET-NAME",
        "arn:aws:s3:::YOUR-BUCKET-NAME/*"
      ]
    }
  ]
}
```

Note: Replace `YOUR-BUCKET-NAME` with your actual bucket name.
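If you prefer not to hand-edit the placeholder, the policy can also be rendered with the bucket name substituted programmatically (a small standard-library sketch; the helper name is illustrative):

```python
import json

def make_backup_policy(bucket: str) -> str:
    """Render the minimal IAM backup policy above for a concrete bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "HomeAssistantBackupPermissions",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject", "s3:GetObject", "s3:DeleteObject",
                "s3:ListBucket", "s3:AbortMultipartUpload",
                "s3:ListMultipartUploadParts",
            ],
            "Resource": [
                f"arn:aws:s3:::{bucket}",       # bucket-level (ListBucket)
                f"arn:aws:s3:::{bucket}/*",     # object-level actions
            ],
        }],
    }
    return json.dumps(policy, indent=2)

print(make_backup_policy("my-homeassistant-backups"))
```

Note that `s3:ListBucket` applies to the bucket ARN itself, while the object actions apply to `.../*` — both resource entries are required.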
| Field | Value |
|---|---|
| Access Key ID | AKIA... (from IAM) |
| Secret Access Key | ... (from IAM) |
| Bucket Name | my-homeassistant-backups |
| Endpoint URL | https://s3.eu-central-1.amazonaws.com |
| Region | eu-central-1 |
AWS S3 Endpoint URLs by Region:
| Region | Endpoint URL |
|---|---|
| US East (N. Virginia) | https://s3.us-east-1.amazonaws.com |
| US West (Oregon) | https://s3.us-west-2.amazonaws.com |
| EU (Frankfurt) | https://s3.eu-central-1.amazonaws.com |
| EU (Ireland) | https://s3.eu-west-1.amazonaws.com |
| Asia Pacific (Tokyo) | https://s3.ap-northeast-1.amazonaws.com |
| Asia Pacific (Sydney) | https://s3.ap-southeast-2.amazonaws.com |
Full list: AWS S3 Endpoints
MinIO is a high-performance, S3-compatible object storage that you can self-host.
Docker:
```sh
docker run -d \
  --name minio \
  -p 9000:9000 \
  -p 9001:9001 \
  -v /path/to/data:/data \
  -e "MINIO_ROOT_USER=minioadmin" \
  -e "MINIO_ROOT_PASSWORD=minioadmin" \
  minio/minio server /data --console-address ":9001"
```

Docker Compose:
```yaml
version: '3.8'
services:
  minio:
    image: minio/minio
    container_name: minio
    ports:
      - "9000:9000"
      - "9001:9001"
    volumes:
      - ./minio-data:/data
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    command: server /data --console-address ":9001"
```

- Open MinIO Console at `http://your-server:9001`
- Login with root credentials
- Go to Buckets → Create Bucket
- Enter bucket name (e.g., `homeassistant-backups`)
- Go to Access Keys → Create access key
- Save the Access Key and Secret Key
| Field | Value |
|---|---|
| Access Key ID | your-access-key |
| Secret Access Key | your-secret-key |
| Bucket Name | homeassistant-backups |
| Endpoint URL | http://your-minio-server:9000 |
| Region | us-east-1 (default for MinIO) |
Tip: For HTTPS, configure MinIO with TLS certificates or use a reverse proxy.
Wasabi offers hot cloud storage with no egress fees.
- Login to Wasabi Console
- Click Create Bucket
- Enter bucket name and select region
- Click Create Bucket
- Go to Access Keys → Create New Access Key
- Download or copy the credentials
| Field | Value |
|---|---|
| Access Key ID | your-wasabi-access-key |
| Secret Access Key | your-wasabi-secret-key |
| Bucket Name | your-bucket-name |
| Endpoint URL | https://s3.eu-central-1.wasabisys.com |
| Region | eu-central-1 |
Wasabi Endpoint URLs by Region:
| Region | Endpoint URL |
|---|---|
| US East 1 (N. Virginia) | https://s3.us-east-1.wasabisys.com |
| US East 2 (N. Virginia) | https://s3.us-east-2.wasabisys.com |
| US Central 1 (Texas) | https://s3.us-central-1.wasabisys.com |
| US West 1 (Oregon) | https://s3.us-west-1.wasabisys.com |
| EU Central 1 (Amsterdam) | https://s3.eu-central-1.wasabisys.com |
| EU Central 2 (Frankfurt) | https://s3.eu-central-2.wasabisys.com |
| EU West 1 (London) | https://s3.eu-west-1.wasabisys.com |
| EU West 2 (Paris) | https://s3.eu-west-2.wasabisys.com |
| AP Northeast 1 (Tokyo) | https://s3.ap-northeast-1.wasabisys.com |
| AP Northeast 2 (Osaka) | https://s3.ap-northeast-2.wasabisys.com |
| AP Southeast 1 (Singapore) | https://s3.ap-southeast-1.wasabisys.com |
| AP Southeast 2 (Sydney) | https://s3.ap-southeast-2.wasabisys.com |
Backblaze B2 is an affordable cloud storage solution with S3-compatible API.
- Login to Backblaze B2
- Click Create a Bucket
- Enter a unique bucket name
- Set Files in Bucket are: to Private
- Click Create a Bucket
- Note the Endpoint shown (e.g., `s3.us-west-004.backblazeb2.com`)
- Go to App Keys → Add a New Application Key
- Enter a name (e.g., `homeassistant-backup`)
- Select Allow access to Bucket(s): your bucket
- Enable these capabilities:
  - `listBuckets`
  - `listFiles`
  - `readFiles`
  - `writeFiles`
  - `deleteFiles`
- Click Create New Key
- Important: Copy the `applicationKey` immediately (shown only once!)
- Note the `keyID` (this is your Access Key ID)
| Field | Value |
|---|---|
| Access Key ID | keyID from Application Key |
| Secret Access Key | applicationKey from Application Key |
| Bucket Name | your-bucket-name |
| Endpoint URL | https://s3.us-west-004.backblazeb2.com |
| Region | us-west-004 |
Important: The region must match the endpoint. Extract it from the endpoint URL (e.g., `s3.us-west-004.backblazeb2.com` → region is `us-west-004`).
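That extraction is mechanical enough to script. A small standard-library sketch (the helper name is illustrative) that derives the region from a B2 endpoint URL:

```python
from urllib.parse import urlparse

def b2_region_from_endpoint(endpoint_url: str) -> str:
    """Extract the region from a Backblaze B2 S3 endpoint URL."""
    host = urlparse(endpoint_url).hostname  # e.g. s3.us-west-004.backblazeb2.com
    if host is None or not host.startswith("s3.") or not host.endswith(".backblazeb2.com"):
        raise ValueError(f"not a B2 S3 endpoint: {endpoint_url}")
    # Strip the "s3." prefix and ".backblazeb2.com" suffix; the rest is the region.
    return host[len("s3."):-len(".backblazeb2.com")]

print(b2_region_from_endpoint("https://s3.us-west-004.backblazeb2.com"))  # → us-west-004
```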
DigitalOcean Spaces is an S3-compatible object storage service.
- Login to DigitalOcean
- Go to Spaces Object Storage → Create a Space
- Select a datacenter region
- Choose Restrict File Listing (recommended)
- Enter a unique name
- Click Create a Space
- Go to API → Spaces Keys → Generate New Key
- Enter a name and click Generate Key
- Copy the Key and Secret
| Field | Value |
|---|---|
| Access Key ID | DO... (your Spaces key) |
| Secret Access Key | ... (your Spaces secret) |
| Bucket Name | your-space-name |
| Endpoint URL | https://fra1.digitaloceanspaces.com |
| Region | fra1 |
DigitalOcean Spaces Endpoints:
| Region | Endpoint URL |
|---|---|
| New York (NYC3) | https://nyc3.digitaloceanspaces.com |
| San Francisco (SFO3) | https://sfo3.digitaloceanspaces.com |
| Amsterdam (AMS3) | https://ams3.digitaloceanspaces.com |
| Singapore (SGP1) | https://sgp1.digitaloceanspaces.com |
| Frankfurt (FRA1) | https://fra1.digitaloceanspaces.com |
| Sydney (SYD1) | https://syd1.digitaloceanspaces.com |
Cloudflare R2 is an S3-compatible object storage with zero egress fees.
- Login to Cloudflare Dashboard
- Go to R2 Object Storage → Create bucket
- Enter a bucket name
- Click Create bucket
- Go to R2 Object Storage → Manage R2 API Tokens
- Click Create API token
- Select permissions:
- Object Read & Write for your bucket
- Click Create API Token
- Copy the Access Key ID and Secret Access Key
- Find your Account ID in the Cloudflare dashboard URL or R2 overview page
| Field | Value |
|---|---|
| Access Key ID | your-r2-access-key-id |
| Secret Access Key | your-r2-secret-access-key |
| Bucket Name | your-bucket-name |
| Endpoint URL | https://<ACCOUNT_ID>.r2.cloudflarestorage.com |
| Region | auto |
Note: Replace `<ACCOUNT_ID>` with your Cloudflare account ID.
Garage is a lightweight, self-hosted, geo-distributed storage system with S3-compatible API. It's designed for self-hosting and small-scale deployments.
Docker:
```sh
docker run -d \
  --name garage \
  -p 3900:3900 \
  -p 3901:3901 \
  -p 3902:3902 \
  -v /path/to/garage/data:/var/lib/garage/data \
  -v /path/to/garage/meta:/var/lib/garage/meta \
  -v /path/to/garage.toml:/etc/garage.toml \
  dxflrs/garage:latest
```

Docker Compose:
```yaml
version: '3.8'
services:
  garage:
    image: dxflrs/garage:latest
    container_name: garage
    ports:
      - "3900:3900"  # S3 API
      - "3901:3901"  # RPC
      - "3902:3902"  # Web/Admin
    volumes:
      - ./garage/data:/var/lib/garage/data
      - ./garage/meta:/var/lib/garage/meta
      - ./garage.toml:/etc/garage.toml
```

```sh
# Create a bucket
garage bucket create homeassistant-backups

# Create an API key
garage key create homeassistant-backup-key

# Grant permissions to the bucket
garage bucket allow homeassistant-backups --read --write --key homeassistant-backup-key
```

| Field | Value |
|---|---|
| Access Key ID | GK... (your Garage key ID) |
| Secret Access Key | ... (your Garage secret key) |
| Bucket Name | homeassistant-backups |
| Endpoint URL | http://your-garage-server:3900 |
| Region | garage |
Important: The region must be set to `garage` (or whatever region is configured in your `garage.toml`). Using a different region (like `us-east-1`) will result in an "invalid_credentials" error with the message: `Authorization header malformed, unexpected scope`.

If you see errors like `unexpected scope: 20251215/us-east-1/s3/aws4_request`, change the region from `us-east-1` to `garage`.
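The scope in that error follows the AWS Signature V4 format `date/region/service/aws4_request`, so the region your client actually signed with can be read straight out of the message. A tiny illustrative helper (not part of the integration):

```python
import re

def region_from_scope_error(message: str):
    """Pull the region the client signed with out of an 'unexpected scope' error."""
    # SigV4 scope format: <YYYYMMDD>/<region>/s3/aws4_request
    match = re.search(r"\b\d{8}/([a-z0-9-]+)/s3/aws4_request", message)
    return match.group(1) if match else None

print(region_from_scope_error(
    "unexpected scope: 20251215/us-east-1/s3/aws4_request"))  # → us-east-1
```

If this prints anything other than the region configured in your `garage.toml`, update the integration's Region field to match.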
Synology C2 offers S3-compatible cloud storage.
- Login to Synology C2 Object Storage
- Create a new bucket
- Note the endpoint URL for your region
- Go to your account settings
- Create new access credentials
- Save the Access Key and Secret Key
| Field | Value |
|---|---|
| Access Key ID | your-c2-access-key |
| Secret Access Key | your-c2-secret-key |
| Bucket Name | your-bucket-name |
| Endpoint URL | https://eu-002.s3.synologyc2.net |
| Region | eu-002 |
Synology C2 Endpoints:
| Region | Endpoint URL |
|---|---|
| Europe | https://eu-002.s3.synologyc2.net |
| North America | https://us-001.s3.synologyc2.net |
| Taiwan | https://tw-001.s3.synologyc2.net |
For all providers, your access credentials need these permissions:
| Permission | Purpose |
|---|---|
| `s3:PutObject` | Upload backup files |
| `s3:GetObject` | Download/restore backups |
| `s3:DeleteObject` | Delete old backups |
| `s3:ListBucket` | List available backups |
| `s3:AbortMultipartUpload` | Cancel failed uploads |
| `s3:ListMultipartUploadParts` | Resume multipart uploads |
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "HomeAssistantBackupPermissions",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::YOUR-BUCKET-NAME",
        "arn:aws:s3:::YOUR-BUCKET-NAME/*"
      ]
    }
  ]
}
```

Security Best Practice: Always use the minimum required permissions. Create a dedicated user/key for Home Assistant backups.
- Verify your Access Key ID and Secret Access Key are correct
- Check that the credentials have the required permissions
- Ensure the credentials are not expired or revoked
- Verify the Endpoint URL is correct and accessible
- Check your network connection and firewall rules
- For self-hosted solutions (MinIO), ensure the server is running
- Bucket names must be lowercase
- Bucket names must be between 3-63 characters
- Bucket names can only contain letters, numbers, and hyphens
- The bucket must already exist (this integration does not create buckets)
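The naming rules above can be checked with a short regex before you try to configure the integration (a simplified check covering only the rules listed here; AWS enforces a few additional edge cases, e.g. names must not look like IP addresses):

```python
import re

# First and last characters must be a letter or digit; 3-63 chars total;
# lowercase letters, digits, and hyphens only.
BUCKET_NAME_RE = re.compile(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]")

def is_valid_bucket_name(name: str) -> bool:
    return bool(BUCKET_NAME_RE.fullmatch(name))

print(is_valid_bucket_name("my-ha-backups"))  # → True
print(is_valid_bucket_name("My_Backups"))     # → False (uppercase, underscore)
print(is_valid_bucket_name("ab"))             # → False (shorter than 3 chars)
```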
- Ensure the URL starts with `http://` or `https://`
- Check for typos in the endpoint URL
- Verify the region matches the endpoint
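The first two checks in that list are easy to automate. A minimal standard-library sketch (the function name is illustrative) that flags the most common endpoint-URL mistakes:

```python
from urllib.parse import urlparse

def check_endpoint_url(url: str) -> list:
    """Return a list of problems with an S3 endpoint URL (empty list = looks OK)."""
    problems = []
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        problems.append("URL must start with http:// or https://")
    if not parsed.hostname:
        problems.append("URL has no hostname")
    return problems

print(check_endpoint_url("https://s3.eu-central-1.amazonaws.com"))  # → []
# A bare hostname without a scheme fails both checks:
print(check_endpoint_url("s3.eu-central-1.amazonaws.com"))
```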
- Wait up to 5 minutes (backup list is cached)
- Check the Home Assistant logs for errors
- Verify the bucket contains `.metadata.json` files
Add to `configuration.yaml`:

```yaml
logger:
  default: info
  logs:
    custom_components.bauergroup_s3compatiblebackup: debug
    aiobotocore: debug
    botocore: debug
```

You can test your S3 connection using the AWS CLI:
```sh
# Install AWS CLI
pip install awscli

# Configure credentials
aws configure --profile homeassistant
# Enter your Access Key ID, Secret Access Key, and region

# Test listing the bucket
aws s3 ls s3://your-bucket-name --profile homeassistant --endpoint-url https://your-endpoint-url

# Test uploading a file
echo "test" > test.txt
aws s3 cp test.txt s3://your-bucket-name/test.txt --profile homeassistant --endpoint-url https://your-endpoint-url

# Test downloading a file
aws s3 cp s3://your-bucket-name/test.txt test-download.txt --profile homeassistant --endpoint-url https://your-endpoint-url

# Clean up
aws s3 rm s3://your-bucket-name/test.txt --profile homeassistant --endpoint-url https://your-endpoint-url
rm test.txt test-download.txt
```

| Provider | Free Tier | Egress Fees | Min. Storage Fee | S3 Compatible |
|---|---|---|---|---|
| AWS S3 | 5 GB (12 months) | Yes | No | Native |
| Backblaze B2 | 10 GB | Free up to 3x storage | No | Yes |
| Wasabi | No | No | Yes (1 TB min) | Yes |
| Cloudflare R2 | 10 GB | No | No | Yes |
| DigitalOcean Spaces | No | Yes | $5/month | Yes |
| MinIO | Self-hosted | N/A | N/A | Yes |
| Garage | Self-hosted | N/A | N/A | Yes |
MIT License - see LICENSE file for details.
Contributions are welcome! Please open an issue or pull request.