
Commit 9650e4b

DOC-427 (#118)

* Added SeaweedFS
* Replaced MinIO example with SeaweedFS example
* Removed one minio reference
* Added S3 client information
* Modified the admonition block
* Updated admonition block
* Adjusted scripts for SeaweedFS

1 parent 4c228d4

8 files changed
Lines changed: 108 additions & 105 deletions

File tree

docs/includes/hot-backup-file-storage-minio.md

Lines changed: 0 additions & 30 deletions
This file was deleted.

docs/includes/hot-backup-file-storage-seaweedfs.md

Lines changed: 29 additions & 0 deletions
```bash
#!/bin/bash

# TheHive attachment variables
SEAWEEDFS_ARCHIVE_PATH=/mnt/backup/seaweedfs

# SeaweedFS variables
SEAWEEDFS_BUCKET="thehive"
SEAWEEDFS_ALIAS=th_seaweedfs
SEAWEEDFS_SNAPSHOT_NAME="seaweedfs_$(date +%Y%m%d_%Hh%Mm%Ss)"

# Check if SeaweedFS is accessible
if ! mcli ls "${SEAWEEDFS_ALIAS}" > /dev/null 2>&1; then
    echo "Error: Cannot connect to SeaweedFS server"
    exit 1
fi

# Mirror the SeaweedFS bucket content to a local backup folder
mcli mirror "${SEAWEEDFS_ALIAS}/${SEAWEEDFS_BUCKET}" "${SEAWEEDFS_ARCHIVE_PATH}/${SEAWEEDFS_SNAPSHOT_NAME}"

# Archive the mirrored snapshot folder
tar cvf "${SEAWEEDFS_ARCHIVE_PATH}/${SEAWEEDFS_SNAPSHOT_NAME}.tar" -C "${SEAWEEDFS_ARCHIVE_PATH}" "${SEAWEEDFS_SNAPSHOT_NAME}"

# Display the location of the backup
echo ""
echo "TheHive attachment files backup done! Keep the following backup archive safe:"
echo "${SEAWEEDFS_ARCHIVE_PATH}/${SEAWEEDFS_SNAPSHOT_NAME}.tar"

# Remove the unarchived snapshot folder
rm -rf "${SEAWEEDFS_ARCHIVE_PATH:?}/${SEAWEEDFS_SNAPSHOT_NAME}"
```
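As a quick illustration of the naming scheme the script uses (illustrative only, nothing here touches SeaweedFS):

```shell
# Reproduce the snapshot naming used by the backup script:
# seaweedfs_<YYYYmmdd>_<HH>h<MM>m<SS>s
SEAWEEDFS_SNAPSHOT_NAME="seaweedfs_$(date +%Y%m%d_%Hh%Mm%Ss)"
echo "${SEAWEEDFS_SNAPSHOT_NAME}"
```

Because the date fields are zero-padded and ordered from most to least significant, these names sort lexicographically in chronological order, which is what lets the restore script pick the last match as the latest backup.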

docs/includes/hot-restore-file-storage-minio.md

Lines changed: 0 additions & 42 deletions
This file was deleted.

docs/includes/hot-restore-file-storage-seaweedfs.md

Lines changed: 47 additions & 0 deletions
```bash
#!/bin/bash

# TheHive attachment variables
SEAWEEDFS_ARCHIVE_PATH=/mnt/backup/seaweedfs

# SeaweedFS variables
SEAWEEDFS_BUCKET="thehive"
SEAWEEDFS_ALIAS=th_seaweedfs

# Check if SeaweedFS is accessible
if ! mcli ls "${SEAWEEDFS_ALIAS}" > /dev/null 2>&1; then
    echo "Error: Cannot connect to SeaweedFS server"
    exit 1
fi

# Look for the latest backup snapshot in the local archive folder.
# nullglob keeps the array empty when nothing matches, instead of
# leaving the literal pattern in it.
shopt -s nullglob
SEAWEEDFS_BACKUP_LIST=("${SEAWEEDFS_ARCHIVE_PATH}"/seaweedfs_????????_??h??m??s.tar)
if [ ${#SEAWEEDFS_BACKUP_LIST[@]} -eq 0 ]; then
    echo "Error: No backup snapshots found in ${SEAWEEDFS_ARCHIVE_PATH}"
    exit 1
fi
SEAWEEDFS_LATEST_BACKUP_NAME=$(basename "${SEAWEEDFS_BACKUP_LIST[-1]}")

echo "Latest attachment files backup snapshot found is ${SEAWEEDFS_ARCHIVE_PATH}/${SEAWEEDFS_LATEST_BACKUP_NAME}"

# Extract the archive
tar xvf "${SEAWEEDFS_ARCHIVE_PATH}/${SEAWEEDFS_LATEST_BACKUP_NAME}" -C "${SEAWEEDFS_ARCHIVE_PATH}" > /dev/null

if [ ! -d "${SEAWEEDFS_ARCHIVE_PATH}/${SEAWEEDFS_LATEST_BACKUP_NAME%.tar}" ]; then
    echo "Error: Extracted folder not found"
    exit 1
fi

echo "Latest SeaweedFS backup archive extracted in ${SEAWEEDFS_ARCHIVE_PATH}/${SEAWEEDFS_LATEST_BACKUP_NAME%.tar}"

# Restore attachments to SeaweedFS
echo "Restoring attachments from SeaweedFS snapshot ${SEAWEEDFS_LATEST_BACKUP_NAME}..."
mcli mirror "${SEAWEEDFS_ARCHIVE_PATH}/${SEAWEEDFS_LATEST_BACKUP_NAME%.tar}" "${SEAWEEDFS_ALIAS}/${SEAWEEDFS_BUCKET}/"

# Clean up extracted folder
rm -rf "${SEAWEEDFS_ARCHIVE_PATH:?}/${SEAWEEDFS_LATEST_BACKUP_NAME%.tar}"

# Display completion message
echo ""
echo "Attachment files data restoration done!"
echo "Restored from: ${SEAWEEDFS_ARCHIVE_PATH}/${SEAWEEDFS_LATEST_BACKUP_NAME}"
```
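The script derives both the "latest archive" and the extracted folder name from the archive file names. The two mechanisms, shown in isolation with fabricated snapshot names in a throwaway folder:

```shell
# Glob expansion returns matches sorted lexicographically, so the last
# array element is the newest snapshot (the timestamped names sort
# chronologically). Demo with two fabricated archives:
mkdir -p /tmp/seaweedfs_demo
touch /tmp/seaweedfs_demo/seaweedfs_20240101_09h30m00s.tar
touch /tmp/seaweedfs_demo/seaweedfs_20241231_23h59m59s.tar

BACKUP_LIST=(/tmp/seaweedfs_demo/seaweedfs_????????_??h??m??s.tar)
LATEST=$(basename "${BACKUP_LIST[-1]}")
echo "${LATEST}"            # the 20241231 archive

# "${VAR%.tar}" strips the suffix, giving the extracted folder name
echo "${LATEST%.tar}"

rm -rf /tmp/seaweedfs_demo
```

Note that a bare glob with no matches expands to the literal pattern, so a production script should guard against an empty backup folder (for example with `shopt -s nullglob` and a length check) before trusting the last array element.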
docs/includes/s3-client-required.md

Lines changed: 2 additions & 0 deletions

!!! warning "Required S3 client"
    You need an S3-compatible client to sync or mirror the bucket locally. The example uses `mc`. [Install it](https://github.com/minio/mc){target=_blank} before continuing. If you prefer another tool, adapt the commands accordingly.

docs/thehive/installation/deploying-a-cluster.md

Lines changed: 5 additions & 12 deletions
````diff
@@ -418,16 +418,9 @@ To set up shared file storage for TheHive in a clustered environment, several op
 
 === "S3-compatible object storage"
 
-    TheHive can store files in object storage that implements the Amazon S3 API. This includes [Amazon S3](https://aws.amazon.com/s3/){target=_blank} itself, as well as many S3-compatible services, whether managed or self-hosted.
+    TheHive can store files in object storage that implements the Amazon S3 API. This includes [Amazon S3](https://aws.amazon.com/s3/){target=_blank} itself, as well as many S3-compatible services, whether managed by a cloud provider or self-hosted.
 
-    Commonly used S3-compatible options include:
-
-    * [Cloudflare R2](https://developers.cloudflare.com/r2/){target=_blank}
-    * [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces){target=_blank}
-    * [Wasabi](https://wasabi.com/){target=_blank}
-    * [Backblaze B2](https://www.backblaze.com/cloud-storage){target=_blank}
-    * [MinIO](https://www.min.io/){target=_blank}
-    * [Ceph Object Gateway](https://docs.ceph.com/en/reef/radosgw/){target=_blank}
+    Several object storage solutions are compatible with TheHive. For example, the [SeaweedFS](https://github.com/seaweedfs/seaweedfs){target=_blank} S3-compatible storage system has been tested and works well with TheHive. You can also use object storage provided by your cloud provider or any other service implementing the S3 API.
 
     !!! note "Endpoint and availability"
         From TheHive perspective, you configure a single S3 endpoint. If you self-host object storage, ensure the endpoint is highly available, for example via the storage platform itself or a correctly configured load balancer.
@@ -578,7 +571,7 @@ File storage contains [attachments](../user-guides/analyst-corner/cases/attachme
     * An existing bucket
     * An access key and secret key (or equivalent credentials for your storage service)
     * The S3-compatible endpoint URL
-    * The region configured for your S3 service (if applicable)
+    * The region configured for your S3 service (if it doesn't define one, use `us-east-1`)
 
     To enable S3 file storage in a TheHive cluster, configure the storage section in `/etc/thehive/application.conf` on each TheHive node, using the same bucket and endpoint settings.
 
@@ -603,8 +596,8 @@ File storage contains [attachments](../user-guides/analyst-corner/cases/attachme
     }
     ```
 
-    !!! note "Access style and endpoint"
-        Some S3-compatible providers require path-style access, while others support or prefer virtual-hosted style. If you encounter addressing issues, adjust `access-style` accordingly.
+    !!! note "Access style"
+        Some S3-compatible providers require path-style access, while others support or prefer virtual-hosted style. SeaweedFS requires path-style access when used with TheHive.
 
     !!! note "High availability"
         Managed services expose a single highly available endpoint. For self-hosted S3-compatible platforms, ensure your endpoint is highly available, for example via the storage platform itself or a properly configured load balancer.
````
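The access-style distinction comes down to where the bucket name appears in request URLs. A sketch, using a hypothetical `s3.example.com` endpoint and the `thehive` bucket from these docs:

```shell
# Hypothetical endpoint and object key, for illustration only
ENDPOINT="s3.example.com"
BUCKET="thehive"
KEY="attachments/abc123"

# Path-style: the bucket is a path segment (what SeaweedFS expects)
PATH_STYLE="https://${ENDPOINT}/${BUCKET}/${KEY}"
# Virtual-hosted style: the bucket is a subdomain of the endpoint
VHOST_STYLE="https://${BUCKET}.${ENDPOINT}/${KEY}"

echo "path-style:           ${PATH_STYLE}"
echo "virtual-hosted style: ${VHOST_STYLE}"
```

Virtual-hosted style requires DNS that resolves `<bucket>.<endpoint>`, which self-hosted deployments often lack; that is one reason path-style remains common outside AWS.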

docs/thehive/operations/backup-restore/backup/hot-backup/hot-backup-cluster.md

Lines changed: 15 additions & 13 deletions
Original file line numberDiff line numberDiff line change
````diff
@@ -2,7 +2,7 @@
 
 In this tutorial, we're going to guide you through performing a hot backup of TheHive on a cluster using the provided scripts.
 
-By the end, you'll have created complete backups of your database and search index across all three nodes, plus your file storage
+By the end, you'll have created complete backups of your database and search index across all three nodes, plus your file storage.
 
 Hot backups let you protect your data while keeping TheHive running, which means zero downtime for your security operations team.
 
@@ -212,7 +212,7 @@ For more details about snapshot management, refer to the official [Cassandra doc
 
 Finally, we're going to back up TheHive file storage, which contains all the attachments and files.
 
-The backup procedure depends on your storage backend—either NFS or an S3-compatible object storage service. The script below uses MinIO as an example, but you can adapt the same approach to any S3-compatible implementation.
+The backup procedure depends on your storage backend—either NFS or an S3-compatible object storage service. The script below uses SeaweedFS as an example, but you can adapt the same approach to any S3-compatible implementation.
 
 === "NFS"
 
@@ -226,31 +226,33 @@ The backup procedure depends on your storage backend—either NFS or an S3-compa
 
     After running the script, the backup archive is available at `/mnt/backup/storage`. Be sure to copy this archive to a separate server or storage location to safeguard against data loss if the TheHive server fails.
 
-=== "S3-compatible object storage (MinIO example)"
+=== "S3-compatible object storage (SeaweedFS example)"
+
+    {% include-markdown "includes/s3-client-required.md" %}
 
     ### 1. Prepare the backup script
 
     Before running the script, you'll need to update several values to match your environment:
 
-    * Update `MINIO_ENDPOINT` with your MinIO server URL.
-    * Update `MINIO_ACCESS_KEY` with your MinIO access key.
-    * Update `MINIO_SECRET_KEY` with your MinIO secret key.
-    * Change `MINIO_BUCKET` if you want to use a different bucket name.
-    * Change `MINIO_ALIAS` if you want to use a different alias name.
+    * Update `SEAWEEDFS_ENDPOINT` with your SeaweedFS server URL.
+    * Update `SEAWEEDFS_ACCESS_KEY` with your SeaweedFS access key.
+    * Update `SEAWEEDFS_SECRET_KEY` with your SeaweedFS secret key.
+    * Change `SEAWEEDFS_BUCKET` if you want to use a different bucket name.
+    * Change `SEAWEEDFS_ALIAS` if you want to use a different alias name.
 
-    ### 2. Configure the MinIO alias
+    ### 2. Configure the SeaweedFS alias
 
-    Run this command once to configure the MinIO alias using the same values you defined in the script:
+    Run this command once to configure the SeaweedFS alias using the same values you defined in the script:
 
     ```bash
-    mcli alias set <minio_alias> <minio_endpoint> <minio_access_key> <minio_secret_key>
+    mcli alias set <th_seaweedfs> <seaweedfs_endpoint> <seaweedfs_access_key> <seaweedfs_secret_key>
     ```
 
     ### 3. Run the backup script
 
-    {% include-markdown "includes/hot-backup-file-storage-minio.md" %}
+    {% include-markdown "includes/hot-backup-file-storage-seaweedfs.md" %}
 
-    After running the script, the backup archive is available at `/mnt/backup/minio`. Be sure to copy this archive to a separate server or storage location to safeguard against data loss if the TheHive server fails.
+    After running the script, the backup archive is available at `/mnt/backup/seaweedfs`. Be sure to copy this archive to a separate server or storage location to safeguard against data loss if the TheHive server fails.
 
 You've completed the hot backup process for your TheHive cluster. We recommend verifying your backup archives are complete and accessible before relying on them for recovery.
````
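One simple way to act on the "verify your backup archives" recommendation is to list each archive and check tar's exit status. A self-contained sketch (it builds a throwaway archive for the demo rather than touching your real backup path):

```shell
# Build a tiny throwaway archive to demonstrate the check
mkdir -p /tmp/backup_check_demo/data
echo "sample" > /tmp/backup_check_demo/data/file.txt
tar cf /tmp/backup_check_demo/snapshot.tar -C /tmp/backup_check_demo data

# A readable, complete archive lists cleanly and exits 0;
# a truncated or corrupt one makes `tar tf` fail
if tar tf /tmp/backup_check_demo/snapshot.tar > /dev/null; then
    RESULT="archive OK"
else
    RESULT="archive CORRUPT"
fi
echo "${RESULT}"

rm -rf /tmp/backup_check_demo
```

Run the same `tar tf` check against each `seaweedfs_*.tar` archive after copying it off the host, so you catch transfer corruption before you ever need to restore.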

docs/thehive/operations/backup-restore/restore/hot-restore/restore-hot-backup-cluster.md

Lines changed: 10 additions & 8 deletions
Original file line numberDiff line numberDiff line change
````diff
@@ -66,7 +66,7 @@ For additional details, refer to the official [Cassandra documentation](https://
 
 Finally, we're going to restore the file attachments that were backed up.
 
-The restore procedure depends on your storage backend—either NFS or an S3-compatible object storage service. The script below uses MinIO as an example, but you can adapt the same approach to any S3-compatible implementation.
+The restore procedure depends on your storage backend—either NFS or an S3-compatible object storage service. The script below uses SeaweedFS as an example, but you can adapt the same approach to any S3-compatible implementation.
 
 === "NFS"
 
@@ -78,21 +78,23 @@ The restore procedure depends on your storage backend—either NFS or an S3-comp
 
     {% include-markdown "includes/hot-restore-file-storage-local-nfs.md" %}
 
-=== "S3-compatible object storage (MinIO example)"
+=== "S3-compatible object storage (SeaweedFS example)"
+
+    {% include-markdown "includes/s3-client-required.md" %}
 
     ### 1. Prepare the restore script
 
     Before running the script, you'll need to update several values to match your environment:
 
-    * Update `MINIO_ENDPOINT` with your MinIO server URL.
-    * Update `MINIO_ACCESS_KEY` with your MinIO access key.
-    * Update `MINIO_SECRET_KEY` with your MinIO secret key.
-    * Change `MINIO_BUCKET` if you want to use a different bucket name.
-    * Change `MINIO_ALIAS` if you want to use a different alias name.
+    * Update `SEAWEEDFS_ENDPOINT` with your SeaweedFS server URL.
+    * Update `SEAWEEDFS_ACCESS_KEY` with your SeaweedFS access key.
+    * Update `SEAWEEDFS_SECRET_KEY` with your SeaweedFS secret key.
+    * Change `SEAWEEDFS_BUCKET` if you want to use a different bucket name.
+    * Change `SEAWEEDFS_ALIAS` if you want to use a different alias name.
 
     ### 2. Run the restore script
 
-    {% include-markdown "includes/hot-restore-file-storage-minio.md" %}
+    {% include-markdown "includes/hot-restore-file-storage-seaweedfs.md" %}
 
 ## Step 4: Start TheHive on all nodes and verify
 
````
