Closed

28 commits
d79c0ed test(orchestrator): maximize coverage for network_apply (tis24dev, Feb 27, 2026)
6e37862 ci: bump the actions-updates group across 1 directory with 2 updates … (dependabot[bot], Mar 4, 2026)
e2ec610 backup/pve: clarify datastore skip logs (tis24dev, Mar 5, 2026)
fc0aed4 feat(webhook): optional Discord content fallback for embed-only messages (tis24dev, Mar 5, 2026)
55cad47 Revert "feat(webhook): optional Discord content fallback for embed-only messages" (tis24dev, Mar 6, 2026)
d46db6b Update go.mod (tis24dev, Mar 9, 2026)
c6bfbb1 Harden restore decisions against untrusted backup metadata (tis24dev, Mar 9, 2026)
1f45f1b Enforce staged backup checksum verification in restore/decrypt flows (tis24dev, Mar 9, 2026)
e5992cc fix(orchestrator): preserve staged network symlinks (tis24dev, Mar 9, 2026)
dbf61c7 fix(orchestrator): harden restore path validation against broken symlinks (tis24dev, Mar 9, 2026)
e612754 fix(backup): sanitize datastore and storage path segments (tis24dev, Mar 9, 2026)
036e573 fix(io): bound timed fs and input goroutine buildup (tis24dev, Mar 9, 2026)
74b5123 fix(orchestrator): preserve operational restore errors during path va… (tis24dev, Mar 9, 2026)
78dd8cb Separate PBS override scan roots from real datastore identities. (tis24dev, Mar 10, 2026)
9e0fe52 fix(orchestrator): resolve symlinked base dirs before validating rela… (tis24dev, Mar 10, 2026)
86958be fix(safefs): harden timeout test cleanup and limiter capture (tis24dev, Mar 10, 2026)
36a84d6 fix(input): preserve completed inflight reads across retries (tis24dev, Mar 10, 2026)
fbec12f fix(pbs): disambiguate datastore output keys across CLI and overrides (tis24dev, Mar 10, 2026)
d8a0392 test(input): assert inflight cleanup after completed retry consumption (tis24dev, Mar 10, 2026)
ae39ceb fix(safefs): return TimeoutError when deadline expires before run (tis24dev, Mar 10, 2026)
3539f19 fix(restore): validate compatibility before full restore fallback (tis24dev, Mar 10, 2026)
9f3d978 fix(orchestrator): bound restore metadata reads (tis24dev, Mar 10, 2026)
8cbef16 fix(orchestrator): validate staged network symlink targets (tis24dev, Mar 10, 2026)
5f910fe fix(orchestrator): validate symlink targets in firewall staged sync (tis24dev, Mar 10, 2026)
5f22466 fix(orchestrator): recognize full Proxmox system names (tis24dev, Mar 10, 2026)
7d29083 fix(orchestrator): validate raw backup checksums during discovery (tis24dev, Mar 10, 2026)
f86c94c test: add PBS datastore status filename collision regression coverage (tis24dev, Mar 10, 2026)
160a442 fix(pbs): always apply datastore path overrides without CLI discovery (tis24dev, Mar 10, 2026)
4 changes: 2 additions & 2 deletions .github/workflows/release.yml
@@ -70,7 +70,7 @@ jobs:
# GORELEASER
########################################
- name: Run GoReleaser
uses: goreleaser/goreleaser-action@v6
uses: goreleaser/goreleaser-action@v7
with:
version: latest
workdir: ${{ github.workspace }}
@@ -82,6 +82,6 @@ jobs:
# ATTESTAZIONE PROVENIENZA BUILD
########################################
- name: Attest Build Provenance
uses: actions/attest-build-provenance@v3
uses: actions/attest-build-provenance@v4
with:
subject-path: build/proxsave_*
14 changes: 8 additions & 6 deletions Makefile
@@ -1,6 +1,8 @@
.PHONY: build test clean run build-release test-coverage lint fmt deps help coverage coverage-check

COVERAGE_THRESHOLD ?= 50.0
TOOLCHAIN_FROM_MOD := $(shell awk '/^toolchain /{print $$2}' go.mod 2>/dev/null)
COVER_GOTOOLCHAIN := $(if $(TOOLCHAIN_FROM_MOD),$(TOOLCHAIN_FROM_MOD)+auto,auto)

# Build del progetto
build:
@@ -60,20 +62,20 @@ test:

# Test con coverage
test-coverage:
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out
GOTOOLCHAIN=$(COVER_GOTOOLCHAIN) go test -coverprofile=coverage.out ./...
GOTOOLCHAIN=$(COVER_GOTOOLCHAIN) go tool cover -html=coverage.out

# Full coverage report (all packages)
coverage:
@echo "Running coverage across all packages..."
@go test -coverpkg=./... -coverprofile=coverage.out ./...
@go tool cover -func=coverage.out | tail -n 1
@GOTOOLCHAIN=$(COVER_GOTOOLCHAIN) go test -coverpkg=./... -coverprofile=coverage.out ./...
@GOTOOLCHAIN=$(COVER_GOTOOLCHAIN) go tool cover -func=coverage.out | tail -n 1

# Enforce minimum coverage threshold
coverage-check:
@echo "Running coverage check (threshold $(COVERAGE_THRESHOLD)% )..."
@go test -coverpkg=./... -coverprofile=coverage.out ./...
@total=$$(go tool cover -func=coverage.out | grep total: | awk '{print $$3}' | sed 's/%//'); \
@GOTOOLCHAIN=$(COVER_GOTOOLCHAIN) go test -coverpkg=./... -coverprofile=coverage.out ./...
@total=$$(GOTOOLCHAIN=$(COVER_GOTOOLCHAIN) go tool cover -func=coverage.out | grep total: | awk '{print $$3}' | sed 's/%//'); \
echo "Total coverage: $$total%"; \
if awk -v total="$$total" -v threshold="$(COVERAGE_THRESHOLD)" 'BEGIN { exit !(total+0 >= threshold+0) }'; then \
echo "Coverage check passed."; \
2 changes: 2 additions & 0 deletions docs/CONFIGURATION.md
@@ -1053,6 +1053,8 @@ VZDUMP_CONFIG_PATH=/etc/vzdump.conf

# PBS datastore paths (comma/space separated)
PBS_DATASTORE_PATH= # e.g., "/mnt/pbs1,/mnt/pbs2"
# Extra filesystem scan roots for datastore/PXAR discovery; these do not create
# real PBS datastore definitions and may use path-derived output keys.

# System root override (testing/chroot)
SYSTEM_ROOT_PREFIX= # Optional alternate root for system collection. Empty or "/" = real root.
3 changes: 2 additions & 1 deletion docs/RESTORE_GUIDE.md
@@ -117,7 +117,7 @@ API apply is automatic for supported PBS staged categories; ProxSave may fall back
|----------|------|-------------|-------|
| `pbs_config` | PBS Config Export | **Export-only** copy of /etc/proxmox-backup (never written to system) | `./etc/proxmox-backup/` |
| `pbs_host` | PBS Host & Integrations | **Staged** node settings, ACME, proxy, metric servers and traffic control (API/file apply) | `./etc/proxmox-backup/node.cfg`<br>`./etc/proxmox-backup/proxy.cfg`<br>`./etc/proxmox-backup/acme/accounts.cfg`<br>`./etc/proxmox-backup/acme/plugins.cfg`<br>`./etc/proxmox-backup/metricserver.cfg`<br>`./etc/proxmox-backup/traffic-control.cfg`<br>`./var/lib/proxsave-info/commands/pbs/node_config.json`<br>`./var/lib/proxsave-info/commands/pbs/acme_accounts.json`<br>`./var/lib/proxsave-info/commands/pbs/acme_plugins.json`<br>`./var/lib/proxsave-info/commands/pbs/acme_account_*_info.json`<br>`./var/lib/proxsave-info/commands/pbs/acme_plugin_*_config.json`<br>`./var/lib/proxsave-info/commands/pbs/traffic_control.json` |
| `datastore_pbs` | PBS Datastore Configuration | **Staged** datastore definitions (incl. S3 endpoints) (API/file apply) | `./etc/proxmox-backup/datastore.cfg`<br>`./etc/proxmox-backup/s3.cfg`<br>`./var/lib/proxsave-info/commands/pbs/datastore_list.json`<br>`./var/lib/proxsave-info/commands/pbs/datastore_*_status.json`<br>`./var/lib/proxsave-info/commands/pbs/s3_endpoints.json`<br>`./var/lib/proxsave-info/commands/pbs/s3_endpoint_*_buckets.json`<br>`./var/lib/proxsave-info/commands/pbs/pbs_datastore_inventory.json` |
| `datastore_pbs` | PBS Datastore Configuration | **Staged** datastore definitions (incl. S3 endpoints) (API/file apply) | `./etc/proxmox-backup/datastore.cfg`<br>`./etc/proxmox-backup/s3.cfg`<br>`./var/lib/proxsave-info/commands/pbs/datastore_list.json`<br>`./var/lib/proxsave-info/commands/pbs/datastore_*_status.json`<br>`./var/lib/proxsave-info/commands/pbs/s3_endpoints.json`<br>`./var/lib/proxsave-info/commands/pbs/s3_endpoint_*_buckets.json`<br>`./var/lib/proxsave-info/commands/pbs/pbs_datastore_inventory.json`<br>Note: `PBS_DATASTORE_PATH` override scan roots are inventory context only and are not recreated as datastore definitions during restore. |
| `maintenance_pbs` | PBS Maintenance | Maintenance settings | `./etc/proxmox-backup/maintenance.cfg` |
| `pbs_jobs` | PBS Jobs | **Staged** sync/verify/prune jobs (API/file apply) | `./etc/proxmox-backup/sync.cfg`<br>`./etc/proxmox-backup/verification.cfg`<br>`./etc/proxmox-backup/prune.cfg`<br>`./var/lib/proxsave-info/commands/pbs/sync_jobs.json`<br>`./var/lib/proxsave-info/commands/pbs/verification_jobs.json`<br>`./var/lib/proxsave-info/commands/pbs/prune_jobs.json`<br>`./var/lib/proxsave-info/commands/pbs/gc_jobs.json` |
| `pbs_remotes` | PBS Remotes | **Staged** remotes for sync/verify (may include credentials) (API/file apply) | `./etc/proxmox-backup/remote.cfg`<br>`./var/lib/proxsave-info/commands/pbs/remote_list.json` |
Expand Down Expand Up @@ -2384,6 +2384,7 @@ systemctl restart proxmox-backup proxmox-backup-proxy
**Restore behavior**:
- ProxSave detects this condition during staged apply.
- If `var/lib/proxsave-info/commands/pbs/pbs_datastore_inventory.json` is available in the backup, ProxSave will use its embedded snapshot of the original `datastore.cfg` to recover a valid configuration.
- Inventory entries that came only from `PBS_DATASTORE_PATH` scan roots are treated as diagnostic context and are excluded from regenerated `datastore.cfg`.
- If recovery is not possible, ProxSave will **leave the existing** `/etc/proxmox-backup/datastore.cfg` unchanged to avoid breaking PBS.

**Manual diagnosis**:
2 changes: 1 addition & 1 deletion go.mod
@@ -2,7 +2,7 @@ module github.com/tis24dev/proxsave

go 1.25

toolchain go1.25.7
toolchain go1.25.8

require (
filippo.io/age v1.3.1
38 changes: 33 additions & 5 deletions internal/backup/checksum.go
@@ -35,6 +35,30 @@ type Manifest struct {
ClusterMode string `json:"cluster_mode,omitempty"`
}

// NormalizeChecksum validates and normalizes a SHA256 checksum string.
func NormalizeChecksum(value string) (string, error) {
checksum := strings.ToLower(strings.TrimSpace(value))
if checksum == "" {
return "", fmt.Errorf("checksum is empty")
}
if len(checksum) != sha256.Size*2 {
return "", fmt.Errorf("checksum must be %d hex characters, got %d", sha256.Size*2, len(checksum))
}
if _, err := hex.DecodeString(checksum); err != nil {
return "", fmt.Errorf("checksum is not valid hex: %w", err)
}
return checksum, nil
}

// ParseChecksumData extracts a SHA256 checksum from checksum file contents.
func ParseChecksumData(data []byte) (string, error) {
fields := strings.Fields(string(data))
if len(fields) == 0 {
return "", fmt.Errorf("checksum file is empty")
}
return NormalizeChecksum(fields[0])
}

// GenerateChecksum calculates SHA256 checksum of a file
func GenerateChecksum(ctx context.Context, logger *logging.Logger, filePath string) (string, error) {
logger.Debug("Generating SHA256 checksum for: %s", filePath)
@@ -105,16 +129,21 @@ func CreateManifest(ctx context.Context, logger *logging.Logger, manifest *Manif
func VerifyChecksum(ctx context.Context, logger *logging.Logger, filePath, expectedChecksum string) (bool, error) {
logger.Debug("Verifying checksum for: %s", filePath)

normalizedExpected, err := NormalizeChecksum(expectedChecksum)
if err != nil {
return false, fmt.Errorf("invalid expected checksum: %w", err)
}

actualChecksum, err := GenerateChecksum(ctx, logger, filePath)
if err != nil {
return false, fmt.Errorf("failed to generate checksum: %w", err)
}

matches := actualChecksum == expectedChecksum
matches := actualChecksum == normalizedExpected
if matches {
logger.Debug("Checksum verification passed")
} else {
logger.Warning("Checksum mismatch! Expected: %s, Got: %s", expectedChecksum, actualChecksum)
logger.Warning("Checksum mismatch! Expected: %s, Got: %s", normalizedExpected, actualChecksum)
}

return matches, nil
@@ -205,9 +234,8 @@ func parseLegacyMetadata(scanner *bufio.Scanner, legacy *Manifest) {
func loadLegacyChecksum(archivePath string, legacy *Manifest) {
// Attempt to load checksum from legacy .sha256 file
if shaData, err := os.ReadFile(archivePath + ".sha256"); err == nil {
fields := strings.Fields(string(shaData))
if len(fields) > 0 {
legacy.SHA256 = fields[0]
if checksum, parseErr := ParseChecksumData(shaData); parseErr == nil {
legacy.SHA256 = checksum
}
}
}
7 changes: 4 additions & 3 deletions internal/backup/checksum_legacy_test.go
@@ -28,7 +28,8 @@ func TestLoadLegacyManifestWithShaAndFallbackEncryption(t *testing.T) {
t.Fatalf("write metadata: %v", err)
}

shaLine := "deadbeef " + filepath.Base(archive) + "\n"
expectedSHA := strings.Repeat("a", 64)
shaLine := expectedSHA + " " + filepath.Base(archive) + "\n"
if err := os.WriteFile(archive+".sha256", []byte(shaLine), 0o640); err != nil {
t.Fatalf("write sha256: %v", err)
}
@@ -50,8 +51,8 @@
if m.EncryptionMode != "plain" {
t.Fatalf("expected fallback encryption mode plain, got %s", m.EncryptionMode)
}
if m.SHA256 != "deadbeef" {
t.Fatalf("expected sha256 deadbeef, got %s", m.SHA256)
if m.SHA256 != expectedSHA {
t.Fatalf("expected sha256 %s, got %s", expectedSHA, m.SHA256)
}
if time.Since(m.CreatedAt) > time.Minute {
t.Fatalf("unexpected CreatedAt too old: %v", m.CreatedAt)
5 changes: 3 additions & 2 deletions internal/backup/checksum_test.go
@@ -145,8 +145,9 @@ ENCRYPTION_MODE=age
if err := os.WriteFile(metadataPath, []byte(metadata), 0644); err != nil {
t.Fatalf("failed to write metadata: %v", err)
}
expectedSHA := "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
shaPath := archivePath + ".sha256"
if err := os.WriteFile(shaPath, []byte("deadbeef "+filepath.Base(archivePath)), 0644); err != nil {
if err := os.WriteFile(shaPath, []byte(expectedSHA+" "+filepath.Base(archivePath)), 0644); err != nil {
t.Fatalf("failed to write sha file: %v", err)
}

@@ -163,7 +164,7 @@
if manifest.Hostname != "legacy-host" || manifest.ScriptVersion != "legacy-1.0" {
t.Fatalf("legacy metadata not parsed correctly: %+v", manifest)
}
if manifest.SHA256 != "deadbeef" {
if manifest.SHA256 != expectedSHA {
t.Fatalf("expected SHA256 from sidecar, got %q", manifest.SHA256)
}
if manifest.EncryptionMode != "age" {
17 changes: 17 additions & 0 deletions internal/backup/collector.go
@@ -3,6 +3,8 @@ package backup
import (
"bytes"
"context"
"crypto/sha256"
"encoding/hex"
"errors"
"fmt"
"io"
@@ -1180,6 +1182,21 @@ func sanitizeFilename(name string) string {
return clean
}

func collectorPathKey(name string) string {
trimmed := strings.TrimSpace(name)
if trimmed == "" {
return "entry"
}

safe := sanitizeFilename(trimmed)
if safe == trimmed {
return safe
}

sum := sha256.Sum256([]byte(trimmed))
return fmt.Sprintf("%s_%s", safe, hex.EncodeToString(sum[:4]))
}

// GetStats returns current collection statistics
func (c *Collector) GetStats() *CollectionStats {
c.statsMu.Lock()
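The `collectorPathKey` addition in this file keeps filename-safe names untouched and appends a short SHA-256 suffix only when sanitization had to rewrite the input, so two distinct raw names cannot collapse to the same output filename. A standalone sketch of the idea; the `sanitize` helper below is a stand-in for the repo's `sanitizeFilename`, whose exact rules are not shown in this hunk:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"regexp"
	"strings"
)

// sanitize is an assumed placeholder: replace anything outside a safe
// filename alphabet with underscores.
var unsafeChars = regexp.MustCompile(`[^a-zA-Z0-9._-]`)

func sanitize(name string) string {
	return unsafeChars.ReplaceAllString(name, "_")
}

// pathKey mirrors collectorPathKey: names untouched by sanitization pass
// through; rewritten names get a 4-byte hash suffix of the original input
// so "a/b" and "a_b" yield different keys.
func pathKey(name string) string {
	trimmed := strings.TrimSpace(name)
	if trimmed == "" {
		return "entry"
	}
	safe := sanitize(trimmed)
	if safe == trimmed {
		return safe // already filename-safe: use directly
	}
	sum := sha256.Sum256([]byte(trimmed))
	return fmt.Sprintf("%s_%s", safe, hex.EncodeToString(sum[:4]))
}

func main() {
	fmt.Println(pathKey("store1"))                // safe name passes through unchanged
	fmt.Println(pathKey("a/b") != pathKey("a_b")) // true: hash suffix prevents collision
}
```

Hashing the original (pre-sanitization) string is the key design choice: the suffix disambiguates exactly the cases where sanitization is lossy, while common well-formed datastore names keep short, human-readable keys.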
27 changes: 26 additions & 1 deletion internal/backup/collector_helpers_extra_test.go
@@ -1,6 +1,9 @@
package backup

import "testing"
import (
"strings"
"testing"
)

func TestSummarizeCommandOutputText(t *testing.T) {
if got := summarizeCommandOutputText(""); got != "(no stdout/stderr)" {
@@ -38,3 +41,25 @@
}
}
}
}

func TestCollectorPathKey(t *testing.T) {
if got := collectorPathKey("store1"); got != "store1" {
t.Fatalf("collectorPathKey(store1)=%q want %q", got, "store1")
}

unsafe := "../evil"
got := collectorPathKey(unsafe)
if got == unsafe {
t.Fatalf("collectorPathKey(%q) should not keep unsafe value", unsafe)
}
if got == sanitizeFilename(unsafe) {
t.Fatalf("collectorPathKey(%q) should add a disambiguating suffix", unsafe)
}
if !strings.HasPrefix(got, "__evil") {
t.Fatalf("collectorPathKey(%q)=%q should start with sanitized prefix", unsafe, got)
}

if a, b := collectorPathKey("a/b"), collectorPathKey("a_b"); a == b {
t.Fatalf("collectorPathKey should avoid collisions: %q == %q", a, b)
}
}
19 changes: 17 additions & 2 deletions internal/backup/collector_pbs.go
@@ -347,6 +347,11 @@ func (c *Collector) collectPBSDirectories(ctx context.Context, root string) error

// collectPBSCommands collects output from PBS commands
func (c *Collector) collectPBSCommands(ctx context.Context, datastores []pbsDatastore) error {
if len(datastores) > 0 {
datastores = clonePBSDatastores(datastores)
assignUniquePBSDatastoreOutputKeys(datastores)
}
Comment on lines +350 to +353
⚠️ Potential issue | 🟠 Major

Use the assigned datastore output key for status snapshots.

assignUniquePBSDatastoreOutputKeys(datastores) is effectively ignored here because the status filename is still derived from ds.pathKey(). If two entries collapse to the same path key, the later datastore_*_status.json overwrites the earlier one, which drops datastore metadata and defeats the collision handling this PR adds.

Also applies to: 400-403

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/backup/collector_pbs.go` around lines 350 - 353, The status snapshot
filename is still generated from ds.pathKey(), so
assignUniquePBSDatastoreOutputKeys(datastores) has no effect and later entries
overwrite earlier ones; update the status snapshot generation (where you
currently call ds.pathKey()) to use the datastore's assigned output key set by
assignUniquePBSDatastoreOutputKeys (the field or accessor populated by that
function, e.g., datastore.OutputKey or datastore.outputKey()) when creating the
datastore_*_status.json filenames so collisions are avoided; apply the same
change to the other occurrences mentioned (lines ~400-403) and keep
clonePBSDatastores(datastores) + assignUniquePBSDatastoreOutputKeys(datastores)
as-is.


commandsDir := c.proxsaveCommandsDir("pbs")
if err := c.ensureDir(commandsDir); err != nil {
return fmt.Errorf("failed to create commands directory: %w", err)
@@ -383,9 +388,19 @@ func (c *Collector) collectPBSCommands(ctx context.Context, datastores []pbsDatastore) error {
// Datastore usage details
if c.config.BackupDatastoreConfigs && len(datastores) > 0 {
for _, ds := range datastores {
if ds.isOverride() {
c.logger.Debug("Skipping datastore status for %s (path=%s): no PBS datastore identity", ds.Name, ds.Path)
continue
}
cliName := ds.cliName()
if cliName == "" {
c.logger.Debug("Skipping datastore status for %s (path=%s): empty PBS datastore identity", ds.Name, ds.Path)
continue
}
dsKey := ds.pathKey()
c.safeCmdOutput(ctx,
fmt.Sprintf("proxmox-backup-manager datastore show %s --output-format=json", ds.Name),
filepath.Join(commandsDir, fmt.Sprintf("datastore_%s_status.json", ds.Name)),
fmt.Sprintf("proxmox-backup-manager datastore show %s --output-format=json", cliName),
filepath.Join(commandsDir, fmt.Sprintf("datastore_%s_status.json", dsKey)),
fmt.Sprintf("Datastore %s status", ds.Name),
false)
}
Expand Down
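The review finding above comes down to one rule: the `datastore_<key>_status.json` filename must use a per-datastore key that is unique within the batch, not a raw path key that two entries might share. A minimal sketch of one way to assign such keys; the repo's actual `assignUniquePBSDatastoreOutputKeys` works on a `pbsDatastore` slice and may use a different suffix scheme, so the plain-string version here is illustrative only:

```go
package main

import "fmt"

// assignUniqueKeys gives each entry an output key, suffixing later
// duplicates with an occurrence counter so generated filenames such as
// datastore_<key>_status.json never overwrite each other.
func assignUniqueKeys(keys []string) []string {
	seen := map[string]int{}
	out := make([]string, len(keys))
	for i, key := range keys {
		seen[key]++
		if seen[key] == 1 {
			out[i] = key // first occurrence keeps the bare key
		} else {
			out[i] = fmt.Sprintf("%s_%d", key, seen[key])
		}
	}
	return out
}

func main() {
	// Two raw names that sanitize to the same path key stay distinguishable.
	fmt.Println(assignUniqueKeys([]string{"a_b", "a_b", "store1"}))
}
```

Whatever the assignment scheme, the essential follow-through is the one the review calls out: the filename construction site must read the assigned key rather than recomputing `ds.pathKey()`, or the disambiguation is silently discarded.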