fix: use cpuset instead of cpusnano on synology devices #1782

Merged
kmendell merged 1 commit into main from fix/cpuset-synology on Feb 19, 2026

Conversation

@kmendell
Member

@kmendell kmendell commented Feb 16, 2026

What This PR Implements

Related Issue

Fixes #1727

Changes Made

Testing Done

  • Development environment started: ./scripts/development/dev.sh start
  • Frontend verified at http://localhost:3000
  • Backend verified at http://localhost:3552
  • Manual testing completed (describe):
  • No linting errors (e.g., just lint all)
  • Backend tests pass: just test backend

Checklist

  • This PR is not opened from my fork’s main branch

AI Tool Used (if applicable)

AI Tool:
Assistance Level:
What AI helped with:
I reviewed and edited all AI-generated output:
I ran all required tests and manually verified changes:

Additional Context

Disclaimer: Greptile reviews use AI; make sure to check over its work.

To help train Greptile on our codebase: if a comment is useful and valid, Like it; if it's not helpful or is invalid, Dislike it.

Greptile Summary

This PR addresses Synology NAS compatibility (issue #1727) by replacing Docker's NanoCPUs resource limit with CpusetCpus on Synology hosts, where the NanoCPUs API is unsupported. Beyond the core fix, the PR significantly expands the vulnerability scanning infrastructure:

  • Synology cpuset workaround: Detects Synology Docker hosts via docker info and converts NanoCPU limits to cpuset-based CPU pinning.
  • Concurrent scan containers: Adds a new trivyConcurrentScanContainers setting (default 1) with slot-based concurrency control and per-slot cache directories to avoid Trivy fs-cache lock contention.
  • Scan phase tracking: Introduces in-memory ScanPhase tracking (creating_container, scanning_image, storing_results) surfaced to the frontend as a progress indicator.
  • Robust Trivy output parsing: Adds --output file-based result collection with tar extraction from containers, plus fallback JSON extraction that handles noisy/concatenated output from Docker stream variants.
  • Retry with fallback: Scan result persistence now retries on transient DB errors (e.g. SQLite lock contention) with exponential backoff and a minimal status-only fallback write.
  • Frontend improvements: Scan phase progress visualization, stale failure detection/stabilization to avoid flashing stale "failed" states during scan transitions, and a new concurrent scan containers setting in the security page.
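
The cpuset workaround in the first bullet boils down to translating a fractional NanoCPUs limit into a core range. A minimal sketch, assuming a hypothetical helper name (the PR's real code also reads the host CPU count from docker info):

```go
package main

import (
	"fmt"
	"strconv"
)

// nanoCPUsToCpuset converts a Docker NanoCPUs limit (1e9 == one full CPU)
// into a cpuset string such as "0-2" that pins the container to the first
// N cores, capped at the host's CPU count. Illustrative only.
func nanoCPUsToCpuset(nanoCPUs int64, hostCPUs int) string {
	cores := int((nanoCPUs + 999_999_999) / 1_000_000_000) // round up to whole cores
	if cores < 1 {
		cores = 1
	}
	if cores > hostCPUs {
		cores = hostCPUs
	}
	if cores == 1 {
		return "0"
	}
	return "0-" + strconv.Itoa(cores-1)
}

func main() {
	// A 2.5-CPU NanoCPUs limit becomes a three-core pin on an 8-core NAS.
	fmt.Println(nanoCPUsToCpuset(2_500_000_000, 8)) // "0-2"
}
```

On the Docker API side, the resulting string would presumably be placed in HostConfig.Resources.CpusetCpus (leaving NanoCPUs zero) when the host is detected as Synology.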

The scope is substantially larger than the PR title suggests.
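
The slot-based concurrency control with per-slot cache directories can be sketched with a buffered channel of slot indices; each scan learns which slot it holds and uses a matching cache path. Names here are illustrative, not the PR's:

```go
package main

import (
	"fmt"
	"sync"
)

// scanSlots bounds concurrency at n and hands out a slot index, so each
// running scan can use its own Trivy cache dir and avoid fs-cache lock
// contention. A hypothetical sketch of the pattern.
type scanSlots struct{ ch chan int }

func newScanSlots(n int) *scanSlots {
	s := &scanSlots{ch: make(chan int, n)}
	for i := 0; i < n; i++ {
		s.ch <- i // pre-fill with available slot indices
	}
	return s
}

func (s *scanSlots) acquire() int  { return <-s.ch } // blocks when all slots busy
func (s *scanSlots) release(i int) { s.ch <- i }

func main() {
	slots := newScanSlots(2) // e.g. trivyConcurrentScanContainers = 2
	var wg sync.WaitGroup
	for img := 0; img < 4; img++ {
		wg.Add(1)
		go func(img int) {
			defer wg.Done()
			slot := slots.acquire()
			defer slots.release(slot)
			fmt.Printf("image %d scans with cache /cache/slot-%d\n", img, slot)
		}(img)
	}
	wg.Wait()
}
```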

Confidence Score: 3/5

  • Generally safe to merge, but contains a Go concurrency bug in the retry loop that should be addressed.
  • The PR is well-structured with good test coverage for new utility functions. However, the break inside a select statement in saveScanResultWithRetryInternal is a known Go gotcha that will cause unnecessary retries on a cancelled context. The large scope of changes (~1750 lines added) touches critical scanning infrastructure and warrants careful testing on actual Synology devices.
  • backend/internal/services/vulnerability_service.go contains the retry loop bug and the bulk of the new infrastructure code.

Important Files Changed

  • backend/internal/services/vulnerability_service.go: Major rework: adds concurrent scan slots with per-slot cache dirs, scan phase tracking, the Synology cpuset workaround, retry-with-fallback for scan saves, and robust Trivy output parsing. Contains a bug where break in a select fails to exit the retry loop on context cancellation.
  • backend/internal/services/vulnerability_service_test.go: Comprehensive new tests for cpuset building, slot channels, noisy Trivy output recovery, container wait responses, and updated function signatures.
  • types/vulnerability/vulnerability.go: Adds a ScanPhase type with constants and a ScanPhase field to both the ScanResult and ScanSummary types.
  • frontend/src/lib/components/vulnerability/vulnerability-scan-item.svelte: Adds scan phase progress visualization with step indicators, stabilization logic for stale failures, and an improved polling restart flow.
  • frontend/src/lib/utils/vulnerability-scan.util.ts: Adds stale failure detection and stabilization utilities; centralizes the scan-in-progress status check.
  • frontend/src/routes/(app)/settings/security/+page.svelte: Adds the concurrent scan containers setting to the security settings form, with validation and change tracking.
  • frontend/src/routes/(app)/images/[imageId]/+page.svelte: Adds scan phase tracking, stale failure stabilization on polling completion, and improved scan status handling.
  • frontend/src/routes/(app)/images/image-table.svelte: Adds stale failure filtering in batch scan polling and per-image scan request timestamp tracking.

Last reviewed commit: 852fa9f

@kmendell kmendell marked this pull request as ready for review February 16, 2026 21:15
@kmendell kmendell requested a review from a team February 16, 2026 21:15
Member Author

This stack of pull requests is managed by Graphite. Learn more about stacking.

@github-actions

github-actions Bot commented Feb 16, 2026

🔍 Deadcode Analysis

Found 3 unreachable functions in the backend.

internal/services/auth_service.go:753:23: unreachable func: AuthService.GetAutoLoginConfig
internal/services/auth_service.go:787:23: unreachable func: AuthService.GetAutoLoginPassword
internal/utils/ws/hub.go:27:15: unreachable func: Hub.ClientCount

Only remove deadcode that you know is 100% no longer used.

Analysis from commit f8c64ec

@getarcaneappbot
Contributor

getarcaneappbot commented Feb 16, 2026

Container images for this PR have been built successfully!

  • Manager: ghcr.io/getarcaneapp/arcane:pr-1782
  • Agent: ghcr.io/getarcaneapp/arcane-headless:pr-1782

Built from commit 467a933

Contributor

@greptile-apps greptile-apps Bot left a comment

4 files reviewed, 2 comments

Edit Code Review Agent Settings | Greptile

Comment thread frontend/src/routes/(app)/settings/security/+page.svelte
Comment thread frontend/src/routes/(app)/settings/security/+page.svelte
@kmendell kmendell force-pushed the fix/cpuset-synology branch 7 times, most recently from d616004 to 100fdec on February 18, 2026 00:15
@github-actions

This pull request has merge conflicts. Please resolve the conflicts so the PR can stay up-to-date and reviewed.

@kmendell kmendell force-pushed the fix/cpuset-synology branch from 100fdec to 952ebfe on February 18, 2026 04:53
@kmendell kmendell force-pushed the fix/cpuset-synology branch 3 times, most recently from 794232c to 852fa9f on February 18, 2026 15:40
@kmendell
Member Author

@greptileai

Contributor

@greptile-apps greptile-apps Bot left a comment

16 files reviewed, 1 comment

Edit Code Review Agent Settings | Greptile

Comment thread backend/internal/services/vulnerability_service.go
@Profex

Profex commented Feb 18, 2026

Just to let you know: depending on the Synology model / kernel version, the docker info output is slightly different

Synology DS220+ (Kernel 4.x)

 Kernel Version: 4.4.302+
 Operating System: Synology NAS

 OSType: linux
 Architecture: x86_64
 CPUs: 2

Synology DS925+ (Kernel 5.x)

 Kernel Version: 5.10.55+
 Operating System: Synology NAS
 (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
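
Since the Operating System field reads "Synology NAS" on both kernels in the outputs above, matching on that field (rather than the kernel version) should cover both models. A hypothetical sketch of the detection; the PR's actual check may differ, and in the Docker SDK the value comes from the Info struct's OperatingSystem field:

```go
package main

import (
	"fmt"
	"strings"
)

// isSynologyHost decides whether the Docker host is a Synology NAS based on
// the "Operating System" value reported by `docker info`. A substring match
// is safer than an exact compare, since nearby output (e.g. the
// "(containerized)" line on DSM with kernel 5.x) varies between models.
func isSynologyHost(operatingSystem string) bool {
	return strings.Contains(strings.ToLower(operatingSystem), "synology")
}

func main() {
	fmt.Println(isSynologyHost("Synology NAS"))     // true on both DS220+ and DS925+
	fmt.Println(isSynologyHost("Ubuntu 24.04 LTS")) // false
}
```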

@kmendell kmendell force-pushed the fix/cpuset-synology branch from 852fa9f to 467a933 on February 18, 2026 22:21
@kmendell kmendell merged commit c5cd422 into main Feb 19, 2026
15 checks passed
@kmendell kmendell deleted the fix/cpuset-synology branch February 19, 2026 02:58