
chore: tooling updates, dependency refresh, and security audit #168

Merged

unclesp1d3r merged 41 commits into main from todo_cleanups on Apr 4, 2026
Conversation


unclesp1d3r (Member) commented Apr 4, 2026

Summary

Housekeeping PR covering tooling updates, dependency refresh, and a comprehensive security audit of the full codebase. No Rust source code changes — only configuration, dependencies, documentation, and tooling.

Impact: 22 files changed (+880, -309) | Risk Level: Low | Review Time: ~15 minutes


What Changed

Configuration Changes

  • .mdformat.toml — Updated markdown formatting rules
  • .pre-commit-config.yaml — Refreshed pre-commit hook versions
  • .vscode/settings.json — Updated editor settings
  • .tessl/RULES.md — Updated tessl tile rule references
  • mise.toml / mise.lock — Refreshed developer toolchain versions
  • tessl.json — Updated tessl tile registry

Dependency Updates (Cargo.toml / Cargo.lock)

| Crate | Before | After | Notes |
|---|---|---|---|
| redb | 3.1.1 | 4.0.0 | Major bump (embedded DB) |
| sha2 | 0.10.9 | 0.11.0 | Semver-breaking bump (crypto; a 0.x minor counts as major) |
| toml | 0.9.8 | 1.1.2 | Major bump (config parser) |
| tokio | 1.50.0 | 1.51.0 | Minor bump (async runtime) |
| blake3 | 1.8.3 | 1.8.4 | Patch bump (crypto) |
| insta | 1.46.3 | 1.47.2 | Minor bump (snapshot testing) |
| proptest | 1.10.0 | 1.11.0 | Minor bump (property testing) |
| uuid | 1.22.0 | 1.23.0 | Minor bump |
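For reference, the refreshed workspace manifest entries would look roughly like the following sketch (exact table placement and any feature selections are assumptions, not taken from the PR):

```toml
[workspace.dependencies]
redb = "4.0.0"       # major bump: embedded DB
sha2 = "0.11.0"      # 0.x minor bump, semver-breaking
toml = "1.1.2"       # major bump: config parser
tokio = "1.51.0"     # minor bump: async runtime
blake3 = "1.8.4"     # patch bump
insta = "1.47.2"     # minor bump: snapshot testing
proptest = "1.11.0"  # minor bump: property testing
uuid = "1.23.0"      # minor bump
```

With caret semantics, `cargo update` alone would not cross the redb 3 → 4, sha2 0.10 → 0.11, or toml 0.9 → 1.1 boundaries; those require editing `Cargo.toml` as above.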

.gitignore Consolidation

  • Removed 8 scattered .gitignore files from .agents/skills/, .claude/skills/, .codex/skills/, .cursor/rules/, .cursor/skills/, .gemini/skills/, .github/skills/, .tessl/
  • Consolidated ignore patterns into root .gitignore with .context/ directory pattern for local todo tracking

New Files

  • AI_POLICY.md — AI usage and contribution policy ("You own every line you submit")
  • SECURITY_AUDIT_2026-04-03.md — Comprehensive security audit report (3 Critical, 5 High, 6 Medium, 4 Low findings)

Documentation Fixes

  • .kiro/specs/daemoneye-core-monitoring/tasks.md — mdformat bracket escaping
  • .kiro/steering/structure.md — mdformat bracket escaping

Why These Changes

  1. Dependency freshness: Several crates had patch/minor updates available. redb 4.0.0, sha2 0.11.0, and toml 1.1.2 are major bumps that should be adopted early while the storage layer is still stubbed.
  2. Security audit: Full codebase review identified 18 findings that need tracking and remediation — the audit report documents each with CWE references, file locations, and remediation steps.
  3. AI policy: Establishes clear expectations for AI-assisted contributions before the project grows.
  4. Tooling hygiene: Scattered .gitignore files and stale tool configs create friction for contributors.

Risk Assessment

Overall Risk: Low

| Factor | Assessment |
|---|---|
| Source code | No Rust source changes |
| Dependencies | redb 4.0.0 is a major bump, but the storage layer is stubbed, so there is no runtime impact yet; sha2 0.11.0 is API-compatible for current usage (hash computation); toml 1.1.2 is a major version, but serde-based parsing is unchanged |
| Build | `cargo clippy -- -D warnings` passes; all tests pass |

Test Plan

  • cargo test --workspace — all tests pass
  • cargo clippy --workspace -- -D warnings — zero warnings
  • cargo fmt --all --check — formatting clean
  • just ci-check — all pre-commit hooks pass (fmt, clippy, cargo-check, actionlint, mdformat, cargo-audit, toml-sort)
  • CI pipeline passes on all platforms (Linux, macOS, Windows)

Review Checklist

Configuration

  • No hardcoded values introduced
  • Dependency version bumps are intentional and reviewed
  • Cargo.lock changes match Cargo.toml version specifications
  • .gitignore consolidation doesn't accidentally ignore tracked files

Documentation

  • AI_POLICY.md tone and content appropriate for the project
  • SECURITY_AUDIT_2026-04-03.md findings are actionable
  • mdformat escaping changes are cosmetic only

Security

  • No sensitive data in committed files
  • Dependency updates don't introduce known vulnerabilities (cargo audit passes)
  • Security audit report doesn't expose exploitable details publicly

AI Disclosure

This PR was prepared with Claude Code (Opus 4.6). The security audit report was generated by specialized review agents analyzing the full workspace. All changes, findings, and dependency decisions were reviewed and approved by the maintainer before committing.

…dencies

Update mdformat to 1.0.0 (executablebooks), bump dev tool versions
(actionlint, bun, cargo-binstall, cargo-insta, cargo-nextest, etc.),
add good-oss-citizen and other tessl tiles, remove stale tessl-managed
.gitignore files, and add AI_POLICY.md for contributor transparency.

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
Add comprehensive security audit report from full codebase review.
Fix mdformat auto-formatting of bracket references in steering docs.

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
Copilot AI review requested due to automatic review settings April 4, 2026 03:38
@dosubot dosubot bot added the size:XL This PR changes 500-999 lines, ignoring generated files. label Apr 4, 2026

coderabbitai bot commented Apr 4, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Repo-wide infra/tooling and docs updates plus targeted Rust changes: formatter/pre-commit/tooling swaps and bumps, new AI policy and security audit, removal of several test file-level feature gates, and security/performance-oriented code changes in eventbus (Arc payloads + atomics), WAL, trigger handling, alerting, crypto hashing, storage stats, and config validation.

Changes

  • Formatting & Pre-commit (`.mdformat.toml`, `.pre-commit-config.yaml`, `mise.toml`): Switched the mdformat vendor/rev, renamed the `frontmatter` extension to `front_matters`, removed the `tables`/`wikilink` extensions, adjusted the mdformat plugin list and pipx args, and bumped multiple tool versions.
  • CI & Workflow (`.github/workflows/ci.yml`): Upgraded the jdx/mise-action and codecov action pins, added explicit step names, and set job `timeout-minutes`.
  • Docs & Governance (`AI_POLICY.md`, `SECURITY_AUDIT_2026-04-03.md`, `docs/...`, `.kiro/...`, `.github/steering/...`): Added the AI policy and security audit; edited architecture, Docker, and installation docs; escaped and normalized several markdown file-reference syntaxes.
  • Repository Skills & Tessl Config (`.github/skills/.gitignore`, `tessl.json`): Removed Tessl-managed ignore patterns from `.github/skills/.gitignore`; replaced, added, and removed tessl dependencies and bumped a rust-skills version in `tessl.json`.
  • Workspace Dependencies (`Cargo.toml`, `mise.toml`): Bumped multiple crate versions (e.g., blake3, redb, sha2, tokio, toml, uuid) and tooling versions in mise.
  • Collector-core: Event Bus, Routing & Stats (`collector-core/src/high_performance_event_bus.rs`, `event_bus.rs`, `daemoneye_event_bus.rs`, `analysis_chain.rs`): Changed subscription payloads to `Arc<BusEvent>` end-to-end; the publisher wraps each event in an `Arc` once and clones the `Arc` per subscriber; replaced hot-path `RwLock` stats updates with atomic counters plus a flush path; the router now uses `recv_timeout(10ms)` instead of busy-spinning on `try_recv`; lowered the default `channel_capacity` from 1,048,576 to 8,192.
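The Arc-wrapped fan-out and atomic counter can be sketched with std primitives; `BusEvent`'s fields and the bus shape below are simplified stand-ins for the real collector-core types, not the actual API:

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::mpsc::{sync_channel, Receiver, SyncSender};
use std::sync::Arc;

// Simplified stand-in for the real collector-core BusEvent.
#[derive(Debug)]
pub struct BusEvent {
    pub topic: String,
    pub payload: Vec<u8>,
}

pub struct Bus {
    subscribers: Vec<SyncSender<Arc<BusEvent>>>,
    // Atomic counter replaces RwLock-guarded stats on the hot path.
    pub events_published: AtomicU64,
}

impl Bus {
    pub fn new() -> Self {
        Self { subscribers: Vec::new(), events_published: AtomicU64::new(0) }
    }

    pub fn subscribe(&mut self, capacity: usize) -> Receiver<Arc<BusEvent>> {
        let (tx, rx) = sync_channel(capacity); // bounded; the PR lowers the default to 8192
        self.subscribers.push(tx);
        rx
    }

    pub fn publish(&self, event: BusEvent) {
        let arc = Arc::new(event); // allocate once...
        for sub in &self.subscribers {
            let _ = sub.send(Arc::clone(&arc)); // ...then a cheap pointer clone per subscriber
        }
        self.events_published.fetch_add(1, Ordering::Relaxed);
    }
}
```

A router loop on the receiving side would then park on `rx.recv_timeout(Duration::from_millis(10))`, waking at most every 10 ms when idle instead of spinning on `try_recv`.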
  • Collector-core: Trigger & Tests (`collector-core/src/trigger.rs`, `collector-core/tests/*`): Mutex-poisoning errors now propagate (operations fail on lock poison), and `dequeue_trigger` returns `Result<Option<...>, TriggerError>`; removed the file-level `#![cfg(feature = "eventbus-integration")]` attribute from multiple test files and updated some test match ergonomics.
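The poison-propagating queue shape can be sketched as follows; the types are simplified assumptions (the real `TriggerError` has more variants than shown):

```rust
use std::collections::VecDeque;
use std::sync::{Mutex, PoisonError};

// Hypothetical, minimal trigger type for illustration.
#[derive(Debug, PartialEq)]
pub struct Trigger {
    pub rule_id: u32,
}

#[derive(Debug)]
pub enum TriggerError {
    LockPoisoned,
}

// Converting PoisonError lets `?` surface a poisoned lock as a TriggerError
// instead of panicking via unwrap().
impl<T> From<PoisonError<T>> for TriggerError {
    fn from(_: PoisonError<T>) -> Self {
        TriggerError::LockPoisoned
    }
}

pub struct TriggerQueue {
    inner: Mutex<VecDeque<Trigger>>,
}

impl TriggerQueue {
    pub fn new() -> Self {
        Self { inner: Mutex::new(VecDeque::new()) }
    }

    pub fn enqueue(&self, t: Trigger) -> Result<(), TriggerError> {
        self.inner.lock()?.push_back(t);
        Ok(())
    }

    // Ok(None) = queue empty; Err = a writer panicked while holding the lock.
    pub fn dequeue_trigger(&self) -> Result<Option<Trigger>, TriggerError> {
        Ok(self.inner.lock()?.pop_front())
    }
}
```

Callers must now distinguish "nothing queued" from "queue corrupted", which is exactly the distinction the old `Option`-only signature could not express.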
  • Procmond: Event Bus Connector, WAL & Security (`procmond/src/event_bus_connector.rs`, `wal.rs`, `security.rs`): `from_type_string` now returns `Result` and adds an `UnknownEventType` error; WAL replay treats a missing type as legacy but skips unrecognized types (logging an error); factored the WAL write path into a `write_entry` helper; `sanitize_command_line` now redacts `--flag=value` forms case-insensitively.
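The case-insensitive `--flag=value` redaction can be sketched as below; the flag list is illustrative only (the real set lives in `procmond/src/security.rs`):

```rust
// Illustrative sensitive-flag list; the real list is defined in procmond.
const SENSITIVE_FLAGS: &[&str] = &["--password", "--token", "--api-key", "--secret"];

/// Redact the value portion of `--flag=value` arguments whose flag name
/// matches a sensitive flag, ignoring ASCII case.
pub fn sanitize_command_line(cmdline: &str) -> String {
    cmdline
        .split_whitespace()
        .map(|arg| {
            if let Some(eq) = arg.find('=') {
                let flag = &arg[..eq];
                if SENSITIVE_FLAGS.iter().any(|s| flag.eq_ignore_ascii_case(s)) {
                    return format!("{flag}=REDACTED");
                }
            }
            arg.to_string()
        })
        .collect::<Vec<_>>()
        .join(" ")
}
```

Matching on the flag name rather than the full argument means `--PASSWORD=hunter2` and `--Api-Key=xyz` are both caught, while the flag spelling itself is preserved for auditability.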
  • Alerting & Benchmarks (`daemoneye-lib/src/alerting.rs`, `daemoneye-lib/benches/alert_processing.rs`, `AGENTS.md`): `DeliveryResult` no longer carries `success`/`error_message` fields; `AlertSink::send` now returns `Result<DeliveryResult, AlertingError>` (and `health_check` returns `Result<(), AlertingError>` per the docs); the bench mock was updated to return `Err(...)` on failures.
  • Crypto: Audit Hashing (`daemoneye-lib/src/crypto.rs`): Added a public v2 canonical hash-input constructor using RFC3339 timestamps (sub-second precision) and `HASH_VERSION = 2`; `AuditLedger::verify_integrity` tries v2 first, then falls back to legacy v1 verification; added unit tests for collisions, `previous_hash` inclusion, and legacy verification.
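The try-v2-then-fall-back-to-v1 verification pattern can be sketched as follows. This is a dependency-free illustration: `DefaultHasher` stands in for BLAKE3, and both canonical input formats are assumptions, not the crate's actual encodings:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for BLAKE3 so the sketch stays dependency-free.
fn digest(input: &str) -> u64 {
    let mut h = DefaultHasher::new();
    input.hash(&mut h);
    h.finish()
}

pub struct Entry {
    pub seq: u64,
    pub timestamp_rfc3339: String, // e.g. "2026-04-03T12:00:00.123Z"
    pub prev_hash: u64,
    pub hash: u64,
}

// v2 canonical input: versioned, delimited, RFC3339 timestamp with sub-seconds.
pub fn hash_input_v2(seq: u64, ts: &str, prev: u64) -> String {
    format!("v2|{seq}|{ts}|{prev:016x}")
}

// Legacy v1 input (shape is an assumption for illustration).
pub fn hash_input_v1(seq: u64, ts: &str, prev: u64) -> String {
    format!("{seq}{ts}{prev}")
}

// Verify against v2 first; entries written before the change fall back to v1.
pub fn verify_entry(e: &Entry) -> bool {
    digest(&hash_input_v2(e.seq, &e.timestamp_rfc3339, e.prev_hash)) == e.hash
        || digest(&hash_input_v1(e.seq, &e.timestamp_rfc3339, e.prev_hash)) == e.hash
}
```

Because `previous_hash` is part of the canonical input, altering any historical entry changes its hash and breaks every later link in the chain, which is what the new unit tests exercise.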
  • Storage: Stats Shape (`daemoneye-lib/src/storage.rs`): Removed the legacy `*_count` fields from `DatabaseStats`; tests now use the canonical plural fields (`processes`, `rules`, `alerts`, etc.).
  • Config Validation (`daemoneye-lib/src/config.rs`): Stricter `validate_config`: numeric min/max bounds, path-traversal rejection (`..`), socket null-byte and length checks, plus expanded unit tests covering these validations.
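The three classes of check can be sketched as standalone validators; the error variants, bounds, and the socket-path length limit below are illustrative assumptions, not the library's actual values:

```rust
use std::path::{Component, Path};

#[derive(Debug, PartialEq)]
pub enum ConfigError {
    OutOfRange(&'static str),
    PathTraversal,
    BadSocketPath,
}

// Illustrative bound; real Unix socket path limits are platform-specific.
const MAX_SOCKET_PATH: usize = 100;

/// Numeric min/max bounds (illustrative: 1 s to 1 h).
pub fn validate_scan_interval(ms: u64) -> Result<(), ConfigError> {
    if !(1_000..=3_600_000).contains(&ms) {
        return Err(ConfigError::OutOfRange("scan_interval_ms"));
    }
    Ok(())
}

/// Reject any `..` component to block path traversal, rather than
/// substring-matching on the raw string.
pub fn validate_path(p: &str) -> Result<(), ConfigError> {
    if Path::new(p).components().any(|c| matches!(c, Component::ParentDir)) {
        return Err(ConfigError::PathTraversal);
    }
    Ok(())
}

/// Socket paths must not embed NUL bytes and must fit the length limit.
pub fn validate_socket_path(p: &str) -> Result<(), ConfigError> {
    if p.contains('\0') || p.len() > MAX_SOCKET_PATH {
        return Err(ConfigError::BadSocketPath);
    }
    Ok(())
}
```

Walking `Path::components()` instead of searching the raw string avoids false positives on legitimate names that merely contain two dots (e.g. `my..db`).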
  • WAL Refactor (`procmond/src/wal.rs`): A new private async `write_entry` consolidates append and rotation while holding the `file_state` lock; the public write paths delegate to it.
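The append-plus-rotation consolidation can be sketched synchronously (the real procmond helper is async under tokio, and the field and file names here are assumptions):

```rust
use std::fs::{File, OpenOptions};
use std::io::Write;
use std::path::{Path, PathBuf};
use std::sync::Mutex;

struct FileState {
    file: File,
    bytes_written: u64,
    segment: u32,
}

pub struct Wal {
    state: Mutex<FileState>,
    max_segment_bytes: u64,
    dir: PathBuf,
}

impl Wal {
    pub fn open(dir: &Path, max_segment_bytes: u64) -> std::io::Result<Self> {
        let file = OpenOptions::new().create(true).append(true)
            .open(dir.join("wal-000000.log"))?;
        Ok(Self {
            state: Mutex::new(FileState { file, bytes_written: 0, segment: 0 }),
            max_segment_bytes,
            dir: dir.to_path_buf(),
        })
    }

    // Single helper: rotation decision and append happen under one lock hold,
    // so no writer can slip in between "check size" and "write".
    pub fn write_entry(&self, entry: &[u8]) -> std::io::Result<()> {
        let mut st = self.state.lock().expect("wal lock");
        if st.bytes_written + entry.len() as u64 > self.max_segment_bytes {
            st.segment += 1;
            st.file = OpenOptions::new().create(true).append(true)
                .open(self.dir.join(format!("wal-{:06}.log", st.segment)))?;
            st.bytes_written = 0;
        }
        st.file.write_all(entry)?;
        st.bytes_written += entry.len() as u64;
        Ok(())
    }
}
```

Public write paths then become thin wrappers that serialize an entry and call `write_entry`, so the rotation invariant lives in exactly one place.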
  • Misc Docs & Formatting (assorted `docs/**`, `.kiro/**`, `.github/**`): Minor markdown escaping, wording, and whitespace edits across docs and steering files.
  • Linting (`deny.toml`): Changed `[bans].wildcards` from `allow` to `deny` and normalized array formatting.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Publisher
    participant Router as Router/Bus
    participant Subscriber1 as Subscriber A
    participant SubscriberN as Subscriber N
    Publisher->>Router: publish(BusEvent)
    Note right of Router: allocate Arc<BusEvent> once
    Router->>Router: let arc = Arc::new(event)
    Router->>Subscriber1: send(Arc::clone(&arc))
    Router->>SubscriberN: send(Arc::clone(&arc))
    Router->>Router: events_published.fetch_add(1)
    Subscriber1-->>Router: ack (optional)
    SubscriberN-->>Router: ack (optional)
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested labels

documentation, dependencies

Poem

Arc-wrapped events race the night,
Atomics tally without a fight.
WALs skip ghosts, locks now shout on error,
Configs guard paths from sly marauder.
🔒🦀

🚥 Pre-merge checks: ✅ 3 passed

| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | Title follows the Conventional Commits spec, with a 'chore' type and a descriptive scope covering tooling, dependencies, and audit. |
| Description check | ✅ Passed | Description comprehensively documents configuration changes, dependency updates, new files, the security audit, and the risk assessment, all directly related to the changeset. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, above the required 80.00% threshold. |

Comment @coderabbitai help to get the list of available commands and usage tips.

@dosubot dosubot bot added configuration Configuration management and settings dependencies Pull requests that update a dependency file documentation Improvements or additions to documentation security Security-related issues and vulnerabilities labels Apr 4, 2026
@coderabbitai coderabbitai bot added the testing Related to test development and test infrastructure label Apr 4, 2026

dosubot bot commented Apr 4, 2026

Documentation Updates

6 document(s) were updated by changes in this PR:

API Quick Reference
@@ -194,9 +194,9 @@
 ```rust
 #[async_trait]
 pub trait AlertSink: Send + Sync {
-    async fn send(&self, alert: &Alert) -> Result<DeliveryResult, DeliveryError>;
-    async fn health_check(&self) -> HealthStatus;
+    async fn send(&self, alert: &Alert) -> Result<DeliveryResult, AlertingError>;
     fn name(&self) -> &str;
+    async fn health_check(&self) -> Result<(), AlertingError>;
 }
 ```
 ## Configuration API
@@ -552,7 +552,7 @@
 ### Custom Alert Sink
 ```rust
 use async_trait::async_trait;
-use daemoneye_lib::alerting::{Alert, AlertSink, DeliveryError, DeliveryResult};
+use daemoneye_lib::alerting::{Alert, AlertSink, AlertingError, DeliveryResult};
 
 pub struct CustomSink {
     endpoint: String,
@@ -570,32 +570,34 @@
 
 #[async_trait]
 impl AlertSink for CustomSink {
-    async fn send(&self, alert: &Alert) -> Result<DeliveryResult, DeliveryError> {
+    async fn send(&self, alert: &Alert) -> Result<DeliveryResult, AlertingError> {
         let response = self
             .client
             .post(&self.endpoint)
             .json(alert)
             .send()
             .await
-            .map_err(|e| DeliveryError::Network(e.to_string()))?;
+            .map_err(|e| AlertingError::Network(e.to_string()))?;
 
         if response.status().is_success() {
             Ok(DeliveryResult::Success)
         } else {
-            Err(DeliveryError::Http(response.status().as_u16()))
-        }
-    }
-
-    async fn health_check(&self) -> HealthStatus {
-        match self.client.get(&self.endpoint).send().await {
-            Ok(response) if response.status().is_success() => HealthStatus::Healthy,
-            _ => HealthStatus::Unhealthy,
+            Err(AlertingError::Http(response.status().as_u16()))
         }
     }
 
     fn name(&self) -> &str {
         "custom_sink"
     }
+
+    async fn health_check(&self) -> Result<(), AlertingError> {
+        self.client
+            .get(&self.endpoint)
+            .send()
+            .await
+            .map_err(|e| AlertingError::Network(e.to_string()))?;
+        Ok(())
+    }
 }
 ```
 *This API reference provides comprehensive documentation for all DaemonEye APIs. For additional examples and usage patterns, consult the specific API documentation.*
CLI Reference
@@ -84,7 +84,7 @@
 ### Configuration
 procmond is orchestrated by daemoneye-agent; collectors do not consume component-specific configuration files. When the binary is launched directly (for example during development or troubleshooting) it honours the following sources:
 1. Command-line flags (highest precedence)
-2. Environment variables (`PROCMOND_*`) typically injected by the agent
+2. Environment variables (`PROCMOND_*` — legacy, maintained for backward compatibility)
 3. System DaemonEye configuration file (`/etc/daemoneye/config.toml`)
 4. Embedded defaults (lowest precedence)
 Per-user configuration is not supported for collectors; only the operator-facing CLI honours user-scoped overrides when invoked directly.
@@ -183,8 +183,8 @@
 daemoneye-agent supports hierarchical configuration loading:
 1. Command-line flags (highest precedence)
 2. Environment variables (`DAEMONEYE_AGENT_*`)
-3. User configuration file (`~/.config/daemoneye-agent/config.yaml`)
-4. System configuration file (`/etc/daemoneye-agent/config.yaml`)
+3. User configuration file (`~/.config/daemoneye/config.toml`)
+4. System configuration file (`/etc/daemoneye/config.toml`)
 5. Embedded defaults (lowest precedence)
 ## daemoneye-cli
 The command-line interface for querying database statistics, health checks, and system management.
@@ -263,8 +263,8 @@
 daemoneye-cli supports hierarchical configuration loading:
 1. Command-line flags (highest precedence)
 2. Environment variables (`DAEMONEYE_CLI_*`)
-3. User configuration file (`~/.config/daemoneye-cli/config.yaml`)
-4. System configuration file (`/etc/daemoneye-cli/config.yaml`)
+3. User configuration file (`~/.config/daemoneye/config.toml`)
+4. System configuration file (`/etc/daemoneye/config.toml`)
 5. Embedded defaults (lowest precedence)
 ## Common Patterns
 ### Basic Monitoring Setup
@@ -390,17 +390,17 @@
 </tr>
 <tr>
 <td>`PROCMOND_DATABASE`</td>
-<td>Database path</td>
+<td>Database path (legacy — for backward compatibility)</td>
 <td>`/var/lib/daemoneye/processes.db`</td>
 </tr>
 <tr>
 <td>`PROCMOND_LOG_LEVEL`</td>
-<td>Log level</td>
+<td>Log level (legacy — for backward compatibility)</td>
 <td>`info`</td>
 </tr>
 <tr>
 <td>`PROCMOND_INTERVAL`</td>
-<td>Collection interval</td>
+<td>Collection interval (legacy — for backward compatibility)</td>
 <td>`30`</td>
 </tr>
 </table>
Configuration Guide
@@ -9,15 +9,15 @@
 - **Hot-Reloadable**: Most settings can be updated without restart
 ### Configuration Precedence
 1. **Command-line flags** (highest precedence)
-2. **Environment variables** (DaemonEye_\*)
-3. **User configuration file** (\~/.config/daemoneye/config.yaml)
-4. **System configuration file** (/etc/daemoneye/config.yaml)
+2. **Environment variables** (component-namespaced: DAEMONEYE_AGENT_\*, DAEMONEYE_CLI_\*, PROCMOND_\*)
+3. **User configuration file** (\~/.config/daemoneye/config.toml)
+4. **System configuration file** (/etc/daemoneye/config.toml)
 5. **Embedded defaults** (lowest precedence)
 ## Configuration Sources
 ### Command-Line Flags
 ```bash
 # Basic configuration
-daemoneye-agent --config /path/to/config.yaml --log-level debug
+daemoneye-agent --config /path/to/config.toml --log-level debug
 
 # Override specific settings
 daemoneye-agent --scan-interval 30000 --batch-size 1000
@@ -27,259 +27,285 @@
 ```
 ### Environment Variables
 ```bash
-export DaemonEye_LOG_LEVEL=debug
-export DaemonEye_SCAN_INTERVAL_MS=30000
-export DaemonEye_DATABASE_PATH=/var/lib/daemoneye/processes.db
-export DaemonEye_ALERTING_SINKS_0_TYPE=syslog
-export DaemonEye_ALERTING_SINKS_0_FACILITY=daemon
+# Component-namespaced variables
+export DAEMONEYE_AGENT_LOG_LEVEL=debug
+export DAEMONEYE_AGENT_SCAN_INTERVAL_MS=30000
+export DAEMONEYE_AGENT_DATABASE_PATH=/var/lib/daemoneye/processes.db
+
+# CLI-specific configuration
+export DAEMONEYE_CLI_OUTPUT_FORMAT=json
+export DAEMONEYE_CLI_CONFIG_PATH=/etc/daemoneye/config.toml
+
+# Backward compatibility (PROCMOND_* still supported)
+export PROCMOND_LOG_LEVEL=debug
+
 daemoneye-agent
 ```
 ### Configuration Files
-**YAML Format** (recommended):
-```yaml
-app:
-  scan_interval_ms: 30000
-  batch_size: 1000
-  log_level: info
-  data_dir: /var/lib/daemoneye
-  log_dir: /var/log/daemoneye
-
-database:
-  path: /var/lib/daemoneye/processes.db
-  max_connections: 10
-  retention_days: 30
-
-alerting:
-  sinks:
-    - type: syslog
-      enabled: true
-      facility: daemon
-    - type: webhook
-      enabled: false
-      url: https://alerts.example.com/webhook
-      headers:
-        Authorization: Bearer ${WEBHOOK_TOKEN}
+**TOML Format**:
+```toml
+[app]
+scan_interval_ms = 30000
+batch_size = 1000
+log_level = "info"
+data_dir = "/var/lib/daemoneye"
+log_dir = "/var/log/daemoneye"
+
+[database]
+path = "/var/lib/daemoneye/processes.db"
+max_connections = 10
+retention_days = 30
+
+[[alerting.sinks]]
+type = "syslog"
+enabled = true
+facility = "daemon"
+
+[[alerting.sinks]]
+type = "webhook"
+enabled = false
+url = "https://alerts.example.com/webhook"
+
+[alerting.sinks.headers]
+Authorization = "Bearer ${WEBHOOK_TOKEN}"
 ```
 ## Complete Configuration Schema
-```yaml
+```toml
 # Application settings
-app:
-  scan_interval_ms: 30000
-  batch_size: 1000
-  log_level: info
-  data_dir: /var/lib/daemoneye
-  log_dir: /var/log/daemoneye
-  pid_file: /var/run/daemoneye.pid
-  user: daemoneye
-  group: daemoneye
-  max_memory_mb: 512
-  max_cpu_percent: 5.0
+[app]
+scan_interval_ms = 30000
+batch_size = 1000
+log_level = "info"
+data_dir = "/var/lib/daemoneye"
+log_dir = "/var/log/daemoneye"
+pid_file = "/var/run/daemoneye.pid"
+user = "daemoneye"
+group = "daemoneye"
+max_memory_mb = 512
+max_cpu_percent = 5.0
 
 # Database configuration
-database:
-  path: /var/lib/daemoneye/processes.db
-  max_connections: 10
-  retention_days: 30
-  vacuum_interval_hours: 24
-  wal_mode: true
-  synchronous: NORMAL
-  cache_size: -64000
-  temp_store: MEMORY
-  journal_mode: WAL
+[database]
+path = "/var/lib/daemoneye/processes.db"
+max_connections = 10
+retention_days = 30
+vacuum_interval_hours = 24
+wal_mode = true
+synchronous = "NORMAL"
+cache_size = -64000
+temp_store = "MEMORY"
+journal_mode = "WAL"
 
 # Alerting configuration
-alerting:
-  enabled: true
-  max_queue_size: 10000
-  delivery_timeout_ms: 5000
-  retry_attempts: 3
-  retry_delay_ms: 1000
-  circuit_breaker_threshold: 5
-  circuit_breaker_timeout_ms: 60000
-  sinks:
-    - type: syslog
-      enabled: true
-      facility: daemon
-      priority: info
-      tag: daemoneye
-    - type: webhook
-      enabled: false
-      url: https://alerts.example.com/webhook
-      method: POST
-      timeout_ms: 5000
-      retry_attempts: 3
-      headers:
-        Authorization: Bearer ${WEBHOOK_TOKEN}
-        Content-Type: application/json
-    - type: file
-      enabled: false
-      path: /var/log/daemoneye/alerts.log
-      format: json
-      rotation: daily
-      max_files: 30
+[alerting]
+enabled = true
+max_queue_size = 10000
+delivery_timeout_ms = 5000
+retry_attempts = 3
+retry_delay_ms = 1000
+circuit_breaker_threshold = 5
+circuit_breaker_timeout_ms = 60000
+
+[[alerting.sinks]]
+type = "syslog"
+enabled = true
+facility = "daemon"
+priority = "info"
+tag = "daemoneye"
+
+[[alerting.sinks]]
+type = "webhook"
+enabled = false
+url = "https://alerts.example.com/webhook"
+method = "POST"
+timeout_ms = 5000
+retry_attempts = 3
+
+[alerting.sinks.headers]
+Authorization = "Bearer ${WEBHOOK_TOKEN}"
+Content-Type = "application/json"
+
+[[alerting.sinks]]
+type = "file"
+enabled = false
+path = "/var/log/daemoneye/alerts.log"
+format = "json"
+rotation = "daily"
+max_files = 30
 
 # Security configuration
-security:
-  enable_privilege_dropping: true
-  drop_to_user: daemoneye
-  drop_to_group: daemoneye
-  enable_audit_logging: true
-  audit_log_path: /var/log/daemoneye/audit.log
-  enable_integrity_checking: true
-  hash_algorithm: blake3
-  enable_signature_verification: true
-  public_key_path: /etc/daemoneye/public.key
-  private_key_path: /etc/daemoneye/private.key
-  access_control:
-    allowed_users: []
-    allowed_groups: []
-    denied_users: []
-    denied_groups: []
-  network:
-    enable_tls: false
-    cert_file: /etc/daemoneye/cert.pem
-    key_file: /etc/daemoneye/key.pem
-    ca_file: /etc/daemoneye/ca.pem
-    verify_peer: true
+[security]
+enable_privilege_dropping = true
+drop_to_user = "daemoneye"
+drop_to_group = "daemoneye"
+enable_audit_logging = true
+audit_log_path = "/var/log/daemoneye/audit.log"
+enable_integrity_checking = true
+hash_algorithm = "blake3"
+enable_signature_verification = true
+public_key_path = "/etc/daemoneye/public.key"
+private_key_path = "/etc/daemoneye/private.key"
+
+[security.access_control]
+allowed_users = []
+allowed_groups = []
+denied_users = []
+denied_groups = []
+
+[security.network]
+enable_tls = false
+cert_file = "/etc/daemoneye/cert.pem"
+key_file = "/etc/daemoneye/key.pem"
+ca_file = "/etc/daemoneye/ca.pem"
+verify_peer = true
 
 # Process collection configuration
-collection:
-  enable_process_collection: true
-  enable_file_monitoring: false
-  enable_network_monitoring: false
-  enable_kernel_monitoring: false
-  process_collection:
-    include_children: true
-    include_threads: false
-    include_memory_maps: false
-    include_file_descriptors: false
-    max_processes: 10000
-    exclude_patterns:
-      - systemd*
-      - kthreadd*
-      - ksoftirqd*
+[collection]
+enable_process_collection = true
+enable_file_monitoring = false
+enable_network_monitoring = false
+enable_kernel_monitoring = false
+
+[collection.process_collection]
+include_children = true
+include_threads = false
+include_memory_maps = false
+include_file_descriptors = false
+max_processes = 10000
+exclude_patterns = [
+  "systemd*",
+  "kthreadd*",
+  "ksoftirqd*"
+]
 
 # Detection engine configuration
-detection:
-  enable_detection: true
-  rule_directory: /etc/daemoneye/rules
-  rule_file_pattern: '*.sql'
-  enable_hot_reload: true
-  reload_interval_ms: 5000
-  max_concurrent_rules: 10
-  rule_timeout_ms: 30000
-  enable_rule_caching: true
-  cache_ttl_seconds: 300
-  execution:
-    enable_parallel_execution: true
-    max_parallel_rules: 5
-    enable_rule_optimization: true
-    enable_query_planning: true
-  alert_generation:
-    enable_alert_deduplication: true
-    deduplication_window_ms: 60000
-    enable_alert_aggregation: true
-    aggregation_window_ms: 300000
-    max_alerts_per_rule: 1000
+[detection]
+enable_detection = true
+rule_directory = "/etc/daemoneye/rules"
+rule_file_pattern = "*.sql"
+enable_hot_reload = true
+reload_interval_ms = 5000
+max_concurrent_rules = 10
+rule_timeout_ms = 30000
+enable_rule_caching = true
+cache_ttl_seconds = 300
+
+[detection.execution]
+enable_parallel_execution = true
+max_parallel_rules = 5
+enable_rule_optimization = true
+enable_query_planning = true
+
+[detection.alert_generation]
+enable_alert_deduplication = true
+deduplication_window_ms = 60000
+enable_alert_aggregation = true
+aggregation_window_ms = 300000
+max_alerts_per_rule = 1000
 
 # Observability configuration
-observability:
-  enable_metrics: true
-  metrics_port: 9090
-  metrics_path: /metrics
-  enable_health_checks: true
-  health_check_port: 8080
-  health_check_path: /health
-  tracing:
-    enable_tracing: false
-    trace_endpoint: http://jaeger:14268/api/traces
-    trace_sampling_rate: 0.1
-    trace_service_name: daemoneye
-  logging:
-    enable_structured_logging: true
-    log_format: json
-    log_timestamp_format: rfc3339
-    enable_log_rotation: true
-    max_log_file_size_mb: 100
-    max_log_files: 10
+[observability]
+enable_metrics = true
+metrics_port = 9090
+metrics_path = "/metrics"
+enable_health_checks = true
+health_check_port = 8080
+health_check_path = "/health"
+
+[observability.tracing]
+enable_tracing = false
+trace_endpoint = "http://jaeger:14268/api/traces"
+trace_sampling_rate = 0.1
+trace_service_name = "daemoneye"
+
+[observability.logging]
+enable_structured_logging = true
+log_format = "json"
+log_timestamp_format = "rfc3339"
+enable_log_rotation = true
+max_log_file_size_mb = 100
+max_log_files = 10
 
 # Platform-specific configuration
-platform:
-  linux:
-    enable_ebpf: false
-    ebpf_program_path: /etc/daemoneye/ebpf/monitor.o
-    enable_audit: false
-    audit_rules_path: /etc/daemoneye/audit.rules
-  windows:
-    enable_etw: false
-    etw_session_name: DaemonEye
-    enable_wmi: false
-    wmi_namespace: root\cimv2
-  macos:
-    enable_endpoint_security: false
-    es_client_name: com.daemoneye.monitor
-    enable_system_events: false
+[platform.linux]
+enable_ebpf = false
+ebpf_program_path = "/etc/daemoneye/ebpf/monitor.o"
+enable_audit = false
+audit_rules_path = "/etc/daemoneye/audit.rules"
+
+[platform.windows]
+enable_etw = false
+etw_session_name = "DaemonEye"
+enable_wmi = false
+wmi_namespace = "root\\cimv2"
+
+[platform.macos]
+enable_endpoint_security = false
+es_client_name = "com.daemoneye.monitor"
+enable_system_events = false
 
 # Integration configuration
-integrations:
-  siem:
-    splunk:
-      enabled: false
-      hec_url: https://splunk.example.com:8088/services/collector
-      hec_token: ${SPLUNK_HEC_TOKEN}
-      index: daemoneye
-    elasticsearch:
-      enabled: false
-      url: https://elasticsearch.example.com:9200
-      index: daemoneye-processes
-    kafka:
-      enabled: false
-      brokers: [kafka1.example.com:9092]
-      topic: daemoneye.processes
-  export:
-    cef:
-      enabled: false
-      output_file: /var/log/daemoneye/cef.log
-    stix:
-      enabled: false
-      output_file: /var/log/daemoneye/stix.json
-    json:
-      enabled: false
-      output_file: /var/log/daemoneye/events.json
+[integrations.siem.splunk]
+enabled = false
+hec_url = "https://splunk.example.com:8088/services/collector"
+hec_token = "${SPLUNK_HEC_TOKEN}"
+index = "daemoneye"
+
+[integrations.siem.elasticsearch]
+enabled = false
+url = "https://elasticsearch.example.com:9200"
+index = "daemoneye-processes"
+
+[integrations.siem.kafka]
+enabled = false
+brokers = ["kafka1.example.com:9092"]
+topic = "daemoneye.processes"
+
+[integrations.export.cef]
+enabled = false
+output_file = "/var/log/daemoneye/cef.log"
+
+[integrations.export.stix]
+enabled = false
+output_file = "/var/log/daemoneye/stix.json"
+
+[integrations.export.json]
+enabled = false
+output_file = "/var/log/daemoneye/events.json"
 ```
 ## Performance Tuning
 ### Process Collection
-```yaml
+```toml
 # Reduce resource usage
-app:
-  scan_interval_ms: 60000
-  batch_size: 500
-  max_memory_mb: 256
-  max_cpu_percent: 3.0
+[app]
+scan_interval_ms = 60000
+batch_size = 500
+max_memory_mb = 256
+max_cpu_percent = 3.0
 ```
 ### Database Performance
-```yaml
-database:
-  max_connections: 20
-  cache_size: -128000   # 128MB cache
-  temp_store: MEMORY
-  wal_mode: true
-  synchronous: NORMAL
+```toml
+[database]
+max_connections = 20
+cache_size = -128000   # 128MB cache
+temp_store = "MEMORY"
+wal_mode = true
+synchronous = "NORMAL"
 ```
 ## Configuration Management
 ### Validation and Testing
 ```bash
-daemoneye-cli config validate /path/to/config.yaml
+daemoneye-cli config validate /path/to/config.toml
 daemoneye-cli config check
 daemoneye-cli config show --include-defaults
-daemoneye-agent --config /path/to/config.yaml --dry-run
+daemoneye-agent --config /path/to/config.toml --dry-run
 ```
 ### Hot Reload
 ```bash
 daemoneye-cli config reload
 daemoneye-cli config set app.scan_interval_ms 60000
-daemoneye-cli config backup --output /backup/daemoneye-config-$(date +%Y%m%d).yaml
-daemoneye-cli config restore --input /backup/daemoneye-config-20240101.yaml
+daemoneye-cli config backup --output /backup/daemoneye-config-$(date +%Y%m%d).toml
+daemoneye-cli config restore --input /backup/daemoneye-config-20240101.toml
 ```
 ### Environment-Specific Configs
 - **Development**: debug logging, 10s scan interval, 1 day retention, temp database
DaemonEye Security Design Overview
@@ -184,9 +184,11 @@
 capabilities
 **Security Features**:
 - **SQL Injection Prevention**: AST-based query validation with
-	whitelist functions
+	whitelist functions [Implemented: rule load-time validation; SQL-based
+	rule execution enforcement planned — detection engine currently uses
+	pattern matching]
 - **Sandboxed Execution**: Read-only database connections for rule
-	execution
+	execution [Planned]
 - **Resource Limits**: Timeout and memory constraints on detection
 	rules
 - **Multi-Channel Alerting**: Circuit breaker pattern for reliable
@@ -194,12 +196,12 @@
 - **Audit Trail**: Comprehensive logging of all detection activities
 **SQL Security Implementation**:
 - **AST Validation**: Parse SQL queries using AST validation to prevent
-	injection attacks
+	injection attacks [Implemented: rule load-time validation]
 - **Function Whitelist**: Only allow SELECT statements with approved
 	functions (COUNT, SUM, AVG, MIN, MAX, LENGTH, SUBSTR, datetime
-	functions)
+	functions) [Implemented: enforced at rule load time]
 - **Prepared Statements**: Use prepared statements with read-only
-	database connections
+	database connections [Planned]
 - **Timeout Protection**: Complete within 30 seconds or timeout with
 	appropriate logging
 - **Audit Logging**: Reject forbidden constructs and log attempts for
@@ -246,6 +248,11 @@
 	indicators
 - **Large Dataset Support**: Streaming and pagination for result sets
 - **Audit Logging**: All queries and operations logged
+
+**Note on Database Access**: The current implementation grants
+daemoneye-cli full read/write database access. The architecture design
+calls for read-only access enforcement, which requires implementing a
+dedicated read-only database accessor.
 ## Cryptographic Security Framework
 ### Hash Function Selection
 **BLAKE3 for Audit Integrity**:
@@ -278,12 +285,13 @@
 - **Tamper Evidence**: Any modification to historical entries
 	invalidates the entire chain
 - **Inclusion Proofs**: Cryptographic proof that specific entries exist
-	in the ledger
+	in the ledger [In Progress — stub currently returns empty vec]
 - **Checkpoint Signatures**: Optional Ed25519 signatures for external
 	verification
 - **Forward Security**: New entries don't compromise historical integrity
 - **Append-Only**: Monotonic sequence numbers for all entries
 - **BLAKE3 Hashing**: Fast, cryptographically secure hash computation
+	[Implemented]
 - **Millisecond Precision**: Proper ordering and millisecond-precision timestamps
 **Implementation Details**:
 ```rust
@@ -373,23 +381,23 @@
 ### Business Tier Data Protection Features
 **Centralized Data Management**:
 - **Security Center**: Centralized aggregation and management of data from multiple agents
-- **mTLS Authentication**: Mutual TLS with certificate chain validation for secure agent connections
-- **Certificate Management**: Automated certificate provisioning and rotation
+- **mTLS Authentication**: Mutual TLS with certificate chain validation for secure agent connections [Planned]
+- **Certificate Management**: Automated certificate provisioning and rotation [Planned]
 - **Role-Based Access Control**: Granular permissions for different user roles
 **Enhanced Data Export**:
 - **Standard Format Support**: CEF (Common Event Format), structured JSON, and STIX-lite exports
 - **SIEM Integration**: Native connectors for Splunk, Elasticsearch, and Kafka
 - **Data Portability**: Comprehensive export capabilities for data migration and analysis
 **Code Signing and Integrity**:
-- **Signed Installers**: MSI installers for Windows and DMG packages for macOS with valid code signing certificates
-- **Enterprise Deployment**: Proper metadata for enterprise deployment tools
-- **Security Validation**: Operating system security validation without warnings
+- **Signed Installers**: MSI installers for Windows and DMG packages for macOS with valid code signing certificates [Planned]
+- **Enterprise Deployment**: Proper metadata for enterprise deployment tools [Planned]
+- **Security Validation**: Operating system security validation without warnings [Planned]
 ### Enterprise Tier Data Protection Features
 **Advanced Cryptographic Security**:
-- **SLSA Level 3 Provenance**: Complete software supply chain attestation
-- **Cosign Signatures**: Hardware security module-backed code signing
+- **SLSA Level 3 Provenance**: Complete software supply chain attestation [Planned]
+- **Cosign Signatures**: Hardware security module-backed code signing [Planned]
 - **Software Bill of Materials (SBOM)**: Complete dependency and component inventory
-- **Signature Verification**: Mandatory signature verification before execution
+- **Signature Verification**: Mandatory signature verification before execution [Planned]
 **Federated Data Architecture**:
 - **Multi-Tier Security Centers**: Hierarchical data aggregation across geographic regions
 - **Federated Storage**: Distributed data storage with local and global aggregation
@@ -428,7 +436,7 @@
 - **Centralized Audit Logs**: Aggregated audit logs from multiple agents
 - **Automated Compliance Reporting**: Scheduled compliance reports and dashboards
 - **Data Retention Management**: Centralized data retention policy enforcement
-- **Audit Trail Integrity**: Cryptographic verification of audit log integrity across the fleet
+- **Audit Trail Integrity**: Cryptographic verification of audit log integrity across the fleet [Implemented: BLAKE3 hash-chaining; Merkle tree inclusion proofs in progress]
 **Enterprise Integration Compliance**:
 - **SIEM Integration**: Native compliance with major SIEM platforms (Splunk, Elasticsearch, QRadar)
 - **Standard Format Support**: CEF, STIX-lite, and other compliance-standard formats
@@ -445,10 +453,10 @@
 - **Advanced SIEM Integration**: Full STIX/TAXII support with compliance mappings
 - **Quarterly Threat Updates**: Automated deployment of curated threat intelligence rule packs
 **Hardened Security and Supply Chain**:
-- **SLSA Level 3 Provenance**: Complete software supply chain attestation
-- **Cosign Signatures**: Hardware security module-backed code signing
+- **SLSA Level 3 Provenance**: Complete software supply chain attestation [Planned]
+- **Cosign Signatures**: Hardware security module-backed code signing [Planned]
 - **Software Bill of Materials (SBOM)**: Complete dependency and component inventory
-- **Supply Chain Security**: End-to-end supply chain security verification
+- **Supply Chain Security**: End-to-end supply chain security verification [Planned]
 **FISMA Compliance**:
 - NIST SP 800-53 security controls implementation
 - Risk assessment and authorization processes
Security Design Overview
View Changes
@@ -132,15 +132,15 @@
 ### daemoneye-agent (Detection Orchestrator)
 **Security Role**: User-space detection engine with network alerting capabilities
 **Security Features**:
-- **SQL Injection Prevention**: AST-based query validation with whitelist functions
-- **Sandboxed Execution**: Read-only database connections for rule execution
+- **SQL Injection Prevention**: [Implemented] AST-based query validation at rule load time with whitelist functions
+- **Sandboxed Execution**: [Planned] Read-only database connections for rule execution (detection engine currently uses pattern matching, not SQL-based execution)
 - **Resource Limits**: Timeout and memory constraints on detection rules
 - **Multi-Channel Alerting**: Circuit breaker pattern for reliable delivery
 - **Audit Trail**: Comprehensive logging of all detection activities
 **SQL Security Implementation**:
-- AST Validation: Parse SQL queries using AST validation to prevent injection attacks
-- Function Whitelist: Only allow SELECT statements with approved functions (COUNT, SUM, AVG, MIN, MAX, LENGTH, SUBSTR, datetime functions)
-- Prepared Statements: Use prepared statements with read-only database connections
+- AST Validation: [Implemented] Parse SQL queries at rule load time using sqlparser to prevent injection attacks
+- Function Whitelist: [Implemented] Only allow SELECT statements with approved functions (COUNT, SUM, AVG, MIN, MAX, LENGTH, SUBSTR, datetime functions) during rule validation
+- Prepared Statements: [Planned] Use prepared statements with read-only database connections (execution-time enforcement not yet implemented — detection engine currently uses pattern matching)
 - Timeout Protection: Complete within 30 seconds or timeout with appropriate logging
 ```rust
 pub struct SqlValidator {
@@ -168,7 +168,7 @@
 ### daemoneye-cli (Operator Interface)
 **Security Role**: Secure query interface with no direct system access
 **Security Features**:
-- No Direct Database Access: All queries routed through daemoneye-agent
+- Full Read/Write Database Access: [Current] CLI currently has full read/write database access via DatabaseManager::new() (architecture violation — planned transition to read-only access)
 - Input Sanitization: Comprehensive validation of all user inputs
 - Safe SQL Execution: Prepared statements with parameter binding
 - Output Formats: Support JSON, human-readable table, and CSV output
@@ -197,7 +197,7 @@
 ### Merkle Tree Audit Ledger
 **Cryptographic Properties**:
 - **Tamper Evidence**: Any modification to historical entries invalidates the entire chain
-- **Inclusion Proofs**: Cryptographic proof that specific entries exist in the ledger
+- **Inclusion Proofs**: [In Progress] Cryptographic proof that specific entries exist in the ledger — BLAKE3 hash-chained audit ledger implemented; Merkle tree inclusion proof generation stubbed (returns empty vec in crypto.rs)
 - **Checkpoint Signatures**: Optional Ed25519 signatures for external verification
 - **Forward Security**: New entries don't compromise historical integrity
 - **Append-Only**: Monotonic sequence numbers for all entries
@@ -265,25 +265,25 @@
 **Access Controls** (All Tiers): Role-based access to different data classifications, audit logging of all data access, principle of least privilege.
 ### Business Tier Data Protection Features
 - **Security Center**: Centralized aggregation and management
-- **mTLS Authentication**: Mutual TLS with certificate chain validation
-- **Certificate Management**: Automated certificate provisioning and rotation
+- **mTLS Authentication**: [Planned] Mutual TLS with certificate chain validation
+- **Certificate Management**: [Planned] Automated certificate provisioning and rotation
 - **Standard Format Support**: CEF, structured JSON, and STIX-lite exports
 - **SIEM Integration**: Native connectors for Splunk, Elasticsearch, and Kafka
-- **Signed Installers**: MSI (Windows) and DMG (macOS) with valid code signing certificates
+- **Signed Installers**: [Planned] MSI (Windows) and DMG (macOS) with valid code signing certificates
 ### Enterprise Tier Data Protection Features
-- **SLSA Level 3 Provenance**: Complete software supply chain attestation
-- **Cosign Signatures**: Hardware security module-backed code signing
-- **Software Bill of Materials (SBOM)**: Complete dependency and component inventory
-- **Multi-Tier Security Centers**: Hierarchical data aggregation across geographic regions
-- **Data Sovereignty**: Regional data residency compliance
-- **STIX/TAXII Integration**: Automated threat intelligence feed consumption
+- **SLSA Level 3 Provenance**: [Planned] Complete software supply chain attestation
+- **Cosign Signatures**: [Planned] Hardware security module-backed code signing
+- **Software Bill of Materials (SBOM)**: [Planned] Complete dependency and component inventory
+- **Multi-Tier Security Centers**: [Planned] Hierarchical data aggregation across geographic regions
+- **Data Sovereignty**: [Planned] Regional data residency compliance
+- **STIX/TAXII Integration**: [Planned] Automated threat intelligence feed consumption
 ### Compliance Features
 **Core Compliance** (All Tiers):
 - **GDPR**: Data minimization, right to erasure, data portability, privacy by design
 - **SOC 2 Type II**: Comprehensive audit logging, access control documentation, incident response
 - **NIST Cybersecurity Framework**: Identify, Protect, Detect, Respond, Recover
 **Business Tier**: Centralized audit logs, automated compliance reporting, data retention management, SIEM integration compliance
-**Enterprise Tier**: NIST SP 800-53, ISO 27001, CIS Controls, FedRAMP, FISMA compliance, quarterly threat updates, SLSA Level 3 provenance
+**Enterprise Tier**: NIST SP 800-53, ISO 27001, CIS Controls, FedRAMP, FISMA compliance, quarterly threat updates, SLSA Level 3 provenance [Planned]
 ## Audit and Compliance Features
 ### Comprehensive Audit Logging
 **Structured Logging**: JSON format with consistent field naming, correlation IDs, millisecond-precision timestamps, configurable log levels, Prometheus-compatible metrics, HTTP health endpoints.
security_design_overview
View Changes
@@ -102,7 +102,7 @@
 
 - procmond runs in isolated process space with minimal privileges
 - daemoneye-agent operates in user space with restricted database access
-- daemoneye-cli has no direct system access, only IPC communication
+- daemoneye-cli has direct database access for queries (architecture constraint: read-only access planned but not yet enforced)[^46]
 
 **Network Isolation**:
 
@@ -186,17 +186,17 @@
 
 **Security Features**:
 
-- **SQL Injection Prevention**: AST-based query validation with whitelist functions[^14]
-- **Sandboxed Execution**: Read-only database connections for rule execution[^15]
+- **SQL Injection Prevention [Implemented]**: AST-based query validation with whitelist functions at rule load time[^14]
+- **Sandboxed Execution [Planned]**: Read-only database connections for rule execution[^15]
 - **Resource Limits**: Timeout and memory constraints on detection rules[^16]
 - **Multi-Channel Alerting**: Circuit breaker pattern for reliable delivery[^17]
 - **Audit Trail**: Comprehensive logging of all detection activities[^18]
 
 **SQL Security Implementation**:
 
-- **AST Validation**: Parse SQL queries using AST validation to prevent injection attacks[^14]
-- **Function Whitelist**: Only allow SELECT statements with approved functions (COUNT, SUM, AVG, MIN, MAX, LENGTH, SUBSTR, datetime functions)[^19]
-- **Prepared Statements**: Use prepared statements with read-only database connections[^15]
+- **AST Validation [Implemented]**: Parse SQL queries using AST validation to prevent injection attacks at rule load time[^14]
+- **Function Whitelist [Implemented]**: Only allow SELECT statements with approved functions (COUNT, SUM, AVG, MIN, MAX, LENGTH, SUBSTR, datetime functions) at rule validation[^19]
+- **Prepared Statements [Planned]**: Use prepared statements with read-only database connections for rule execution[^15]
 - **Timeout Protection**: Complete within 30 seconds or timeout with appropriate logging[^16]
 - **Audit Logging**: Reject forbidden constructs and log attempts for audit purposes[^18]
 
@@ -235,9 +235,9 @@
 
 **Security Features**:
 
-- **No Direct Database Access**: All queries routed through daemoneye-agent
+- **No Direct Database Access**: All queries routed through daemoneye-agent[^46]
 - **Input Sanitization**: Comprehensive validation of all user inputs
-- **Safe SQL Execution**: Prepared statements with parameter binding[^20]
+- **Safe SQL Execution [Planned]**: Prepared statements with parameter binding for rule execution[^20]
 - **Output Formats**: Support JSON, human-readable table, and CSV output[^21]
 - **Rule Management**: List, validate, test, and import/export detection rules[^22]
 - **Health Monitoring**: Display component status with color-coded indicators[^23]
@@ -283,11 +283,11 @@
 **Cryptographic Properties**:
 
 - **Tamper Evidence**: Any modification to historical entries invalidates the entire chain[^26]
-- **Inclusion Proofs**: Cryptographic proof that specific entries exist in the ledger[^26]
+- **Inclusion Proofs [In Progress]**: Cryptographic proof that specific entries exist in the ledger — `generate_inclusion_proof()` currently returns empty vec[^47]
 - **Checkpoint Signatures**: Optional Ed25519 signatures for external verification
 - **Forward Security**: New entries don't compromise historical integrity
 - **Append-Only**: Monotonic sequence numbers for all entries[^27]
-- **BLAKE3 Hashing**: Fast, cryptographically secure hash computation[^25]
+- **BLAKE3 Hashing [Implemented]**: Fast, cryptographically secure hash computation[^25]
 - **Millisecond Precision**: Proper ordering and millisecond-precision timestamps[^5]
 
 **Implementation Details**:
@@ -309,6 +309,7 @@
         self.merkle_tree.insert(leaf_hash).commit();
 
         // Generate inclusion proof
+        // Note: This is a stub — generate_inclusion_proof() returns empty vec in crypto.rs
         let proof = self
             .merkle_tree
             .proof(&[self.merkle_tree.leaves().len() - 1]);
@@ -428,12 +429,12 @@
 
 ### Enterprise Tier Data Protection Features
 
-**Advanced Cryptographic Security**:
-
-- **SLSA Level 3 Provenance**: Complete software supply chain attestation
-- **Cosign Signatures**: Hardware security module-backed code signing
-- **Software Bill of Materials (SBOM)**: Complete dependency and component inventory
-- **Signature Verification**: Mandatory signature verification before execution
+**Advanced Cryptographic Security [Planned]**:
+
+- **SLSA Level 3 Provenance [Planned]**: Complete software supply chain attestation
+- **Cosign Signatures [Planned]**: Hardware security module-backed code signing
+- **Software Bill of Materials (SBOM) [Planned]**: Complete dependency and component inventory
+- **Signature Verification [Planned]**: Mandatory signature verification before execution
 
 **Federated Data Architecture**:
 
@@ -513,12 +514,12 @@
 - **Advanced SIEM Integration**: Full STIX/TAXII support with compliance mappings
 - **Quarterly Threat Updates**: Automated deployment of curated threat intelligence rule packs
 
-**Hardened Security and Supply Chain**:
-
-- **SLSA Level 3 Provenance**: Complete software supply chain attestation
-- **Cosign Signatures**: Hardware security module-backed code signing
-- **Software Bill of Materials (SBOM)**: Complete dependency and component inventory
-- **Supply Chain Security**: End-to-end supply chain security verification
+**Hardened Security and Supply Chain [Planned]**:
+
+- **SLSA Level 3 Provenance [Planned]**: Complete software supply chain attestation
+- **Cosign Signatures [Planned]**: Hardware security module-backed code signing
+- **Software Bill of Materials (SBOM) [Planned]**: Complete dependency and component inventory
+- **Supply Chain Security [Planned]**: End-to-end supply chain security verification
 
 **FISMA Compliance**:
 
@@ -1585,6 +1586,10 @@
 
 ## Footnotes
 
+[^46]: CLI Database Access: The CLI currently has direct read/write database access via `DatabaseManager::new()`. Read-only database access enforcement is planned but not yet implemented (see SECURITY_AUDIT_2026-04-03.md CRITICAL-03).
+
+[^47]: Merkle Tree Inclusion Proofs: The `generate_inclusion_proof()` function is currently a stub that returns an empty `Vec` (see SECURITY_AUDIT_2026-04-03.md MEDIUM-03). The system implements BLAKE3 hash-chained audit logging, but full Merkle tree inclusion proof generation is in progress.
+
 ## Conclusion
 
 DaemonEye's security design provides a comprehensive framework for secure process monitoring and threat detection. The three-component architecture with strict privilege separation, cryptographic integrity verification, and comprehensive audit logging ensures that the system meets enterprise security requirements while maintaining high performance and operational efficiency.


@coderabbitai coderabbitai bot left a comment


Warning

CodeRabbit couldn't request changes on this pull request because it doesn't have sufficient GitHub permissions.

Please grant CodeRabbit Pull requests: Read and write permission and re-run the review.


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.mdformat.toml:
- Around line 16-23: The project installs the mdformat-wikilink plugin via
mise.toml but .mdformat.toml’s extensions array does not include "wikilink",
causing a local/CI mismatch; to fix, remove the unused "--with
mdformat-wikilink" argument from mise.toml (or alternatively add "wikilink" to
the extensions list in .mdformat.toml) so the installed plugins match the
"extensions" configuration and prevent formatting drift.

In `@Cargo.toml`:
- Line 54: Update the Cargo.toml dependency entries to explicitly control
features for cryptographic hardening: change the blake3 entry to explicitly
disable default features and enable the std feature (e.g., blake3 with version
"1.8.4", default-features = false, features = ["std"]) so intent is explicit;
fix the sha2 entry (version 0.11.0) to use the correct alloc feature instead of
std (e.g., sha2 = { version = "0.11.0", default-features = false, features =
["alloc"] }) so compilation and security expectations match the 0.11.x API.

In `@SECURITY_AUDIT_2026-04-03.md`:
- Line 59: Update CRITICAL-02 in SECURITY_AUDIT_2026-04-03.md to clarify that
the workspace setting `panic = "deny"` does not prevent runtime panics caused by
invalid UTF‑8 slicing (e.g., splitting a multibyte correlation ID at byte offset
256) and therefore the finding should focus on slicing safety and lint
inheritance; explicitly state that UTF‑8 boundary panics are runtime errors not
caught by that lint, call out the affected component (`eventbus` /
`daemoneye-agent`), and recommend concrete mitigations such as validating
correlation ID byte/char boundaries, using safe APIs (e.g., checked slicing via
get(..) or char-based truncation), or normalizing input before slicing so the
documentation is technically precise and operationally actionable.
- Around line 338-340: The fenced code block containing the CLI example "app
--password=secret123 --verbose" is unlabeled; change the opening backticks from
``` to ```text (i.e., add the language identifier "text") so the block is
properly labeled for Markdown linting and hygiene.
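
The char-boundary issue flagged in CRITICAL-02 can be sketched with a small standalone helper. This is an illustration only — the function name `truncate_utf8` is hypothetical and is not the project's actual `CorrelationMetadata` code:

```rust
/// Truncate `s` to at most `max_bytes` without splitting a UTF-8 code point.
/// Hypothetical helper illustrating the reviewer's suggested mitigation.
fn truncate_utf8(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    // Walk back from the byte limit to the nearest char boundary.
    // Terminates because index 0 is always a boundary.
    let mut end = max_bytes;
    while end > 0 && !s.is_char_boundary(end) {
        end -= 1;
    }
    &s[..end]
}

fn main() {
    // A 4-byte emoji straddling the cut point must not cause a panic.
    let id = "abc😀def";
    assert_eq!(truncate_utf8(id, 4), "abc"); // byte 4 falls mid-emoji
    assert_eq!(truncate_utf8(id, 7), "abc😀");
    assert_eq!(truncate_utf8(id, 64), id);
}
```

Direct slicing (`&s[..256]`) panics at runtime on a multibyte boundary, which is exactly the failure a `panic = "deny"` lint setting cannot catch.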

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 21715898-3825-466a-b6e4-440789e333e7

📥 Commits

Reviewing files that changed from the base of the PR and between 5d81bcc and f0da742.

⛔ Files ignored due to path filters (12)
  • .agents/skills/.gitignore is excluded by none and included by none
  • .claude/skills/.gitignore is excluded by none and included by none
  • .codex/skills/.gitignore is excluded by none and included by none
  • .cursor/rules/.gitignore is excluded by none and included by none
  • .cursor/skills/.gitignore is excluded by none and included by none
  • .gemini/skills/.gitignore is excluded by none and included by none
  • .gitignore is excluded by none and included by none
  • .tessl/.gitignore is excluded by none and included by none
  • .tessl/RULES.md is excluded by none and included by none
  • .vscode/settings.json is excluded by none and included by none
  • Cargo.lock is excluded by !**/*.lock and included by none
  • mise.lock is excluded by !**/*.lock and included by none
📒 Files selected for processing (10)
  • .github/skills/.gitignore
  • .kiro/specs/daemoneye-core-monitoring/tasks.md
  • .kiro/steering/structure.md
  • .mdformat.toml
  • .pre-commit-config.yaml
  • AI_POLICY.md
  • Cargo.toml
  • SECURITY_AUDIT_2026-04-03.md
  • mise.toml
  • tessl.json
💤 Files with no reviewable changes (1)
  • .github/skills/.gitignore


Copilot AI left a comment


Pull request overview

This PR updates the project’s developer tooling and Rust dependencies, adds a repository AI usage policy, consolidates ignore rules, and checks in a comprehensive security audit report to document current findings and remediation guidance.

Changes:

  • Updated tooling configs and versions (mise, pre-commit, mdformat, tessl) and added VS Code bun runtime setting.
  • Refreshed Rust workspace dependencies via Cargo.toml/Cargo.lock updates (e.g., tokio, redb, sha2, proptest, uuid).
  • Added SECURITY_AUDIT_2026-04-03.md plus small markdown escaping fixes in steering/spec docs and .gitignore consolidation.

Reviewed changes

Copilot reviewed 19 out of 22 changed files in this pull request and generated 1 comment.

Show a summary per file
File Description
tessl.json Refresh Tessl dependencies/tiles configuration.
SECURITY_AUDIT_2026-04-03.md Adds a full security audit report with prioritized findings and remediations.
mise.toml Bumps pinned tooling versions and updates mdformat tool invocation args.
mise.lock Regenerates tool lockfile with updated versions/checksums/provenance.
Cargo.toml Updates workspace dependency versions (notably major bump for redb).
Cargo.lock Updates resolved dependency graph to match workspace bumps.
AI_POLICY.md Adds an AI usage/accountability policy for contributions.
.vscode/settings.json Adds a Bun runtime path for editor tooling.
.tessl/RULES.md Updates Tessl rules references to new tile(s).
.tessl/.gitignore Removes now-redundant Tessl tiles ignore (covered at root).
.pre-commit-config.yaml Switches mdformat hook repo/version and updates hook configuration/deps.
.mdformat.toml Updates mdformat extensions list for the new plugin set.
.kiro/steering/structure.md Fixes bracket escaping for mdformat compatibility.
.kiro/specs/daemoneye-core-monitoring/tasks.md Fixes bracket escaping for mdformat compatibility.
.gitignore Adds broader local-file ignores and a .context ignore rule.
.github/skills/.gitignore Removes per-directory Tessl-managed ignore file as part of consolidation.
.gemini/skills/.gitignore Removes per-directory Tessl-managed ignore file as part of consolidation.
.cursor/skills/.gitignore Removes per-directory Tessl-managed ignore file as part of consolidation.
.cursor/rules/.gitignore Removes per-directory Tessl-managed ignore file as part of consolidation.
.codex/skills/.gitignore Removes per-directory Tessl-managed ignore file as part of consolidation.
.claude/skills/.gitignore Removes per-directory Tessl-managed ignore file as part of consolidation.
.agents/skills/.gitignore Removes per-directory Tessl-managed ignore file as part of consolidation.

- Reduce HPEB default channel capacity from 1M to 8192 (#15)
- Replace broker sequence counter Mutex with AtomicU64 (#39)
- Remove redundant duplicate DatabaseStats fields (#47)
- Add timeout-minutes to CI jobs (#59)

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
…uctions

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
- Clarify collector-core role and remove unused eventbus-integration
  feature flag, ungating integration tests (#17)
- Fix architecture doc claiming single crate instead of 6-member
  workspace (#57)
- Mark Docker docs as aspirational with build-from-source guide (#58)
- Set wildcards = 'deny' in deny.toml (#65)

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
- Deduplicate WAL write() and write_with_type() into shared
  write_entry() method (#20)
- Fix silent event type fallback to Start — unknown types now return
  error instead of misclassifying events (#11)
- Fix swallowed lock poisoning errors in TriggerManager — all silent
  if-let-Ok patterns replaced with lock_or_err! + error propagation (#50)
- Remove redundant DeliveryResult success/error_message fields —
  Result wrapper is the sole success/failure signal (#75)

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
…ndencies

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
- Fix UTF-8 byte slicing panic in CorrelationMetadata — use
  char-boundary-safe truncation with tests for emoji/CJK (#1)
- Set restricted permissions (0o600) on eventbus Unix socket after
  creation, parent dir 0o700 (#3)
- Fix audit hash timestamp collision — use RFC3339 for sub-second
  precision (#32)
- Fix command sanitizer to parse --flag=value syntax (#35)
- Deduplicate hash computation into shared
  compute_entry_hash_input() method (#48)

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
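The socket-permission hardening described in #3 can be illustrated with a short Unix-only sketch. The path and function name here are hypothetical — this is not the eventbus's actual code, just the bind-then-restrict pattern the commit describes:

```rust
use std::fs;
use std::os::unix::fs::PermissionsExt;
use std::os::unix::net::UnixListener;
use std::path::Path;

// Bind a Unix socket, then restrict the parent dir to 0o700 and the
// socket file to owner-only 0o600, returning the resulting mode bits.
fn bind_restricted(sock: &Path) -> std::io::Result<u32> {
    if let Some(dir) = sock.parent() {
        fs::create_dir_all(dir)?;
        fs::set_permissions(dir, fs::Permissions::from_mode(0o700))?;
    }
    let _ = fs::remove_file(sock); // clear a stale socket from a prior run
    let _listener = UnixListener::bind(sock)?;
    // Tighten the socket itself *after* creation, as the commit notes.
    fs::set_permissions(sock, fs::Permissions::from_mode(0o600))?;
    Ok(fs::metadata(sock)?.permissions().mode() & 0o777)
}

fn main() -> std::io::Result<()> {
    let sock = std::env::temp_dir().join("eventbus-demo").join("demo.sock");
    assert_eq!(bind_restricted(&sock)?, 0o600);
    Ok(())
}
```

Restricting after `bind()` leaves a brief window at default umask; a production implementation would typically also tighten the umask or create the socket inside the pre-restricted 0o700 directory, as done here.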
@coderabbitai coderabbitai bot removed documentation Improvements or additions to documentation dependencies Pull requests that update a dependency file labels Apr 4, 2026
@dosubot dosubot bot added the size:XXL This PR changes 1000+ lines, ignoring generated files. label Apr 4, 2026
@coderabbitai coderabbitai bot added documentation Improvements or additions to documentation dependencies Pull requests that update a dependency file labels Apr 4, 2026
Add documented solutions for vulnerabilities and performance issues
found during workspace-wide code review: UTF-8 panic (CWE-135),
socket permissions (CWE-732), hash ambiguity (CWE-345), config
validation (CWE-20), mutex contention, deep clone fan-out, and
silent event type fallback.

Add docs/solutions/ to AGENTS.md workspace structure for
discoverability.

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
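The hash-ambiguity class of bug (CWE-345) comes from concatenating fields directly into the hash pre-image. A hedged sketch of the length-prefixed encoding a helper like `compute_entry_hash_input()` would use — illustrative only, not the project's implementation:

```rust
// Illustrative only: build an unambiguous hash pre-image by length-prefixing
// each field, so ("ab","c") and ("a","bc") can never produce the same bytes.
fn hash_input(fields: &[&[u8]]) -> Vec<u8> {
    let mut out = Vec::new();
    for f in fields {
        // 8-byte little-endian length prefix before each field's bytes.
        out.extend_from_slice(&(f.len() as u64).to_le_bytes());
        out.extend_from_slice(f);
    }
    out
}

fn main() {
    // Naive concatenation would make these two inputs collide.
    assert_ne!(hash_input(&[b"ab", b"c"]), hash_input(&[b"a", b"bc"]));
}
```

The same ambiguity motivates the RFC3339 timestamp fix in #32: fixed-format, sub-second-precision fields keep distinct entries from serializing to identical pre-images.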

Copilot AI left a comment


Pull request overview

Copilot reviewed 59 out of 62 changed files in this pull request and generated 5 comments.


@coderabbitai coderabbitai bot left a comment


Warning

CodeRabbit couldn't request changes on this pull request because it doesn't have sufficient GitHub permissions.

Please grant CodeRabbit Pull requests: Read and write permission and re-run the review.


Actionable comments posted: 11

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (10)
daemoneye-lib/src/lib.rs (1)

16-23: 🧹 Nitpick | 🔵 Trivial

Suggest documenting core modules for consistency.

Feature-gated modules now have clear documentation, but core modules (config, crypto, ipc, models, proto, storage, telemetry) remain undocumented. Adding similar module-level docs would improve discoverability and maintain consistency across the public API surface.

As per coding guidelines: "Write comprehensive rustdoc for all public APIs with examples."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@daemoneye-lib/src/lib.rs` around lines 16 - 23, Public core modules (config,
crypto, ipc, models, proto, storage, telemetry) lack module-level rustdoc; add
module-level documentation comments (e.g., //! ... or /// at the top of each
module file) that describe purpose, public types/functions, and a short usage
example to match the existing feature-gated docs and the guideline "Write
comprehensive rustdoc for all public APIs with examples." Ensure each module
(config, crypto, ipc, models, proto, storage, telemetry) gets a brief summary
sentence, one or two example snippets demonstrating common usage, and any
important notes about public types or invariants.
procmond/src/wal.rs (3)

623-656: ⚠️ Potential issue | 🔴 Critical

Do not acknowledge WAL write success before durability is enforced.

write_all only guarantees data reaches the OS buffer. Returning success at Line 656 without sync_data/sync_all can lose acknowledged events on crash/power loss, which breaks WAL recovery guarantees.

Suggested durability fix
         state
             .file
             .write_all(&serialized)
             .await
             .map_err(WalError::Io)?;
+
+        // Ensure acknowledged WAL writes survive process/OS crashes.
+        state.file.sync_data().await.map_err(WalError::Io)?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@procmond/src/wal.rs` around lines 623 - 656, The code currently returns
success after write_all (which only reaches the OS buffer); to enforce
durability, call an async file sync (e.g.,
state.file.sync_data().await.map_err(WalError::Io) or sync_all if metadata must
be durable) after the writes and before returning the sequence number (and
before any ack to callers); ensure this sync is performed in the same critical
section that updates state.size/min_sequence/max_sequence so ordering is
preserved, and keep the rotation call via rotate_file_internal(&mut
state).await? after the sync (or ensure rotate_file_internal itself syncs the
closed file) so no acknowledged entry can be lost.

610-617: ⚠️ Potential issue | 🟠 Major

Guard frame-length conversion instead of narrowing cast.

Line 611 narrows serialized.len() to u32 with as. If the payload ever exceeds u32::MAX, the length prefix truncates and the WAL framing becomes corrupted.

Safe conversion with explicit overflow handling
-        #[allow(clippy::as_conversions)] // Safe: serialized len is bounded by frame size
-        let length = serialized.len() as u32;
+        let length = u32::try_from(serialized.len()).map_err(|_| {
+            WalError::Serialization(format!(
+                "WAL entry too large: {} bytes exceeds u32 length prefix",
+                serialized.len()
+            ))
+        })?;
         let length_bytes = length.to_le_bytes();
 
-        // Calculate size increment safely
-        let size_increment = length_bytes.len().saturating_add(serialized.len());
-        #[allow(clippy::as_conversions)] // Safe: total size is bounded by max frame size
-        let size_increment_u64 = size_increment as u64;
+        let size_increment = length_bytes
+            .len()
+            .checked_add(serialized.len())
+            .ok_or_else(|| WalError::Serialization("WAL entry size overflow".to_string()))?;
+        let size_increment_u64 = u64::try_from(size_increment)
+            .map_err(|_| WalError::Serialization("WAL entry size overflow".to_string()))?;

As per coding guidelines, "Use checked_*, saturating_*, or explicit wrapping_* operations for security-sensitive integer calculations."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@procmond/src/wal.rs` around lines 610 - 617, The narrowing cast of
serialized.len() to u32 (the variable length) can truncate huge payloads;
replace the unchecked "as" with an explicit checked conversion (e.g. use
u32::try_from(serialized.len()) or serialized.len().try_into()) and
propagate/return an error if the conversion fails instead of truncating;
likewise compute the size increment using wider types (cast lengths to u64 first
or use checked_add) so size_increment_u64 is derived from checked/64-bit-safe
math; update references to length, length_bytes, size_increment and
size_increment_u64 accordingly and remove the unsafe narrowing cast and its
clippy allow.

599-654: ⚠️ Potential issue | 🟠 Major

Fix await_holding_lock lint violation in WAL write path.

Line 620 acquires file_state mutex then awaits multiple I/O operations—file writes (lines 625–634) and rotation (line 653), which itself performs open() and restrict_permissions() syscalls—while holding the lock. This violates workspace lints configured as await_holding_lock = "deny" and would fail the zero-warnings policy.

The #[allow(clippy::significant_drop_tightening)] is masking a different lint; the actual violation (await_holding_lock) is not suppressed. Either:

  • Refactor to release the lock before I/O and accept the atomicity trade-off for rotation
  • Use a channel-based approach or write to a temporary path with atomic swap
  • Add explicit #[allow(clippy::await_holding_lock)] with detailed justification (audit-worthy decision)

The current pattern causes head-of-line blocking under concurrent write load and violates explicit workspace configuration.
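A sketch of the first option's shape, using `std::sync::Mutex` and `std::mem::take` so it stays self-contained; in the real wal.rs the lock would be `tokio::sync::Mutex` and step 2 would be the `.await` that must not run while the guard is alive. `FileState` and `write_entry` here are illustrative stand-ins, and a real refactor must also handle a second writer observing the taken-out handle (e.g., an `Option<File>` with a wait, or a dedicated writer task).

```rust
use std::mem;
use std::sync::Mutex;

// Illustrative stand-in for the WAL's per-file state.
#[derive(Default)]
struct FileState {
    file: Vec<u8>, // stands in for the open WAL file handle
    size: u64,
}

fn write_entry(state: &Mutex<FileState>, serialized: &[u8]) -> u64 {
    // 1. Lock briefly, take ownership of the handle, drop the guard.
    let mut file = {
        let mut guard = state.lock().expect("lock poisoned");
        mem::take(&mut guard.file)
    };

    // 2. Perform the (potentially slow) I/O without holding the lock.
    file.extend_from_slice(serialized);

    // 3. Re-lock only to return the handle and update bookkeeping.
    let mut guard = state.lock().expect("lock poisoned");
    guard.file = file;
    guard.size += u64::try_from(serialized.len()).expect("usize fits in u64");
    guard.size
}
```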

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@procmond/src/wal.rs` around lines 599 - 654, The write_entry function
currently holds self.file_state across await points (writes and a potential call
to rotate_file_internal), violating the await_holding_lock lint; fix by
releasing the mutex before performing async I/O: serialize the entry and compute
length_bytes and size_increment outside the lock, then temporarily take
ownership of the file handle from state (e.g., std::mem::take on state.file or
move the file into a local variable) so you can perform file.write_all awaits
without holding self.file_state, then re-lock self.file_state to update
state.size, state.min_sequence/state.max_sequence and to decide/perform rotation
(calling rotate_file_internal while holding the lock only if necessary), or
alternatively implement a background writer channel that accepts serialized
frames and updates state atomically; if you cannot refactor now, add a scoped
#[allow(clippy::await_holding_lock)] on write_entry with a concise, audit-ready
justification referencing write_entry, file_state, rotate_file_internal and
rotation_threshold.
docs/src/deployment/installation.md (3)

417-417: ⚠️ Potential issue | 🟠 Major

Fix systemd service config path to match code defaults.

The systemd service file references /etc/daemoneye/config.yaml, but the code expects configs in /var/lib/evilbitlabs/daemoneye/configs/. This path mismatch will cause service startup to fail.

🔧 Update systemd service config path
 [Service]
 Type=notify
 User=daemoneye
 Group=daemoneye
-ExecStart=/usr/local/bin/daemoneye-agent --config /etc/daemoneye/config.yaml
+ExecStart=/usr/local/bin/daemoneye-agent --config /var/lib/evilbitlabs/daemoneye/configs/config.yaml
 ExecReload=/bin/kill -HUP $MAINPID

Also update line 451 to set ownership on the correct directory:

-sudo chown -R daemoneye:daemoneye /etc/daemoneye
+sudo chown -R daemoneye:daemoneye /var/lib/evilbitlabs/daemoneye
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/src/deployment/installation.md` at line 417, Update the systemd service
ExecStart to point to the config path the code expects (replace
/etc/daemoneye/config.yaml with the code-default directory under
/var/lib/evilbitlabs/daemoneye/configs/, e.g., the appropriate config file
inside that configs/ directory) and also change the ownership command (the
chown/chmod step referenced later) to set ownership on
/var/lib/evilbitlabs/daemoneye/configs/ instead of /etc/daemoneye so the service
and filesystem permissions match the code defaults.

122-124: ⚠️ Potential issue | 🔴 Critical

CRITICAL: cargo build creates different binary names than documented.

Lines 122-124 build --bin daemoneye-agent and --bin daemoneye-cli, but without explicit [[bin]] sections in their Cargo.toml files, the actual binaries are daemoneye_agent and daemoneye_cli (underscores).

The copy command at line 134-135 references the hyphenated names and will fail with "No such file or directory."

🔧 Fix binary names in build instructions
 # Build in release mode
 cargo build --release

 # Install built binaries
-sudo cp target/release/procmond target/release/daemoneye-agent target/release/daemoneye-cli /usr/local/bin/
-sudo chmod +x /usr/local/bin/procmond /usr/local/bin/daemoneye-agent /usr/local/bin/daemoneye-cli
+sudo cp target/release/procmond /usr/local/bin/procmond
+sudo cp target/release/daemoneye_agent /usr/local/bin/daemoneye-agent
+sudo cp target/release/daemoneye_cli /usr/local/bin/daemoneye-cli
+sudo chmod +x /usr/local/bin/procmond /usr/local/bin/daemoneye-agent /usr/local/bin/daemoneye-cli

Apply the same fix to all build-from-source sections:

  • Lines 230-233 (Ubuntu/Debian)
  • Lines 261-263 (RHEL/CentOS)
  • Lines 285-288 (Arch Linux)
  • Lines 315-318 (macOS)
  • Lines 370-372 (Windows)

Also applies to: 134-135

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/src/deployment/installation.md` around lines 122 - 124, The docs
currently reference hyphenated binary names (daemoneye-agent, daemoneye-cli) but
Cargo produces underscored binaries (daemoneye_agent, daemoneye_cli) unless
[[bin]] is set; update the build and copy instructions to use the actual binary
names (daemoneye_agent and daemoneye_cli) instead of
daemoneye-agent/daemoneye-cli, and apply this change consistently in the
specified build-from-source sections (the Clone and Build block and the
Ubuntu/Debian, RHEL/CentOS, Arch Linux, macOS, and Windows build sections) so
the copy commands and --bin references match Cargo output.

90-97: ⚠️ Potential issue | 🔴 Critical

Fix directory paths—installation creates wrong locations, service startup will fail.

The code expects configs in /var/lib/evilbitlabs/daemoneye/configs and the socket in /var/lib/evilbitlabs/daemoneye/daemoneye-eventbus.sock (both via unidirs::ServiceDirs). The installation doc creates /etc/daemoneye for configs and /var/lib/daemoneye for the socket, causing service failures.

The database path /var/lib/daemoneye/processes.db is correct.

Fix: Update directory structure
 # Create system directories
-sudo mkdir -p /etc/daemoneye
-sudo mkdir -p /var/lib/daemoneye
+sudo mkdir -p /var/lib/evilbitlabs/daemoneye/configs
+sudo mkdir -p /var/lib/daemoneye
 sudo mkdir -p /var/log/daemoneye

 # Set ownership
-sudo chown -R $USER:$USER /etc/daemoneye
+sudo chown -R $USER:$USER /var/lib/evilbitlabs/daemoneye
 sudo chown -R $USER:$USER /var/lib/daemoneye
 sudo chown -R $USER:$USER /var/log/daemoneye

Apply same fix to lines 235, 266, 291, 321 (Ubuntu/Debian, RHEL/CentOS, Arch, macOS).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/src/deployment/installation.md` around lines 90 - 97, The docs create
the wrong directories; update the installation steps to create and chown the
paths expected by unidirs::ServiceDirs: replace /etc/daemoneye and
/var/lib/daemoneye with /var/lib/evilbitlabs/daemoneye and ensure a configs
subdir (/var/lib/evilbitlabs/daemoneye/configs) is created, plus the log dir
(/var/log/evilbitlabs/daemoneye); change the mkdir -p and chown -R commands
accordingly and apply the same edits wherever similar blocks appear (lines noted
in the review) so the socket path
(/var/lib/evilbitlabs/daemoneye/daemoneye-eventbus.sock) and database path
continue to be correct for ServiceDirs.
collector-core/tests/eventbus_performance_comparison.rs (1)

309-311: ⚠️ Potential issue | 🟠 Major

Keep this perf suite out of the default Unix test path.

With the file-level feature gate gone, these are now ordinary #[tokio::test] cases with 10–25s timeouts, real socket setup, and performance-sensitive thresholds. That will make routine cargo test runs slow and noisy, and failures will reflect host load as much as regressions. Put this suite back behind a feature or mark it #[ignore] and run it from a dedicated perf lane.

Also applies to: 426-428, 510-512, 583-585, 814-816, 893-895

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@collector-core/tests/eventbus_performance_comparison.rs` around lines 309 -
311, The performance tests (e.g., the async test function
test_throughput_comparison) are currently ordinary #[tokio::test] cases and
should not run on every cargo test; update each perf-oriented test (including
test_throughput_comparison and the other perf tests mentioned) to either be
behind a cargo feature gate (e.g., #[cfg(feature = "perf_tests")] with
corresponding #[cfg_attr(..., tokio::test)]) or mark them #[ignore] (e.g.,
#[tokio::test] #[ignore]) so they are excluded from default test runs; pick one
approach and apply it consistently to the tests referenced in the comment to
keep routine test runs fast and stable.
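The two gating options the review offers can be sketched as below. `measure_throughput` is a hypothetical stand-in for the real benchmark body in eventbus_performance_comparison.rs, and the feature-gate variant assumes a `perf_tests` feature declared in Cargo.toml.

```rust
/// Illustrative benchmark body: events "delivered" per run.
pub fn measure_throughput(batches: u32, batch_size: u32) -> u64 {
    u64::from(batches) * u64::from(batch_size)
}

// Option 1: keep the test compiled but skipped by default;
// run it explicitly with `cargo test -- --ignored` in a perf lane.
#[test]
#[ignore = "perf: run via `cargo test -- --ignored` in a dedicated lane"]
fn test_throughput_comparison() {
    assert!(measure_throughput(10, 1_000) > 0);
}

// Option 2: compile the whole suite only when a feature is enabled:
//   #[cfg(feature = "perf_tests")]
//   mod perf { /* #[tokio::test] cases here */ }
```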
collector-core/src/high_performance_event_bus.rs (2)

279-280: 🧹 Nitpick | 🔵 Trivial

Remove unused local counters instead of silencing.

delivered and dropped (lines 279-280) are computed but never read; they are merely suppressed at line 369. Since the atomic counters handle all tracking, remove these locals to reduce noise:

♻️ Proposed cleanup
-                    let mut delivered = 0;
-                    let mut dropped = 0;
-
                     // Now process subscribers without holding the lock
                     for (subscriber_id, sender, backpressure_strategy) in subscribers_to_notify {
                         // Send to subscriber respecting backpressure strategy
                         match backpressure_strategy {
                             BackpressureStrategy::Blocking => {
                                 let mut sent = false;
                                 let mut retries = 0;
                                 let mut backoff_delay = Duration::from_micros(10);

                                 while !sent && retries < config_clone.max_blocking_retries {
                                     match sender.try_send(Arc::clone(&arc_bus_event)) {
                                         Ok(_) => {
-                                            delivered += 1;
                                             delivery_counter_clone.fetch_add(1, Ordering::Relaxed);
                                             sent = true;
                                         }
                                         Err(TrySendError::Full(_)) => {
                                             retries += 1;
                                             if shutdown_signal_clone.load(Ordering::Relaxed) {
-                                                dropped += 1;
                                                 drop_counter_clone.fetch_add(1, Ordering::Relaxed);
                                                 // ... rest unchanged

(Apply similar removal to all delivered += 1 / dropped += 1 increments in both Blocking and DropNewest branches, then remove lines 365-369 entirely.)

As per coding guidelines, "Unused code (dead code, commented-out code, debug artifacts)" should be removed.

Also applies to: 365-369

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@collector-core/src/high_performance_event_bus.rs` around lines 279 - 280,
Remove the unused local counters delivered and dropped and all their increments
in the Blocking and DropNewest branches of the event dispatch logic: delete the
let-mut declarations for delivered/dropped, remove every delivered += 1 /
dropped += 1, and drop the final suppression/assignment that references them
(the let _ = (delivered, dropped) at the end). The atomic counters already
perform all tracking, so ensure only the atomics remain and no other code
depends on delivered/dropped.

529-544: 🧹 Nitpick | 🔵 Trivial

Clarify atomic counter semantics vs. flush pattern.

flush_atomic_counters resets the counters and accumulates them into stats, but it is dead code. Meanwhile, get_statistics reads the counters without resetting them and overwrites the stats values.

If both patterns coexist, calling flush_atomic_counters followed by get_statistics would return stale zeros. Either:

  1. Remove flush_atomic_counters if truly unused, or
  2. Make get_statistics aware of flushed totals if periodic flushing is planned.

Current implementation works for read-only stats queries but will break if flushing is enabled later.
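If option 2 is chosen, a swap-and-accumulate pattern keeps flushes and on-demand reads consistent. A minimal sketch, assuming nothing about the real field names in high_performance_event_bus.rs (`delivered_live` and `delivered_total` are illustrative):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

struct Counters {
    delivered_live: AtomicU64,  // incremented on the hot path
    delivered_total: AtomicU64, // sum of everything flushed so far
}

impl Counters {
    fn new() -> Self {
        Self {
            delivered_live: AtomicU64::new(0),
            delivered_total: AtomicU64::new(0),
        }
    }

    fn record_delivery(&self) {
        self.delivered_live.fetch_add(1, Ordering::Relaxed);
    }

    /// Periodic flush: zero the live counter, keep the running sum.
    fn flush(&self) {
        let drained = self.delivered_live.swap(0, Ordering::Relaxed);
        self.delivered_total.fetch_add(drained, Ordering::Relaxed);
    }

    /// On-demand reads stay correct whether or not a flush has run.
    fn delivered(&self) -> u64 {
        self.delivered_total.load(Ordering::Relaxed)
            + self.delivered_live.load(Ordering::Relaxed)
    }
}
```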

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@collector-core/src/high_performance_event_bus.rs` around lines 529 - 544, The
code currently has two conflicting patterns: the unused flush_atomic_counters
function that resets and accumulates atomic counters, and get_statistics (async
fn get_statistics) which simply reads the atomics and overwrites the in-memory
EventBusStatistics (stats), so if flushing is enabled later get_statistics would
return zeros; either remove the dead flush_atomic_counters function and its
related state to keep a single read-only atomic model, or update get_statistics
to incorporate flushed totals by reading/adding any accumulated totals that
flush_atomic_counters stores before zeroing (i.e., merge the
persisted/accumulated counters with the current atomics and update
stats.events_published/events_delivered/events_dropped and last_updated
accordingly), and ensure references to flush_atomic_counters, the atomic
counters (event_counter, delivery_counter, drop_counter) and EventBusStatistics
are updated consistently.
♻️ Duplicate comments (3)
daemoneye-lib/src/crypto.rs (1)

210-236: ⚠️ Potential issue | 🟠 Major

Bind verification to an explicit per-entry hash version to prevent format-downgrade acceptance.

At Line 223, any mismatch falls back to v1. Because AuditEntry does not carry a hash-format marker, untrusted/deserialized entries can still be validated with the ambiguous legacy scheme indefinitely. Prefer explicit versioned verification (with a controlled legacy path) instead of unconditional fallback.

🔧 Suggested hardening (version-gated verification)
 pub struct AuditEntry {
+    #[serde(default)]
+    pub hash_version: Option<u8>, // None => legacy imported entry
     pub sequence: u64,
     ...
 }

 pub fn new(...) -> Self {
     ...
     Self {
+        hash_version: Some(HASH_VERSION),
         sequence,
         ...
     }
 }

 pub fn verify_integrity(&self) -> Result<(), CryptoError> {
     for (i, entry) in self.entries.iter().enumerate() {
-        let entry_data_v2 = AuditEntry::compute_entry_hash_input(...);
-        let expected_v2 = Blake3Hasher::hash_string(&entry_data_v2);
-        if entry.entry_hash != expected_v2 {
-            let entry_data_v1 = AuditEntry::compute_entry_hash_input_v1(...);
-            let expected_v1 = Blake3Hasher::hash_string(&entry_data_v1);
-            if entry.entry_hash != expected_v1 {
-                return Err(CryptoError::Hash(format!("Hash mismatch at entry {i}")));
-            }
-        }
+        let version = entry.hash_version.unwrap_or(1);
+        let matches = match version {
+            2 => {
+                let data = AuditEntry::compute_entry_hash_input(...);
+                entry.entry_hash == Blake3Hasher::hash_string(&data)
+            }
+            1 => {
+                let data = AuditEntry::compute_entry_hash_input_v1(...);
+                entry.entry_hash == Blake3Hasher::hash_string(&data)
+            }
+            other => return Err(CryptoError::Hash(format!("Unsupported hash version {other} at entry {i}"))),
+        };
+        if !matches {
+            return Err(CryptoError::Hash(format!("Hash mismatch at entry {i}")));
+        }
         ...
     }
     Ok(())
 }

As per coding guidelines, "Validate all external inputs early and reject with actionable errors; treat external inputs as untrusted."

Cargo.toml (1)

111-111: ⚠️ Potential issue | 🟠 Major

Make sha2 features explicit for crypto hardening policy compliance.

Line 111 still relies on implicit default features (sha2 = "0.11.0"). For security-critical dependencies, this should be explicitly hardened and feature-scoped.

Proposed manifest hardening
-sha2 = "0.11.0"
+sha2 = { version = "0.11.0", default-features = false, features = ["alloc"] }

As per coding guidelines, Pin security-critical dependencies and avoid wildcards; use default-features = false with explicit feature selection.

#!/bin/bash
set -euo pipefail

echo "Current sha2 declaration:"
rg -n '^\s*sha2\s*=' Cargo.toml

echo
echo "sha2 usage sites to validate required features (alloc/oid):"
rg -n --type rust -C2 '\bsha2::|Sha(224|256|384|512)|Digest|AssociatedOid|ObjectIdentifier|oid\b'
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Cargo.toml` at line 111, The sha2 dependency is declared with implicit
defaults; change it to disable default features and explicitly enable only the
features your code needs (e.g., replace sha2 = "0.11.0" with a scoped
declaration using default-features = false and the minimal features such as
"std" or "alloc" as required), after scanning call sites that reference types
like Sha256, Sha512, Digest, AssociatedOid, ObjectIdentifier to determine
whether you need "std" or just "alloc" and then list those exact features in the
Cargo.toml entry for sha2.
docs/src/deployment/docker.md (1)

127-148: ⚠️ Potential issue | 🟠 Major

Add USER directive to runtime stages for privilege separation.

The runtime stages run as root by default, expanding the attack surface. The docker-compose and Kubernetes examples properly set user: 1000:1000, but operators building from this Dockerfile template will get root containers unless they override.

For a security monitoring tool, this is a significant risk. Add USER directives to all runtime stages.

🛡️ Add non-root user to runtime stages
 # Runtime stage - procmond
 FROM debian:bookworm-slim AS procmond-runtime

 RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
 COPY --from=builder /app/target/release/procmond /usr/local/bin/procmond

+USER 1000:1000
+WORKDIR /data
 ENTRYPOINT ["procmond"]

 # Runtime stage - daemoneye-agent
 FROM debian:bookworm-slim AS agent-runtime

 RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
 COPY --from=builder /app/target/release/daemoneye_agent /usr/local/bin/daemoneye-agent

+USER 1000:1000
+WORKDIR /data
 ENTRYPOINT ["daemoneye-agent"]

 # Runtime stage - daemoneye-cli
 FROM debian:bookworm-slim AS cli-runtime

 RUN apt-get update && apt-get install -y ca-certificates && rm -rf /var/lib/apt/lists/*
 COPY --from=builder /app/target/release/daemoneye_cli /usr/local/bin/daemoneye-cli

+USER 1000:1000
+WORKDIR /data
 ENTRYPOINT ["daemoneye-cli"]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/src/deployment/docker.md` around lines 127 - 148, The runtime stages run
as root; create and switch to a non-root user in each runtime stage
(procmond-runtime, agent-runtime, cli-runtime) after copying the binary and
before ENTRYPOINT: add a non-root user/group (e.g., useradd/groupadd or use
UID/GID 1000), ensure the installed binary at /usr/local/bin/* is owned by that
user (chown) and set USER to that user (or USER 1000:1000) so the container does
not run as root; apply this change to all three stages referencing the binaries
procmond, daemoneye-agent, and daemoneye-cli.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/ci.yml:
- Around line 141-145: The Qlty upload step "Upload to Qlty" currently points to
files: target/lcov.info while the coverage job writes lcov.info at the repo
root, causing silent no-op uploads; update the Qlty action configuration to
reference the actual generated file (change files: target/lcov.info to files:
lcov.info) or modify the coverage generation step to place the lcov file into
target/lcov.info so the paths match and real coverage data is uploaded.

In `@collector-core/src/event_bus.rs`:
- Around line 45-48: The trait method subscribe currently returns
UnboundedReceiver and uses unbounded_channel; change its signature to return
tokio::sync::mpsc::Receiver<Arc<BusEvent>> and switch all implementations
(notably LocalEventBus::subscribe and daemoneye_event_bus handlers) to create a
bounded channel using the configured buffer_size (use
tokio::sync::mpsc::channel(buffer_size) or tokio::sync::mpsc::bounded
equivalent). Replace send() usages with try_send() and implement explicit
backpressure policy on full channels (e.g., drop-new, drop-oldest, or mark the
SubscriberInfo subscriber as unhealthy and evict it / trigger a circuit-breaker)
so slow subscribers cannot grow unlimited Arc<BusEvent> refs; update
SubscriberInfo to hold the bounded Receiver/Tx types and add state to track
unhealthy subscribers; remove the dead config field usage and update call sites
in result_aggregator.rs and daemoneye_event_bus.rs to use the new Receiver type
and backpressure handling.

In `@collector-core/src/high_performance_event_bus.rs`:
- Around line 233-241: The receiver already yields an Arc<BusEvent>, so remove
the redundant Arc::new(...) wrap at the recv_timeout handling: instead of
creating Arc::new(bus_event) produce a binding that reuses the received
Arc<BusEvent> (keep the name arc_bus_event used later for clarity). This ensures
the type matches subscribers_to_notify's Sender<Arc<BusEvent>> and that
subsequent try_send(Arc::clone(&arc_bus_event)) calls compile without type
mismatch.

In `@daemoneye-agent/src/main.rs`:
- Around line 257-261: The agent currently constructs an empty detection engine
with DetectionEngine::new() and then proceeds without loading rules; update
startup to call storage::DatabaseManager::get_all_rules (or the equivalent DB
loader) and pass the loaded rules into the detection engine (or call
DetectionEngine::load_rules/load_from_rules if available) so rules are actually
populated on startup, and if the resulting rules count is zero call the
orchestrator health/degraded signaling path (emit degraded-state / set status to
degraded) and log a clear warning including counts; use the detection_engine
variable and the DatabaseManager API to locate where to insert the
load-and-check logic and ensure error handling converts DB failures into a
degraded or failed startup state rather than silently proceeding.

In `@daemoneye-lib/src/config.rs`:
- Around line 760-769: The directory-traversal check in the socket_path
validation duplicates logic found in validate_path_no_traversal; extract a small
helper (e.g., fn path_has_parent_dir<P: AsRef<Path>>(p: P) -> bool or reuse
validate_path_no_traversal) and replace the inline loop that compares components
to std::path::Component::ParentDir with a call to that helper from the
socket_path validation and any other locations; ensure the helper accepts the
same input type (string/Path) and returns a bool or Result so existing error
handling (ConfigError::ValidationError) remains unchanged.
- Around line 748-752: The validation for Unix domain socket path length in
validate_socket_path is wrong for macOS; update the docstring to state Linux
uses a 108-byte sun_path (107 usable) while macOS uses a 104-byte sun_path (103
usable), and change the SOCKET_PATH_MAX_LEN constant from 107 to 103 so the
function enforces the safe cross-platform limit; ensure the inline comment
reflects "103 usable bytes" and adjust any accompanying comment that mentioned
the NUL terminator accordingly.

In `@daemoneye-lib/src/crypto.rs`:
- Around line 324-421: Add proptest-based property tests covering
AuditEntry::compute_entry_hash_input invariants: write proptest cases that (1)
assert determinism (same inputs produce identical output) using arbitrary
sequence numbers, timestamps (chrono::DateTime), actor/action/payload_hash
strings (including arbitrary UTF-8 and embedded ':') and optional previous_hash;
(2) assert unambiguity of colon placement by generating random strings and
checking that swapping a colon between actor and action yields different inputs;
(3) assert previous_hash, when Some, always appears verbatim in the input; and
(4) assert v1 vs v2 separation by constructing v1-style colon-delimited inputs
(using Blake3Hasher::hash_string or Blake3Hasher::hash) and verifying the ledger
verification path (AuditLedger::verify_integrity) still accepts v1-encoded
entries while v2 inputs differ; use proptest strategies for strings and
Option<String> and integrate predicates for assertions per project guidelines.

In `@docs/src/deployment/docker.md`:
- Around line 178-193: Add explicit binary-existence and data-path bind-mount
checks before running --version: run each image with an overridden entrypoint to
ls the expected binary paths (/usr/local/bin/procmond,
/usr/local/bin/daemoneye-agent, /usr/local/bin/daemoneye-cli) to ensure the
executables are present, then run the existing --version commands, and add a
bind-mount run (e.g., docker run --rm -v /tmp/test-data:/data
daemoneye/procmond:latest --help) to verify the container can access data paths;
update the verification section to include these ls and bind-mount checks
alongside docker images and docker inspect.
- Around line 122-124: The Docker build uses hyphenated binary names (--bin
daemoneye-agent, --bin daemoneye-cli) but Cargo will produce underscored names
(daemoneye_agent, daemoneye_cli) causing subsequent COPY steps (which expect
hyphenated names) to fail; update the Dockerfile build/COPY steps to reference
the actual artifact names (daemoneye_agent, daemoneye_cli) and then install or
rename them to the hyphenated form if you want hyphen consistency, and ensure
the procmond binary reference is correct as well; additionally, add USER
directives before each ENTRYPOINT to drop root privileges (create or switch to a
non-root user and set USER before ENTRYPOINT) to enforce privilege separation.

In `@docs/src/deployment/installation.md`:
- Line 298: The phrase "build from source" used as a compound modifier should be
hyphenated for clarity; update the sentence "Homebrew package support for
DaemonEye is coming soon. For now, please use the build from source or manual
installation methods below." to use "build-from-source" when used as a modifier
(or rephrase to avoid modifier form) and apply the identical hyphenation
correction to the Chocolatey section sentence referenced on line 349; ensure
both instances read consistently (e.g., "build-from-source method" or "build
from source" rephrased).

In `@mise.toml`:
- Line 24: The pipx entry "pipx:mdformat" with version 1.0.0 and the uvx_args
list was bumped to include new plugins; before merging, run
integration/formatting tests against real markdown corpora to validate output
(esp. table rendering) and load tests against the plugin set listed in uvx_args,
confirm cross-file consistency with .mdformat.toml and .pre-commit-config.yaml,
and either pin specific plugin versions (e.g., mdformat-front-matters and
mdformat-gfm-alerts) or revert/remove problematic plugins (previously used
mdformat-tables and mdformat-wikilink) if formatting/regressions are observed;
update the pipx:mdformat entry accordingly once tests pass.

---

Outside diff comments:
In `@collector-core/src/high_performance_event_bus.rs`:
- Around line 279-280: Remove the unused local counters delivered and dropped
and all their increments in the Blocking and DropNewest branches of the event
dispatch logic: delete the let-mut declarations for delivered/dropped, remove
every delivered += 1 / dropped += 1, and drop the final suppression/assignment
that references them (the let _ = (delivered, dropped) at the end). The atomic
counters already perform all tracking, so ensure only the atomics remain and no
other code depends on delivered/dropped.
- Around line 529-544: The code currently has two conflicting patterns: the
unused flush_atomic_counters function that resets and accumulates atomic
counters, and get_statistics (async fn get_statistics) which simply reads the
atomics and overwrites the in-memory EventBusStatistics (stats), so if flushing
is enabled later get_statistics would return zeros; either remove the dead
flush_atomic_counters function and its related state to keep a single read-only
atomic model, or update get_statistics to incorporate flushed totals by
reading/adding any accumulated totals that flush_atomic_counters stores before
zeroing (i.e., merge the persisted/accumulated counters with the current atomics
and update stats.events_published/events_delivered/events_dropped and
last_updated accordingly), and ensure references to flush_atomic_counters, the
atomic counters (event_counter, delivery_counter, drop_counter) and
EventBusStatistics are updated consistently.

In `@collector-core/tests/eventbus_performance_comparison.rs`:
- Around line 309-311: The performance tests (e.g., the async test function
test_throughput_comparison) are currently ordinary #[tokio::test] cases and
should not run on every cargo test; update each perf-oriented test (including
test_throughput_comparison and the other perf tests mentioned) to either be
behind a cargo feature gate (e.g., #[cfg(feature = "perf_tests")] with
corresponding #[cfg_attr(..., tokio::test)]) or mark them #[ignore] (e.g.,
#[tokio::test] #[ignore]) so they are excluded from default test runs; pick one
approach and apply it consistently to the tests referenced in the comment to
keep routine test runs fast and stable.

In `@daemoneye-lib/src/lib.rs`:
- Around lines 16-23: The public core modules (config, crypto, ipc, models,
proto, storage, telemetry) lack module-level rustdoc. Add module-level
documentation (`//!` at the top of each module file) describing each module's
purpose and public types/functions, with one or two short usage examples, to
match the existing feature-gated docs and the guideline "Write comprehensive
rustdoc for all public APIs with examples." Each module should get a brief
summary sentence, example snippets demonstrating common usage, and any
important notes about public types or invariants.
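The requested header might take a shape like the following; the module purpose,
helper name, and behavior below are purely illustrative, not the real
`daemoneye-lib` API:

```rust
//! Hash-chain helpers for tamper-evident audit logging.
//!
//! This module links each record's digest to its predecessor so that any
//! deletion or reordering of records is detectable.
//!
//! # Examples
//!
//! `chain_digest` below is a hypothetical helper shown only to illustrate
//! the documentation style being asked for.

/// Links `payload` to the previous digest (illustrative placeholder,
/// standing in for a real BLAKE3 chain step).
pub fn chain_digest(prev: u64, payload: &[u8]) -> u64 {
    // Toy mixing function: rotate-and-xor each byte into the accumulator.
    payload
        .iter()
        .fold(prev, |acc, b| acc.rotate_left(5) ^ u64::from(*b))
}
```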

In `@docs/src/deployment/installation.md`:
- Line 417: Update the systemd service `ExecStart` to point at the config path
the code expects: replace `/etc/daemoneye/config.yaml` with the appropriate
config file under the code-default directory
`/var/lib/evilbitlabs/daemoneye/configs/`. Also change the later chown/chmod
step to set ownership on `/var/lib/evilbitlabs/daemoneye/configs/` instead of
`/etc/daemoneye`, so the service and filesystem permissions match the code
defaults.
- Around lines 122-124: The docs reference hyphenated binary names
(`daemoneye-agent`, `daemoneye-cli`), but Cargo produces underscored binaries
(`daemoneye_agent`, `daemoneye_cli`) unless `[[bin]]` is set. Update the build
and copy instructions to use the actual binary names, and apply the change
consistently across the build-from-source sections (the Clone and Build block
and the Ubuntu/Debian, RHEL/CentOS, Arch Linux, macOS, and Windows build
sections) so the copy commands and `--bin` references match Cargo's output.
- Around lines 90-97: The docs create the wrong directories. Update the
installation steps to create and chown the paths expected by
`unidirs::ServiceDirs`: replace `/etc/daemoneye` and `/var/lib/daemoneye` with
`/var/lib/evilbitlabs/daemoneye`, ensure a configs subdirectory
(`/var/lib/evilbitlabs/daemoneye/configs`) is created, and create the log
directory (`/var/log/evilbitlabs/daemoneye`). Adjust the `mkdir -p` and
`chown -R` commands accordingly and apply the same edits wherever similar
blocks appear (lines noted in the review) so the socket path
(`/var/lib/evilbitlabs/daemoneye/daemoneye-eventbus.sock`) and database path
remain correct for `ServiceDirs`.
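The corrected directory steps can be sketched as follows; a scratch `PREFIX`
root is used so the commands run without root (drop `PREFIX` and run as root
for a real install), and the `daemoneye` service account in the commented
`chown` is an assumption carried over from the surrounding docs:

```shell
#!/bin/sh
set -eu
# Scratch root for demonstration only; a real install writes to / as root.
PREFIX="${TMPDIR:-/tmp}/daemoneye-install-demo"

# Directories expected by unidirs::ServiceDirs
mkdir -p "$PREFIX/var/lib/evilbitlabs/daemoneye/configs"
mkdir -p "$PREFIX/var/log/evilbitlabs/daemoneye"

# Real install only (requires root, no PREFIX):
# chown -R daemoneye:daemoneye /var/lib/evilbitlabs/daemoneye /var/log/evilbitlabs/daemoneye

echo "created $PREFIX/var/lib/evilbitlabs/daemoneye/configs"
```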

In `@procmond/src/wal.rs`:
- Around lines 623-656: The code returns success after `write_all`, which only
reaches the OS buffer. To enforce durability, call an async file sync (e.g.,
`state.file.sync_data().await.map_err(WalError::Io)`, or `sync_all` if metadata
must also be durable) after the writes and before returning the sequence number
or acking callers. Perform the sync in the same critical section that updates
`state.size`/`min_sequence`/`max_sequence` so ordering is preserved, and keep
the `rotate_file_internal(&mut state).await?` call after the sync (or ensure
`rotate_file_internal` itself syncs the closed file) so no acknowledged entry
can be lost.
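A minimal `std::fs` sketch of the ordering asked for above: write, then sync,
then ack. The real `write_entry` uses tokio's async file API, so this shows
shape only, and `append_durably` is a made-up name:

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;

// Append a frame and force it to stable storage before reporting success.
fn append_durably(path: &Path, frame: &[u8]) -> std::io::Result<()> {
    let mut file = OpenOptions::new().create(true).append(true).open(path)?;
    file.write_all(frame)?; // data may still sit in the OS buffer here
    file.sync_data()?;      // durability point: only now is it safe to ack
    Ok(())
}
```

Any failure from `sync_data` propagates to the caller, so an acknowledged
entry is guaranteed to have hit stable storage.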
- Around lines 610-617: The narrowing cast of `serialized.len()` to `u32` (the
`length` variable) can truncate huge payloads. Replace the unchecked `as` with
an explicit checked conversion (`u32::try_from(serialized.len())` or
`serialized.len().try_into()`) and propagate an error if the conversion fails
instead of truncating. Likewise compute the size increment in wider types (cast
lengths to `u64` first, or use `checked_add`) so `size_increment_u64` is derived
from checked, 64-bit-safe math. Update the references to `length`,
`length_bytes`, `size_increment`, and `size_increment_u64` accordingly, and
remove the narrowing cast along with its clippy allow.
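A sketch of the checked framing math, using the names from the comment
(`WalError::EntryTooLarge` is a stand-in error variant, not the real type):

```rust
#[derive(Debug, PartialEq)]
enum WalError {
    EntryTooLarge(usize),
}

// Checked replacement for `serialized.len() as u32`: fails instead of truncating.
fn frame_length(serialized: &[u8]) -> Result<u32, WalError> {
    u32::try_from(serialized.len()).map_err(|_| WalError::EntryTooLarge(serialized.len()))
}

// Widen to u64 before adding the 4-byte length prefix so the sum cannot overflow.
fn size_increment_u64(length: u32) -> u64 {
    u64::from(length) + 4
}
```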
- Around lines 599-654: `write_entry` currently holds `self.file_state` across
await points (the writes and a potential `rotate_file_internal` call),
violating the `await_holding_lock` lint. Release the mutex before performing
async I/O: serialize the entry and compute `length_bytes` and `size_increment`
outside the lock; temporarily take ownership of the file handle (e.g.,
`std::mem::take` on `state.file`, or move the file into a local variable) so
the `write_all` awaits run without holding `self.file_state`; then re-lock to
update `state.size` and `state.min_sequence`/`state.max_sequence` and to decide
on rotation (calling `rotate_file_internal` while holding the lock only if
necessary). Alternatively, implement a background writer channel that accepts
serialized frames and updates state atomically. If a refactor is not feasible
now, add a scoped `#[allow(clippy::await_holding_lock)]` on `write_entry` with
a concise, audit-ready justification referencing `write_entry`, `file_state`,
`rotate_file_internal`, and `rotation_threshold`.
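The "take the handle, write unlocked, re-lock for bookkeeping" shape can be
sketched synchronously; a `Vec<u8>` stands in for the WAL file handle and a
`std::sync::Mutex` for the tokio one, so the struct and behavior here are
assumptions about shape, not the real `procmond` code:

```rust
use std::sync::Mutex;

struct FileState {
    file: Option<Vec<u8>>, // stand-in for the async file handle
    size: u64,
}

fn write_entry(file_state: &Mutex<FileState>, frame: &[u8]) -> u64 {
    // 1. Lock only long enough to take ownership of the handle.
    let mut file = {
        let mut state = file_state.lock().expect("lock poisoned");
        state.file.take().expect("file handle present")
    };

    // 2. Do the slow write with no lock held (where the real code awaits).
    file.extend_from_slice(frame);

    // 3. Re-lock to return the handle and update size/sequence bookkeeping.
    let mut state = file_state.lock().expect("lock poisoned");
    state.size += u64::try_from(frame.len()).expect("usize fits in u64");
    state.file = Some(file);
    state.size
}
```

Note one trade-off: while the handle is taken out, a concurrent writer would
find `state.file` empty, so the real fix also needs to serialize writers (e.g.,
the background writer channel mentioned above).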

---

Duplicate comments:
In `@Cargo.toml`:
- Line 111: The sha2 dependency is declared with implicit defaults. Change it
to disable default features and explicitly enable only what the code needs:
replace `sha2 = "0.11.0"` with a scoped declaration using
`default-features = false` and the minimal features (such as `"std"` or
`"alloc"`). First scan the call sites that reference types like `Sha256`,
`Sha512`, `Digest`, `AssociatedOid`, and `ObjectIdentifier` to determine which
features are actually required, then list exactly those in the Cargo.toml entry
for sha2.

In `@docs/src/deployment/docker.md`:
- Around lines 127-148: The runtime stages run as root. In each runtime stage
(`procmond-runtime`, `agent-runtime`, `cli-runtime`), create and switch to a
non-root user after copying the binary and before `ENTRYPOINT`: add a non-root
user/group (e.g., `useradd`/`groupadd`, or UID/GID 1000), `chown` the installed
binary under `/usr/local/bin/` to that user, and set `USER` to that user (or
`USER 1000:1000`) so the container does not run as root. Apply this change to
all three stages referencing the binaries procmond, daemoneye-agent, and
daemoneye-cli.
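One stage might be adjusted as below; the base image, stage name, and builder
path are assumptions, and the same pattern would repeat for the procmond and
cli stages:

```dockerfile
FROM debian:bookworm-slim AS agent-runtime
COPY --from=builder /build/target/release/daemoneye-agent /usr/local/bin/daemoneye-agent
# Create a non-root user and hand it the binary before switching to it.
RUN groupadd --gid 1000 daemoneye \
    && useradd --uid 1000 --gid 1000 --no-create-home daemoneye \
    && chown daemoneye:daemoneye /usr/local/bin/daemoneye-agent
USER 1000:1000
ENTRYPOINT ["/usr/local/bin/daemoneye-agent"]
```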


📥 Commits

Reviewing files that changed from the base of the PR and between 81596df and a8768e3.

⛔ Files ignored due to path filters (6)
  • daemoneye-eventbus/src/broker.rs is excluded by none and included by none
  • daemoneye-eventbus/src/lib.rs is excluded by none and included by none
  • daemoneye-eventbus/src/transport.rs is excluded by none and included by none
  • daemoneye-eventbus/tests/integration_tests.rs is excluded by none and included by none
  • daemoneye-eventbus/tests/rpc_integration_tests.rs is excluded by none and included by none
  • mise.lock is excluded by !**/*.lock and included by none
📒 Files selected for processing (21)
  • .github/workflows/ci.yml
  • AGENTS.md
  • Cargo.toml
  • collector-core/src/analysis_chain.rs
  • collector-core/src/daemoneye_event_bus.rs
  • collector-core/src/event_bus.rs
  • collector-core/src/high_performance_event_bus.rs
  • collector-core/tests/daemoneye_eventbus_integration.rs
  • collector-core/tests/daemoneye_eventbus_monitoring_integration.rs
  • collector-core/tests/eventbus_performance_comparison.rs
  • daemoneye-agent/src/main.rs
  • daemoneye-agent/tests/loading_state_integration.rs
  • daemoneye-agent/tests/rpc_lifecycle_integration.rs
  • daemoneye-lib/src/config.rs
  • daemoneye-lib/src/crypto.rs
  • daemoneye-lib/src/lib.rs
  • deny.toml
  • docs/src/deployment/docker.md
  • docs/src/deployment/installation.md
  • mise.toml
  • procmond/src/wal.rs

…ventbus (#2)

Add `[lints] workspace = true` to both crates, enforcing unsafe_code =
"forbid", panic = "deny", unwrap_used = "deny", and 50+ other
security/quality lints. Fix all resulting violations:

- Add #[non_exhaustive] to public enums
- Add #[must_use] to builder methods
- Replace bare `as` casts with saturating arithmetic
- Rename serialize/deserialize to to_bytes/from_bytes (serde conflict)
- Fix semicolon placement, shadowing, and mixed field visibility
- Add wildcard arms for non-exhaustive enum matches in tests
- Remove unsafe blocks from tests (unsafe_code = "forbid")

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
Store System behind tokio::sync::Mutex instead of recreating per cycle.
Enables CPU delta tracking, reduces ~50-100MB memory churn per cycle,
and provides O(1) single-PID lookup via ProcessesToUpdate::Some.

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
Tag each security feature claim with [Implemented], [In Progress], or
[Planned] to eliminate doc-vs-code divergences:
- SQL AST validation at load time [Implemented]; execution [Planned]
- BLAKE3 hash chain [Implemented]; Merkle proofs [In Progress]
- Enterprise features (mTLS, SLSA, sandboxing) [Planned]
- Standardize config namespace to daemoneye, format to .toml

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
Fix borrow-vs-move in CollectorRpcService::with_config_manager test
calls to match updated API from lint inheritance fixes.

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
Document string_slice, items_after_statements lint requirements, and
the 107-byte Unix socket path limit (sockaddr_un.sun_path) discovered
during batch cleanup.

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
Copilot AI review requested due to automatic review settings April 4, 2026 13:29
Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>

Copilot AI left a comment


Pull request overview

Copilot reviewed 98 out of 101 changed files in this pull request and generated 4 comments.

Comments suppressed due to low confidence (1)

collector-core/tests/eventbus_performance_comparison.rs:20

  • Removing the feature gate makes this file’s performance-heavy integration tests run as part of default cargo test (e.g., publishes thousands of events over a Unix socket and asserts throughput). This is likely to increase CI runtime and flakiness; consider marking these as #[ignore], moving them to Criterion benches, or gating them behind an opt-in feature/env var.

Add #[allow] blocks to test modules, integration test files, benchmark
files, and example files in collector-core and daemoneye-eventbus to
suppress lints that are appropriate in non-production code (unwrap,
expect, println, as casts, arithmetic, shadowing, etc.).

Fix remaining lint violations in process_manager.rs (uninlined format
args, as cast annotation). Revert deny.toml wildcards to 'allow' since
workspace path deps use * version.

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
- Remove unused mdformat-ruff from pre-commit (not in .mdformat.toml)
- Clarify panic=deny vs runtime panic distinction in security audit
- Add language identifier to fenced code block in security audit

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
Copilot AI review requested due to automatic review settings April 4, 2026 16:19

Copilot AI left a comment


Copilot wasn't able to review this pull request because it exceeds the maximum number of lines (20,000). Try reducing the number of changed lines and requesting a review from Copilot again.

coderabbitai[bot]
coderabbitai bot previously approved these changes Apr 4, 2026
Add comprehensive #![allow] blocks to all integration test, benchmark,
and example files in collector-core and daemoneye-eventbus. Fixes
Windows CI failure where dead_code and other lints were not suppressed
in test code after workspace lint inheritance was enabled.

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
Test files with #[cfg(unix)] tests have imports only used inside those
blocks. On Windows, these become unused import errors. Add
unused_imports to the allow list for all affected test files.

Signed-off-by: UncleSp1d3r <unclesp1d3r@evilbitlabs.io>
Copilot AI review requested due to automatic review settings April 4, 2026 20:35

Copilot AI left a comment


Copilot wasn't able to review this pull request because it exceeds the maximum number of lines (20,000). Try reducing the number of changed lines and requesting a review from Copilot again.

@unclesp1d3r unclesp1d3r enabled auto-merge (squash) April 4, 2026 20:37
@mergify
Contributor

mergify bot commented Apr 4, 2026

Merge Protections

Your pull request matches the following merge protections and will not be merged until they are valid.

🟢 Enforce conventional commit

Wonderful, this rule succeeded.

Require conventional commit format per https://www.conventionalcommits.org/en/v1.0.0/. Skipped for dependabot and dosubot.

  • title ~= ^(fix|feat|docs|style|refactor|perf|test|build|ci|chore|revert)(?:\(.+\))?!?:

🟢 Full CI must pass

Wonderful, this rule succeeded.

All CI checks must pass. Activates for non-bot authors, or dependabot when files exist outside .github/workflows/.

  • check-success = DCO
  • check-success = coverage
  • check-success = quality
  • check-success = test
  • check-success = test-cross-platform (macos-15, macOS)
  • check-success = test-cross-platform (ubuntu-22.04, Linux)
  • check-success = test-cross-platform (windows-2022, Windows)

🟢 Do not merge outdated PRs

Wonderful, this rule succeeded.

Make sure PRs are within 3 commits of the base branch before merging

  • #commits-behind <= 3

@unclesp1d3r unclesp1d3r merged commit 085681d into main Apr 4, 2026
17 checks passed
@unclesp1d3r unclesp1d3r deleted the todo_cleanups branch April 4, 2026 22:04

Labels

  • configuration: Configuration management and settings
  • dependencies: Pull requests that update a dependency file
  • documentation: Improvements or additions to documentation
  • security: Security-related issues and vulnerabilities
  • size:XXL: This PR changes 1000+ lines, ignoring generated files.
