
feat(execution): Predictive Pipeline + CodeRabbit review fixes#579

Open

rafaelscosta wants to merge 7 commits into SynkraAI:main from rafaelscosta:feat/predictive-pipeline

Conversation

Contributor

@rafaelscosta rafaelscosta commented Mar 10, 2026

Summary

CodeRabbit Fixes Applied

  1. _enqueueWrite error propagation: Rethrows after _emitSafeError to keep promise chain rejected
  2. _recalculateModelStats after prune: Recalculates taskTypeStats, agentStats, strategyStats after splice(0, excess)
  3. _extractFeatures sanitization: Guards with Number.isFinite() for complexity, contextSize, agentExperience
  4. minSamplesForPrediction threshold: _stagePredict now checks neighbors.length < minSamplesForPrediction
  5. EWMA weight ordering: Reverses durationValues before EWMA so highest-similarity neighbors get largest weight
  6. _ensureLoaded guard: recordDecision (now async) and save call _ensureLoaded() before mutating state
  7. atomicWriteSync: Decision memory uses atomicWriteSync (write-to-tmp + rename) instead of fs.writeFileSync
  8. (Skipped — assertion would fail with seeded data)
  9. Test async updates: All decision-memory tests use async/await for recordDecision
  10. Concurrent test assertion: Verifies unique IDs via new Set(results.map(r => r.id))
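Fix 5 hinges on how EWMA folds a series: the value folded in last receives the largest effective weight, so neighbors sorted most-similar-first must be reversed before folding. A minimal sketch of the idea — the helper names, field names, and alpha value are illustrative assumptions, not the PR's actual code:

```javascript
// Hypothetical sketch of the EWMA duration estimate described in fix 5.
// EWMA folds left-to-right and gives the LAST value the highest effective
// weight, so a most-similar-first neighbor list is reversed before folding.
function ewma(values, alpha = 0.5) {
  return values.reduce((acc, v) => alpha * v + (1 - alpha) * acc, values[0]);
}

function estimateDuration(neighborsMostSimilarFirst, alpha = 0.5) {
  const durations = neighborsMostSimilarFirst.map((n) => n.durationMs);
  // Reverse so the most similar neighbor is folded in last (largest weight).
  return ewma([...durations].reverse(), alpha);
}

// Most similar neighbor took 100ms; least similar took 900ms.
const neighbors = [
  { durationMs: 100, similarity: 0.95 },
  { durationMs: 500, similarity: 0.6 },
  { durationMs: 900, similarity: 0.2 },
];
const est = estimateDuration(neighbors); // 400 — pulled toward the 100ms neighbor
```

Without the reverse, the same fold yields 600ms, dominated by the least similar neighbor — the inversion the review flagged.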

Test Results

  • 89 predictive-pipeline tests: PASS
  • 37 decision-memory tests: PASS
  • 126 total tests passing

Supersedes #575

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Predictive forecasting: task outcome predictions, risk & anomaly detection, and agent/strategy recommendations.
    • Decision memory: persistent recording, outcome tracking, pattern detection, and context injection for tasks.
    • New starter design-system variants and theme token files (multiple brand/theme configurations and CSS token sets).
  • Tests

    • Extensive test suites covering predictive forecasting and decision memory behaviors and persistence.

nikolasdehor and others added 3 commits March 6, 2026 23:05
…ning

Story 9.5 of Epic 9 (Persistent Memory Layer). Implements Phase 2
of the Agent Immortality Protocol (SynkraAI#482) — Persistence layer.

Features:
- Record decisions with context, rationale, and alternatives
- Track outcomes (success/partial/failure) with confidence scoring
- Auto-detect categories from description keywords
- Find relevant past decisions for context injection (AC7)
- Pattern detection across recurring decisions (AC9)
- Time-based confidence decay for relevance scoring
- Persistence to .aiox/decisions.json

37 unit tests covering all features.
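The time-based confidence decay listed above can be pictured as an exponential half-life applied to a decision's base relevance. A sketch under assumptions — the half-life constant and function name are illustrative, not the module's actual implementation:

```javascript
// Illustrative exponential time-decay for relevance scoring.
// HALF_LIFE_DAYS is an assumed constant, not DecisionMemory's real value.
const HALF_LIFE_DAYS = 30;

function decayedRelevance(baseScore, ageDays, halfLifeDays = HALF_LIFE_DAYS) {
  // Score halves every halfLifeDays: decay = 0.5 ^ (age / halfLife)
  return baseScore * Math.pow(0.5, ageDays / halfLifeDays);
}

const fresh = decayedRelevance(1.0, 0);     // 1.0
const monthOld = decayedRelevance(1.0, 30); // 0.5
const yearOld = decayedRelevance(1.0, 365); // near zero
```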
…ados de tasks

Predictive pipeline that estimates outcomes before execution using
historical patterns. Implements weighted k-NN over feature vectors,
EWMA for duration estimation, anomaly detection, risk assessment,
and an agent/strategy recommendation engine.

89 unit tests covering all scenarios.
…ion, atomic writes

- Fix _enqueueWrite to rethrow errors after emitting, keeping the chain rejected
- Add _recalculateModelStats after auto-prune splice to keep stats consistent
- Sanitize numeric features with Number.isFinite guards in _extractFeatures
- Honor minSamplesForPrediction threshold in _stagePredict
- Reverse durationValues before EWMA so highest-similarity samples get largest weight
- Add _ensureLoaded guard to recordDecision and save in decision-memory
- Use atomicWriteSync instead of fs.writeFileSync in decision-memory save
- Improve test assertions: verify unique IDs in concurrent test

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

vercel bot commented Mar 10, 2026

@rafaelscosta is attempting to deploy a commit to the Pedro Valério Lopez's projects Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai bot commented Mar 10, 2026

Walkthrough

Adds two new core modules: a PredictivePipeline for task outcome forecasting and a DecisionMemory for persistent decision recording and pattern detection. Includes a backward-compatibility wrapper, comprehensive tests, and manifest updates; many new starter-variant design-system/site/token files were also added.

Changes

Cohort / File(s) Summary
PredictivePipeline (implementation & wrapper)
.aiox-core/core/execution/predictive-pipeline.js, .aios-core/core/execution/predictive-pipeline.js
New PredictivePipeline class (five-stage pipeline: preprocess → match → predict → score → recommend) with lazy persistence, k-NN similarity, statistics, risk/anomaly detection, events, and a backward-compatible wrapper re-exporting the canonical module.
DecisionMemory
.aiox-core/core/memory/decision-memory.js
New DecisionMemory class with persistent atomic writes, schema/versioning, decision recording/updating, keyword-based relevance, time-decay scoring, pattern detection, context injection, events, and stats APIs.
Tests
tests/core/execution/predictive-pipeline.test.js, tests/core/memory/decision-memory.test.js
Comprehensive unit tests covering constructors, persistence, prediction/decision flows, risk/anomaly detection, recommendations, pruning/retrain, pattern detection, utilities, and edge cases.
Manifest
.aiox-core/install-manifest.yaml
Manifest updated with new files, updated file count/timestamps, and size/hash bookkeeping changes.
Starter variants — configs & tokens
starter/variants/*/design-system.config.yaml, starter/variants/*/site.config.yaml, starter/variants/*/tokens.css
Many new design-system/site configuration YAMLs and generated tokens.css files added for multiple starter variants (adds theme metadata, app registrations, and CSS token sets).

Sequence Diagrams

sequenceDiagram
    actor Client
    participant Pipeline as PredictivePipeline
    participant Model as In-memory Model/Stats
    participant Persist as Persistence Layer
    Client->>Pipeline: predict(taskSpec)
    activate Pipeline
    Pipeline->>Model: extract features & lookup stats
    Pipeline->>Model: find k-NN / compute similarities
    Pipeline->>Pipeline: compute weighted estimates, EWMA, scores
    Pipeline->>Persist: enqueue persistence (lazy/sequenced)
    Persist-->>Pipeline: persist ack
    Pipeline-->>Client: prediction + recommendations
    Pipeline-->>Client: emit events (prediction / anomaly / high-risk)
    deactivate Pipeline
sequenceDiagram
    actor Client
    participant Memory as DecisionMemory
    participant Persist as Persistence Layer
    participant Detector as Pattern Detector
    Client->>Memory: recordDecision(decision)
    activate Memory
    Memory->>Memory: extract keywords & auto-detect category
    Memory->>Persist: atomic write decision file
    Persist-->>Memory: persisted
    Memory->>Detector: evaluate patterns
    alt pattern threshold met
        Detector-->>Memory: PATTERN_DETECTED
        Memory-->>Client: emit PATTERN_DETECTED
    end
    Memory-->>Client: DECISION_RECORDED event / result
    deactivate Memory

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • Pedrovaleriolopez
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 60.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed: check skipped; CodeRabbit’s high-level summary is enabled.
  • Title check — ✅ Passed: the title is concise and specific, and clearly describes the main change — adding the Predictive Pipeline feature and applying CodeRabbit review fixes.


Contributor

@github-actions github-actions bot left a comment


Welcome to aiox-core! Thanks for your first pull request.

What happens next?

  1. Automated checks will run on your PR
  2. A maintainer will review your changes
  3. Once approved, we'll merge your contribution!

PR Checklist:

Thanks for contributing!


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (2)
tests/core/execution/predictive-pipeline.test.js (2)

12-17: Use the repo’s absolute import form here.

This relative require(...) makes the test brittle to file moves and bypasses the project import convention. Switch it to the repository’s absolute module path.

As per coding guidelines **/*.{js,jsx,ts,tsx}: Use absolute imports instead of relative imports in all code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/execution/predictive-pipeline.test.js` around lines 12 - 17, The
test imports core symbols using a brittle relative path; update the require call
that pulls in PredictivePipeline, PipelineStage, RiskLevel, and DEFAULTS to use
the repository’s absolute module path for the core execution predictive-pipeline
module (follow the project's absolute import convention) so tests use the
canonical module entry rather than a relative file path; locate the line
importing those symbols and replace the relative require with the repo's
absolute module specifier for the same module.

250-263: Assert the derived stats after auto-prune, not just the array length.

The regression fixed in this PR was stale taskType / agent / strategy aggregates after pruning. This test still goes green if outcomes is trimmed to 5 while those derived counters remain wrong. Seed at least one task/agent/strategy that should be fully pruned and assert getStats() drops it too.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/execution/predictive-pipeline.test.js` around lines 250 - 263, The
test currently only checks outcomes length but not that derived aggregates were
updated after pruning; update the test that creates PredictivePipeline (small)
to seed outcomes with at least one unique taskType/agent/strategy that will be
completely pruned when maxOutcomes is exceeded (use recordOutcome to create
multiple entries so older entries for that task/agent/strategy are removed),
then call getStats() and assert that the returned derived aggregates (the
taskType/agent/strategy counts or lists provided by getStats()) no longer
include the pruned task/agent/strategy; ensure you reference the same
PredictivePipeline instance (small), use recordOutcome to insert the
distinguishing entries, and assert on getStats() fields rather than only
stats.outcomes length.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 8fdd3a3a-e9c6-4b49-be5d-b101c6803f54

📥 Commits

Reviewing files that changed from the base of the PR and between fcfb757 and 2b9c3db.

📒 Files selected for processing (6)
  • .aios-core/core/execution/predictive-pipeline.js
  • .aiox-core/core/execution/predictive-pipeline.js
  • .aiox-core/core/memory/decision-memory.js
  • .aiox-core/install-manifest.yaml
  • tests/core/execution/predictive-pipeline.test.js
  • tests/core/memory/decision-memory.test.js

Comment on lines +300 to +314
  this._outcomes.push(record);
  this._stats.outcomesRecorded++;

  // Update model stats
  this._updateModelStats(record);

  // Auto-prune if exceeding max
  if (this._outcomes.length > this.maxOutcomes) {
    const excess = this._outcomes.length - this.maxOutcomes;
    this._outcomes.splice(0, excess);
    this._recalculateModelStats();
  }

  await this._persistOutcomes();
  await this._persistModel();

⚠️ Potential issue | 🔴 Critical

Serialize the whole recordOutcome() transaction, not just the file writes.

If either persistence step fails, the promise rejects after _outcomes, _model, and _stats have already been changed. A retry will double-count the same logical outcome, and a first-write/second-write split can leave disk snapshots inconsistent on the next load.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.aiox-core/core/execution/predictive-pipeline.js around lines 300 - 314,
Summary: recordOutcome() currently mutates in-memory state before persistence,
causing double-counting/inconsistent snapshots when persistence fails; serialize
the whole logical transaction and roll back on failure. Fix: wrap the entire
critical section in a mutex/serial lock (e.g., add/use this._transactionLock or
a small async lock around recordOutcome), perform the in-memory updates (push to
this._outcomes, increment this._stats.outcomesRecorded, call
this._updateModelStats, prune via this._recalculateModelStats when >
this.maxOutcomes), then call this._persistOutcomes and this._persistModel while
still holding the lock; if either persist rejects, revert the in-memory changes
(pop the pushed records, restore previous stats/model state) before releasing
the lock and rethrow the error; ensure the lock/serialization also prevents
concurrent recordOutcome runs to avoid race conditions.
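The serialization-with-rollback the reviewer asks for can be sketched as a promise-chain mutex: snapshot state, mutate, persist under the lock, and revert on failure. This is one possible shape under assumptions (the class, lock field, and persist callback are hypothetical, not the PR's code):

```javascript
// Illustrative promise-chain lock with rollback for recordOutcome-style
// transactions. Names like _lock and OutcomeStore are assumptions.
class OutcomeStore {
  constructor(persistFn) {
    this._outcomes = [];
    this._recorded = 0;
    this._persist = persistFn;
    this._lock = Promise.resolve(); // serializes recordOutcome calls
  }

  recordOutcome(record) {
    const run = this._lock.then(async () => {
      // Snapshot in-memory state so a failed persist can be rolled back.
      const prevLen = this._outcomes.length;
      const prevRecorded = this._recorded;
      this._outcomes.push(record);
      this._recorded++;
      try {
        await this._persist(this._outcomes);
      } catch (err) {
        this._outcomes.length = prevLen; // revert the push
        this._recorded = prevRecorded;   // revert the counter
        throw err;                       // keep the promise chain rejected
      }
    });
    // Keep the lock chain alive even when this call rejects.
    this._lock = run.catch(() => {});
    return run;
  }
}
```

A failed persist leaves the store exactly as it was, so a caller's retry records the outcome once instead of double-counting it.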

Comment on lines +529 to +531
return this._runStage(PipelineStage.PREDICT, () => {
if (neighbors.length < (this.minSamplesForPrediction || 3)) {
return this._defaultPrediction(features);

⚠️ Potential issue | 🟠 Major

Respect an explicit minSamplesForPrediction: 0.

The constructor preserves 0 via ??, but || 3 converts it back to 3 here. Any caller that intentionally disables the sample threshold still gets default predictions.

Suggested fix
-      if (neighbors.length < (this.minSamplesForPrediction || 3)) {
+      if (neighbors.length < this.minSamplesForPrediction) {
         return this._defaultPrediction(features);
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.aiox-core/core/execution/predictive-pipeline.js around lines 529 - 531, The
prediction stage incorrectly uses `|| 3` which treats an explicit
minSamplesForPrediction = 0 as falsy and falls back to 3; update the check
inside the `_runStage(PipelineStage.PREDICT, ...)` callback so it uses a
nullish-coalescing style default (i.e., respect 0) when reading
`this.minSamplesForPrediction` before comparing to `neighbors.length`, and call
`_defaultPrediction(features)` only when the resolved minimum is >
neighbors.length.
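The falsy-zero hazard behind this finding is easy to demonstrate: `||` falls back on any falsy value, including an explicit 0, while `??` falls back only on null or undefined:

```javascript
// `||` treats 0 as "absent"; `??` does not.
const explicitZero = 0;
const viaOr = explicitZero || 3;      // 3 — the explicit 0 is lost
const viaNullish = explicitZero ?? 3; // 0 — the explicit 0 is respected

const missing = undefined;
const orDefault = missing || 3;       // 3
const nullishDefault = missing ?? 3;  // 3 — same result only when truly absent
```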

Comment on lines +697 to +703
  findSimilarTasks(taskSpec, opts = {}) {
    this._ensureLoaded();

    const limit = opts.limit ?? 10;
    const minSimilarity = opts.minSimilarity ?? 0;
    const features = this._extractFeatures(taskSpec);


⚠️ Potential issue | 🟠 Major

Guard nullish taskSpec in the public read APIs.

findSimilarTasks() and assessRisk() call _extractFeatures() directly, so null / undefined inputs currently explode with a property-access TypeError instead of a targeted validation error.

Suggested fix
  findSimilarTasks(taskSpec, opts = {}) {
    this._ensureLoaded();
+    if (!taskSpec || typeof taskSpec !== 'object') {
+      throw new Error('taskSpec must be an object');
+    }
 
     const limit = opts.limit ?? 10;
     const minSimilarity = opts.minSimilarity ?? 0;
     const features = this._extractFeatures(taskSpec);
@@
  assessRisk(taskSpec) {
    this._ensureLoaded();
+    if (!taskSpec || typeof taskSpec !== 'object') {
+      throw new Error('taskSpec must be an object');
+    }
 
     const features = this._extractFeatures(taskSpec);
     const neighbors = this._stageMatch(features);
As per coding guidelines, `Check for proper input validation on public API methods`.

Also applies to: 747-750

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.aiox-core/core/execution/predictive-pipeline.js around lines 697 - 703,
findSimilarTasks and assessRisk call _extractFeatures directly and will throw a
TypeError on null/undefined input; add an explicit nullish guard at the start of
each public method (findSimilarTasks and assessRisk) to validate taskSpec (e.g.,
if (taskSpec == null) throw new TypeError("taskSpec is required")); this ensures
callers get a clear validation error instead of a property-access crash and
keeps _extractFeatures unchanged.

Comment on lines +147 to +150
        if (data.schemaVersion === this.config.schemaVersion) {
          this.decisions = data.decisions || [];
          this.patterns = data.patterns || [];
        }

⚠️ Potential issue | 🟠 Major

Treat non-array persisted collections as corruption.

schemaVersion is the only shape check here. If decisions or patterns is an object/string in an otherwise matching file, later calls hit array-only operations and crash.

Suggested fix
-        if (data.schemaVersion === this.config.schemaVersion) {
-          this.decisions = data.decisions || [];
-          this.patterns = data.patterns || [];
-        }
+        if (data.schemaVersion === this.config.schemaVersion) {
+          this.decisions = Array.isArray(data.decisions) ? data.decisions : [];
+          this.patterns = Array.isArray(data.patterns) ? data.patterns : [];
+        }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.aiox-core/core/memory/decision-memory.js around lines 147 - 150, The loader
currently only checks data.schemaVersion before assigning this.decisions and
this.patterns, which allows non-array types to slip through and later crash;
update the block that handles schemaVersion (the code referencing schemaVersion,
this.decisions, this.patterns) to validate that data.decisions and data.patterns
are arrays via Array.isArray; if either is not an array, treat the file as
corrupted by logging/warning (use the existing logger if available), set the
offending property to an empty array (this.decisions = [] / this.patterns = []),
and persist or mark the state accordingly so downstream array-only operations
are safe.

Comment on lines +252 to +254
  updateOutcome(decisionId, outcome, notes = '') {
    const decision = this.decisions.find(d => d.id === decisionId);
    if (!decision) return null;

⚠️ Potential issue | 🟠 Major

Lazy-load before looking up the decision ID.

A fresh DecisionMemory instance with an existing .aiox/decisions.json will return null for valid IDs here until callers remember to await load() manually. This mutator needs the same load guard as recordDecision() / save() to preserve the cross-session behavior the module is adding.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.aiox-core/core/memory/decision-memory.js around lines 252 - 254,
updateOutcome currently reads this.decisions without ensuring persisted
decisions are loaded, causing valid IDs to return null for fresh DecisionMemory
instances; modify updateOutcome to await this.load() (same guard used in
recordDecision / save) before searching this.decisions, then proceed to find the
decision by id and update outcome/notes and call this.save() as appropriate so
cross-session behavior matches recordDecision/save.
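The guard the reviewer asks for is a standard load-once latch: every public method awaits a shared load promise that only touches disk the first time. A sketch under assumptions — the class and the read callback are hypothetical, not DecisionMemory's actual code:

```javascript
// Minimal load-once guard: repeated and concurrent callers share one read.
// (Illustrative sketch; MemoryLike is not the real DecisionMemory.)
class MemoryLike {
  constructor(readFn) {
    this.decisions = [];
    this._read = readFn;       // async fn returning persisted decisions
    this._loadPromise = null;  // latch: set on first load() call
  }

  load() {
    if (!this._loadPromise) {
      this._loadPromise = Promise.resolve()
        .then(() => this._read())
        .then((data) => { this.decisions = data; });
    }
    return this._loadPromise;
  }

  async updateOutcome(id, outcome) {
    await this.load(); // the guard this review comment asks for
    const d = this.decisions.find((x) => x.id === id);
    if (!d) return null;
    d.outcome = outcome;
    return d;
  }
}
```

With the guard in place, a fresh instance resolves valid IDs from the persisted file instead of returning null until someone remembers to call load() manually.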

Comment on lines +508 to +521
    if (similar.length >= this.config.patternThreshold - 1) {
      const outcomes = similar.map(d => d.outcome).filter(o => o !== Outcome.PENDING);
      const successCount = outcomes.filter(o => o === Outcome.SUCCESS).length;
      const failureCount = outcomes.filter(o => o === Outcome.FAILURE).length;

      const pattern = {
        id: `pattern-${this.patterns.length + 1}`,
        category: newDecision.category,
        description: `Recurring ${newDecision.category} decision: "${newDecision.description}"`,
        occurrences: similar.length + 1,
        successRate: outcomes.length > 0 ? successCount / outcomes.length : 0,
        recommendation: successCount > failureCount
          ? 'This approach has historically worked well. Consider reusing.'
          : 'This approach has historically underperformed. Consider alternatives.',

⚠️ Potential issue | 🟠 Major

Use a neutral recommendation when no resolved outcomes exist yet.

When every similar decision is still pending, successCount and failureCount are both 0, so this falls into the "underperformed" branch and emits misleading advice with zero evidence.

Suggested fix
-        recommendation: successCount > failureCount
-          ? 'This approach has historically worked well. Consider reusing.'
-          : 'This approach has historically underperformed. Consider alternatives.',
+        recommendation: outcomes.length === 0
+          ? 'Pattern observed, but no completed outcomes exist yet.'
+          : successCount > failureCount
+            ? 'This approach has historically worked well. Consider reusing.'
+            : 'This approach has historically underperformed. Consider alternatives.',
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.aiox-core/core/memory/decision-memory.js around lines 508 - 521, The
recommendation for a new pattern is incorrectly set to the "underperformed"
message when all similar decisions are still Outcome.PENDING because
successCount and failureCount are both zero; update the logic in the block that
builds the pattern object (where similar, outcomes, successCount, failureCount
are computed and recommendation is assigned) to detect the case outcomes.length
=== 0 and set a neutral recommendation like "Insufficient resolved outcomes to
recommend; monitor for more data." instead of choosing success or failure
branches.

Comment on lines +324 to +335
  it('should emit high-risk-detected for risky tasks', () => {
    // No history → low confidence → potentially high risk
    const spy = jest.fn();
    pipeline.on('high-risk-detected', spy);

    // Predict with no history, anomaly threshold set low
    const p = new PredictivePipeline(tmpDir, { highRiskThreshold: 0.3 });
    p.on('high-risk-detected', spy);
    p.predict({ taskType: 'never-seen-before' });

    // Should have been called at least once (either from pipeline or p)
    expect(spy.mock.calls.length).toBeGreaterThanOrEqual(1);
⚠️ Potential issue | 🟡 Minor

Listen only on the instance under test.

Registering the same spy on both pipeline and p makes this assertion non-specific; a future emission from the shared fixture could satisfy it even if p.predict() stops emitting. Attach the spy only to p and assert the exact call count.

Suggested tightening
   it('should emit high-risk-detected for risky tasks', () => {
-    // No history → low confidence → potentially high risk
     const spy = jest.fn();
-    pipeline.on('high-risk-detected', spy);
-
-    // Predict with no history, anomaly threshold set low
     const p = new PredictivePipeline(tmpDir, { highRiskThreshold: 0.3 });
     p.on('high-risk-detected', spy);
     p.predict({ taskType: 'never-seen-before' });
-
-    // Should have been called at least once (either from pipeline or p)
-    expect(spy.mock.calls.length).toBeGreaterThanOrEqual(1);
+    expect(spy).toHaveBeenCalledTimes(1);
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/core/execution/predictive-pipeline.test.js` around lines 324 - 335,
Remove the test's registration of the spy on the shared fixture "pipeline" and
only attach it to the instance under test "p" (i.e., call
p.on('high-risk-detected', spy) only), then tighten the assertion to assert the
exact call count from the instance under test (e.g.,
expect(spy).toHaveBeenCalledTimes(1) or expect(spy.mock.calls.length).toBe(1))
after calling p.predict(...); this ensures only emissions from
PredictivePipeline p.predict() are measured.

@nikolasdehor
Contributor

@Pedrovaleriolopez @oalanicolas, the Predictive Pipeline module is already in our PR #575 (opened Mar 10, before this one). Same feature, a complete implementation with 89 unit tests.

@nikolasdehor
Contributor

Hello @rafaelscosta! I'd like to point out that the Predictive Pipeline feature was already submitted in our PR #575, opened on Mar 10, before this one (#579, also opened Mar 10 but hours later).

PR #575 was closed and reopened as #589 to incorporate feedback, but the original implementation predates this one. Both implement the same module: predictive-pipeline.js + decision-memory.js with the same architecture (KNN, EWMA, anomaly detection).

I understand this PR already incorporates the CodeRabbit fixes, which is great. But I ask the maintainers to review the timeline and consider the prior work in #589. It may make sense to consolidate the improvements from both PRs into a single one, respecting the original authorship.

@Pedrovaleriolopez, could you assess the situation?

@nikolasdehor
Contributor

Hi @rafaelscosta!

I saw that this PR implements the Predictive Pipeline — we have PR #589 with the same feature (original branch created on March 8, PR recreated on March 12 after a branch problem).

It would be good to align so we don't duplicate work. Our PR has 89 passing tests and covers k-NN prediction, EWMA duration estimation, and risk assessment.

@Pedrovaleriolopez @oalanicolas — could you review both PRs and decide which one to move forward with? 🙏

@github-actions github-actions bot added area: agents Agent system related area: workflows Workflow system related squad mcp type: test Test coverage and quality area: core Core framework (.aios-core/core/) area: installer Installer and setup (packages/installer/) area: synapse SYNAPSE context engine area: cli CLI tools (bin/, packages/aios-pro-cli/) area: pro Pro features (pro/) area: health-check Health check system area: docs Documentation (docs/) area: devops CI/CD, GitHub Actions (.github/) labels Mar 25, 2026
@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 15

🧹 Nitpick comments (3)
starter/variants/natalia-tanaka/tokens.css (1)

1-6: Header comment references @layer that is not present.

Line 5 mentions @layer brandbook-bridge, but the stylesheet is not wrapped in that layer. Either add the layer wrapper or update the comment to prevent cascade-order confusion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@starter/variants/natalia-tanaka/tokens.css` around lines 1 - 6, The header
comment mentions an `@layer` named "brandbook-bridge" but the stylesheet content
is not wrapped in that layer; either wrap the entire tokens.css contents inside
an `@layer` brandbook-bridge { ... } declaration to ensure the declared cascade
order, or update the header comment to remove the `@layer` reference so it
accurately reflects the file; locate the top-of-file comment referencing "@layer
brandbook-bridge" and choose one of those two fixes (add the `@layer` wrapper or
edit the comment) and keep the chosen approach consistent across this file.
starter/variants/egio_roberto/tokens.css (1)

201-205: Legacy alias values should map to active semantic tokens instead of fixed light colors.

Line 203 and Line 204 hardcode light values inside a native-dark token set, which can create contrast regressions for legacy consumers.

♻️ Suggested refactor
-    --color-background-base: #F4F4F4;
-    --color-surface-grey: #D6D6D6;
+    --color-background-base: var(--background);
+    --color-surface-grey: var(--surface-alt);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@starter/variants/egio_roberto/tokens.css` around lines 201 - 205, Lines
setting legacy aliases --color-background-base and --color-surface-grey hardcode
light hex colors inside the native-dark token set; replace those fixed values
with references to the active semantic tokens instead (e.g., use
var(--background-base) or var(--surface) / var(--surface-variant) as
appropriate) so legacy aliases map to the current semantic tokens; update the
assignments for --color-background-base and --color-surface-grey to use var(...)
of the matching semantic token names present in this file (keep
--color-surface-dark as var(--surface) which is already correct).
starter/variants/bruno_kosminsky/tokens.css (1)

14-15: Identical values for --bb-cream and --bb-cream-alt.

Both tokens resolve to rgb(248, 248, 248). If this is intentional for future differentiation, consider adding a brief comment. If unintentional, update the -alt variant.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@starter/variants/bruno_kosminsky/tokens.css` around lines 14 - 15, The two
CSS custom properties --bb-cream and --bb-cream-alt both have identical values
(rgb(248, 248, 248)); decide whether the duplication is intentional and either
(a) change --bb-cream-alt to a different color value representing the intended
alternate tone, or (b) leave it but add a brief comment above the declarations
indicating the duplication is deliberate for future overrides; update the
tokens.css file by editing the --bb-cream-alt value or inserting the comment
next to these symbols.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.claude/skills/enhance-workflow.md:
- Line 528: Replace the English loanword "badge" with the PT-BR native term
"distintivo" in the terminal indicators text (the line that currently reads
"**Terminal indicators:** Dashboard mostra badge sólido+glow para delegated,
badge tracejado para subagent") and any other occurrences in the document to
keep wording consistent with PT-BR locale conventions.
- Line 317: Update the five fenced code blocks in the enhance-workflow.md
content that currently start with bare ``` so each opening fence becomes
```text; specifically change the unnamed code fences shown in the diff to use
the language identifier "text" (e.g., replace ``` with ```text) for the blocks
containing the step examples so they pass markdownlint MD040 and render with
proper syntax highlighting.
- Around line 318-320: The Skill() invocations use the wrong namespace
"AIOS:agents:*"; update each Skill(...) call that references
"AIOS:agents:architect", "AIOS:agents:analyst", "AIOS:agents:pm", and
"AIOS:agents:qa" to use the canonical "AIOX:agents:*" namespace so the resolver
recognizes them; search for Skill(skill: "AIOS:agents:architect"), Skill(skill:
"AIOS:agents:analyst"), Skill(skill: "AIOS:agents:pm"), and Skill(skill:
"AIOS:agents:qa") and replace the "AIOS" prefix with "AIOX" in those string
literals.

In `@starter/variants/bruno_kosminsky/site.config.yaml`:
- Around line 49-51: The Google Fonts URL in fonts.external_stylesheets
duplicates the Manrope family parameter, increasing payload; edit the entry
under fonts.external_stylesheets to remove the repeated
"family=Manrope:wght@400;500;600;700;800" so the URL only includes a single
Manrope family declaration (keep the display=swap and weights intact) in the
existing external_stylesheets array.

In `@starter/variants/bruno_kosminsky/tokens.css`:
- Around line 216-222: Update the secondary theme block
(.brandbook-root[data-bb-theme="secondary"]) to include the missing accent
opacity CSS custom properties so components don't fallback to the primary green;
specifically add overrides for --bb-accent-02, --bb-accent-05, --bb-accent-08,
--bb-accent-20, and --bb-accent-30 using the same base color `#35a7bd` (expressed
as rgba with the appropriate opacity values: 0.02, 0.05, 0.08, 0.20, 0.30) so
all opacity variants consistently use the secondary teal.
- Around line 58-62: The CSS custom properties --font-bb-sans,
--font-bb-display, --font-sans, and --font-display are wrongly using the icon
family "webflow-icons"; replace that icon entry with an appropriate text
typeface stack (or the same stack used by other variants) and/or a named display
face, and ensure any custom font referenced has a matching `@font-face`; update
the values for those symbols (leave --font-bb-mono as-is) so text renders as a
readable UI/display font rather than icon glyphs and verify visually in the
variant.
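The missing secondary accent ramp described above would look roughly like this (selector and token names are taken from the finding; #35a7bd is the base color it quotes):

```css
/* Sketch only — opacity steps mirror the primary ramp's 02/05/08/20/30 scale. */
.brandbook-root[data-bb-theme="secondary"] {
  --bb-accent-02: rgba(53, 167, 189, 0.02);
  --bb-accent-05: rgba(53, 167, 189, 0.05);
  --bb-accent-08: rgba(53, 167, 189, 0.08);
  --bb-accent-20: rgba(53, 167, 189, 0.20);
  --bb-accent-30: rgba(53, 167, 189, 0.30);
}
```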

In `@starter/variants/egio_roberto/tokens.css`:
- Around line 32-40: Several tokens in tokens.css are hardcoded to the primary
accent and do not follow the secondary theme override; update any color tokens
that should track the accent (examples: the variables at the top like --lime and
the tokens referenced at the affected spots including the ones updated around
--bb-accent) to reference the canonical accent variable instead of a literal
color. Replace hardcoded accent values with var(--bb-accent) or a derived token
(e.g., var(--bb-accent-foreground)) for all occurrences identified (the groups
around the original lines 32, 118–132, 181–199, and the block that sets
--bb-accent) so that changing --bb-accent in the secondary theme fully
propagates to downstream tokens. Ensure the secondary theme only needs to
override --bb-accent (and any derived accent tokens) to change the entire accent
palette.
- Line 83: The CSS custom property --motion-easing-spring is incorrectly using
the expo easing token (--bb-ease-expo) making spring animations behave like
expo; update the declaration of --motion-easing-spring to reference the correct
spring token (replace var(--bb-ease-expo, ...) with var(--bb-ease-spring,
<fallback-spring-bezier>)) and ensure --bb-ease-spring is defined elsewhere (or
provide a proper fallback cubic-bezier for spring) so the spring easing uses the
intended curve.
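The easing remap amounts to a single declaration; a sketch, assuming --bb-ease-spring is defined elsewhere and using a hypothetical spring-like fallback bezier:

```css
/* Sketch — the fallback curve here is an assumed spring approximation, not taken from the repo. */
:root {
  --motion-easing-spring: var(--bb-ease-spring, cubic-bezier(0.34, 1.56, 0.64, 1));
}
```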

In `@starter/variants/karine_lago/tokens.css`:
- Line 83: The custom property --motion-easing-spring is incorrectly bound to
--bb-ease-expo; update the mapping so --motion-easing-spring uses
var(--bb-ease-spring, cubic-bezier(0.16, 1, 0.3, 1)) instead of
var(--bb-ease-expo, ...). Locate the line setting --motion-easing-spring and
replace the referenced variable name --bb-ease-expo with --bb-ease-spring,
keeping the existing fallback cubic-bezier value.
- Around line 216-221: The secondary theme block
.brandbook-root[data-bb-theme="secondary"] only redefines part of the accent
ramp, leaving variables like --bb-accent-02, --bb-accent-05, --bb-accent-08,
--bb-accent-20, and --bb-accent-30 pointing to the primary (red) values; update
the block to override the full accent scale by adding definitions for those
missing tokens (e.g., --bb-accent-02, --bb-accent-05, --bb-accent-08,
--bb-accent-20, --bb-accent-30) so all --bb-accent-* variables consistently use
the secondary accent color instead of mixed-brand values.
- Around line 203-204: The legacy CSS variables --color-background-base and
--color-surface-grey are set to light hex values causing low-contrast panels in
the dark-adapted theme; update these aliases to use dark-appropriate tokens or
map them to existing dark variables (e.g., reference the theme's dark
background/surface tokens) so legacy consumers inherit correct dark surfaces;
modify the definitions for --color-background-base and --color-surface-grey to
point to the dark theme equivalents instead of hardcoded light hex colors.

In `@starter/variants/natalia-tanaka/tokens.css`:
- Around line 201-205: The legacy aliases are hard-coded to light hex values;
update the two variables to reference the theme tokens instead: change
--color-background-base from `#F4F4F4` to use the theme background token (e.g.
var(--background)) and change --color-surface-grey from `#D6D6D6` to use an
appropriate surface token (e.g. var(--surface) or
var(--surface-variant)/--surface-2) so legacy components follow the dark
variant; keep --color-surface-dark as var(--surface) unchanged.
- Around line 215-222: The secondary theme rule
(.brandbook-root[data-bb-theme="secondary"]) only sets a subset of accent
variables; update that selector to override the full set of accent opacity steps
and related semantic tokens so consumers don't inherit primary values — include
missing opacity tokens like --bb-accent-05 and --bb-accent-20 (and any other
--bb-accent-* opacities used across the project), and override semantic tokens
such as --bb-primary, --bb-accent-foreground (or --accent-foreground), and
--bb-ring (or --ring) to point at the secondary accent values.

In `@starter/variants/wiliaquison_valentim/design-system.config.yaml`:
- Around line 30-51: Add a theme-specific CSS override in tokens.css to map the
secondary accent declared in design-system.config.yaml to the CSS variable used
by the system: add a [data-bb-theme="secondary"] rule that sets --bb-accent to
"#d2ddbb" (and any related variables you need, e.g., live badge or text-accent
variables) so the secondary theme uses the correct color instead of falling back
to the primary; reference the config's accent_hex and existing token name
(--bb-flare) to ensure consistency with current tokens.
- Around line 67-79: The YAML has incorrect indentation: make the keys under
component_dirs (atoms, molecules, organisms, chrome, pages) nested by indenting
them beneath component_dirs (e.g., two spaces) so component_dirs maps to that
mapping instead of null, and indent the token_files list items so they are
children of token_files (each list entry indented under token_files); update the
entries referenced as component_dirs, atoms, molecules, organisms, chrome,
pages, and token_files accordingly to restore proper YAML nesting.
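The corrected nesting for the YAML finding above can be sketched as follows (directory values are placeholders; the real paths live in the repo's config):

```yaml
# Sketch of proper nesting: component_dirs maps to a mapping, not null,
# and token_files entries are indented as list children.
component_dirs:
  atoms: components/atoms
  molecules: components/molecules
  organisms: components/organisms
  chrome: components/chrome
  pages: components/pages
token_files:
  - tokens.css
```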

---

Nitpick comments:
In `@starter/variants/bruno_kosminsky/tokens.css`:
- Around line 14-15: The two CSS custom properties --bb-cream and --bb-cream-alt
both have identical values (rgb(248, 248, 248)); decide whether the duplication
is intentional and either (a) change --bb-cream-alt to a different color value
representing the intended alternate tone, or (b) leave it but add a brief
comment above the declarations indicating the duplication is deliberate for
future overrides; update the tokens.css file by editing the --bb-cream-alt value
or inserting the comment next to these symbols.

In `@starter/variants/egio_roberto/tokens.css`:
- Around line 201-205: Lines setting legacy aliases --color-background-base and
--color-surface-grey hardcode light hex colors inside the native-dark token set;
replace those fixed values with references to the active semantic tokens instead
(e.g., use var(--background-base) or var(--surface) / var(--surface-variant) as
appropriate) so legacy aliases map to the current semantic tokens; update the
assignments for --color-background-base and --color-surface-grey to use var(...)
of the matching semantic token names present in this file (keep
--color-surface-dark as var(--surface) which is already correct).

In `@starter/variants/natalia-tanaka/tokens.css`:
- Around line 1-6: The header comment mentions an `@layer` named
"brandbook-bridge" but the stylesheet content is not wrapped in that layer;
either wrap the entire tokens.css contents inside an `@layer` brandbook-bridge {
... } declaration to ensure the declared cascade order, or update the header
comment to remove the `@layer` reference so it accurately reflects the file;
locate the top-of-file comment referencing "@layer brandbook-bridge" and choose
one of those two fixes (add the `@layer` wrapper or edit the comment) and keep the
chosen approach consistent across this file.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: b529f542-64ba-45cf-9520-1d302731189e

📥 Commits

Reviewing files that changed from the base of the PR and between 2b9c3db and bf76a54.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (66)
  • .aiox-core/data/entity-registry.yaml
  • .aiox-core/install-manifest.yaml
  • .claude/skills/enhance-workflow.md
  • starter/variants/academia_lendaria/design-system.config.yaml
  • starter/variants/academia_lendaria/site.config.yaml
  • starter/variants/academia_lendaria/tokens.css
  • starter/variants/andre_franco/design-system.config.yaml
  • starter/variants/andre_franco/site.config.yaml
  • starter/variants/andre_franco/tokens.css
  • starter/variants/bruno_kosminsky/design-system.config.yaml
  • starter/variants/bruno_kosminsky/site.config.yaml
  • starter/variants/bruno_kosminsky/tokens.css
  • starter/variants/daniel_tenorio/design-system.config.yaml
  • starter/variants/daniel_tenorio/site.config.yaml
  • starter/variants/daniel_tenorio/tokens.css
  • starter/variants/diogo_pereira/design-system.config.yaml
  • starter/variants/diogo_pereira/site.config.yaml
  • starter/variants/diogo_pereira/tokens.css
  • starter/variants/eduardo_colautti/design-system.config.yaml
  • starter/variants/eduardo_colautti/site.config.yaml
  • starter/variants/eduardo_colautti/tokens.css
  • starter/variants/egio_roberto/design-system.config.yaml
  • starter/variants/egio_roberto/site.config.yaml
  • starter/variants/egio_roberto/tokens.css
  • starter/variants/fabio_avelar/design-system.config.yaml
  • starter/variants/fabio_avelar/site.config.yaml
  • starter/variants/fabio_avelar/tokens.css
  • starter/variants/hugo_santos/design-system.config.yaml
  • starter/variants/hugo_santos/site.config.yaml
  • starter/variants/hugo_santos/tokens.css
  • starter/variants/italo_gustavo/design-system.config.yaml
  • starter/variants/italo_gustavo/site.config.yaml
  • starter/variants/italo_gustavo/tokens.css
  • starter/variants/joao_biggie_com_romulo_medeiros/design-system.config.yaml
  • starter/variants/joao_biggie_com_romulo_medeiros/site.config.yaml
  • starter/variants/joao_biggie_com_romulo_medeiros/tokens.css
  • starter/variants/junior_lorenzi/design-system.config.yaml
  • starter/variants/junior_lorenzi/site.config.yaml
  • starter/variants/junior_lorenzi/tokens.css
  • starter/variants/karine_lago/design-system.config.yaml
  • starter/variants/karine_lago/site.config.yaml
  • starter/variants/karine_lago/tokens.css
  • starter/variants/luca_cortez/design-system.config.yaml
  • starter/variants/luca_cortez/site.config.yaml
  • starter/variants/luca_cortez/tokens.css
  • starter/variants/marcel_scalcko/design-system.config.yaml
  • starter/variants/marcel_scalcko/site.config.yaml
  • starter/variants/marcel_scalcko/tokens.css
  • starter/variants/natalia-tanaka/design-system.config.yaml
  • starter/variants/natalia-tanaka/site.config.yaml
  • starter/variants/natalia-tanaka/tokens.css
  • starter/variants/oalanicolas/design-system.config.yaml
  • starter/variants/oalanicolas/site.config.yaml
  • starter/variants/oalanicolas/tokens.css
  • starter/variants/rafaela_mendes_com_michel_arruda/design-system.config.yaml
  • starter/variants/rafaela_mendes_com_michel_arruda/site.config.yaml
  • starter/variants/rafaela_mendes_com_michel_arruda/tokens.css
  • starter/variants/thiago_nishikata/design-system.config.yaml
  • starter/variants/thiago_nishikata/site.config.yaml
  • starter/variants/thiago_nishikata/tokens.css
  • starter/variants/vida_lendaria/design-system.config.yaml
  • starter/variants/vida_lendaria/site.config.yaml
  • starter/variants/vida_lendaria/tokens.css
  • starter/variants/wiliaquison_valentim/design-system.config.yaml
  • starter/variants/wiliaquison_valentim/site.config.yaml
  • starter/variants/wiliaquison_valentim/tokens.css
✅ Files skipped from review due to trivial changes (57)
  • starter/variants/eduardo_colautti/design-system.config.yaml
  • starter/variants/fabio_avelar/site.config.yaml
  • starter/variants/wiliaquison_valentim/site.config.yaml
  • starter/variants/egio_roberto/site.config.yaml
  • starter/variants/karine_lago/site.config.yaml
  • starter/variants/junior_lorenzi/site.config.yaml
  • starter/variants/andre_franco/site.config.yaml
  • starter/variants/hugo_santos/site.config.yaml
  • starter/variants/daniel_tenorio/tokens.css
  • starter/variants/joao_biggie_com_romulo_medeiros/design-system.config.yaml
  • starter/variants/thiago_nishikata/tokens.css
  • starter/variants/diogo_pereira/site.config.yaml
  • starter/variants/eduardo_colautti/site.config.yaml
  • starter/variants/joao_biggie_com_romulo_medeiros/site.config.yaml
  • starter/variants/oalanicolas/site.config.yaml
  • starter/variants/natalia-tanaka/site.config.yaml
  • starter/variants/daniel_tenorio/site.config.yaml
  • starter/variants/thiago_nishikata/site.config.yaml
  • starter/variants/italo_gustavo/site.config.yaml
  • starter/variants/marcel_scalcko/site.config.yaml
  • starter/variants/fabio_avelar/design-system.config.yaml
  • starter/variants/vida_lendaria/design-system.config.yaml
  • starter/variants/vida_lendaria/site.config.yaml
  • starter/variants/thiago_nishikata/design-system.config.yaml
  • starter/variants/karine_lago/design-system.config.yaml
  • starter/variants/natalia-tanaka/design-system.config.yaml
  • starter/variants/luca_cortez/design-system.config.yaml
  • starter/variants/luca_cortez/site.config.yaml
  • starter/variants/academia_lendaria/design-system.config.yaml
  • starter/variants/bruno_kosminsky/design-system.config.yaml
  • starter/variants/oalanicolas/tokens.css
  • starter/variants/egio_roberto/design-system.config.yaml
  • starter/variants/rafaela_mendes_com_michel_arruda/site.config.yaml
  • starter/variants/daniel_tenorio/design-system.config.yaml
  • starter/variants/diogo_pereira/design-system.config.yaml
  • starter/variants/italo_gustavo/design-system.config.yaml
  • starter/variants/academia_lendaria/site.config.yaml
  • starter/variants/joao_biggie_com_romulo_medeiros/tokens.css
  • starter/variants/rafaela_mendes_com_michel_arruda/design-system.config.yaml
  • starter/variants/marcel_scalcko/design-system.config.yaml
  • starter/variants/andre_franco/design-system.config.yaml
  • starter/variants/andre_franco/tokens.css
  • starter/variants/italo_gustavo/tokens.css
  • starter/variants/academia_lendaria/tokens.css
  • starter/variants/fabio_avelar/tokens.css
  • starter/variants/eduardo_colautti/tokens.css
  • starter/variants/luca_cortez/tokens.css
  • starter/variants/oalanicolas/design-system.config.yaml
  • starter/variants/hugo_santos/tokens.css
  • starter/variants/junior_lorenzi/tokens.css
  • starter/variants/junior_lorenzi/design-system.config.yaml
  • starter/variants/hugo_santos/design-system.config.yaml
  • starter/variants/marcel_scalcko/tokens.css
  • starter/variants/diogo_pereira/tokens.css
  • starter/variants/vida_lendaria/tokens.css
  • starter/variants/rafaela_mendes_com_michel_arruda/tokens.css
  • starter/variants/wiliaquison_valentim/tokens.css
🚧 Files skipped from review as they are similar to previous changes (1)
  • .aiox-core/install-manifest.yaml

**Two-step invocation** (full context, persona loaded):

**Spawn agent** com prompt incluindo Context Preamble do AIOX.
```
⚠️ Potential issue | 🟡 Minor

Add language identifiers to fenced code blocks.

Line 317, Line 338, Line 358, Line 384, and Line 403 open fenced blocks without a language, which triggers markdownlint MD040 and reduces editor rendering quality.

Suggested fix
-```
+```text
 Step 1: Skill(skill: "AIOS:agents:architect")
   → Aria loads with full persona + context
   → Session: invocationType: "delegated", parentAgentId: "enhance-workflow"

 Step 2: *analyze {project_context}
   → Aria executes discovery with full architectural judgment
   → Saves: outputs/enhance/{slug}/01-discovery.md

Apply the same ` ```text ` change to the other four fenced blocks at the cited lines.




Also applies to: 338-338, 358-358, 384-384, 403-403

🧰 Tools
🪛 markdownlint-cli2 (0.21.0)

[warning] 317-317: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.

In @.claude/skills/enhance-workflow.md at line 317, Update the five fenced code blocks in the enhance-workflow.md content that currently start with a bare ``` so each opening fence becomes ```text; specifically change the unnamed code fences shown in the diff to use the language identifier "text" (e.g., replace ``` with ```text) for the blocks containing the step examples so they pass markdownlint MD040 and render with proper syntax highlighting.

Comment on lines +318 to +320
Step 1: Skill(skill: "AIOS:agents:architect")
→ Aria loads with full persona + context
→ Session: invocationType: "delegated", parentAgentId: "enhance-workflow"
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify whether both namespaces are intentionally supported in repo docs/config.
rg -n -C2 'AIOS:agents:|AIOX/agents|AIOX:agents:' .

Repository: SynkraAI/aiox-core

Length of output: 49335


🏁 Script executed:

#!/bin/bash
# Check the file size first
wc -l .claude/skills/enhance-workflow.md

# Read the specific lines mentioned
echo "=== Lines 318-320 ==="
sed -n '315,325p' .claude/skills/enhance-workflow.md

# Also check the other ranges mentioned (339-343, 385-388, 404-408)
echo -e "\n=== Lines 339-343 ==="
sed -n '337,345p' .claude/skills/enhance-workflow.md

echo -e "\n=== Lines 385-388 ==="
sed -n '383,390p' .claude/skills/enhance-workflow.md

echo -e "\n=== Lines 404-408 ==="
sed -n '402,410p' .claude/skills/enhance-workflow.md

# Search for all AIOS and AIOX references in this file
echo -e "\n=== All AIOS references in file ==="
grep -n "AIOS" .claude/skills/enhance-workflow.md

echo -e "\n=== All AIOX references in file ==="
grep -n "AIOX" .claude/skills/enhance-workflow.md

Repository: SynkraAI/aiox-core

Length of output: 2284


Replace all AIOS:agents:* namespace references with AIOX:agents:* for consistency.

Lines 318, 339, 385, and 404 use the AIOS namespace in Skill() invocations, but the entire codebase (including other parts of this same file) uses AIOX as the canonical namespace. The resolver expects AIOX, so these mismatched references will break phase execution.

Replace:

  • Line 318: "AIOS:agents:architect""AIOX:agents:architect"
  • Line 339: "AIOS:agents:analyst""AIOX:agents:analyst"
  • Line 385: "AIOS:agents:pm""AIOX:agents:pm"
  • Line 404: "AIOS:agents:qa""AIOX:agents:qa"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/enhance-workflow.md around lines 318 - 320, The Skill()
invocations use the wrong namespace "AIOS:agents:*"; update each Skill(...) call
that references "AIOS:agents:architect", "AIOS:agents:analyst",
"AIOS:agents:pm", and "AIOS:agents:qa" to use the canonical "AIOX:agents:*"
namespace so the resolver recognizes them; search for Skill(skill:
"AIOS:agents:architect"), Skill(skill: "AIOS:agents:analyst"), Skill(skill:
"AIOS:agents:pm"), and Skill(skill: "AIOS:agents:qa") and replace the "AIOS"
prefix with "AIOX" in those string literals.
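A bulk replacement along the lines of the prompt above can be scripted; this sketch runs against a scratch file rather than the real skill doc (verify the diff before committing; BSD sed needs `sed -i ''`):

```shell
# Demonstrate the namespace fix on a scratch file with one of the offending lines.
f=$(mktemp)
printf 'Step 1: Skill(skill: "AIOS:agents:architect")\n' > "$f"
sed -i 's/AIOS:agents:/AIOX:agents:/g' "$f"
grep -c 'AIOX:agents:architect' "$f"   # → 1
```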

- Determinism check ANTES de gastar tokens
- Domain classification ANTES de roundtable
- QA validation ANTES de entregar
- **Terminal indicators:** Dashboard mostra badge sólido+glow para delegated, badge tracejado para subagent
⚠️ Potential issue | 🟡 Minor

Prefer PT-BR native wording over “badge”.

Line 528 uses “badge”, which conflicts with PT-BR locale style checks. Consider replacing with “distintivo” for consistency with the rest of the document language.

🧰 Tools
🪛 LanguageTool

[locale-violation] ~528-~528: “badge” é um estrangeirismo. É preferível dizer “distintivo”.
Context: ...Terminal indicators:** Dashboard mostra badge sólido+glow para delegated, badge trace...

(PT_BARBARISMS_REPLACE_BADGE)


[locale-violation] ~528-~528: “badge” é um estrangeirismo. É preferível dizer “distintivo”.
Context: ...ostra badge sólido+glow para delegated, badge tracejado para subagent

(PT_BARBARISMS_REPLACE_BADGE)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/skills/enhance-workflow.md at line 528, Replace the English loanword
"badge" with the PT-BR native term "distintivo" in the terminal indicators text
(the line that currently reads "**Terminal indicators:** Dashboard mostra badge
sólido+glow para delegated, badge tracejado para subagent") and any other
occurrences in the document to keep wording consistent with PT-BR locale
conventions.

Comment on lines +49 to +51
fonts:
external_stylesheets:
- "https://fonts.googleapis.com/css2?family=Manrope:wght@400;500;600;700;800&family=Manrope:wght@400;500;600;700;800&display=swap"
⚠️ Potential issue | 🟡 Minor

Duplicate font family in Google Fonts URL.

The stylesheet URL loads Manrope twice with identical weights, which is redundant and increases payload size.

🔧 Proposed fix to remove duplicate
 fonts:
   external_stylesheets:
-    - "https://fonts.googleapis.com/css2?family=Manrope:wght@400;500;600;700;800&family=Manrope:wght@400;500;600;700;800&display=swap"
+    - "https://fonts.googleapis.com/css2?family=Manrope:wght@400;500;600;700;800&display=swap"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@starter/variants/bruno_kosminsky/site.config.yaml` around lines 49-51: the
Google Fonts URL in fonts.external_stylesheets duplicates the Manrope family
parameter, increasing payload; edit the entry under fonts.external_stylesheets
to remove the repeated "family=Manrope:wght@400;500;600;700;800" so the URL only
includes a single Manrope family declaration (keep the display=swap and weights
intact) in the existing external_stylesheets array.
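
The dedupe described above can also be done mechanically. The sketch below is a standalone illustration (the `dedupe_families` helper is hypothetical, not part of this repo), assuming Python 3.10+ where `parse_qsl` splits only on `&`, so the `;`-separated weight syntax inside a `family=` value survives intact:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def dedupe_families(url: str) -> str:
    """Drop repeated identical family= parameters from a Google Fonts css2 URL."""
    parts = urlsplit(url)
    seen = set()
    kept = []
    for key, value in parse_qsl(parts.query, keep_blank_values=True):
        if key == "family":
            if value in seen:
                continue  # identical family already requested once
            seen.add(value)
        kept.append((key, value))
    # safe=":;@" keeps the css2 weight syntax (Manrope:wght@400;500) unescaped
    return urlunsplit(parts._replace(query=urlencode(kept, safe=":;@")))

url = ("https://fonts.googleapis.com/css2"
       "?family=Manrope:wght@400;500;600;700;800"
       "&family=Manrope:wght@400;500;600;700;800&display=swap")
print(dedupe_families(url))
```

Distinct families are kept, so a URL loading two different faces is left untouched.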

Comment on lines +58 to +62
--font-bb-sans: "webflow-icons", system-ui, sans-serif;
--font-bb-display: "webflow-icons", system-ui, sans-serif;
--font-bb-mono: "IBM Plex Mono", monospace;
--font-sans: "webflow-icons", system-ui, sans-serif;
--font-display: "webflow-icons", system-ui, sans-serif;

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if other variant token files use similar font configuration
fd -e css -p 'tokens.css' starter/variants --exec grep -l 'webflow-icons' {} \;

# Check what fonts are intended from the design-system config
fd -e yaml -p 'design-system' starter/variants/bruno_kosminsky --exec cat {} \;

Repository: SynkraAI/aiox-core

Length of output: 2559


🏁 Script executed:

#!/bin/bash

# First, check the size of tokens.css to decide how to read it
wc -l starter/variants/bruno_kosminsky/tokens.css

# Read the full tokens.css file to see complete context
echo "=== Full tokens.css content ==="
cat starter/variants/bruno_kosminsky/tokens.css

# Check if there are `@font-face` definitions or references to webflow-icons
echo -e "\n=== Searching for font-face or webflow-icons definitions ==="
grep -n "font-face\|webflow-icons" starter/variants/bruno_kosminsky/tokens.css

Repository: SynkraAI/aiox-core

Length of output: 8146


🏁 Script executed:

#!/bin/bash

# Check other variant token files to see if webflow-icons is a pattern
echo "=== Other variants using webflow-icons ==="
fd -e css -p 'tokens.css' starter/variants --exec grep -c 'webflow-icons' {} + -exec echo {} \;

# Look for site.config.yaml or similar config files
echo -e "\n=== Looking for config files that mention fonts ==="
fd -e yaml -o -e json 'config\|site' starter/variants/bruno_kosminsky --exec grep -l 'Manrope\|font' {} \; 2>/dev/null || echo "No config files found with font references"

# Check if there's any documentation or README about fonts
echo -e "\n=== Looking for README or docs ==="
fd -e md 'README' starter/variants/bruno_kosminsky --exec cat {} \;

Repository: SynkraAI/aiox-core

Length of output: 4719


Icon font used as primary text font — almost certainly incorrect.

"webflow-icons" as the primary font for --font-bb-sans, --font-bb-display, --font-sans, and --font-display will cause text to render as icon glyphs or fall back entirely to system-ui. No @font-face definition exists for this font, confirming it's misconfigured. This issue is unique to this variant—all other variants omit this font.

Verify the intended text typeface and correct the font stack accordingly.
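
A minimal sketch of the kind of correction intended, assuming Manrope (the family this variant's site.config.yaml loads) is the intended text face; the exact stack should be confirmed against the design-system config:

```css
/* Hypothetical correction: keep the icon font out of the text stacks */
--font-bb-sans: "Manrope", system-ui, sans-serif;
--font-bb-display: "Manrope", system-ui, sans-serif;
--font-bb-mono: "IBM Plex Mono", monospace; /* unchanged */
--font-sans: "Manrope", system-ui, sans-serif;
--font-display: "Manrope", system-ui, sans-serif;
```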

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@starter/variants/bruno_kosminsky/tokens.css` around lines 58-62: the CSS
custom properties --font-bb-sans, --font-bb-display, --font-sans, and
--font-display are wrongly using the icon family "webflow-icons"; replace that
icon entry with an appropriate text typeface stack (or the same stack used by
other variants) and/or a named display face, and ensure any custom font
referenced has a matching `@font-face`; update the values for those symbols (leave
--font-bb-mono as-is) so text renders as a readable UI/display font rather than
icon glyphs and verify visually in the variant.

Comment on lines +216 to +221
.brandbook-root[data-bb-theme="secondary"] {
--bb-accent: #a98822;
--bb-accent-10: rgba(169, 136, 34, 0.1);
--bb-accent-15: rgba(169, 136, 34, 0.15);
--bb-accent-25: rgba(169, 136, 34, 0.25);
--bb-accent-40: rgba(169, 136, 34, 0.4);

⚠️ Potential issue | 🟠 Major

Secondary theme leaves part of accent ramp on primary color

Lines 216-221 override only part of the accent scale. Tokens like --bb-accent-02, --bb-accent-05, --bb-accent-08, --bb-accent-20, and --bb-accent-30 remain red, which can produce mixed-brand UI in secondary mode.

🎯 Suggested fix
 .brandbook-root[data-bb-theme="secondary"] {
    --bb-accent: #a98822;
+    --bb-accent-02: rgba(169, 136, 34, 0.02);
+    --bb-accent-05: rgba(169, 136, 34, 0.05);
+    --bb-accent-08: rgba(169, 136, 34, 0.08);
     --bb-accent-10: rgba(169, 136, 34, 0.1);
     --bb-accent-15: rgba(169, 136, 34, 0.15);
+    --bb-accent-20: rgba(169, 136, 34, 0.2);
     --bb-accent-25: rgba(169, 136, 34, 0.25);
+    --bb-accent-30: rgba(169, 136, 34, 0.3);
     --bb-accent-40: rgba(169, 136, 34, 0.4);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@starter/variants/karine_lago/tokens.css` around lines 216-221: the
secondary theme block .brandbook-root[data-bb-theme="secondary"] only redefines
part of the accent ramp, leaving variables like --bb-accent-02, --bb-accent-05,
--bb-accent-08, --bb-accent-20, and --bb-accent-30 pointing to the primary (red)
values; update the block to override the full accent scale by adding definitions
for those missing tokens (e.g., --bb-accent-02, --bb-accent-05, --bb-accent-08,
--bb-accent-20, --bb-accent-30) so all --bb-accent-* variables consistently use
the secondary accent color instead of mixed-brand values.

Comment on lines +201 to +205
/* Legacy aliases */
--color-brand-primary: var(--lime);
--color-background-base: #F4F4F4;
--color-surface-grey: #D6D6D6;
--color-surface-dark: var(--surface);

⚠️ Potential issue | 🟠 Major

Legacy aliases use light hard-coded colors inside a dark theme.

--color-background-base and --color-surface-grey are fixed to light values, which can break legacy components under this dark variant (Line 203 and Line 204). Map them to existing theme tokens instead.

💡 Suggested fix
-    --color-background-base: #F4F4F4;
-    --color-surface-grey: #D6D6D6;
+    --color-background-base: var(--background);
+    --color-surface-grey: var(--surface-alt);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@starter/variants/natalia-tanaka/tokens.css` around lines 201-205: the
legacy aliases are hard-coded to light hex values; update the two variables to
reference the theme tokens instead: change --color-background-base from `#F4F4F4`
to use the theme background token (e.g. var(--background)) and change
--color-surface-grey from `#D6D6D6` to use an appropriate surface token (e.g.
var(--surface) or var(--surface-variant)/--surface-2) so legacy components
follow the dark variant; keep --color-surface-dark as var(--surface) unchanged.

Comment on lines +215 to +222
/* Ensure accent opacities and semantic tokens also override globals.css secondary theme */
.brandbook-root[data-bb-theme="secondary"] {
--bb-accent: #90d437;
--bb-accent-10: rgba(144, 212, 55, 0.1);
--bb-accent-15: rgba(144, 212, 55, 0.15);
--bb-accent-25: rgba(144, 212, 55, 0.25);
--bb-accent-40: rgba(144, 212, 55, 0.4);
}

⚠️ Potential issue | 🟠 Major

Secondary theme override is incomplete for accent consumers.

The block says semantic/accent tokens should override, but only a subset is updated. Consumers using --primary, --accent-foreground, --ring, or missing opacity steps (e.g., --bb-accent-05, --bb-accent-20) will still render the primary theme accent.

💡 Suggested fix
 .brandbook-root[data-bb-theme="secondary"] {
     --bb-accent: #90d437;
+    --primary: #90d437;
+    --accent-foreground: #90d437;
+    --ring: rgba(144, 212, 55, 0.4);
+    --color-brand: #90d437;
+    --bb-accent-02: rgba(144, 212, 55, 0.02);
+    --bb-accent-05: rgba(144, 212, 55, 0.05);
+    --bb-accent-08: rgba(144, 212, 55, 0.08);
     --bb-accent-10: rgba(144, 212, 55, 0.1);
     --bb-accent-15: rgba(144, 212, 55, 0.15);
+    --bb-accent-20: rgba(144, 212, 55, 0.2);
     --bb-accent-25: rgba(144, 212, 55, 0.25);
+    --bb-accent-30: rgba(144, 212, 55, 0.3);
     --bb-accent-40: rgba(144, 212, 55, 0.4);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@starter/variants/natalia-tanaka/tokens.css` around lines 215-222: the
secondary theme rule (.brandbook-root[data-bb-theme="secondary"]) only sets a
subset of accent variables; update that selector to override the full set of
accent opacity steps and related semantic tokens so consumers don't inherit
primary values — include missing opacity tokens like --bb-accent-05 and
--bb-accent-20 (and any other --bb-accent-* opacities used across the project),
and override semantic tokens such as --bb-primary, --bb-accent-foreground (or
--accent-foreground), and --bb-ring (or --ring) to point at the secondary accent
values.

Comment on lines +30 to +51
secondary:
label: "Grupo WV Secondary"
role: "secondary"
status: "restricted"
description: "Secondary accent theme for Grupo WV"
allowed_use_cases:
- "premium_offer"
- "premium_event"
forbidden_use_cases:
- "default_site"
tokens:
primary: "tokens.css"
files: []
prefix: "--bb-"
brandbook:
picker_label: "Grupo WV Accent"
accent_name: "Grupo WV Accent"
accent_hex: "#d2ddbb"
accent_rgb: "RGB 210,221,187"
accent_text_color: "var(--bb-dark)"
live_badge_color: "#d2ddbb"
legacy_ids: []

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify if tokens.css contains a secondary theme override block
echo "=== Checking for secondary theme override in tokens.css ==="
rg -n 'data-bb-theme.*secondary|secondary.*theme' starter/variants/wiliaquison_valentim/tokens.css || echo "No secondary theme override found"

echo ""
echo "=== Checking if --bb-accent is redefined anywhere for secondary theme ==="
rg -n '#d2ddbb' starter/variants/wiliaquison_valentim/

Repository: SynkraAI/aiox-core

Length of output: 503


Secondary theme accent not implemented in tokens.css.

The secondary theme declares accent_hex: "#d2ddbb" (line 47), but tokens.css only defines primary theme tokens. The color #d2ddbb exists in the tokens file but is assigned to the --bb-flare variable, not as an override for --bb-accent when the secondary theme is active. Without a [data-bb-theme="secondary"] block setting --bb-accent: #d2ddbb, the secondary theme will fall back to the primary accent color, making theme switching ineffective.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@starter/variants/wiliaquison_valentim/design-system.config.yaml` around lines
30-51, add a theme-specific CSS override in tokens.css to map the secondary
accent declared in design-system.config.yaml to the CSS variable used by the
system: add a [data-bb-theme="secondary"] rule that sets --bb-accent to
"#d2ddbb" (and any related variables you need, e.g., live badge or text-accent
variables) so the secondary theme uses the correct color instead of falling back
to the primary; reference the config's accent_hex and existing token name
(--bb-flare) to ensure consistency with current tokens.
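
A minimal sketch of the missing override, assuming the opacity-step naming used by the sibling variants (--bb-accent-10 through --bb-accent-40); verify which steps this variant actually consumes:

```css
/* Hypothetical secondary-theme override for tokens.css */
.brandbook-root[data-bb-theme="secondary"] {
  --bb-accent: #d2ddbb; /* matches accent_hex in design-system.config.yaml */
  --bb-accent-10: rgba(210, 221, 187, 0.1);
  --bb-accent-15: rgba(210, 221, 187, 0.15);
  --bb-accent-25: rgba(210, 221, 187, 0.25);
  --bb-accent-40: rgba(210, 221, 187, 0.4);
}
```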

Comment on lines +67 to +79
component_dirs:
atoms: "brandbook/atoms"
molecules: "brandbook/molecules"
organisms: "brandbook/organisms"
chrome: "brandbook/chrome"
pages: "brandbook/pages"
token_files:
- "src/components/brandbook/styles/tokens.css"
- "src/components/brandbook/styles/keyframes.css"
- "src/components/brandbook/styles/patterns.css"
- "src/components/brandbook/styles/primitives.css"
- "src/components/brandbook/styles/components-lib.css"
- "src/components/brandbook/styles/editorial-lib.css" No newline at end of file

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Validate YAML syntax
echo "=== Validating YAML syntax ==="
python3 -c "
import yaml
import sys
try:
    with open('starter/variants/wiliaquison_valentim/design-system.config.yaml', 'r') as f:
        config = yaml.safe_load(f)
    # Check if component_dirs was parsed correctly
    if config and 'apps' in config and len(config['apps']) > 0:
        app = config['apps'][0]
        if 'component_dirs' in app:
            print('component_dirs value:', app.get('component_dirs'))
            if app['component_dirs'] is None:
                print('ERROR: component_dirs is null - indentation issue confirmed')
            elif isinstance(app['component_dirs'], dict):
                print('OK: component_dirs is a dict with keys:', list(app['component_dirs'].keys()))
        else:
            print('ERROR: component_dirs not found in app')
        # Check for misplaced keys
        for key in ['atoms', 'molecules', 'organisms', 'chrome', 'pages']:
            if key in app:
                print(f'ERROR: {key} is a sibling of component_dirs instead of child')
except yaml.YAMLError as e:
    print('YAML parse error:', e)
    sys.exit(1)
"

Repository: SynkraAI/aiox-core

Length of output: 493


YAML syntax error: missing indentation for nested mappings.

The component_dirs mapping (lines 68-72) and token_files list items (lines 74-79) are not properly indented under their parent keys. As written, atoms, molecules, organisms, chrome, and pages will be parsed as sibling keys of the app object rather than children of component_dirs, and component_dirs: will have a null value.

🐛 Proposed fix for YAML indentation
     component_dirs:
-    atoms: "brandbook/atoms"
-    molecules: "brandbook/molecules"
-    organisms: "brandbook/organisms"
-    chrome: "brandbook/chrome"
-    pages: "brandbook/pages"
+      atoms: "brandbook/atoms"
+      molecules: "brandbook/molecules"
+      organisms: "brandbook/organisms"
+      chrome: "brandbook/chrome"
+      pages: "brandbook/pages"
     token_files:
-    - "src/components/brandbook/styles/tokens.css"
-    - "src/components/brandbook/styles/keyframes.css"
-    - "src/components/brandbook/styles/patterns.css"
-    - "src/components/brandbook/styles/primitives.css"
-    - "src/components/brandbook/styles/components-lib.css"
-    - "src/components/brandbook/styles/editorial-lib.css"
+      - "src/components/brandbook/styles/tokens.css"
+      - "src/components/brandbook/styles/keyframes.css"
+      - "src/components/brandbook/styles/patterns.css"
+      - "src/components/brandbook/styles/primitives.css"
+      - "src/components/brandbook/styles/components-lib.css"
+      - "src/components/brandbook/styles/editorial-lib.css"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@starter/variants/wiliaquison_valentim/design-system.config.yaml` around lines
67-79: the YAML has incorrect indentation; make the keys under component_dirs
(atoms, molecules, organisms, chrome, pages) nested by indenting them beneath
component_dirs (e.g., two spaces) so component_dirs maps to that mapping instead
of null, and indent the token_files list items so they are children of
token_files (each list entry indented under token_files); update the entries
referenced as component_dirs, atoms, molecules, organisms, chrome, pages, and
token_files accordingly to restore proper YAML nesting.
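
The parsing failure described above can be reproduced in a few lines of PyYAML (a standalone illustration, not project code; requires the third-party `pyyaml` package):

```python
import yaml

# Mis-indented: atoms sits at the same level as component_dirs
bad = """\
apps:
  - name: site
    component_dirs:
    atoms: "brandbook/atoms"
"""
# Properly indented: atoms is nested under component_dirs
good = """\
apps:
  - name: site
    component_dirs:
      atoms: "brandbook/atoms"
"""

bad_app = yaml.safe_load(bad)["apps"][0]
good_app = yaml.safe_load(good)["apps"][0]

# Mis-indented: component_dirs parses as null and atoms leaks to the app level
print(bad_app["component_dirs"], "atoms" in bad_app)
# Properly indented: component_dirs is a mapping containing atoms
print(good_app["component_dirs"])
```

This is exactly what the verification script above reported: `component_dirs` is null and its intended children become siblings of the app object.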
