Code review feedback: architecture, modularity & error handling #78
Thanks for the detailed review! This feedback inspired the work done in PR #141. Here is what we addressed:

**Architecture boundaries**

Extracted a services layer that cleanly separates orchestration from business logic. Activities are now thin wrappers that own heartbeats and error classification, while domain logic lives in focused services wired through a per-workflow DI container with no orchestration imports. Also broke some circular dependencies and consolidated the agent registry into a single source of truth.

**Error handling**

Went with the Result type approach: a typed error code enum and explicit Result types replace the scattered string-matching classification. Internal errors match on typed codes, with string matching kept only as a fallback for external SDK/network errors. Also consolidated duplicated detection logic (like spending cap checks) that had drifted out of sync across multiple locations.

**Configuration validation**

Upgraded from fail-on-first to collect-all-errors validation with human-readable messages. Kept JSON Schema rather than adding a new validation library, since the schema serves as both validation and documentation.

**Contributor documentation**

Haven't set up formal contributing guidelines yet; that's still on the todo list. In the meantime, the project's CLAUDE.md covers the architecture, key design patterns, and guides for adding agents and modifying prompts, so it's the best starting point for understanding the codebase.

All of this landed in PR #141. External behavior (five-phase pipeline, orchestration, agent behavior) is unchanged. Appreciate the review that kicked this off.
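A minimal sketch of the Result-type pattern described above (the enum members and function names here are hypothetical, not the actual Shannon code):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Generic, TypeVar, Union

T = TypeVar("T")

class ErrorCode(Enum):
    # Hypothetical codes for illustration only.
    RATE_LIMITED = "rate_limited"
    SPENDING_CAP_EXCEEDED = "spending_cap_exceeded"
    INVALID_CONFIG = "invalid_config"

@dataclass
class Ok(Generic[T]):
    value: T

@dataclass
class Err:
    code: ErrorCode
    message: str

Result = Union[Ok[T], Err]

def is_retryable(result: "Result") -> bool:
    # Classification matches on typed codes, never on error-message strings.
    return isinstance(result, Err) and result.code is ErrorCode.RATE_LIMITED
```

The payoff is that classification can't silently break when an upstream error message changes; string matching stays confined to a fallback for external SDK/network errors whose types you don't control.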
Hi! I spent some time looking through the Shannon codebase and wanted to share a few observations and suggestions from a code quality and maintainability perspective.
What I liked:
Suggestions:
**Architecture boundaries**
Some parts of the code mix orchestration logic with lower-level implementation details. Introducing clearer interfaces between core scanning logic and supporting utilities could improve testability and extensibility.
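One lightweight way to draw that boundary is a structural interface between the core logic and its supporting utilities, so each side can be tested in isolation. A sketch with hypothetical names (`FindingSink`, `Scanner` are illustrative, not from the codebase):

```python
from typing import Protocol

class FindingSink(Protocol):
    """Hypothetical boundary: core scanning logic depends on this
    interface rather than on a concrete storage/reporting utility."""
    def record(self, finding: dict) -> None: ...

class Scanner:
    def __init__(self, sink: FindingSink) -> None:
        self._sink = sink

    def scan(self, targets: list[str]) -> int:
        # Stand-in for the real scanning logic.
        for target in targets:
            self._sink.record({"target": target, "severity": "info"})
        return len(targets)

class ListSink:
    """Trivial in-memory implementation, handy in unit tests."""
    def __init__(self) -> None:
        self.findings: list[dict] = []

    def record(self, finding: dict) -> None:
        self.findings.append(finding)
```

Production code would wire in a real sink; tests pass a `ListSink` and assert on what was recorded, with no orchestration imports on either side.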
**Error handling consistency**
In a few places, operations appear to be implicitly assumed to succeed, so failures can pass silently. A more uniform error-handling strategy (explicit errors, structured results) would make debugging and long-term maintenance easier.
**Configuration validation**
Given the complexity of configs and prompts, validating configuration inputs early (with clear error messages) could prevent subtle runtime issues.
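For example, a validator that collects every problem before reporting gives much clearer feedback than raising on the first bad field. This sketch hand-rolls the checks with hypothetical field names; JSON Schema validators expose the same collect-all idea (e.g. `jsonschema`'s `iter_errors`):

```python
def validate_config(config: dict) -> list[str]:
    """Return all validation errors at once instead of failing on the first.
    The field names and rules here are illustrative only."""
    errors: list[str] = []

    model = config.get("model")
    if not isinstance(model, str) or not model:
        errors.append("model: expected a non-empty string")

    max_tokens = config.get("max_tokens")
    if not isinstance(max_tokens, int) or max_tokens < 1:
        errors.append("max_tokens: expected an integer >= 1")

    return errors  # empty list means the config is valid
```

Running this at startup and printing the whole list lets a user fix every mistake in one pass rather than replaying a fail-fix-fail loop.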
**Documentation for contributors**
An architecture overview (diagram or short section) explaining how the main components interact would help new contributors ramp up faster.
Overall, this is a very impressive project with a strong foundation. These are just minor suggestions that could help make it even more robust and contributor-friendly.