refactor(models): Refine MessageAgentThought SQLAlchemy typing #11
Code Review by Qodo
1. AgentThoughtValidation allows extra fields
```python
class AgentThoughtValidation(BaseModel):
    """
    Validation model for agent thought data before database persistence.
    """

    message_id: str
    position: int
    thought: str | None = None
    tool: str | None = None
    tool_input: str | None = None
    observation: str | None = None

    class Config:
        extra = "allow"  # Pydantic v1 syntax - should use ConfigDict(extra='forbid')
```
1. AgentThoughtValidation allows extra fields 📘 Rule violation ⛨ Security
• `AgentThoughtValidation` uses the legacy Pydantic `class Config` and sets `extra = "allow"`, which conflicts with the requirement to use Pydantic v2 configuration and forbid undeclared fields by default.
• Allowing extra/unknown fields weakens validation guarantees and can permit unexpected, unvalidated payloads to be persisted or processed.
• This increases the risk of accepting malformed or attacker-controlled fields without explicit schema review. A sketch of the fix follows the agent prompt below.
Agent prompt
## Issue description
`AgentThoughtValidation` is configured with legacy Pydantic `class Config` and `extra = "allow"`, which violates the requirement to use Pydantic v2 configuration and forbid undeclared fields by default.
## Issue Context
The compliance checklist requires DTO-style validation to use Pydantic v2 patterns (`ConfigDict`) and `extra='forbid'` to prevent unexpected fields.
## Fix Focus Areas
- api/core/agent/base_agent_runner.py[52-66]
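A minimal sketch of the remediation, assuming the fields quoted in the diff above are the full schema; `ConfigDict` is the standard Pydantic v2 replacement for `class Config`:

```python
from pydantic import BaseModel, ConfigDict


class AgentThoughtValidation(BaseModel):
    """Validation model for agent thought data before database persistence."""

    # Pydantic v2 configuration: reject any field not declared on the model,
    # so unexpected or attacker-controlled keys fail at validation time.
    model_config = ConfigDict(extra="forbid")

    message_id: str
    position: int
    thought: str | None = None
    tool: str | None = None
    tool_input: str | None = None
    observation: str | None = None
```

With `extra="forbid"`, passing an undeclared key to the constructor raises a `ValidationError` instead of silently carrying the field through to persistence.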
```python
if tool_input_payload:
    try:
        tool_inputs = json.loads(tool_input_payload)
    except Exception:
        tool_inputs = {tool: {} for tool in tool_names}
else:
    tool_inputs = {tool: {} for tool in tool_names}

observation_payload = agent_thought.observation
if observation_payload:
    try:
        tool_responses = json.loads(observation_payload)
    except Exception:
        tool_responses = dict.fromkeys(tool_names, observation_payload)
else:
    tool_responses = dict.fromkeys(tool_names, observation_payload)
```
2. `except Exception` hides JSON errors 📘 Rule violation ⛯ Reliability
• JSON parsing failures in `organize_agent_history` are caught with a bare `except Exception:` and silently replaced with defaults, without logging any context about the malformed payload.
• This can mask data corruption/serialization bugs and makes production debugging difficult, because the failure is neither surfaced nor observable.
• The current behavior also makes it harder to distinguish empty payloads from invalid JSON payloads at runtime. A sketch of the fix follows the agent prompt below.
Agent prompt
## Issue description
`organize_agent_history` swallows JSON parsing exceptions with `except Exception:` and provides no logging/context, reducing observability and making malformed data hard to diagnose.
## Issue Context
Compliance requires meaningful, context-rich error handling and forbids swallowing exceptions without logging.
## Fix Focus Areas
- api/core/agent/base_agent_runner.py[469-485]
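A sketch of one possible fix, reusing the variable names from the quoted hunk (`tool_input_payload`, `tool_names`, `agent_thought`); the `agent_thought.id` used for log context is an assumption about the model:

```python
import json
import logging

logger = logging.getLogger(__name__)

if tool_input_payload:
    try:
        tool_inputs = json.loads(tool_input_payload)
    except json.JSONDecodeError:
        # Log the failure with context instead of silently defaulting, so
        # invalid JSON is distinguishable from an empty payload at runtime.
        logger.warning(
            "Malformed tool_input JSON for agent thought %s; using empty inputs",
            agent_thought.id,  # assumed attribute, for log correlation
            exc_info=True,
        )
        tool_inputs = {tool: {} for tool in tool_names}
else:
    tool_inputs = {tool: {} for tool in tool_names}
```

Narrowing the handler to `json.JSONDecodeError` also stops unrelated exceptions from being swallowed; the observation branch would get the same treatment.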
```diff
                 )
                 tool_call_response.append(
                     ToolPromptMessage(
-                        content=tool_responses.get(tool, agent_thought.observation),
+                        content=str(tool_inputs.get(tool, agent_thought.observation)),
                         name=tool,
                         tool_call_id=tool_call_id,
                     )
```
3. Tool outputs lost 🐞 Bug ✓ Correctness
• `ToolPromptMessage` content is populated from `tool_inputs` instead of `tool_responses`, so the agent history records the tool request as if it were the tool output.
• Downstream agent runners treat `ToolPromptMessage.content` as the observation, so subsequent reasoning/tool chaining will be incorrect and can break agent behavior. A sketch of the fix follows the agent prompt below.
Agent prompt
### Issue description
`BaseAgentRunner.organize_agent_history()` currently sets `ToolPromptMessage.content` using `tool_inputs` (tool request payload) instead of `tool_responses` (tool output / observation). Downstream runners (e.g., CoT runner) treat `ToolPromptMessage.content` as the observation, so this breaks agent reasoning and tool chaining.
### Issue Context
- `tool_responses` is already computed from `agent_thought.observation` but is unused.
- `ToolPromptMessage.content` should represent the tool execution output.
### Fix Focus Areas
- api/core/agent/base_agent_runner.py[458-506]
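The fix is likely a one-liner: source `content` from the already computed `tool_responses` rather than `tool_inputs`. A sketch using the names from the hunk above:

```python
tool_call_response.append(
    ToolPromptMessage(
        # Record the tool *output* (observation), not the request payload;
        # fall back to the raw observation when the tool key is missing.
        content=str(tool_responses.get(tool, agent_thought.observation)),
        name=tool,
        tool_call_id=tool_call_id,
    )
)
```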
```diff
                 message_unit_price=Decimal(0),
                 message_price_unit=Decimal("0.001"),
                 message_files=json.dumps(messages_ids) if messages_ids else "",
                 answer="",
                 observation="",
                 answer_token=0,
-                answer_unit_price=0,
-                answer_price_unit=0,
+                answer_unit_price=Decimal("0.001"),
+                answer_price_unit=Decimal(0),
                 tokens=0,
                 total_price=0,
```
4. Wrong answer price unit 🐞 Bug ✓ Correctness
• `create_agent_thought` initializes `answer_price_unit` to `Decimal(0)` while the model column default/server_default is `Decimal("0.001")`.
• If LLM usage isn't later written (error paths / early abort), stored costs can be systematically wrong (a unit multiplier of 0 yields a computed total of 0). A sketch of the fix follows the agent prompt below.
Agent prompt
### Issue description
`BaseAgentRunner.create_agent_thought()` initializes `answer_price_unit` to `Decimal(0)` despite the ORM column default/server_default being `0.001`. If the thought record isn’t later updated with real usage, cost reporting can be wrong (unit multiplier 0 => computed totals become 0).
### Issue Context
The pricing system multiplies by the unit multiplier (`price_config.unit`), so persisting 0 as the unit multiplier is almost never correct.
### Fix Focus Areas
- api/core/agent/base_agent_runner.py[308-334]
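A sketch of the corrected arguments, assuming the intent is to mirror the message-side pattern visible in the same hunk (`message_unit_price=Decimal(0)`, `message_price_unit=Decimal("0.001")`) and the column's `0.001` server default:

```python
from decimal import Decimal

# Hypothetical excerpt of the create_agent_thought() keyword arguments:
# the unit *price* starts at zero because no usage has been recorded yet,
# while the *price unit* matches the column default so cost math that
# multiplies by the unit still produces sane totals on never-updated rows.
pricing_kwargs = {
    "answer_unit_price": Decimal(0),
    "answer_price_unit": Decimal("0.001"),
}
```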
Benchmark PR from agentic-review-benchmarks#11