
Issue 7: Validator's structured-output call on Ollama is fragile #8

@sanjaigridsandguides

Description

Problem

  • validation_chain = VALIDATION_PROMPT | ollama_llm.with_structured_output(ValidationResult) asks Ollama for structured output.
  • with_structured_output on Ollama relies on the model supporting JSON mode or function calling. Many Ollama models silently fall back to free-form output, which then fails Pydantic validation and raises an exception (see the sketch after this list).
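
A minimal sketch of the fragile pattern, assuming the validator is wired roughly as described; the prompt text, the shape of ValidationResult, and the pulled model name are illustrative, not the project's actual code:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_ollama import ChatOllama
from pydantic import BaseModel


class ValidationResult(BaseModel):  # illustrative schema
    valid: bool
    reason: str


VALIDATION_PROMPT = ChatPromptTemplate.from_template("Validate this answer: {answer}")
ollama_llm = ChatOllama(model="llama3")  # whichever model the user happens to have pulled

# Works only if the pulled model actually honors JSON mode / function calling.
validation_chain = VALIDATION_PROMPT | ollama_llm.with_structured_output(ValidationResult)

# With a model that replies in free-form text, this raises an exception
# (output-parsing or Pydantic validation error, depending on the LangChain version)
# instead of returning a ValidationResult.
result = validation_chain.invoke({"answer": "2 + 2 = 4"})
```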

Impact

  • Random validation failures depending on which model the user happens to have pulled.
  • The catch block then treats every such failure as an "invalid" result and burns the retry budget on infrastructure errors.

Required Fix

  • Document the requirement that the validator model must support JSON output.
  • Pin a known-good model (e.g. llama3.1:8b-instruct-q4_K_M) in llm.py.
  • Alternatively, fall back to manual JSON parsing (e.g. a regex that extracts the JSON object) when structured output fails, and only count a parseable "invalid" verdict from the validator as an iteration (see the sketch after this list).
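
One possible shape for the fallback path, reusing the names from the sketch above; the regex extraction, helper name, and exception handling are illustrative, not a definitive implementation:

```python
import json
import re

from pydantic import ValidationError


def validate_with_fallback(answer: str) -> ValidationResult | None:
    """Try structured output first; fall back to scraping JSON out of free-form text.

    Returns None for infrastructure failures so the caller can skip them
    instead of burning the retry budget.
    """
    try:
        # Happy path: the model honors JSON mode / function calling.
        return validation_chain.invoke({"answer": answer})
    except Exception:
        pass  # structured output failed; fall back to the raw model below

    raw = (VALIDATION_PROMPT | ollama_llm).invoke({"answer": answer})
    match = re.search(r"\{.*\}", raw.content, re.DOTALL)
    if not match:
        return None  # no JSON at all: infrastructure problem, not a verdict
    try:
        return ValidationResult(**json.loads(match.group(0)))
    except (json.JSONDecodeError, ValidationError):
        return None  # unparseable: also not a countable iteration
```

In the retry loop, only a non-None verdict with valid == False would count against the iteration budget; a None return would be logged and retried or surfaced as an infrastructure error.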
