Problem

`validation_chain = VALIDATION_PROMPT | ollama_llm.with_structured_output(ValidationResult)` uses Ollama with structured output. `with_structured_output` for Ollama relies on the model supporting JSON mode / function calling — many Ollama models silently fall back to free-form output, which then fails Pydantic validation and raises an exception.
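A minimal sketch of the failure mode, assuming a `ValidationResult` model with hypothetical `is_valid`/`reason` fields (the real model isn't shown in this report). `with_structured_output` ultimately validates the model's raw reply against the Pydantic schema, so a prose reply raises instead of returning a verdict:

```python
from pydantic import BaseModel, ValidationError


class ValidationResult(BaseModel):
    # Field names are assumptions for illustration.
    is_valid: bool
    reason: str


def parse_structured(raw: str) -> ValidationResult:
    # Roughly what structured output does with the model's raw reply.
    return ValidationResult.model_validate_json(raw)


# A model that honors JSON mode returns parseable JSON:
ok = parse_structured('{"is_valid": false, "reason": "missing citation"}')

# A model that silently fell back to free-form text raises instead:
failed = False
try:
    parse_structured("Sure! The answer looks invalid because ...")
except ValidationError:
    failed = True
```

This is why the failure looks random: it depends entirely on whether the pulled model honors JSON mode.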
Impact
Random validation failures depending on which model the user happens to have pulled.
The catch block then treats these exceptions as "invalid" verdicts and burns the retry budget on infrastructure errors.
Required Fix
- Document the requirement that the validator model must support JSON output.
- Pin a known-good model (e.g. `llama3.1:8b-instruct-q4_K_M`) in `llm.py`.
- Alternatively, fall back to manual JSON parsing with a regex when structured output fails, and only count "validator returned a parseable invalid verdict" as an iteration.
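The regex fallback in the last bullet could look like this sketch. The `ValidationResult` fields and the `parse_verdict` helper are assumptions, not the repo's actual code; the key point is returning `None` for unparseable output so the caller can distinguish infrastructure errors from real invalid verdicts:

```python
import re
from typing import Optional

from pydantic import BaseModel, ValidationError


class ValidationResult(BaseModel):
    # Field names are assumptions for illustration.
    is_valid: bool
    reason: str


def parse_verdict(raw: str) -> Optional[ValidationResult]:
    """Extract the first JSON object from free-form model output.

    Returns None when no parseable verdict is found, so the caller
    can retry without spending an iteration of the validation budget.
    """
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if not match:
        return None
    try:
        return ValidationResult.model_validate_json(match.group(0))
    except ValidationError:
        return None
```

The caller would then count an iteration only when `parse_verdict` returns an actual invalid verdict, and treat `None` as an infrastructure failure to retry.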