feat(eval): add evaluate_full_response option to rubric-based evaluation #5216
Siddhartha90 wants to merge 3 commits into google:main
Conversation
When an agent emits text before a tool call (e.g. presenting a plan),
then calls a tool, then emits more text (e.g. an explanation), the
`rubric_based_final_response_quality_v1` metric only sends the post-tool-call
text to the judge. The pre-tool-call text is stored in
`intermediate_data.invocation_events` but is never included in the judge prompt.
This means rubrics that check for content in the pre-tool-call text always
fail, even though the agent correctly produced that content.
This commit adds an `evaluate_full_response` boolean field to
`RubricsBasedCriterion` (following the pattern of `evaluate_intermediate_nl_responses`
on `HallucinationsCriterion`). When set to true, the evaluator concatenates
all text from invocation_events with the final_response before sending to
the judge, giving it the complete picture of the agent's output.
Usage:
```json
{
  "rubric_based_final_response_quality_v1": {
    "threshold": 0.8,
    "evaluate_full_response": true,
    "rubrics": [...]
  }
}
```
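The concatenation described in this commit can be sketched roughly as below. This is a hypothetical illustration, not the actual ADK implementation: the function name and the dict-based event shape are assumptions.

```python
# Hypothetical sketch of joining pre- and post-tool-call text before
# sending it to the judge. Event shape (dicts with a "text" key) is an
# assumption; real ADK invocation events are typed objects.

def build_judge_response(invocation_events, final_response):
    """Join all natural-language text emitted during the invocation
    with the final response, so the judge sees the full output."""
    parts = []
    for event in invocation_events:
        text = event.get("text")
        if text:  # skip tool calls and events with no text
            parts.append(text)
    parts.append(final_response)
    return "\n\n".join(parts)
```

With this, a rubric that checks for the pre-tool-call plan can match, because the plan text is part of the string handed to the judge.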
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Response from ADK Triaging Agent: Hello @Siddhartha90, thank you for creating this PR! Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). Also, this PR is a new feature, could you please associate the github issue with this PR? If there is no existing issue, could you please create one? In addition, could you please include a testing plan? This information will help reviewers to review your PR more efficiently. Thanks!
Response from ADK Triaging Agent: Hello @Siddhartha90, thank you for updating the pull request with the associated issue and a testing plan! It looks like the Contributor License Agreement (CLA) is still not signed. Before we can merge your contribution, we'll need you to sign the CLA. You can do so at https://cla.developers.google.com/. Thanks!
The criterion may be deserialized as `BaseCriterion` (which accepts extra fields via `extra="allow"`) rather than `RubricsBasedCriterion`, so the `isinstance` check fails even when `evaluate_full_response` is present. Using `getattr` with a default handles both cases. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
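The `getattr` fallback described in this commit can be sketched as follows. The real ADK criteria are Pydantic models; here a plain class stands in for a `BaseCriterion` that kept the unknown field as an extra attribute (everything below is illustrative, not the actual ADK code).

```python
# Hypothetical sketch. A plain class mimics Pydantic's extra="allow":
# unknown fields from the config become plain attributes, so a config
# containing evaluate_full_response may land on a BaseCriterion instance
# rather than a RubricsBasedCriterion.

class BaseCriterion:
    def __init__(self, **fields):
        for name, value in fields.items():
            setattr(self, name, value)

class RubricsBasedCriterion(BaseCriterion):
    def __init__(self, evaluate_full_response=False, **fields):
        super().__init__(**fields)
        self.evaluate_full_response = evaluate_full_response

def wants_full_response(criterion) -> bool:
    # Works for both cases: a properly typed RubricsBasedCriterion, and a
    # BaseCriterion that only carries the field as an extra attribute.
    # Falls back to False when the field is absent entirely.
    return bool(getattr(criterion, "evaluate_full_response", False))
```

An `isinstance(criterion, RubricsBasedCriterion)` guard would return `False` for the first case and silently drop the user's setting, which is the bug this commit fixes.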
Hi @Siddhartha90, thank you for your contribution! It appears you haven't yet signed the Contributor License Agreement (CLA). Please visit https://cla.developers.google.com/ to complete the signing process. Once the CLA is signed, we'll be able to proceed with the review of your PR. Thank you!
Fixes #5217
Summary
When an agent emits text before a tool call (e.g. presenting a plan), then calls a tool, then emits more text (e.g. an explanation),
`rubric_based_final_response_quality_v1` only sends the post-tool-call text to the judge as `final_response`. The pre-tool-call text is stored in `intermediate_data.invocation_events` but is never included in the judge prompt. This means rubrics that check for content in the pre-tool-call text always fail, even though the agent correctly produced that content.
Changes
- Add `evaluate_full_response: bool = False` to `RubricsBasedCriterion` (following the pattern of `evaluate_intermediate_nl_responses` on `HallucinationsCriterion`)
- When enabled, concatenate text from `invocation_events` + `final_response` before sending to the judge

Usage
```json
{
  "rubric_based_final_response_quality_v1": {
    "threshold": 0.8,
    "evaluate_full_response": true,
    "rubrics": [...]
  }
}
```

Motivation
We have a resume improvement agent that:
1. Presents a plan (pre-tool-call text)
2. Calls a tool (`submit_improved_resume`)
3. Emits an explanation (post-tool-call text)

From the user's perspective this is one continuous response. But the rubric evaluator only judges step 3. Rubrics checking for the plan (step 1) always fail.
With `evaluate_full_response: true`, the judge sees the complete agent output and can accurately evaluate all rubrics.

Backwards compatible
The flag defaults to `false`, so existing behavior is unchanged.

Test plan
Scenario: Agent emits text before and after a tool call within a single invocation
- Baseline (flag unset): run `rubric_based_final_response_quality_v1` without `evaluate_full_response` set. Confirm the judge only receives the post-tool-call text in `<final_answer>`. Rubrics checking for pre-tool-call content should fail. This validates no regression.
- With `evaluate_full_response: true`: run the same eval with the flag enabled. Confirm the judge receives the concatenated text from all invocation events + `final_response` in `<final_answer>`. Rubrics checking for pre-tool-call content should now pass.
- Backwards compatibility: existing `test_config.json` files without `evaluate_full_response` continue to work unchanged (the field defaults to `false`).

Pre-PR validation: We installed the lib from this PR (`uv pip install "google-adk[eval] @ git+https://github.com/Siddhartha90/adk-python.git@feat/evaluate-full-response"`), then tested the core logic (concatenating text from `invocation_events` + `final_response`) against a production agent that emits plan text → calls a tool → emits explanation text. Without full-response concatenation, a couple of our rubrics, `presents_plan` and `warm_acknowledgment`, which relied on pre-tool-call content, consistently scored 0.0. With full-response concatenation, all rubrics scored 1.0. The same logic is applied in this PR's changes to `format_auto_rater_prompt`.

🤖 Generated with Claude Code
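The behavior exercised by the test plan, where the flag switches the contents of `<final_answer>` between the post-tool-call text only and the full concatenated output, can be sketched as below. This is a hypothetical illustration: the function name `format_auto_rater_prompt` comes from the PR text, but the invocation/event shape and prompt template here are assumptions, not the actual ADK API.

```python
# Hypothetical sketch of how format_auto_rater_prompt might honor the
# flag. Invocation is modeled as a plain dict; real ADK types differ.

def format_auto_rater_prompt(invocation, criterion) -> str:
    if getattr(criterion, "evaluate_full_response", False):
        # Full picture: every text segment plus the final response.
        events = invocation["intermediate_data"]["invocation_events"]
        texts = [e["text"] for e in events if e.get("text")]
        answer = "\n\n".join(texts + [invocation["final_response"]])
    else:
        # Legacy behavior: post-tool-call text only.
        answer = invocation["final_response"]
    return f"<final_answer>{answer}</final_answer>"
```

With the flag off, rubrics such as `presents_plan` never see the plan text; with it on, the plan appears inside `<final_answer>` and can be scored.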