fix(eval): make _EvalMetricResultWithInvocation.expected_invocation optional#5221

Open
giulio-leone wants to merge 1 commit into google:main from giulio-leone:fix/eval-conversation-scenario-validation-error

Conversation

@giulio-leone

Summary

Fixes #5214. AgentEvaluator.evaluate() crashes with a pydantic ValidationError when evaluating conversation_scenario eval cases, because _EvalMetricResultWithInvocation.expected_invocation is typed as a required Invocation while local_eval_service intentionally passes None for scenario-based cases.

Changes

src/google/adk/evaluation/agent_evaluator.py

  1. Make expected_invocation optional (line 93):

    expected_invocation: Optional[Invocation] = None

    This aligns with the public EvalMetricResultPerInvocation model in eval_metrics.py:323.

  2. Guard downstream attribute access in _print_details (lines 439–456):
    Added if per_invocation_result.expected_invocation else None guards around the user_content, final_response, and intermediate_data accesses. Both _convert_content_to_text and _convert_tool_calls_to_text already handle None gracefully.

Tests

Added tests/unittests/evaluation/test_eval_metric_result_with_invocation.py with 5 regression tests:

  • Construction with None / omitted / real expected_invocation
  • _get_eval_metric_results_with_invocation propagates None
  • _print_details does not crash with None expected invocation

All 5 tests pass. Existing eval tests are unaffected (the 2 pre-existing failures in test_custom_metric_evaluator.py were confirmed to also occur on main).

@google-cla

google-cla bot commented Apr 9, 2026

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@adk-bot adk-bot added the eval [Component] This issue is related to evaluation label Apr 9, 2026
@adk-bot
Collaborator

adk-bot commented Apr 9, 2026

Response from ADK Triaging Agent

Hello @giulio-leone, thank you for your contribution!

Before we can merge this pull request, you'll need to sign the Contributor License Agreement (CLA). You can do so by following the instructions in the "cla/google" check at the bottom of the pull request.

Thanks!

…ptional

conversation_scenario eval cases intentionally pass expected_invocation=None
from local_eval_service (matching the public EvalMetricResultPerInvocation
model), but the private _EvalMetricResultWithInvocation required a non-None
Invocation, causing a pydantic ValidationError.

Changes:
- Make expected_invocation Optional[Invocation] with default None
- Guard attribute access in _print_details when expected_invocation is None

Fixes google#5214
@giulio-leone giulio-leone force-pushed the fix/eval-conversation-scenario-validation-error branch from e757949 to 6668a79 on April 9, 2026 03:01
@rohityan rohityan self-assigned this Apr 9, 2026

Labels

eval [Component] This issue is related to evaluation

Projects

None yet

Development

Successfully merging this pull request may close these issues.

AgentEvaluator crashes with ValidationError when evaluating conversation_scenario eval cases

3 participants