
added model quality gha #319

Merged
shreymodi1 merged 13 commits into main from shrey/modelquality
Nov 20, 2025

Conversation

Contributor

@shreymodi1 shreymodi1 commented Nov 6, 2025


name: Pull Request
about: Propose changes to the codebase
title: "Brief description of changes"
labels: ''
assignees: ''


Description

Please include a summary of the change and note which issue it fixes or which feature it implements. Include relevant motivation and context, and list any dependencies required for this change.

Fixes # (issue)
Implements # (issue)

Type of change

Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update
  • Refactoring/Code cleanup
  • Build/CI/CD related changes
  • Other (please describe):

How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.

  • Test A
  • Test B

Test Configuration:

  • Firmware version:
  • Hardware:
  • Toolchain:
  • SDK:

Checklist:

  • My code follows the style guidelines of this project (ran black ., isort ., flake8 .)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes
  • Any dependent changes have been merged and published in downstream modules
  • I have checked my code and corrected any misspellings

Screenshots (if applicable)

If applicable, add screenshots to help showcase your changes.

Additional context

Add any other context about the PR here.


Note

Adds comprehensive streaming-compliance tests (structured JSON, tools, reasoning, consistency) and updates rollout/metadata to capture finish_reason, reasoning_content, and tool call counts.

  • Benchmarks/Tests:
    • Add eval_protocol/benchmarks/test_glm_streaming_compliance.py with streaming and non-streaming compliance tests:
      • Structured JSON output, single/multi tool calls, complex args, parameter types, naming/array validation, recovery behavior.
      • Reasoning effort checks (none/low), tools+reasoning combos.
      • Streaming vs non-streaming output consistency shadow test.
  • Rollout Processor (eval_protocol/pytest/default_single_turn_rollout_process.py):
    • Capture finish_reason, serialize reasoning_content, and normalize tool_calls (with fallback conversion).
    • Populate execution_metadata.finish_reason and execution_metadata.tool_call_count.
    • Minor: add per-request no-cache, retain usage/duration logging.
  • Models (eval_protocol/models.py):
    • Extend ExecutionMetadata with finish_reason and tool_call_count fields.
    • Keep Message.reasoning_content and ChatCompletionContentPartTextParam support utilized by tests.

Written by Cursor Bugbot for commit fac4f37.

name: Streaming Compliance Benchmark

on:
  push:

Bug: Unrestricted Push Triggers Cause Excess CI Runs

The workflow trigger on: push: without branch filters will run on every push to any branch, including feature branches and pull requests. This causes unnecessary CI runs and resource consumption. Other workflows in the repository like ci.yml and fireworks-tracing-tests.yml restrict pushes to specific branches (e.g., main) or use path filters to avoid this issue.
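A branch-filtered trigger along the lines of the repository's other workflows might look like this; the path filter and branch name are illustrative, not copied from ci.yml:

```yaml
name: Streaming Compliance Benchmark

on:
  push:
    branches: [main]
  pull_request:
    paths:
      - "eval_protocol/benchmarks/**"
```

With this shape, pushes only trigger the workflow on main, while pull requests trigger it only when the benchmark files change.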


],
rollout_processor=SingleTurnRolloutProcessor(),
aggregation_method="mean",
passed_threshold=0.0,

Bug: Require threshold to enforce failing on zero score

The passed_threshold=0.0 allows the test to pass even when all compliance checks fail (score=0.0). For a streaming compliance benchmark that validates tool call behavior, this threshold should be higher (likely 1.0) to ensure the test only passes when the model correctly handles streaming tool calls.
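To see why, a pass check of the usual `score >= threshold` form (a simplified stand-in for the framework's mean aggregation and threshold logic, not the actual eval_protocol implementation) accepts a zero mean score whenever the threshold is 0.0:

```python
def benchmark_passes(scores, passed_threshold):
    """Simplified stand-in: mean aggregation followed by a threshold check."""
    mean_score = sum(scores) / len(scores)
    return mean_score >= passed_threshold


# With threshold 0.0, a run where every compliance check fails still passes.
all_failed = [0.0, 0.0, 0.0]
print(benchmark_passes(all_failed, 0.0))  # True: 0.0 >= 0.0
print(benchmark_passes(all_failed, 1.0))  # False: a 1.0 threshold rejects it
```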


"""Check whether the assistant retries tool calls when instructed to recover."""

assistant_msg = row.last_assistant_message()
print(f"assistant_msg: {assistant_msg}")

Bug: Debug print statement left in test code

A print() debug statement is left in the test_streaming_tool_retry_behavior function. This will clutter test output logs during CI/CD runs. The line print(f"assistant_msg: {assistant_msg}") should be removed as it appears to be temporary debugging code that was accidentally committed.
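If the value is genuinely useful while debugging, routing it through the standard `logging` module keeps it out of normal CI output; the logger name below is illustrative:

```python
import logging

# Module-level logger; silent by default, visible when DEBUG is enabled.
logger = logging.getLogger("test_glm_streaming_compliance")


def log_assistant_message(assistant_msg):
    """Emit the message at DEBUG level instead of printing unconditionally."""
    logger.debug("assistant_msg: %s", assistant_msg)
```

Enabling it per run (e.g. pytest's `--log-cli-level=DEBUG`) then surfaces the message only when someone actually wants it.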


@shreymodi1 shreymodi1 merged commit f10c29f into main Nov 20, 2025
3 checks passed
@shreymodi1 shreymodi1 deleted the shrey/modelquality branch November 20, 2025 00:10

Labels

None yet

Projects

None yet


2 participants