
Add RTFx tracking and validation to all benchmark workflows#458

Merged
Alex-Wengg merged 2 commits into main from feature/add-rtfx-validation-to-benchmarks on Mar 28, 2026

Conversation

Member

@Alex-Wengg Alex-Wengg commented Mar 28, 2026

Summary

  • Add RTFx metric extraction to qwen3-asr-benchmark.yml
  • Add RTFx validation to ALL 6 benchmark workflows to fail if RTFx is 0
  • Fix PR comment posting with if: always() so comments post even when validation fails

Changes

1. RTFx Tracking (qwen3-asr-benchmark.yml)

Extract and display performance metrics:

  • medianRTFx - Median real-time factor across test files
  • overallRTFx - Overall real-time factor (total audio / total inference time)

2. RTFx Validation (all 6 benchmark workflows)

Add validation to fail workflows with exit 1 if RTFx is 0 or N/A, indicating silent benchmark failure:

  • qwen3-asr-benchmark.yml: Validate medianRTFx and overallRTFx
  • asr-benchmark.yml: Validate all 6 RTFx metrics (v2/v3 × clean/other/streaming)
  • diarizer-benchmark.yml: Validate RTFx
  • parakeet-eou-benchmark.yml: Validate RTFx
  • sortformer-benchmark.yml: Validate RTFx
  • vad-benchmark.yml: Validate MUSAN and VOiCES RTFx
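As a rough illustration of the validation described above, a shared check might look like the following shell sketch (the `validate_rtfx` helper and variable names are assumptions for illustration, not the workflows' actual step contents):

```shell
# Fail the job if an extracted RTFx metric indicates a silent benchmark failure.
# An empty value, "N/A", or any zero-formatted string is treated as invalid.
validate_rtfx() {
  RTFX="$1"
  METRIC_NAME="$2"
  if [ -z "$RTFX" ] || [ "$RTFX" = "N/A" ] || \
     [ "$RTFX" = "0" ] || [ "$RTFX" = "0.0" ] || [ "$RTFX" = "0.00" ]; then
    echo "CRITICAL: $METRIC_NAME is '$RTFX' - benchmark likely failed silently"
    return 1
  fi
  echo "$METRIC_NAME = $RTFX"
  return 0
}

validate_rtfx "30.48" "medianRTFx"
validate_rtfx "0.00" "overallRTFx" || echo "validation failed as expected"
```

In a workflow, `return 1` would be replaced by `exit 1` so the step itself fails.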

3. Fix PR Comment Posting

  • Add if: always() to Comment PR steps in workflows that didn't have it
  • Without this, PR comments don't post when validation fails
  • Users need to see what went wrong even if the workflow fails
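For illustration, a Comment PR step with the condition added might look like this (the step name and action version here are assumptions, not the workflows' exact contents):

```yaml
- name: Comment PR
  if: always()   # post results even when a previous validation step failed
  uses: actions/github-script@v7
  with:
    script: |
      // build and post the benchmark results comment here
```

Without `if: always()`, GitHub Actions skips the step as soon as any earlier step in the job exits non-zero.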

Why Fail on RTFx = 0?

If RTFx is 0 after benchmarking, it means:

  1. Benchmark didn't run properly
  2. Audio duration was 0
  3. Processing failed silently
  4. Metric extraction failed

Better to fail fast with clear error messages than report misleading zero metrics.

Fixes from Previous PR #454

This PR fixes the issues identified by Devin in #454:

  • ✅ No ModelNames.swift changes (avoiding cache path breakage)
  • ✅ Added if: always() to Comment PR steps
  • ✅ Clean branch from main (no unrelated commits)

Closes #454

🤖 Generated with Claude Code



Changes:
1. Add RTFx metric extraction to qwen3-asr-benchmark.yml
   - Extract medianRTFx and overallRTFx from benchmark results
   - Display metrics in PR comments

2. Add RTFx validation to all 6 benchmark workflows
   - Fail with exit 1 if RTFx is 0 or N/A
   - Prevents silent benchmark failures from being reported as success

3. Fix PR comment posting with if: always()
   - Add "if: always()" to Comment PR steps
   - Ensures PR comments post even when validation fails
   - Allows users to see what went wrong

Workflows updated:
- qwen3-asr-benchmark.yml: Validate 2 RTFx metrics + add tracking
- asr-benchmark.yml: Validate 6 RTFx metrics (v2/v3 × clean/other/streaming)
- diarizer-benchmark.yml: Validate 1 RTFx metric
- parakeet-eou-benchmark.yml: Validate 1 RTFx metric
- sortformer-benchmark.yml: Validate 1 RTFx metric
- vad-benchmark.yml: Validate 2 RTFx metrics (MUSAN + VOiCES)

If RTFx is 0, it means:
- Benchmark didn't run properly
- Audio duration was 0
- Processing failed silently
- Metric extraction failed

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
@claude
Contributor

claude bot commented Mar 28, 2026

Claude finished @Alex-Wengg's task.


PR Review: RTFx Tracking and Validation ✅

Successfully reviewed the RTFx tracking and validation changes across all benchmark workflows.

Todo List

  • Create pending GitHub review
  • Analyze PR diff and changes
  • Review workflow YAML syntax and logic
  • Check RTFx validation consistency
  • Submit comprehensive review with feedback

Review Summary

Overall Assessment: Good to merge with minor suggestions for improvement

Key Strengths:

  • Comprehensive coverage: All 6 benchmark workflows now validate RTFx metrics
  • Proper error handling: Uses exit 1 to fail workflows when RTFx is invalid
  • Fixed PR commenting: Added if: always() ensures comments post even on failure
  • Performance tracking: qwen3 workflow now reports median/overall RTFx metrics
  • Clear validation logic: RTFx = 0 or N/A correctly indicates benchmark failure

Minor Issues Identified:

  1. Inconsistent validation patterns: Some workflows check for empty variables, others don't
  2. String comparison robustness: Hardcoded "0.00" might miss edge cases like "0" or "0.0"
  3. Error message consistency: Mix of ❌ CRITICAL: and ⚠️ prefixes

The core functionality is solid and effectively addresses the silent failure issue. The PR successfully fixes the problems from #454 and implements the RTFx validation as specified.

Contributor

@claude claude bot left a comment


Code Review Summary

This PR successfully adds RTFx tracking and validation across all 6 benchmark workflows. The changes are well-implemented with appropriate failure handling and PR commenting fixes.

Key Strengths:

  • Comprehensive coverage: All benchmark workflows now validate RTFx metrics
  • Proper error handling: Uses exit 1 to fail workflows when RTFx is invalid
  • Fixed PR commenting: Added if: always() to ensure comments post even on failure
  • Clear validation logic: RTFx = 0 or N/A correctly indicates benchmark failure
  • Good performance reporting: qwen3 workflow now shows median/overall RTFx metrics

Minor Issues to Consider:

1. Inconsistent validation patterns (.github/workflows/asr-benchmark.yml:202):

  • Some workflows check [ -z "$RTF" ] for empty variables, others don't
  • Consider standardizing to: if [ "$RTFx" = "0.00" ] || [ "$RTFx" = "N/A" ] || [ -z "$RTFx" ]; then

2. String comparison robustness (multiple workflows):

  • Hardcoded "0.00" might miss edge cases like "0" or "0.0"
  • Consider using numeric comparison: if (( $(echo "$RTFx < 0.01" | bc -l) ))

3. Error message consistency:

  • Some workflows use ❌ CRITICAL: prefix, others use ⚠️
  • Consider standardizing critical validation failures to use ❌ CRITICAL:

Architecture Assessment:

The overall approach is sound - failing fast on RTFx = 0 prevents misleading zero metrics from being reported. The validation logic correctly identifies when benchmarks haven't run properly.

Recommendation: Approve with minor suggestions for consistency improvements. The core functionality is correct and addresses the silent failure issue effectively.

devin-ai-integration[bot]

This comment was marked as resolved.

@github-actions

github-actions bot commented Mar 28, 2026

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Status | Description |
|---|---|---|---|---|
| DER | 15.1% | <30% | | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | | Jaccard Error Rate |
| RTFx | 30.48x | >1.0x | | Real-Time Factor (higher is faster) |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
|---|---|---|---|
| Model Download | 6.665 | 19.4 | Fetching diarization models |
| Model Compile | 2.856 | 8.3 | CoreML compilation |
| Audio Load | 0.053 | 0.2 | Loading audio file |
| Segmentation | 10.325 | 30.0 | Detecting speech regions |
| Embedding | 17.209 | 50.0 | Extracting speaker voices |
| Clustering | 6.884 | 20.0 | Grouping same speakers |
| Total | 34.426 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
|---|---|---|
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): runs at ~150x real-time (RTFx)
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 34.4s diarization time • Test runtime: 1m 41s • 03/28/2026, 04:15 PM EST

@github-actions

github-actions bot commented Mar 28, 2026

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
|---|---|---|---|---|---|---|
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 785.4x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 730.7x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%

@github-actions

github-actions bot commented Mar 28, 2026

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
|---|---|---|
| WER (Avg) | 7.03% | Average Word Error Rate |
| WER (Med) | 4.17% | Median Word Error Rate |
| RTFx | 9.98x | Real-time factor (higher = faster) |
| Total Audio | 470.6s | Total audio duration processed |
| Total Time | 48.5s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
|---|---|---|
| Avg Chunk Time | 0.049s | Average chunk processing time |
| Max Chunk Time | 0.097s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 0m55s • 03/28/2026, 04:10 PM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O

@github-actions

github-actions bot commented Mar 28, 2026

Qwen3-ASR int8 Smoke Test ✅

| Check | Result |
|---|---|
| Build | |
| Model download | |
| Model load | |
| Transcription pipeline | |
| Decoder size | 571 MB (vs 1.1 GB f32) |

Performance Metrics

| Metric | CI Value | Expected on Apple Silicon |
|---|---|---|
| Median RTFx | 0.06x | ~2.5x |
| Overall RTFx | 0.06x | ~2.5x |

Runtime: 3m3s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.

Fixes issues identified in review:

1. Move EXECUTION_TIME calculation before RTFx validation (qwen3)
   - Previously exit 1 prevented EXECUTION_TIME from being set
   - Now PR comments show proper runtime even when validation fails

2. Standardize error messages to "❌ CRITICAL:" across all workflows
   - Changed "⚠️" to "❌ CRITICAL:" for consistency
   - All validation failures now use the same format

3. Add more zero format checks (0, 0.0, 0.00)
   - Catches edge cases like "0" or "0.0" in addition to "0.00"
   - More robust string comparison for RTFx validation

Workflows updated:
- qwen3-asr-benchmark.yml: Move EXECUTION_TIME before validation
- asr-benchmark.yml: Standardize error messages, add zero variants
- parakeet-eou-benchmark.yml: Add zero variants and empty check

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
@github-actions

github-actions bot commented Mar 28, 2026

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description |
|---|---|---|---|---|
| DER | 14.5% | <20% | | Diarization Error Rate (lower is better) |
| RTFx | 3.35x | >1.0x | | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
|---|---|---|---|
| Model Download | 14.453 | 4.6 | Fetching diarization models |
| Model Compile | 6.194 | 2.0 | CoreML compilation |
| Audio Load | 0.073 | 0.0 | Loading audio file |
| Segmentation | 30.190 | 9.6 | VAD + speech detection |
| Embedding | 312.170 | 99.7 | Speaker embedding extraction |
| Clustering (VBx) | 0.819 | 0.3 | Hungarian algorithm + VBx clustering |
| Total | 313.201 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method | DER | Mode | Description |
|---|---|---|---|
| FluidAudio (Offline) | 14.5% | VBx Batch | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7% | Chunk-based | First-occurrence speaker mapping |
| Research baseline | 18-30% | Various | Standard dataset performance |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 343.2s processing • Test runtime: 5m 43s • 03/28/2026, 04:15 PM EST

@github-actions

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target | Status |
|---|---|---|---|
| DER | 33.4% | <35% | |
| Miss Rate | 24.4% | - | - |
| False Alarm | 0.2% | - | - |
| Speaker Error | 8.8% | - | - |
| RTFx | 12.8x | >1.0x | |
| Speakers | 4/4 | - | - |

Sortformer High-Latency • ES2004a • Runtime: 2m 42s • 2026-03-28T20:05:27.244Z

@github-actions

PocketTTS Smoke Test ✅

| Check | Result |
|---|---|
| Build | |
| Model download | |
| Model load | |
| Synthesis pipeline | |
| Output WAV | ✅ (165.0 KB) |

Runtime: 0m34s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality may differ from Apple Silicon.

@github-actions

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---|---|---|---|---|
| test-clean | 0.57% | 0.00% | 5.10x | |
| test-other | 1.40% | 0.00% | 3.59x | |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---|---|---|---|---|
| test-clean | 0.80% | 0.00% | 5.22x | |
| test-other | 1.56% | 0.00% | 3.46x | |

Streaming (v3)

| Metric | Value | Description |
|---|---|---|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.65x | Streaming real-time factor |
| Avg Chunk Time | 1.382s | Average time to process each chunk |
| Max Chunk Time | 1.736s | Maximum chunk processing time |
| First Token | 1.693s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
|---|---|---|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.62x | Streaming real-time factor |
| Avg Chunk Time | 1.419s | Average time to process each chunk |
| Max Chunk Time | 1.599s | Maximum chunk processing time |
| First Token | 1.392s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 6m21s • 03/28/2026, 04:21 PM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard

@Alex-Wengg Alex-Wengg merged commit 7feaec8 into main Mar 28, 2026
14 checks passed
@Alex-Wengg Alex-Wengg deleted the feature/add-rtfx-validation-to-benchmarks branch March 28, 2026 20:31