Regression testing for AI agents. Snapshot behavior, diff tool calls, catch regressions in CI. Works with LangGraph, CrewAI, OpenAI, Anthropic.
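The snapshot-and-diff workflow named in that description is easy to picture with a short, library-agnostic sketch. Everything below (the snapshot path, the record format, the run_agent stand-in) is hypothetical illustration of the general pattern, not this repository's API: record the agent's tool-call trajectory, store it as a baseline, and fail the CI job when a later run drifts.

```python
# Hypothetical sketch: snapshot an agent's tool calls and diff them against
# a stored baseline in CI. Names and paths are illustrative only.
import json
from pathlib import Path

SNAPSHOT = Path("snapshots/support_agent_tool_calls.json")  # hypothetical location

def run_agent(prompt: str) -> list[dict]:
    """Stand-in for a real agent run; returns the ordered tool calls it made."""
    return [
        {"tool": "search_orders", "args": {"order_id": "A123"}},
        {"tool": "get_shipping_status", "args": {"order_id": "A123"}},
    ]

def test_tool_call_regression():
    calls = run_agent("Where is order A123?")
    if not SNAPSHOT.exists():
        # First run: write the baseline and pass.
        SNAPSHOT.parent.mkdir(parents=True, exist_ok=True)
        SNAPSHOT.write_text(json.dumps(calls, indent=2, sort_keys=True))
        return
    baseline = json.loads(SNAPSHOT.read_text())
    # Any change in tool order or arguments fails CI as a regression.
    assert calls == baseline, "Agent tool-call trajectory drifted from snapshot"
```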
Benchmarking the gap between AI agent hype and architecture. Three agent archetypes, 73-point performance spread, stress testing, network resilience, and ensemble coordination analysis with statistical validation.
Deterministic runtime for agent evaluation
CLI for benchmarks & evals of AI coding agents — on tasks you already understand, using your Claude / Codex / Gemini individual subscriptions or API keys.
University for AI agents. 92 courses, 4400+ scenarios, any model via OpenRouter. Auto-training loops generate per-model SKILL.md documents. Works with Claude Code, OpenClaw, Cursor, Windsurf. No fine-tuning required.
A curated collection of the world’s most advanced benchmark datasets for evaluating Large Language Model (LLM) Agents.
Pit AI coding agents against the same bug. Score them on tests, diff, cost, and time — pick the winning patch.
Deterministic evaluation environment for AI code reviewers covering bugs, security (OWASP), and architecture via FastAPI + OpenEnv.
The open benchmark for AI agent task execution. Claude Code vs Gemini CLI — who wins? Live leaderboard inside.
🧠 Discover and evaluate advanced benchmark datasets for Large Language Model agents to enhance performance assessment in real-world tasks.
Silicon Pantheon - A tactics game played by AI agents coached by a human
A reproducible benchmark for evaluating AI design agents across 7 design scenarios. Double-blind side-by-side (SbS) voting · 140 tasks · Bootstrap CI
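As a rough illustration of the bootstrap confidence interval mentioned there, the sketch below resamples side-by-side votes (1 = candidate preferred, 0 = baseline preferred) and reports a percentile interval for the win rate. The vote counts are invented for the example, not the benchmark's data.

```python
# Percentile bootstrap CI over hypothetical side-by-side votes.
import random

votes = [1] * 88 + [0] * 52          # made-up outcome for 140 tasks
rng = random.Random(0)

def bootstrap_ci(data, n_resamples=10_000, alpha=0.05):
    means = sorted(
        sum(rng.choices(data, k=len(data))) / len(data)  # resample with replacement
        for _ in range(n_resamples)
    )
    lo = means[int(n_resamples * alpha / 2)]
    hi = means[int(n_resamples * (1 - alpha / 2))]
    return lo, hi

print("win rate:", sum(votes) / len(votes))
print("95% bootstrap CI:", bootstrap_ci(votes))
```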
A curated list of benchmarks, eval harnesses, papers, datasets, and production checks for AI agents.
Release repository for agent benchmark evidence-reporting artifacts and reproduction workflows.
A community catalog of autonomous agents and bundles certified by passing TraceCore deterministic episode runs in public CI
AI Arena is a competitive evaluation framework where multiple AI agents answer the same set of questions under identical conditions. Their performance is scored, ranked, and tracked over time using two complementary metrics: AIQ and Elo.
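Elo is a standard rating scheme, so a worked update is easy to show; AIQ is specific to this project and not reproduced here. The K-factor and ratings below are generic defaults, not AI Arena's configuration.

```python
# Standard Elo rating update (illustrative parameters, not this project's).
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a: float, rating_b: float, score_a: float, k: float = 32.0):
    """Return updated (rating_a, rating_b); score_a is 1 for a win, 0.5 draw, 0 loss."""
    exp_a = expected_score(rating_a, rating_b)
    return (rating_a + k * (score_a - exp_a),
            rating_b + k * ((1 - score_a) - (1 - exp_a)))

# Example: a 1500-rated agent beats a 1600-rated agent and gains ~20.5 points.
print(elo_update(1500, 1600, score_a=1.0))
```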
OWLViz: An Open-World Benchmark for Visual Question Answering
Multimodal evaluation benchmark for AI agents in real-world field operations across 16 trades (HVAC, electrical, plumbing, roofing, solar, mining, oil & gas, marine, telecom, automotive, construction, and more). 194 cases; scores retrieval, code citation, jurisdiction, safety, trajectory, multi-turn, speed; 5-layer contamination defense.
AI benchmark for real-world inbox prioritization and decision-making