
[ICLR 2026] AudioMCQ: Audio Multiple-Choice Question Dataset

Official repository for the paper "Measuring Audio's Impact on Correctness: Audio-Contribution-Aware Post-Training of Large Audio Language Models"


News

  • [2026.04] Update on the MMSU metric of released models: Based on community feedback, we identified a flaw in our evaluation script that artificially inflated the MMSU scores of our released models by ignoring sequence order during exact matching. Crucially, the AudioMCQ training data, the paper's conclusions regarding audio contribution, and the MMAR/MMAU metrics remain completely unaffected. When comparing against our work, we recommend reporting the MMAR/MMAU results or re-evaluating our published checkpoints with your own exact-match algorithm. We sincerely apologize for any inconvenience this oversight may have caused the research community.
  • [2026.03] 🔥 We released AudioMCQ-StrongAC-GeminiCoT, a highly curated subset featuring native CoT reasoning from Gemini 3.1 Pro! It proudly serves as the official training set for DCASE 2026 Challenge Task 5.
  • [2026.02] Paper accepted by ICLR 2026.
  • [2025.09] Paper published on arXiv.
  • [2025.09] AudioMCQ dataset released with 571k samples!
  • [2025.07] We achieve 1st place in the DCASE 2025 Audio-Question-Answering challenge by using AudioMCQ!
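The MMSU note above concerns an exact-match scorer that ignored sequence order. As a minimal illustration (a hypothetical helper, not our released evaluation script), an order-sensitive exact match for multiple-choice answers can be written as:

```python
def exact_match(prediction: str, reference: str) -> bool:
    """Order-sensitive exact match for multiple-choice answers.

    Normalizes whitespace and case, then compares the full token
    sequence in order. A set-based comparison (the kind of flaw
    described in the news item above) would wrongly accept answers
    whose tokens merely match in any order.
    """
    def norm(s: str) -> list[str]:
        return s.strip().lower().split()
    return norm(prediction) == norm(reference)

# Order matters: a set-based comparison would accept the swapped answer.
assert exact_match("B. rising then falling", "b. rising then falling")
assert not exact_match("falling then rising", "rising then falling")
assert set("falling then rising".split()) == set("rising then falling".split())
```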

Overview

AudioMCQ is a comprehensive audio multiple-choice question dataset with 571k samples designed for post-training Large Audio Language Models (LALMs). The dataset features dual chain-of-thought annotations and audio-contribution filtering, achieving state-of-the-art results in audio understanding tasks.

Overview of dataset construction pipeline.

Distribution analysis of AudioMCQ dataset.

Randomly sampled questions from four distinct question types.

Key Highlights

  • 571k high-quality samples across sound, music, speech, and temporal domains
  • Dual CoT annotations: Structured and unstructured reasoning paths
  • Audio-Contribution filtering: Weak (54.8%) and strong (45.2%) splits
  • Pre-trained models available: Weak-to-Strong and Mixed-to-Strong paradigms
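To make the audio-contribution filtering idea concrete, here is a minimal sketch under our own assumed interface (sample IDs plus two evaluation passes, with and without audio); it is not the paper's exact implementation:

```python
def split_by_audio_contribution(samples, correct_with_audio, correct_text_only):
    """Partition samples into weak/strong audio-contribution sets.

    samples: list of sample IDs.
    correct_with_audio / correct_text_only: dicts mapping sample ID -> bool,
    produced by two evaluation passes (with the audio input, and from the
    question text alone).

    A sample counts as "strong" when the audio was needed to answer
    correctly; everything else falls into the "weak" split.
    """
    weak, strong = [], []
    for s in samples:
        if correct_with_audio[s] and not correct_text_only[s]:
            strong.append(s)  # audio was required to get it right
        else:
            weak.append(s)    # answerable (or unanswerable) without audio
    return weak, strong
```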

Dataset Access

For complete dataset information, statistics, data format, and download instructions, please visit the AudioMCQ Hugging Face repository, which contains:

  • Full dataset documentation
  • Detailed statistics and examples
  • Data format specifications
  • Download links for audio files
  • Usage instructions
  • Model checkpoints

Model Checkpoints

We provide trained model checkpoints for two post-training paradigms:

| Training Paradigm | Hugging Face Link                   |
| ----------------- | ----------------------------------- |
| Weak-to-Strong    | inclusionAI/AudioMCQ-Weak-To-Strong |
| Mixed-to-Strong   | inclusionAI/AudioMCQ-Mixed-To-Strong |
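One way to read the two paradigm names is as two-stage data schedules over the weak and strong audio-contribution splits. The sketch below encodes that reading (our interpretation for illustration, not the authors' exact training recipe):

```python
def build_schedule(weak, strong, paradigm):
    """Return per-stage training data for a two-stage schedule.

    Assumed semantics (our reading of the paradigm names, not the
    authors' exact recipe): stage 1 uses the weak-audio-contribution
    split ("weak-to-strong") or a mix of both splits
    ("mixed-to-strong"); stage 2 uses the strong split in either case.
    """
    if paradigm == "weak-to-strong":
        return [weak, strong]
    if paradigm == "mixed-to-strong":
        return [weak + strong, strong]
    raise ValueError(f"unknown paradigm: {paradigm}")
```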

Training Scripts

All training code used for this project can be found in the /training_scripts directory.

Contact

Contributors

Citation

If you find AudioMCQ useful in your research, please cite:

@article{he2025measuring,
  title={Measuring Audio's Impact on Correctness: Audio-Contribution-Aware Post-Training of Large Audio Language Models},
  author={He, Haolin and Du, Xingjian and Sun, Renhe and Dai, Zheqi and Xiao, Yujia and Yang, Mingru and Zhou, Jiayi and Li, Xiquan and Liu, Zhengxi and Liang, Zining and others},
  journal={arXiv preprint arXiv:2509.21060},
  year={2025}
}

Acknowledgements

We thank the organizers of DCASE 2025 and the research community for their valuable feedback and support.

Related Resources