Official repository for the paper "Measuring Audio's Impact on Correctness: Audio-Contribution-Aware Post-Training of Large Audio Language Models"
- [2026.04] Update on the MMSU metric of released models: Based on community feedback, we identified a flaw in our evaluation script that artificially inflated the MMSU scores of our released models by ignoring sequence order. Crucially, our AudioMCQ training data, the paper's conclusions regarding audio contribution, and the MMAR/MMAU metrics are completely unaffected. When comparing against our work, we recommend reporting the MMAR/MMAU results or re-evaluating our published checkpoints with your own exact-match algorithm. We sincerely apologize for any inconvenience this may have caused the research community.
- [2026.03] 🔥 We released AudioMCQ-StrongAC-GeminiCoT, a highly curated subset featuring native CoT reasoning from Gemini 3.1 Pro! It serves as the official training set for DCASE 2026 Challenge Task 5.
- [2026.02] Paper accepted by ICLR 2026.
- [2025.09] Paper published on arXiv.
- [2025.09] AudioMCQ dataset released with 571k samples!
- [2025.07] We achieved 1st place in the DCASE 2025 Audio-Question-Answering challenge using AudioMCQ!
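The order-sensitive exact-match re-evaluation recommended in the MMSU update above could look roughly like the following sketch. This is an illustrative helper, not our evaluation script; the regex-based letter extraction and the A–D option range are assumptions you should adapt to your output format.

```python
import re

def extract_choice(text: str):
    """Pull the first standalone option letter (A-D) out of a model response.
    Illustrative heuristic only; adapt to your model's answer format."""
    m = re.search(r"\b([A-D])\b", text.strip())
    return m.group(1) if m else None

def exact_match_accuracy(predictions, references):
    """Order-sensitive exact match: prediction i is scored only against
    reference i (never against the reference set as a whole)."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must align one-to-one")
    hits = sum(
        extract_choice(p) == extract_choice(r) and extract_choice(p) is not None
        for p, r in zip(predictions, references)
    )
    return hits / len(references)
```

For example, `exact_match_accuracy(["The answer is (B).", "C"], ["B", "A"])` scores the first pair as correct and the second as wrong, giving 0.5.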
- Dataset: https://huggingface.co/datasets/inclusionAI/AudioMCQ
- Paper: https://arxiv.org/abs/2509.21060
- DCASE 2025 Challenge: 1st Place Results
AudioMCQ is a comprehensive audio multiple-choice question dataset with 571k samples designed for post-training Large Audio Language Models (LALMs). The dataset features dual chain-of-thought annotations and audio-contribution filtering, achieving state-of-the-art results in audio understanding tasks.
- 571k high-quality samples across sound, music, speech, and temporal domains
- Dual CoT annotations: Structured and unstructured reasoning paths
- Audio-Contribution filtering: Weak (54.8%) and strong (45.2%) splits
- Pre-trained models available: Weak-to-Strong and Mixed-to-Strong paradigms
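The audio-contribution split described above can be sketched as follows: a sample contributes little audio information ("weak") if a text-only model already answers it correctly from the question and options alone, and is "strong" otherwise. The class and field names below are hypothetical and do not reflect the paper's actual filtering pipeline.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    question: str
    answer: str          # gold choice letter, e.g. "B"
    text_only_pred: str  # prediction from a text-only model (audio withheld)

def split_by_audio_contribution(samples):
    """Hypothetical audio-contribution filter: if a text-only model already
    answers correctly, the audio contributed little ("weak"); if the model
    needs the audio to get it right, the sample is "strong"."""
    weak, strong = [], []
    for s in samples:
        (weak if s.text_only_pred == s.answer else strong).append(s)
    return weak, strong
```

Running this over a corpus and reporting `len(weak) / len(samples)` would yield split ratios analogous to the 54.8% / 45.2% figures above.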
For complete dataset information, statistics, data format, and download instructions, please visit the Hugging Face dataset page linked above. The repository contains:
- Full dataset documentation
- Detailed statistics and examples
- Data format specifications
- Download links for audio files
- Usage instructions
- Model checkpoints
We provide trained model checkpoints for two post-training paradigms:
| Training Paradigm | Hugging Face Link |
|---|---|
| Weak-to-Strong | inclusionAI/AudioMCQ-Weak-To-Strong |
| Mixed-to-Strong | inclusionAI/AudioMCQ-Mixed-To-Strong |
All training code used for this project can be found in the /training_scripts directory.
- Haolin He: harlandzzc@link.cuhk.edu.hk
If you find AudioMCQ useful in your research, please cite:
@article{he2025measuring,
title={Measuring Audio's Impact on Correctness: Audio-Contribution-Aware Post-Training of Large Audio Language Models},
author={He, Haolin and Du, Xingjian and Sun, Renhe and Dai, Zheqi and Xiao, Yujia and Yang, Mingru and Zhou, Jiayi and Li, Xiquan and Liu, Zhengxi and Liang, Zining and others},
journal={arXiv preprint arXiv:2509.21060},
year={2025}
}

We thank the organizers of DCASE 2025 and the research community for their valuable feedback and support.


