Number of papers: 1
- Authors: Lama Ahmad, Sandhini Agarwal, Michael Lampe, Pamela Mishkin
- Abstract: Red teaming has emerged as a critical practice in assessing the possible risks of AI models and systems. It aids in discovering novel risks, stress-testing gaps in existing mitigations, enriching quantitative safety metrics, facilitating the creation of new safety measurements, and enhancing public trust in and the legitimacy of AI risk assessments. This white paper describes OpenAI’s work to date in external red teaming and draws some more general conclusions from this work....
- Labels: code model, code model security, benchmark