Miles Brundage
Senior Advisor for AGI Readiness · OpenAI · 2024
Stated that "neither OpenAI nor any other frontier lab is ready" for AGI, and noted that OpenAI had placed increasingly restrictive limits on what he could publish. The AGI Readiness team was disbanded after his departure.
Sources
Key Publications
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims (arXiv preprint)
This paper argues that the AI community's reliance on voluntary, unverifiable commitments to safety and ethics is insufficient, and proposes concrete mechanisms through which AI developers can make claims about their systems' safety, security, fairness, and privacy that external parties can actually verify. The authors, who include researchers from major AI labs, academic institutions, and civil society organizations, identify a gap between the aspirational principles that organizations publish and the lack of infrastructure for holding them accountable to those principles. They propose a toolkit of verification mechanisms organized across three categories: institutional mechanisms such as third-party audits and red-teaming exercises, software mechanisms such as audit trails and formal verification tools, and hardware mechanisms such as secure computing environments that enable privacy-preserving evaluation. The paper is significant because it moves the conversation about AI governance beyond abstract principles toward practical implementation, offering a roadmap for building the trust infrastructure that responsible AI deployment requires. Its multi-stakeholder authorship and emphasis on actionable proposals have made it influential in policy discussions about AI regulation, audit requirements, and the design of safety evaluation frameworks.