Microsoft, Google, OpenAI Set Up $10m Fund To Promote AI Research
Microsoft, Google, OpenAI, and Anthropic have set up a $10 million AI Safety Fund to promote responsible artificial intelligence research. The fund was established under their collective industry body, the Frontier Model Forum.
The Forum made the announcement while introducing Chris Meserole as its first Executive Director. It is the group's most significant announcement since the industry body was established in July.
Speaking on the purpose of the $10 million AI Safety Fund, the statement reads, “Over the past year, industry has driven significant advances in the capabilities of AI. As those advances have accelerated, new academic research into AI safety is required. To address this gap, the Forum and philanthropic partners are creating a new AI Safety Fund, which will support independent researchers from around the world affiliated with academic institutions, research institutions, and startups.
“The initial funding commitment for the AI Safety Fund comes from Anthropic, Google, Microsoft, and OpenAI, and the generosity of our philanthropic partners, the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Eric Schmidt, and Jaan Tallinn. Together, this amounts to over $10 million in initial funding.
“The primary focus of the Fund will be supporting the development of new model evaluations and techniques for red teaming AI models to help develop and test evaluation techniques for potentially dangerous capabilities of frontier systems,” the Forum stated, while affirming, “We believe that increased funding in this area will help raise safety and security standards and provide insights into the mitigations and controls that industry, governments, and civil society need to respond to the challenges presented by AI systems.”
“The Fund will put out a call for proposals within the next few months. Meridian Institute will administer the Fund, and their work will be supported by an advisory committee composed of independent external experts, experts from AI companies, and individuals with experience in grantmaking,” it explained.
The Frontier Model Forum is an industry body focused on ensuring that advances in AI technology and research remain safe, secure, and under human control, while identifying best practices and standards. It also aims to facilitate information-sharing among policymakers and industry.