Google, Microsoft, OpenAI, and Anthropic launch partnership promoting responsible AI

In this 2007 file photo, exhibitors work on laptop computers in front of an illuminated sign of the Google logo at the industrial fair Hannover Messe in Hanover, Germany. Copyright Jens Meyer/AP Photo
By Giulia Carbonaro with AFP

US tech giants Google and Microsoft will cooperate with OpenAI and the start-up Anthropic to launch the Frontier Model Forum, a self-regulating industry body that will promote safe and responsible AI.


Google, Microsoft, OpenAI, and the start-up Anthropic announced on Wednesday that they will create the Frontier Model Forum, a self-regulating industry body that will promote the safe and responsible use of frontier artificial intelligence (AI) models.

Frontier AI models are defined as large-scale machine-learning models that exceed the capabilities of the most advanced models available today.

According to a statement released by the companies, the forum will draw on the "technical and operational expertise of its member companies" to benefit the entire AI industry "by advancing technical evaluations and developing a public library of solutions to support industry best practices and standards." 

Crucially, it will create one space where different companies can come together to discuss the advancement of AI.

"Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control," Brad Smith, Vice Chair & President of Microsoft, said in a press release. "This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity."

Anna Makanju, Vice President of Global Affairs at OpenAI, said that "it is vital that AI companies–especially those working on the most powerful models–align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible."

Other companies will be able to join the industry body later, provided that they develop and deploy frontier models, are committed to doing so safely, and are willing to contribute to advancing the forum's efforts and take part in its initiatives.

The introduction of generative AI-powered interfaces like ChatGPT, Bing, and Bard has stoked concern among members of the public and authorities.

The European Union is currently finalising a draft AI regulation which will impose obligations on companies in the sector, such as improved transparency with their users.

Last week in the US, President Joe Biden and his administration secured "commitments" from Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to uphold "three principles" in AI development: safety, security, and trust.
