Google, OpenAI, Microsoft and Anthropic join forces to promote safe AI development
Google LLC, Microsoft Corp., ChatGPT developer OpenAI LP and artificial intelligence research startup Anthropic today announced the formation of a new industry body that’s intended to ensure the development of safe and responsible AI models.
In a joint announcement published today on Google’s blog, the four companies outlined the Frontier Model Forum, which will be responsible for advancing AI safety research, identifying best practices and collaborating with governments, policymakers and academics to ensure that AI is developed in a trustworthy and safe manner.
“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Microsoft Vice Chair and President Brad Smith. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”
Ever since its debut in November, OpenAI’s ChatGPT AI chatbot has enjoyed a dizzying rush of popularity thanks to its ability to understand natural language and reply in convincing, humanlike conversation. Since that time, AI models have only become more refined, capable of producing research and long-form essays, solving logic puzzles and more with ease. Other models can create convincing images and video by drawing on existing material, raising concerns about privacy and impersonation.
Membership in the forum is limited to companies working on large-scale machine-learning models that build on and exceed the capabilities of current models, the statement said. Members will be tasked with developing and deploying these “frontier models” while demonstrating a strong commitment to building them safely, with joint initiatives focused on safety, trust and community outreach.
In the coming months, the Frontier Model Forum intends to establish an advisory board that will guide its priorities. The founding companies will also become the focal point for decision-making and consulting with governments and the community on how to shepherd the development of AI models.
According to the statement, the founding companies intend to follow in the footsteps of other initiatives that promote safe and responsible AI development, such as the Partnership on AI and MLCommons.
The formation of the forum comes at a time when world governments are waking up to the raw power of AI models and the burgeoning industry around them. All of the forum’s members signed on to voluntary commitments announced with the White House last week, pledging to address growing concerns about the risks of AI models, after President Joe Biden met with science and technology advisors in April to discuss the technology’s impact.
OpenAI recently became the target of an investigation by the Federal Trade Commission over its trade practices after an artificial intelligence ethics group asked the commission to look into the startup.
At the same time, the European Union has been pushing forward with what will become the world’s first comprehensive AI law, dubbed the “AI Act,” which will govern how AI is regulated in the EU. The legislation, under debate since April, attempts to balance the growth of the industry against consumer safety and human rights concerns such as privacy, surveillance and copyright.
Image: Pixabay