OpenAI, tech giants join White House initiative to address AI risks
The White House today announced that seven prominent tech firms will take steps to address the risks posed by their artificial intelligence models.
The steps will be taken as part of a set of voluntary commitments that the companies in question have agreed to make. The group includes Amazon.com Inc., Meta Platforms Inc., Microsoft Corp. and Google LLC. They are joined by OpenAI LP as well as two other well-funded AI startups, Inflection and Anthropic, which have raised more than $1.7 billion from investors over the past few months.
“These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing responsible AI,” the White House stated.
The first collection of commitments focuses on preventing the misuse of advanced AI models. Before releasing a new AI model to the public, OpenAI and the other participants will perform tests to find potential issues. The testing will be conducted with the help of independent experts, the White House detailed.
The companies have agreed to carry out red-teaming, which is the practice of simulating bad actors’ behavior to find flaws in a system. Their testing efforts will focus on determining whether a new AI model could lower the entry barriers to weapons development or aid hacking campaigns. The companies’ red-teaming initiatives will also seek to uncover other issues, including societal risks such as bias and discrimination as well as cases where an AI may create copies of itself.
An AI developer may not necessarily catch all the issues affecting a new model before releasing it. To address that possibility, the companies participating in the White House’s initiative have agreed to provide incentives for third-party risk research into their models. They will roll out incentives such as bug bounty programs to support external researchers’ work.
Another focus of the White House’s initiative is ensuring that leading AI developers’ systems are properly protected against hackers.
As part of the commitments they have agreed to make, OpenAI and the other participants will take steps to secure unreleased AI weights. Weights are the numerical parameters of a neural network that determine how it processes data. The companies will keep their weights in secure environments, limit employee access to those environments and implement insider threat detection programs.
The third major priority of the AI safety commitments is transparency. The participating companies will release reports containing information about their models’ capabilities and related details, the White House said. Moreover, the participants have agreed to create “watermark or provenance” mechanisms to help users determine whether a given piece of audiovisual content was generated by an AI.
The other commitments detailed today are designed to advance research into neural network safety. OpenAI and the other participants will prioritize research into the societal risks posed by AI, including bias, discrimination and privacy breaches. The companies will also establish a mechanism through which they can share and develop AI safety best practices.
The commitments announced today are part of a broader effort by the Biden administration to address the risks posed by advanced AI models. The effort, the White House detailed today, also includes an executive order currently in development.