The Biden administration has announced that a number of big-name players in the artificial intelligence market have agreed to voluntary safeguards against the risks posed by AI.
The announcement from the White House said the voluntary commitments underscore “safety, security, and trust and mark a critical step toward developing responsible AI.”
The move comes amid ongoing concern from experts, regulators and governments over the potential misuse of AI technologies in the years ahead.
The White House said that President Biden on Friday had met with Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI and secured “voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology.”
Companies that are developing these emerging technologies have a responsibility to ensure their products are safe, said the White House.
The White House said that these commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing responsible AI.
The Biden-Harris Administration said it is currently developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible innovation.
So what exactly are these seven leading AI companies committing to?
Perhaps the most eye-catching commitment is the use of watermarks on AI-generated content such as text, images, audio and video, following concern that deepfake content can be utilised for fraudulent and other criminal purposes.
The White House listed a series of commitments from these companies.
The Biden Administration said it will work with allies and partners to establish a strong international framework to govern the development and use of AI.
It has already consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.
The UK has recently said it is seeking to be the “geographical home” of coordinated international efforts to regulate artificial intelligence, and the UK will host an international summit on the risks and regulation of AI later this year.