AI Firms Agree Safeguards, Including Watermarks For AI Content

The Biden administration has announced that a number of big-name players in the artificial intelligence market have agreed to voluntary safeguards designed to address the risks posed by AI.

The announcement from the White House said the voluntary commitments underscore “safety, security, and trust and mark a critical step toward developing responsible AI.”

The move comes amid ongoing concern from experts, regulators and governments over the potential misuse of AI technologies in the years ahead.


White House meeting

The White House said that President Biden on Friday had met with Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI and secured “voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology.”

Companies that are developing these emerging technologies have a responsibility to ensure their products are safe, said the White House.

The White House said that these commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing responsible AI.

The Biden-Harris Administration said it is currently developing an executive order and will pursue bipartisan legislation to help America lead the way in responsible innovation.

AI commitments

So what exactly are these seven leading AI companies committing to?

Perhaps the most eye-catching commitment is the use of watermarks on AI-generated content such as text, images, audio and video, amid concern that deepfake content can be utilised for fraudulent and other criminal purposes.

The White House listed the following commitments from these companies:

Ensuring Products are Safe Before Introducing Them to the Public

  • The companies commit to internal and external security testing of their AI systems before their release. This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects.
  • The companies commit to sharing information across the industry and with governments, civil society, and academia on managing AI risks. This includes best practices for safety, information on attempts to circumvent safeguards, and technical collaboration.

Building Systems that Put Security First

  • The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. These model weights are the most essential part of an AI system, and the companies agree that it is vital that the model weights be released only when intended and when security risks are considered.
  • The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems. Some issues may persist even after an AI system is released and a robust reporting mechanism enables them to be found and fixed quickly.

Earning the Public’s Trust

  • The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.
  • The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. This report will cover both security risks and societal risks, such as the effects on fairness and bias.
  • The companies commit to prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy. The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them.
  • The companies commit to develop and deploy advanced AI systems to help address society’s greatest challenges. From cancer prevention to mitigating climate change to so much in between, AI – if properly managed – can contribute enormously to the prosperity, equality, and security of all.
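To make the watermarking commitment concrete: the White House does not specify a technique, and the companies have not published their schemes. As a purely illustrative sketch, one simple (and easily stripped) way to mark generated text is to embed an invisible identifier using zero-width Unicode characters — real production watermarks would need to be statistical or cryptographic so they survive editing. All names below (`embed_watermark`, `detect_watermark`, the `"AI"` tag) are hypothetical.

```python
# Toy illustration only: embed an invisible marker in generated text
# using zero-width Unicode characters. This is NOT any company's actual
# watermarking scheme.

ZW0 = "\u200b"  # zero-width space      -> encodes bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> encodes bit 1

def embed_watermark(text: str, tag: str) -> str:
    """Append the tag's bits to the text as invisible characters."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def detect_watermark(text: str):
    """Recover the embedded tag, or return None if no marker is present."""
    bits = "".join("1" if c == ZW1 else "0"
                   for c in text if c in (ZW0, ZW1))
    if not bits:
        return None
    return "".join(chr(int(bits[i:i + 8], 2))
                   for i in range(0, len(bits), 8))

marked = embed_watermark("A generated paragraph.", "AI")
assert detect_watermark(marked) == "AI"
assert detect_watermark("Ordinary human text.") is None
```

The fragility of this toy approach (a copy-paste into a plain-text editor can destroy the marker) is exactly why the commitment calls for "robust technical mechanisms" rather than simple tagging.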

The Biden Administration said it will work with allies and partners to establish a strong international framework to govern the development and use of AI.

It has already consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.

The UK has recently said it is seeking to be the “geographical home” of coordinated international efforts to regulate artificial intelligence, and the UK will host an international summit on the risks and regulation of AI later this year.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long-standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
