Biden Executive Order Sets Out AI Safeguards

The administration of US President Joe Biden has released a wide-ranging executive order on artificial intelligence (AI) that, amongst other measures, obliges companies developing the most powerful models to submit regular security reports to the federal government.

The 111-page document builds on an AI “Bill of Rights” issued late last year that similarly sought to address some of the technology’s main potential drawbacks while pushing to explore its benefits.

Aside from its national security provisions, the order – which has been anticipated for some time – seeks to promote competition in the artificial intelligence market while mitigating potential harms such as discrimination in housing, healthcare and justice.

It obliges private companies to submit reports on how they train and test “dual-use foundation models”, a category of AI models that it defines to include the most powerful next-generation AI systems.

Next-gen cyberweapons

The government is using the Cold War-era Defence Production Act to compel businesses to notify it when they train systems that could pose serious risks to national security, and to provide the results of safety tests.

An unnamed senior official told the Financial Times that the provision was “primarily” aimed at the most powerful next-generation systems and was not seen as applying to “any system currently on the market”.

The order indicates that the White House considers the rapid development of advanced cyberweapons to be one of AI’s most serious risks – a theme perhaps partly inspired by the apocalyptic depiction of such an artificially intelligent weapon in last summer’s film Mission: Impossible – Dead Reckoning Part One.

Large cloud service providers such as Amazon, Microsoft and Google are to be required to notify the government each time foreign organisations rent servers to train large AI models, in a move that extends the administration’s efforts to prevent countries such as China from accessing high-end AI training chips such as Nvidia’s H100 and A100 GPUs.

‘Fraud and deception’

The order instructs the Commerce Department to draft guidance on adding watermarks to AI-generated content as a means of addressing “fraud and deception”, while the Federal Trade Commission is encouraged to “exercise its authorities” to promote AI industry competition.

Congress is urged to pass data privacy legislation and the order seeks an assessment of how US federal agencies collect and use commercially available personal data.

“President Biden is rolling out the strongest set of actions any government in the world has ever taken on AI safety, security and trust,” said White House deputy chief of staff Bruce Reed.

“It’s the next step in an aggressive strategy to do everything on all fronts to harness the benefits of AI and mitigate the risks”.

UK summit

This week US Vice President Kamala Harris is to give a speech in London about US policy before attending the UK’s AI Safety Summit at Bletchley Park, which is expected to discuss guardrails for future AI development.

To date the EU has taken the most aggressive stance on AI regulation, with its incoming AI Act, while the US has said it is continuing to assess which aspects of AI require new legislation.

The order primarily applies to federal agencies, and is intended to provide guidelines to the public sector, but the administration has made it clear that legislation will be required to fully implement its ideas.

Matthew Broersma

Matt Broersma is a long-standing tech freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
