President Biden Meets AI Experts, Warns Of Risks

The US President has admitted to concerns about artificial intelligence, after meeting with AI advocates and leaders in California on Tuesday

US President Joe Biden met with artificial intelligence (AI) experts in San Francisco on Tuesday, as the US government continues to ponder how best to regulate the technology.

President Biden met with eight experts involved in researching AI or advocating on its potential impacts, as Washington DC seeks to educate itself about the risks and benefits of AI so as not to repeat regulatory mistakes made with previous technologies (e.g. social media and online advertising).

The issue of AI has been a growing concern for many governments and regulators around the world, and different nations are at different stages in their regulatory pushes.

Image credit: Pexels

Managing AI risks, opportunities

In the US, politicians, including President Joe Biden, have indicated they are planning rules for AI, but are also wary of stifling domestic innovation and limiting the ability of Western firms to compete with China.

Indeed the US seems to be leaning toward the use of existing laws to regulate AI.

But in a speech on seizing the opportunities and managing the risks of AI, President Biden said he wanted “to hear directly from the experts.

“And these are the world – some of the world’s leading experts on this issue and the intersection of technology and society, who we – who we can provide a range – who can provide a range of perspectives for us and – on AI’s enormous promise and its risks.”

The President met with AI experts including:

  • Tristan Harris, co-founder and executive director of the Center for Humane Technology;
  • Jim Steyer, CEO and founder of Common Sense Media;
  • Joy Buolamwini, founder of the Algorithmic Justice League;
  • Sal Khan, founder and CEO of Khan Academy;
  • Professor Rob Reich of Stanford University.

“As I’ve said before, we’re – we’ll see more technological change in the next 10 years than we’ve seen in the last 50 years and maybe even beyond that,” said Biden. “And AI is already driving that change in every part of American life, often in ways we don’t notice.”

“But in seizing this moment, we need to manage the risks to our society, to our economy, and our national security,” said Biden. “My administration is committed – is committed to safeguarding America’s rights and safety, from protecting privacy, to addressing bias and disinformation, to making sure AI systems are safe before they are released.”

President Biden pointed out that last October his administration had proposed an AI Bill of Rights to ensure that important protections are built into AI systems from the very start.

Then earlier this year President Biden signed an executive order directing his Cabinet to root out bias in the design and use of AI.

And in May, Vice President Kamala Harris hosted tech executives in the AI space, including OpenAI CEO Sam Altman and Google CEO Sundar Pichai.

Regulatory push

It should be remembered that the UK has already set out its own AI proposals.

In March the UK government set out its plan to regulate the AI sector, proposing five principles to guide the technology’s use under an “adaptable” regulatory approach.

Then in April the UK government also announced a taskforce (the Foundation Model Taskforce) with an initial £100 million in funding to develop AI foundation models.

Earlier this month in Washington DC, Prime Minister Rishi Sunak reached a deal with US President Joe Biden for the UK to host an international summit on the risks and regulation of AI later this year.

Shortly after that, the PM told the London Tech Week conference he wants the UK to be the “geographical home” of coordinated international efforts to regulate AI.

Meanwhile the European Parliament last week agreed changes to draft artificial intelligence rules, which would include a ban on the use of AI in biometric surveillance and a requirement for generative AI systems such as ChatGPT to disclose AI-generated content.