The head of a US federal agency has sounded a note of caution over the risks for publicly traded companies diving into artificial intelligence (AI).

The warning came from the chairman of the US Securities and Exchange Commission (SEC), Gary Gensler, in a speech on Tuesday at the Yale Law School.

Gensler was last in the news headlines in early January, when he officially approved the first US-listed exchange-traded funds (ETFs) to track bitcoin, in what was labelled a watershed moment for the world’s largest cryptocurrency, as well as the broader crypto industry.


AI caution

But this week Gensler used a speech to warn investors against buying into the current AI feeding frenzy, and to beware of misleading AI hype and so-called ‘AI-washing’, in which publicly traded firms misleadingly or untruthfully promote their use of AI, a practice that can harm investors and run afoul of US securities law.

“We’ve seen in our economy how one or a small number of tech platforms can come to dominate a field,” said Gensler. “There’s one leading search engine (Google), one leading retail platform (Amazon), and three leading cloud providers (Amazon Web Services, Microsoft Azure, Google Cloud Platform).”


“I think due to the economies of scale and network effects at play we’re bound to see the same develop with AI,” he cautioned. “In fact, we’ve already seen affiliations between the three largest cloud providers and the leading generative AI companies.”

He pointed out that thousands of financial entities are looking to build downstream applications relying on what is likely to be but a handful of base models upstream.

“Such a development would promote both herding and network interconnectedness,” he said. “Thus, AI may play a central role in the after-action reports of a future financial crisis – and we won’t have Tom Cruise in Minority Report to prevent it from happening.”

“The challenges to financial stability that AI may pose in the future will require new thinking on system-wide or macro-prudential policy interventions,” said Gensler. “Regulators and market participants will need to think about the dependencies and interconnectedness of potentially thousands of financial institutions to an AI model or data aggregator.”

Existing laws

Gensler cited Alan Turing, whom many regard as the father of computing, who in 1950 wrote a seminal paper opening with: “I propose to consider the question, ‘Can machines think?’”


Gensler asked what that means for securities law, particularly the laws related to fraud and manipulation.

He noted that, despite the recent clamour for new legislation to regulate AI, there are already plenty of laws to govern bad behaviour.

“Fraud is fraud, and bad actors have a new tool, AI, to exploit the public,” he said, before pointing out that under the current securities laws, there are many things you can’t do.

He urged financial institutions implementing AI to consider investor protection and ensure they are also putting in place appropriate guardrails.

“Did those guardrails take into account current law and regulation, such as those pertaining to front-running, spoofing, fraud, and providing advice or recommendations?” he asked.

“Did they test it before deployment and how? Did they continue to test and monitor? What is their governance plan – did they update the various guardrails for changing regulations, market conditions, and disclosures?”


He also cautioned against AI-washing.

“We’ve seen time and again that when new technologies come along, they can create buzz from investors as well as false claims from the Professor Hills of the day,” said Gensler. “If a company is raising money from the public, though, it needs to be truthful about its use of AI and associated risk.”

“As AI disclosures by SEC registrants increase, the basics of good securities lawyering still apply,” he warned. “Claims about prospects should have a reasonable basis, and investors should be told that basis.”

Instead of disclosing those risks using “boilerplate” language about AI, Gensler said, executives should consider whether artificial intelligence plays a significant part in a company’s business, including its internal operations, and craft specific disclosures that speak to those risks.

He also sounded a word of caution about AI-based models providing an increasing ability to make predictions.

Tom Jowitt

Tom Jowitt is a leading British tech freelancer and long-standing contributor to Silicon UK. He is also a bit of a Lord of the Rings nut...
