Apple Co-Founder Wozniak Warns Over AI Misuse

Image credit: Steve Wozniak/Woz-U

AI is likely to be misused by ‘bad actors’ to commit fraud and spread misinformation, and must be regulated, says Apple co-founder Steve Wozniak

Apple co-founder Steve Wozniak has said he believes artificial intelligence (AI) could make it more difficult to spot scams and misinformation.

He told the BBC he feared the technology would be used by “bad actors” and that AI-generated content should be clearly labelled.

Wozniak said regulation was necessary for the sector, but expressed doubt that it would be effective.

The tech industry pioneer, who designed the first Apple computer and remains a well-known figure in the field, signed a letter in March alongside Elon Musk and others. It called for a pause in the development and training of the most powerful AI systems to allow society to catch up.

‘Sounds so intelligent’

Wozniak, known simply as “Woz”, said he does not believe AI can fully replace humans, as it lacks emotion, but warned it could allow bad actors to seem more convincing because it can create text that “sounds so intelligent”.

“AI is so intelligent it’s open to the bad players, the ones that want to trick you about who they are,” he added.

Responsibility for material generated by AI that is then made public should rest with the humans who published it, he said: “A human really has to take the responsibility for what is generated by AI.”

Regulation is necessary to hold people to account, especially large tech companies that “feel they can kind of get away with anything”, he said.

Regulation

But he added that in spite of regulators’ efforts “the forces that drive for money usually win out, which is sort of sad”.

Wozniak said he believes “we can’t stop the technology”, but that it is possible to prepare people to spot malicious uses of it.

The UK competition regulator recently began a review of the AI sector to ensure it is not at risk of being dominated by a single firm. Last week, US President Joe Biden met the chief executives of Google, Microsoft and two major AI companies, as the US government seeks to ensure artificial intelligence products are developed in a safe and secure way.