Google Restarts Gemini AI’s Image Generation Of People

Alphabet’s Google will relaunch the ability for Gemini users to create AI-generated images of people, after suspending the feature for six months over “inaccuracies in some historical image generation depictions”.

In a blog post, Dave Citron, a senior director of product management for Gemini, wrote that Google’s Imagen 3 will allow Gemini users to create AI-generated images of people in the coming days.

Google pulled its earlier AI image-generation tool in February this year, following a string of controversies over certain generated images of historical figures.


Updated Imagen

The Gemini AI tool was pulled after it depicted historically white figures, such as Vikings, the US Founding Fathers and groups of Nazi-era German soldiers, as people of colour, which was possibly an overcorrection for long-standing concerns about racial bias in AI.

Now, six months later, Google’s Dave Citron confirmed that “… our new image generation model, Imagen 3, will be rolling out across Gemini, Gemini Advanced, Business and Enterprise in the coming days.”

“Imagen 3 brings advanced image generation capabilities that come with built-in safeguards and adhere to our product design principles,” wrote Citron. “Across a wide range of benchmarks, Imagen 3 performs favourably compared to other image generation models available. And as with Imagen 2, we use SynthID, our tool for watermarking AI-generated images.”

“With Imagen 3, we’ve made significant progress in providing a better user experience when generating images of people,” he wrote.

“We don’t support the generation of photorealistic, identifiable individuals, depictions of minors or excessively gory, violent or sexual scenes. Of course, as with any generative AI tool, not every image Gemini creates will be perfect, but we’ll continue to listen to feedback from early users as we keep improving,” he stated. “We’ll gradually roll this out, aiming to bring it to more users and languages soon.”

Racial bias

Bias (especially racial bias) in AI systems has been a persistent concern in recent years.

Indeed, at one stage the problem was considered serious enough that a number of organisations and businesses stopped using or selling certain AI systems until it was addressed.

Matters were not helped when research in 2017 by the US Government Accountability Office found that FBI algorithms were inaccurate 14 percent of the time, and were more likely to misidentify black people.

Google launched its Bard (now Gemini) chatbot back in March 2023, after a rapid development process prompted by the overnight success of Microsoft-backed OpenAI’s ChatGPT in late 2022.

Tom Jowitt

