Google explains Gemini’s ‘embarrassing’ AI photos of racially diverse Nazis


Google has issued an explanation for the “embarrassing and inaccurate” images generated by its Gemini AI tool. In a blog post published on Friday, Google says its model produced “inaccurate historical” images due to tuning problems. The Verge and others caught Gemini generating images of racially diverse Nazis and the American Founding Fathers earlier this week.

“First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” Google senior vice president Prabhakar Raghavan writes in the post. “And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely – wrongly interpreting some very anodyne prompts as sensitive.”

Gemini’s results for the prompt “Generate a picture of a US senator from the 1800s.”
Screenshot by Adi Robertson

These two things led Gemini to “overcompensate in some cases,” as seen with the images of racially diverse Nazis. They also caused the model to become “over-conservative,” leading it to refuse to generate images of “a Black person” or “a white person” when prompted.

In the blog post, Raghavan says Google is “sorry the feature didn’t work well.” He also notes that Google wants Gemini to “work well for everyone,” which means getting depictions of different types of people (including different ethnicities) when you ask for images of “football players” or “someone walking a dog.” But, he says:

However, if you prompt Gemini for images of a specific type of person—such as “a Black teacher in a classroom,” or “a white veterinarian with a dog”—or people in particular cultural or historical contexts, you should absolutely get a response that accurately reflects what you ask for.

Raghavan says Google will continue testing Gemini AI’s image-generation capabilities and “will work to make significant improvements to it” before re-enabling it. “As we’ve said from the beginning, hallucinations are a known challenge with all LLMs [large language models] – there are instances where the AI just gets things wrong,” says Raghavan. “This is something that we’re constantly working on improving.”
