Google Tells Anti-Woke Babies That Gemini’s Black Vikings Missed the Mark


Google’s AI chatbot Gemini has a unique problem: it has difficulty making pictures of white people, often turning Vikings, Founding Fathers, and Canadian hockey players into people of color. This sparked outrage in the anti-woke community, which claimed the tool was racist against white people. Today, Google acknowledged Gemini’s mistake.

“We are working to correct these kinds of depictions immediately,” Google Communications said in a statement. “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Users reported that when Gemini was asked to create images specifically of white people, it would sometimes reject the requests, while requests for images of Black people posed no problem. This led to an outcry from the anti-woke community on social media platforms like X and demands for immediate action.

Screenshots of anti-woke accounts invoking Google Gemini’s image generator in tweets.
Screenshot: X

Google’s acknowledgment of the error is, to put it mildly, surprising, given that AI image generators have historically done a terrible job of portraying people of color. A Washington Post investigation found that the AI image generator Stable Diffusion almost always depicted food stamp recipients as Black, even though 63% of recipients are white. Midjourney faced criticism when, according to NPR, a researcher found it repeatedly failed to generate an image of “a Black African doctor treating white children.”

Where was the outrage when AI image generators disrespected Black people? Gizmodo found no examples of Gemini depicting harmful stereotypes of white people, though the AI image generator did refuse to create images of white people on several occasions. While the failure to generate images of a certain race is certainly an issue, it doesn’t hold a candle to the AI community’s obvious offenses against Black people.

OpenAI has also admitted that its AI image generator Dall-E “inherits various biases from its training data, and its outputs sometimes reinforce societal stereotypes.” OpenAI and Google are trying to fight these biases, but Elon Musk’s AI chatbot Grok embraces them.

Musk bills Grok as an “anti-woke chatbot” that is not filtered for political correctness. He claims this makes it a more realistic, honest AI chatbot. That may be true, but AI tools can amplify biases in ways we do not yet fully understand. Google’s failure to generate images of white people appears to be a result of its own safety filters.

Tech is historically a very white industry. There is little good modern data on diversity in technology, but 83% of tech executives were white in 2014, a study from the University of Massachusetts found. Diversity in tech may be improving, but it likely still lags behind other industries. For these reasons, it is understandable why modern technology would share the biases of white people.

One case where this plays out in a very consequential way is the facial recognition technology (FRT) used by police. FRT has repeatedly failed to recognize Black faces while showing much higher accuracy with white faces. This is not hypothetical, and it involves more than hurt feelings: the technology has resulted in the wrongful arrest and incarceration of a Black man in Baltimore, a Black mother in Detroit, and many other innocent people of color.

Technology has always reflected the people who build it, and these problems persist today. This week, Wired reported that AI chatbots from the “free speech” social media network Gab were instructed to deny the Holocaust. The tools were reportedly designed by the far-right platform, with that bias seemingly baked into the chatbots’ alignment.

There’s a big problem with AI: these tools reflect and amplify our biases as humans. AI tools are trained on the Internet, which is full of racism, sexism, and bias. These tools are naturally going to repeat the same mistakes our society has made, and those issues need to be addressed carefully.

Google appears to have overcorrected, increasing the prevalence of people of color in Gemini’s images. While that flaw should be fixed, it should not overshadow the larger problems facing the tech industry today. AI models are largely being built by white people, and they are by no means the primary victims of technology’s implicit biases.
