Google apologized on Friday, saying its team got it “wrong” with a new image generation feature in its Gemini AI chatbot, after images it produced that were devoid of white people went viral. A company official strongly denied that Google intentionally wanted Gemini to refuse to produce images of any particular group of people.
“This was not what we intended. We did not want Gemini to refuse to create images of any particular group. And we did not want it to create inaccurate historical, or any other, images,” said Google senior vice president Prabhakar Raghavan.
In a blog post, Raghavan, who oversees the parts of the company that bring in most of its money, Google Search and its advertising business, explicitly admitted that Gemini’s image generator “got it wrong” and that the company would try to do better. Many people were angered by Gemini’s historically inaccurate images, such as Black Nazi soldiers and Black Vikings, as well as its apparent refusal to create images of white people, which some considered racist.
According to Raghavan, this all happened because Google did not want Gemini to make the same mistakes that other image generators had made in the past, such as producing violent or sexually explicit images, or depictions of real people.
“So what went wrong? In short, two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that clearly should not show a range,” Raghavan wrote. “And second, over time, the model became far more cautious than we intended and refused to answer some prompts entirely, wrongly interpreting some very innocuous prompts as sensitive.”
The Google vice president further said that these two factors led Gemini to overcompensate in some cases and be over-conservative in others. Overall, this resulted in images that were “embarrassing and wrong.”
Google on Thursday turned off Gemini’s ability to create images of people and said it would soon release an improved version. However, Raghavan cast doubt on the “soon” part, saying that the company will work on improving the feature through extensive testing before turning it back on.
Raghavan said he couldn’t promise that Gemini wouldn’t produce more embarrassing, inaccurate, or offensive results in the future, but he said Google would continue to take steps to fix it.
“One thing to keep in mind: Gemini is designed as a creativity and productivity tool, and it may not always be reliable, especially when it comes to generating images or text about current events, evolving news or hot-button topics. It will make mistakes,” Raghavan said.