Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we haven’t covered ourselves.
This week in AI, Google paused its AI chatbot Gemini’s ability to generate images of people after a segment of users complained about historical inaccuracies. Asked to depict “a Roman army,” for example, Gemini would show an anachronistic, cartoonish group of racially diverse infantrymen, while rendering “Zulu warriors” as black.
It appears that Google – like some other AI vendors, including OpenAI – had implemented clumsy hardcoding under the hood in an attempt to “correct” the biases in its models. In response to prompts such as “Show me only images of women” or “Show me only images of men,” Gemini refused, claiming such images could “contribute to the exclusion and marginalization of other genders.” Gemini was also reluctant to generate images of people identified only by their race – for example “white people” or “black people” – out of an ostensible concern about “reducing individuals to their physical characteristics.”
The right wing has seized on the bugs as evidence of a “woke” agenda being pushed by the tech elite. But it doesn’t take Occam’s razor to see the less nefarious truth: Google, having been burned by its tools’ biases before (see: classifying black men as gorillas, mistaking thermal guns in black people’s hands for weapons, etc.), is so desperate to avoid history repeating itself that it is manifesting a less biased world in its image-generating models – however erroneous.
In her best-selling book “White Fragility,” anti-racism educator Robin DiAngelo writes about how the erasure of race – “color blindness,” by another name – contributes to systemic racial power imbalances rather than alleviating them. By claiming to “not see color,” or reinforcing the notion that merely acknowledging the struggles of people of other races is enough to call oneself “woke,” people end up perpetuating harm by avoiding any substantive conversation on the topic, DiAngelo says.
Google’s gingerly treatment of race-based prompts in Gemini didn’t avoid the problem so much as disingenuously attempt to conceal the model’s worst biases. One could argue (and many have) that these biases shouldn’t be ignored or papered over, but rather addressed in the broader context of the training data from which they arise – that is, society on the World Wide Web.
Yes, the data sets used to train image generators generally contain more white people than black people, and yes, the images of black people in those data sets reinforce negative stereotypes. That’s why image generators sexualize certain women of color, portray white men in positions of authority and generally favor wealthy Western perspectives.
Some may argue that there’s no winning for AI vendors here. Whether they address their models’ biases or choose not to, they will be criticized. And that’s true. But I’d argue that, either way, these models are lacking in explanation – packaged in a way that minimizes the ways their biases manifest.
If AI vendors addressed their models’ shortcomings head-on, in humble and transparent language, it would go much further than haphazard attempts to “fix” what is essentially unfixable bias. The truth is that we all have biases, and as a result we don’t treat people the same way. Neither do the models we’re building. We would do well to acknowledge that.
Here are some other AI stories worth noting from the past few days:
- Women in AI: TechCrunch launched a series highlighting notable women in the field of AI. Read the list here.
- Stable Diffusion v3: Stability AI announced Stable Diffusion 3, the latest and most powerful version of the company’s image-generating AI model, based on a new architecture.
- Chrome gets GenAI: Google’s new Gemini-powered tool in Chrome allows users to rewrite existing text on the web or generate something entirely new.
- Blacker than ChatGPT: Creative ad agency McKinney developed a quiz game, Are You Blacker than ChatGPT?, to shine a light on AI bias.
- Demand for laws: Hundreds of AI veterans signed a public letter earlier this week calling for anti-deepfake legislation in the U.S.
- Match made in AI: OpenAI has a new customer in Match Group, owner of apps including Hinge, Tinder and Match, whose employees will use OpenAI’s AI technology to complete work-related tasks.
- DeepMind safety: Google’s AI research division DeepMind has formed a new organization, AI Safety and Alignment, made up of existing teams working on AI safety but also expanded to include new, specialized groups of GenAI researchers and engineers.
- Open models: Barely a week after launching the latest version of its Gemini models, Google released Gemma, a new family of lightweight open source models.
- House task force: The U.S. House of Representatives setting up a task force on AI – as Devin writes – feels like a punt after years of indecision that show no sign of ending.
More Machine Learning
AI models seem to know a lot, but what do they really know? Well, the answer is nothing. But if you phrase the question a little differently… it appears they have internalized some “meanings” that are similar to what humans know. Although no AI truly understands what a cat or a dog is, could it have some sense of similarity encoded in the embeddings of those two words that is different from, say, cat and bottle? Amazon researchers believe so.
Their research compared the “trajectories” of similar but distinct sentences, such as “The dog barked at the thief” and “The thief caused the dog to bark,” with those of grammatically similar but different sentences, such as “A cat sleeps all day” and “A girl jogs all afternoon.” They found that the pairs humans would judge similar in meaning were indeed treated as more internally similar despite being grammatically different, and the reverse was true for the grammatically similar pairs. OK, I realize that this paragraph was a bit confusing, but suffice it to say that the meanings encoded in LLMs appear to be more robust and sophisticated than expected, not entirely naive.
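For a rough sense of what a comparison like this looks like in practice, here is a minimal sketch (not the Amazon team’s actual method, which analyzes per-token “trajectories”): it simply embeds each sentence with an off-the-shelf sentence encoder and compares cosine similarities. The sentence-transformers library and the “all-MiniLM-L6-v2” model are my own assumptions for illustration.

```python
# Simplified probe of the idea: do meaning-similar sentences sit closer
# in embedding space than grammar-similar ones? (Not the paper's method.)
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

pairs = {
    "same meaning, different grammar": (
        "The dog barked at the thief",
        "The thief caused the dog to bark",
    ),
    "similar grammar, different meaning": (
        "A cat sleeps all day",
        "A girl jogs all afternoon",
    ),
}

for label, (a, b) in pairs.items():
    emb_a, emb_b = model.encode([a, b], convert_to_tensor=True)
    score = util.cos_sim(emb_a, emb_b).item()
    print(f"{label}: cosine similarity = {score:.3f}")

# If the model encodes meaning rather than surface form, the first pair
# should score noticeably higher than the second.
```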
Neural encoding is proving useful for artificial vision, Swiss researchers at EPFL have found. Artificial retinas and other ways of replacing parts of the human visual system generally have very limited resolution due to the limitations of microelectrode arrays. So no matter how detailed the incoming image is, it has to be transmitted at very low fidelity. But there are different ways of downsampling, and this team found that machine learning does a very good job of it.
“We found that if we applied a learning-based approach, we got better results in terms of optimized sensory encoding. But what was more surprising was that when we used an unsupervised neural network, it learned to mimic aspects of retinal processing on its own,” Diego Ghezzi said in a news release. It basically does perceptual compression. They tested it on mouse retinas, so it isn’t just theoretical.
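To make the idea of a learning-based downsampler concrete, here is a minimal sketch, purely my own illustration and not the EPFL group’s model: a tiny encoder learns to compress an image to a coarse grid (standing in for an electrode array), trained jointly with a decoder so that the low-resolution code preserves what matters for reconstruction, rather than naively average-pooling.

```python
# Toy learned downsampling: an autoencoder whose bottleneck is a coarse
# "electrode grid". Assumed setup for illustration only.
import torch
import torch.nn as nn

class LearnedDownsampler(nn.Module):
    def __init__(self, factor: int = 8):
        super().__init__()
        # Encoder: reduce a grayscale image to a low-resolution code.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=factor, stride=factor),  # 128x128 -> 16x16
        )
        # Decoder: used only during training to provide the learning signal.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(1, 16, kernel_size=factor, stride=factor),
            nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        code = self.encoder(x)      # coarse stimulation pattern
        recon = self.decoder(code)  # reconstruction used for the loss
        return code, recon

model = LearnedDownsampler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(8, 1, 128, 128)  # stand-in for a real image dataset

for _ in range(10):  # toy training loop
    code, recon = model(images)
    loss = nn.functional.mse_loss(recon, images)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The contrast with fixed average-pooling is the point: the encoder is free to learn which spatial features to keep at the reduced resolution.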
An interesting application of computer vision by Stanford researchers points to a mystery in how children develop their drawing skills. The team solicited and analyzed 37,000 drawings made by children of various objects and animals, and also measured (based on the children’s responses) how recognizable each drawing was. Interestingly, it was not just the inclusion of distinctive features, such as rabbit ears, that made the pictures more recognizable by other children.
“The kinds of features that make older children’s drawings recognizable do not appear to be driven by a single feature that all older children learn to include in their drawings. It’s something more complex that these machine learning systems are picking up on,” said lead researcher Judith Fan.
Chemists (also at EPFL) found that LLMs are surprisingly adept at helping with their work even after minimal training. It’s not doing the chemistry directly, but rather being trained on a body of work that chemists individually can’t possibly know all of. For example, thousands of papers may contain a few hundred statements about whether a high-entropy alloy is single or multiple phase (you don’t need to know what that means – they do). The system (based on GPT-3) can be trained on these types of yes/no questions and answers, and is soon able to draw conclusions from them.
This isn’t a huge advance, just more proof that LLMs are a useful tool in this sense. “The point is that it’s as simple as doing a literature search, which works for many chemical problems,” said researcher Berend Smit. “Querying the underlying model may become a routine way to bootstrap a project.”
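As a rough sketch of what that kind of yes/no training data might look like, the snippet below builds a small fine-tuning file in a prompt/completion style. The exact prompt wording, field names, and alloy labels are illustrative assumptions, not the EPFL group’s actual setup or real experimental data.

```python
# Build a toy yes/no fine-tuning dataset for "is this alloy single phase?"
# All compositions and labels below are placeholders, not real results.
import json

examples = [
    ("Al0.3CoCrFeNi annealed at 1100 C", "single phase"),   # placeholder label
    ("AlCoCrFeNi as-cast", "multi phase"),                   # placeholder label
    # ... in the paper's setting, a few hundred such statements
    #     would be extracted from thousands of publications
]

with open("alloy_phase_finetune.jsonl", "w") as f:
    for composition, label in examples:
        record = {
            "prompt": f"Is {composition} a single-phase high-entropy alloy? Answer yes or no:",
            "completion": " yes" if label == "single phase" else " no",
        }
        f.write(json.dumps(record) + "\n")

# The resulting JSONL can be fed to whatever fine-tuning pipeline you use;
# at inference time the model answers the same yes/no question for
# compositions it never saw during training.
```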
Last, a word of caution from Berkeley researchers – though now that I’m reading the post again, I see EPFL was also involved. Go Lausanne! The group found that imagery found through Google was more likely to enforce gender stereotypes for certain jobs and terms than text mentioning the same thing. And in both cases there were far more men present.
Not only that, but in one experiment they found that people who looked at images rather than reading text when researching a role more reliably associated those roles with a gender, even days later. “It’s not just about the frequency of gender bias online,” said researcher Douglas Guilbeault. “Part of the story here is that there’s something very sticky, very powerful about the representation of people in images that isn’t there in text.”
With things like the Google Image Generator diversity controversy going on, it’s easy to overlook the established and often verified fact that the source of data for many AI models shows serious bias, and that this bias has real effects on people.