Barely two months after launching Gemini, the large language model Google hopes will carry it to the top of the AI industry, the company is already announcing its successor. Google is launching Gemini 1.5 today and making it available to developers and enterprise users ahead of a full consumer rollout coming soon. The company has made clear that it is all in on Gemini as a business tool, a personal assistant, and everything in between, and it is pushing hard on that plan.
Gemini 1.5 brings a number of improvements: Gemini 1.5 Pro, the general-purpose model in Google's lineup, is apparently on par with the high-end Gemini Ultra the company launched only recently, and it beat Gemini 1.0 Pro in 87 percent of benchmark tests. It was built with an increasingly common technique called "mixture of experts," or MoE, which means that when you send in a query, the model runs only the relevant part of itself rather than processing the entire thing every time. (Here is a good explainer on the topic.) That approach should make the model faster for you to use and more efficient for Google to run.
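To make the mixture-of-experts idea concrete, here is a toy sketch in Python. It is not Google's implementation; the expert names and the keyword-based router are invented for illustration. The point it shows is the routing: a small router picks only a couple of "expert" sub-networks per query, so most of the model sits idle for any single request.

```python
import random

# Toy "experts": stand-ins for specialized sub-networks.
# In a real MoE model these are neural network layers, not string functions.
EXPERTS = {
    "code": lambda q: f"[code expert] handling: {q}",
    "math": lambda q: f"[math expert] handling: {q}",
    "prose": lambda q: f"[prose expert] handling: {q}",
    "translation": lambda q: f"[translation expert] handling: {q}",
}

def route(query: str, top_k: int = 2) -> list[str]:
    """Toy router: score every expert for this query and keep the top_k.
    A real model learns these scores; here we just count keyword overlap."""
    keywords = {
        "code": {"function", "bug", "python"},
        "math": {"sum", "integral", "prove"},
        "prose": {"story", "essay", "rewrite"},
        "translation": {"translate", "french", "hindi"},
    }
    words = set(query.lower().split())
    scores = {name: len(words & kws) for name, kws in keywords.items()}
    ranked = sorted(scores, key=lambda n: (scores[n], random.random()), reverse=True)
    return ranked[:top_k]

def answer(query: str) -> list[str]:
    # Only the selected experts run; the rest of the "model" does no work,
    # which is the efficiency win described above.
    return [EXPERTS[name](query) for name in route(query)]

if __name__ == "__main__":
    print(answer("please rewrite this essay as a short story"))
```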
But there is one new thing in Gemini 1.5 that has everyone, from CEO Sundar Pichai on down, particularly excited: Gemini 1.5 has an enormous context window, which means it can handle much larger queries and look at much more information at once. That window is 1 million tokens, compared with 128,000 for OpenAI's GPT-4 and 32,000 for the current Gemini Pro. Tokens are a tricky metric to understand (here's a good breakdown), so Pichai puts it more simply: "It's about 10 or 11 hours of video, thousands of lines of code." The context window means you can ask the AI bot about all of that content at once.
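For a rough sense of what those token counts mean in practice, here is a quick back-of-envelope calculation. The words-per-token and tokens-per-line figures are common rules of thumb, not numbers from Google, so treat the output as ballpark only.

```python
# Rough back-of-envelope math on context window sizes.
# Assumptions (rules of thumb, not Google's figures): roughly 0.75 English
# words per token, and roughly 10 tokens per line of source code.
WORDS_PER_TOKEN = 0.75
TOKENS_PER_CODE_LINE = 10

for name, tokens in [("Gemini 1.0 Pro", 32_000),
                     ("GPT-4", 128_000),
                     ("Gemini 1.5 Pro", 1_000_000)]:
    words = int(tokens * WORDS_PER_TOKEN)
    code_lines = tokens // TOKENS_PER_CODE_LINE
    print(f"{name}: ~{tokens:,} tokens, about {words:,} words "
          f"or about {code_lines:,} lines of code")
```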
(Pichai also says Google researchers are testing a 10 million token context window, i.e., the entire Game of Thrones series at once.)
As he was explaining this to me, Pichai offhandedly noted that you could fit the entire Lord of the Rings trilogy into that context window. That seemed like too specific an example, so I asked him: this has already happened, right? Someone at Google is checking whether Gemini spots any continuity errors, trying to make sense of Middle-earth's complicated genealogy, and seeing if AI can finally figure out Tom Bombadil. "I'm sure it has happened," Pichai says, laughing, "or will happen, one or the other."
Pichai also thinks the larger context window will be extremely useful for businesses. "This allows for use cases where you can add a lot of individual context and information at query time," he says. "Think of it as we've dramatically expanded the query window." He imagines that filmmakers could upload an entire film and ask Gemini what reviewers might say; he sees companies using Gemini to look through large volumes of financial records. "I look at this as one of our greatest successes," he says.
For now, Gemini 1.5 will be available only to business users and developers, through Google's Vertex AI and AI Studio. Eventually, it will replace Gemini 1.0, and the standard version of Gemini Pro, the one available to everyone at gemini.google.com and in the company's apps, will become 1.5 Pro with a 128,000-token context window. Getting to a million will cost extra. Google is also still testing the model's safety and ethical limits, particularly with regard to the new, larger context window.
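For developers who want to try this through AI Studio, the call looks roughly like the sketch below, using the google-generativeai Python SDK. The model identifier string, the input file, and the prompt are assumptions for illustration; check Google's current documentation for the exact model name and for how the 128,000-token versus 1 million-token tiers are exposed.

```python
# Minimal sketch of calling Gemini 1.5 Pro with an AI Studio API key via
# the google-generativeai Python SDK. Model name and file are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_API_KEY")  # placeholder key

model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed model id

with open("entire_codebase.txt") as f:  # hypothetical large input
    big_context = f.read()

# count_tokens shows how much of the context window a prompt would use.
print(model.count_tokens(big_context).total_tokens)

response = model.generate_content(
    [big_context, "Summarize the main modules and flag any dead code."]
)
print(response.text)
```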
Google is in a race to build the best AI tools right now, as businesses around the world try to figure out their AI strategy and decide whether to sign their developer agreements with OpenAI, Google, or someone else. Just this week, OpenAI announced "memory" for ChatGPT, and it appears to be gearing up for a push into web search. So far, Gemini looks impressive, especially for those already in Google's ecosystem, but a lot of work remains to be done on all sides.
Ultimately, Pichai told me, all these 1.0s and 1.5s and Pros and Ultras and corporate battles won't really matter to users. "People will just be consuming the experiences," he says. "It's like using a smartphone all the time without paying attention to the processor." But at the moment, he says, we're still at the stage where everyone knows the chip inside their phone, because it matters. "The underlying technology is changing very rapidly," he says. "People care."