Google introduces its biggest and ‘most capable’ AI model, Gemini

Google CEO Sundar Pichai chats with Emily Chang during the APEC CEO Summit at Moscone West on November 16, 2023 in San Francisco, California. The APEC Summit will be held in San Francisco until November 17.

Justin Sullivan | Getty Images News | Getty Images

As pressure mounts on the company to show how it will monetize AI, Google on Wednesday unveiled Gemini, its largest and most capable artificial intelligence model.

The large language model Gemini comes in three sizes: Gemini Ultra, its largest and most capable variant; Gemini Pro, which handles a wide range of tasks; and Gemini Nano, which is designed for specific tasks and mobile devices.

For now, the company plans to license Gemini through Google Cloud for customers to use in their own applications. Beginning December 13, developers and enterprise customers can access Gemini Pro via the Gemini API in Google AI Studio or Google Cloud Vertex AI. Android developers can also build with Gemini Nano. Gemini will also be used to power Google products like its Bard chatbot and Search Generative Experience, which attempts to answer search queries with conversational-style text (SGE is not yet widely available).
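For developers curious what that access looks like in practice, below is a minimal sketch of calling Gemini Pro from Python through the Google AI Studio SDK. The package name, model identifier, environment variable, and prompt are assumptions drawn from Google's developer documentation, not details stated in this article.

```python
# Minimal sketch (assumptions): calling Gemini Pro via the google-generativeai
# Python package with an API key from Google AI Studio stored in GOOGLE_API_KEY.
import os

import google.generativeai as genai

# Authenticate with an API key created in Google AI Studio.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# "gemini-pro" is the mid-sized model the article says developers and
# enterprise customers can access starting December 13.
model = genai.GenerativeModel("gemini-pro")

# Send a single text prompt and print the generated reply.
response = model.generate_content("Summarize what a multimodal model is in one sentence.")
print(response.text)
```

Enterprise customers on Google Cloud would reach the same model through Vertex AI instead, but the request-and-response pattern is broadly similar.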

Gemini Ultra is the first model to outperform human experts on MMLU (massive multitask language understanding), which tests both world knowledge and problem-solving ability across a combination of 57 subjects such as math, physics, history, law, medicine and ethics, the company said in a blog post Wednesday. It can understand nuance and reasoning in complex subjects.

“Gemini is the result of a large-scale collaborative effort by teams across Google, including our colleagues at Google Research,” CEO Sundar Pichai wrote in a blog post Wednesday. “It was built from the ground up to be multimodal, meaning it can generalize, seamlessly understand, act on, and integrate different types of information, including text, code, audio, image, and video.”

Starting today, Google’s chatbot Bard will use Gemini Pro to help with advanced reasoning, planning, understanding and other skills. Early next year, it will launch “Bard Advanced,” which uses Gemini Ultra, executives said on a call with reporters Tuesday. This marks the biggest update yet for Bard, its ChatGPT-like chatbot.

The update comes eight months after the search giant first introduced Bard, and a year after OpenAI launched ChatGPT on GPT-3.5. In March of this year, the Sam Altman-led startup launched GPT-4. Executives said Tuesday that Gemini Pro outperformed GPT-3.5 but dodged questions about how it stacks up against GPT-4.

When asked whether Google plans to charge for access to “Bard Advanced,” Sissie Hsiao, Google’s general manager for Bard, said the team is focused on creating a good experience and has no monetization details to share yet.

When asked at a press conference if Gemini had any innovative capabilities compared to the current generation of LLMs, Eli Collins, vice president of product at Google DeepMind, replied, “I doubt it.”

Google had reportedly postponed Gemini’s launch because the model wasn’t ready, a contrast with how quickly the company rolled out its AI tools earlier this year.

Several reporters asked about the delay, to which Collins replied that more advanced models simply take more time to test. Collins said Gemini is the most tested AI model the company has ever built, with “the most comprehensive safety ratings” of any Google model.

Despite being its largest model, Gemini Ultra is significantly cheaper to serve, Collins said. “Not only is it more capable, it’s more efficient,” he said. “We still require significant computation to train Gemini, but we are getting much more efficient in our ability to train these models.”

Collins said the company will release a technical white paper with more details on the model on Wednesday, but said it would not disclose the model’s parameter count. Earlier this year, CNBC found that Google’s PaLM 2 large language model, its latest AI model at the time, used nearly five times as much text data for training as its predecessor, LaMDA.

On Wednesday, Google also introduced its next-generation tensor processing unit for training AI models. The TPU v5p chip, which Salesforce and startup Lightricks have already begun using, delivers better performance than the TPU v4 announced in 2021, according to Google. But the company did not say how the chip compares with those of market leader Nvidia.

The chip announcement comes weeks after cloud rivals Amazon and Microsoft showed off custom silicon targeting AI.

During Google’s third-quarter earnings call in October, investors pressed executives on how the company plans to turn AI into real profits.

In August, Google launched what it called an “early experiment,” Search Generative Experience, or SGE, which lets users see what a generative AI experience looks like inside the search engine; search remains the company’s main profit center. The results read more like a conversation, reflecting the age of chatbots. However, SGE is still considered an experiment and has yet to roll out to the general public.

Investors have asked for a timeline for SGE since May, when the company first announced the experiment at Google I/O, its annual developer conference. Wednesday’s Gemini announcement did not mention SGE, and executives were vague about plans to launch it to the public, saying only that Gemini would be incorporated “in the next year.”

“This era of new models is one of the biggest science and engineering efforts we’ve undertaken as a company,” Pichai said in a blog post on Wednesday. “I’m really excited about what’s ahead and the opportunities that Gemini will open up for people everywhere.”

— CNBC’s Jordan Novet contributed to this report.
