(Bloomberg) -- For nearly two years, Google has been locked in a race with OpenAI and others to bring generative artificial intelligence — which can answer complex questions in a conversational manner — to the public in a way that most consumers will actually adopt. On Tuesday, Google fired a clear shot at competitors, signaling it has no intention of ceding its position as the world’s most popular search engine.

The act of “Googling,” which has been synonymous with search for the past two decades, will now become supercharged with the technology from Alphabet Inc.’s powerful AI model, Gemini, the company said at its annual developer conference in Mountain View, California.

“Google search is generative AI at the scale of human curiosity,” Chief Executive Officer Sundar Pichai said onstage in announcing the new features at the company’s I/O summit.

In front of a live audience, Google unveiled what Pichai called a “fully revamped, new search experience” that will roll out to all US users this week, with the new Gemini-powered search coming to other countries “soon.”

“We see so much opportunity ahead of us for creators, for developers, for startups, for everyone,” Pichai said in a call with reporters ahead of the event.

The biggest single change to Googling is that some searches will now come with “AI overviews,” narrative responses that spare people the task of clicking through various links.

An AI-powered panel will appear underneath people’s queries in the famously simple search bar, presenting summarized information drawn from Google search results from across the web. Google said it would also roll out an AI-organized page that groups results by theme or presents, say, a day-by-day plan for people turning to Google for specific tasks, such as putting together a meal plan for the week or finding a restaurant to celebrate an anniversary. Google said it won’t trigger AI-powered overviews for certain sensitive queries, such as searches for medical information or self-harm.

Shortly after its founding in 1998, Google surpassed Yahoo! to become the world’s clear favorite search engine, a result of an algorithm that was faster and more accurate than anything else at the time. Its dominance has been so unshakeable that it’s the subject of a federal antitrust lawsuit. (A ruling in the case is expected later this year.)

Yet the nature of online search is fundamentally changing — and Google’s rivals are increasingly moving in on its turf. The search giant has faced enormous pressure from the likes of OpenAI and Anthropic, whose AI-powered chatbots ChatGPT and Claude are easy to use and have become widely adopted — threatening Google’s pole position in search and menacing its entire business model. 

In a strategically timed announcement on Monday, OpenAI, which is backed by billions from Microsoft Corp., introduced a faster and cheaper AI model called GPT-4o that will power its popular chatbot. The new AI model will let people speak to ChatGPT or show it an image, and OpenAI says it can respond within milliseconds. As Google unveiled its latest products on Tuesday, it faced a tricky balancing act: showing it hasn’t fallen behind OpenAI without cannibalizing the search advertising business that remains its lifeblood.

“By showcasing its latest models and how they’ll power existing products with strong consumer reach, Google is demonstrating how it can effectively differentiate itself from rivals,” said Jacob Bourne, an analyst at Emarketer. “To maintain its competitive edge and satisfy investors, Google will need to focus on translating its AI innovations into profitable products and services at scale.”

If last year Google showed a willingness to experiment with generative AI features in its main products and services, this is the year the company is diving right in, with fundamental and noticeable changes to its iconic platform.

The shift poses challenges for the economics of Google’s core search business, which delivered more than $175 billion in search advertising last year. Investors have noted that delivering generative AI search responses will require more computing power than producing a list of links, potentially eating into the margins of Google’s hugely profitable search machine. In an interview with Bloomberg last week, Liz Reid, Google’s vice president for search, said the company has made progress in bringing down the cost of generative AI search. She said the company had no plans for the AI-powered additions to be tied to a subscription, as has been reported in the press.

By bringing more generative AI to its search engine, Google hopes to reduce the time and mental load it takes for users to find the information that they are looking for, Reid said.

“Search is a very powerful tool. But there’s lots of times where you have to do a lot of hard work in searching,” Reid said. “How can we take that hard work out of searching for you, so you can focus on getting things done?” Reid said the new AI-powered Google search will be able to process billions of queries.

But Google must also take care not to rock the boat too much. People may click on fewer ads if the AI overviews fully address their questions. The ecosystem of news sites and other websites that rely on the search giant for traffic may also see fewer visitors because of Google’s changes. Reid tried to project an air of calm for advertisers and publishers. Ads will continue to appear in dedicated slots throughout Google search results, with labeling to distinguish sponsored items from organic results, she said. The company’s tests, meanwhile, have shown that generative AI searches are a jumping-off point to other websites for users, not the end of the road, she added.

Reid declined to say how often users will see the overviews but said the company will focus on delivering them when they can provide “meaningful value” on top of the traditional search experience.

Yet publishers, in particular, are wary. Raptive, a company that helps digital creators build brands, estimates that 25% of search traffic to publishers will disappear if Google broadly rolls out “search generative experience,” or SGE, like the generative AI search engine that Google introduced on Tuesday.

“By building an experience that is designed to keep more traffic inside of Google, fewer people will visit individual websites and creator revenue will be hit,” Marc McCollum, chief innovation officer at Raptive, wrote in an email. “So Google will gain share and revenue, while the very people who created the content that they have used to build SGE will languish.”

Google executives have stressed that search will remain central in the new age of AI. Reid, for instance, described a new “visual search” feature coming soon to Google’s opt-in Search Labs experiment that will allow people to take a video of a malfunctioning gadget, like a record player, and ask Google for an AI overview to help them troubleshoot the problem.

In a call with reporters on Monday, Demis Hassabis, CEO of Google’s AI lab DeepMind, went even further in showcasing Gemini’s ability to respond to queries. Hassabis showed off Project Astra, a prototype of an AI assistant that can process video and respond in real time. In a prerecorded video demo, an employee walked through an office as the assistant used the phone’s camera to “see,” responding to questions about what was in the scene. The program correctly answered a question about which London neighborhood the office was located in, based on the view from the window, and also told the employee where she had left her glasses. Hassabis said that the video was captured “in a single take, in real time.” 

“At any given moment, we are processing a stream of different sensory information, making sense of it and making decisions,” Hassabis said of the Project Astra demo. “Imagine agents that can see and hear what we do to better understand the context we’re in and respond quickly in conversation, making the pace and quality of interaction feel much more natural.” Pichai later clarified that Google is “aspirationally” looking to bring some features of Project Astra to the company’s core products, particularly Gemini, toward the latter half of this year.

In order to keep advancing in artificial intelligence, Google has also had to update its suite of AI models, and the company shared more progress on that front on Tuesday. It announced Gemini 1.5 Flash, which Google says is the fastest AI model available through its application programming interface, or API, typically used by programmers to automate high-frequency tasks like summarizing text, captioning images or video, or extracting data from tables.
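The kind of high-frequency automation described above can be illustrated with a minimal sketch. The `summarize_batch` helper and the stubbed `fake_model` below are hypothetical stand-ins for a real API client, so the example runs offline; actual Gemini SDK calls would look different:

```python
# Minimal sketch of automating a high-frequency task (batch text
# summarization) against a fast model API. `call_model` is a stand-in
# for a real API client call — an assumption for illustration, not
# actual Gemini SDK usage.
from typing import Callable, List

def summarize_batch(texts: List[str], call_model: Callable[[str], str]) -> List[str]:
    """Run each document through the model with a fixed summarization prompt."""
    return [call_model(f"Summarize in one sentence:\n{t}") for t in texts]

def fake_model(prompt: str) -> str:
    """Stubbed 'model' so the sketch runs without network access:
    it simply returns the first sentence of the document body."""
    body = prompt.split("\n", 1)[1]
    return body.split(".")[0] + "."

docs = ["Google unveiled a faster model on Tuesday. It targets high-frequency tasks."]
print(summarize_batch(docs, fake_model))
# → ['Google unveiled a faster model on Tuesday.']
```

In practice the stub would be replaced by a call into whichever SDK exposes the model; the batching pattern stays the same.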

It also unveiled updates to Gemini Nano, Google’s smallest AI model, expanding beyond text inputs to include images; introduced a newer version of its family of open models, Gemma 2, with improved efficiency; and said the company had achieved better benchmarks on its powerful AI model, Gemini 1.5 Pro.

On Tuesday, Google confirmed that developers can use Gemini 1.5 Pro to process more text, video and audio at a time — up to 2 million “tokens,” or pieces of content. That amounts to about two hours of video, 22 hours of audio or more than 1.4 million words. Google says this amount of processing far outpaces other AI models from competitors, including OpenAI.
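The figures above imply a rough ratio of about 0.7 words per token (1.4 million words per 2 million tokens). A back-of-the-envelope sketch, using that inferred ratio rather than any official tokenizer specification:

```python
# Rough conversion between token budgets and word counts, using the
# ~0.7 words-per-token ratio implied by the article's figures.
# The ratio is an inference from those numbers, not a tokenizer spec.
WORDS_PER_TOKEN = 0.7

def tokens_to_words(tokens: int) -> int:
    """Estimate how many English words fit in a given token budget."""
    return round(tokens * WORDS_PER_TOKEN)

def words_to_tokens(words: int) -> int:
    """Estimate the token cost of a given word count."""
    return round(words / WORDS_PER_TOKEN)

print(tokens_to_words(2_000_000))  # 1400000 — matches "more than 1.4 million words"
print(tokens_to_words(1_000_000))  # 700000 — the figure cited for the subscriber tier
```

Real tokenizers count subword pieces, so actual word counts vary with the text; the ratio here is only a rule of thumb consistent with the article's numbers.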

Google also highlighted its generative media tools and services, introducing new models and updating existing ones. It announced a new video generation model on Tuesday that the company is calling Veo, which generates high-quality videos lasting beyond a minute — a response to OpenAI’s buzzy video generation tool, Sora. Google is letting creators sign up to join a waitlist for testing the product, and said it would bring some of Veo’s capabilities to YouTube Shorts and other video products “in the future.”

Google announced updates to Imagen 3, the third iteration of its text-to-image AI model, which include improvements such as fewer image distortions. And Google is continuing to experiment with AI-generated music with a service called Lyria, as well as a suite of music AI tools known as the Music AI Sandbox.

Much of the battle for AI superiority relies on having powerful semiconductors that are able to handle all the data being processed. To that end, Google announced a new version of its in-house-designed chip for data centers, the sixth version of its TPU, or Tensor Processing Unit. The latest one will be 4.7 times faster than its predecessor, have access to more memory and feature faster connections with other chips, Google said.

Amid growing concerns about how companies should deal with the wave of AI-generated content and questions over copyright, Google said it would roll out a system for watermarking content created by Gemini and the video model Veo. The system, called SynthID, embeds imperceptible digital tags into AI-generated images, audio, video and even text, so that people can track a particular piece of media’s provenance. Google plans to release the technology in open source form this summer so outside developers can use it.

Google also tried to frame Gemini as a powerful agent that can assist users as they go about their daily lives. Users who pay $20 a month for Google’s AI premium subscription plan will gain access to a version of Gemini that can process 1 million tokens — or about 700,000 words — at once, which Google said is the largest of any model that’s widely available to the general public. That means people can ask the AI model to digest large volumes of data for them, such as summarizing 100 emails, the company said. A new feature called Gemini Live will let Google’s premium subscribers speak naturally with the company’s AI software on their mobile devices, even pausing or interrupting Gemini Live mid-response with questions.

Google said that people’s files will remain private and aren’t used for training AI models. Subscribers will be able to create custom versions of Gemini, known as Gems, to serve a specific purpose, such as coaching them on their running.

--With assistance from Ian King.

(Updates with analyst comment in 11th paragraph.)

©2024 Bloomberg L.P.