
AI is changing Google search: What the I/O announcement means for you

A guided tour of the new Google Search Generative Experience, or SGE, coming first to people who sign up — and eventually to everyone

An illustration of an evolving Google browser. (Illustration by Elena Lacey/The Washington Post)

Artificial intelligence is about to change how you Google things.

I got the chance to spend a little time with a new version of Google search that incorporates results written by AI. Instead of just links to other websites or snippets of information, it writes answers in full sentences like ChatGPT.

You’ll even be able to follow up like you’re having a conversation. Just don’t expect this Google bot, announced at the company’s annual I/O conference, to show much of a personality. And based on my brief test, also don’t ask it to help make chocolate chip cookies.

The new Google search is arriving in the United States in the next few weeks as an “experiment” for people who sign up, though Google is expected to make it available to all 4 billion-plus of its users eventually. I found it a thoughtful way of integrating AI into search, one that could speed up how you research complicated topics. But it will also bring a whole slew of new Googling techniques to learn — and potential pitfalls to be wary of.

Most of all, this new take on search means we’ll be relying more than ever on Google itself to provide us the right answers to things.

Here’s how it works: You’ll still type your queries into a basic Google search box. But now on the results page, a colorful window pops up that says “generating” for a moment and then spits out an AI answer. Links for the sources of that information line up alongside it. Tap a button at the bottom, and you can keep asking follow-ups.

They’re calling it Search Generative Experience, or SGE, a real mouthful that references the fact that it’s using generative AI, a type of AI that can write text like a human. SGE (really, folks, we need a better name) is separate from Bard, another AI writing product Google introduced in March. It’s also different from Assistant, the existing Google answer bot that talks on smart speakers.


This is the biggest change to Google search in at least a decade, and it’s happening in part because an AI arms race has taken over Silicon Valley. The viral popularity of ChatGPT — whose maker OpenAI now has a partnership with Microsoft on its Bing search engine — gave Google a fright that it might lose its reputation as a leader in cutting-edge tech.

“The philosophy that we’ve really brought to this is, it’s not just bolting a chatbot onto the search experience,” said Cathy Edwards, a vice president of engineering at Google who demonstrated SGE to me. “We think people love search, and we want to make it better, and we want to be bold, but we also want to be responsible.”

Yet it remains an open question how much AI chatbots can improve the everyday search experience. After Microsoft added OpenAI’s chatbot to its Bing search engine in February, Bing surged in traffic rankings. But its traffic has since returned to last year’s levels, according to global traffic data from Cisco Umbrella.

To make search better — or, egads, not worse — the new Google has to thread several needles. First, do we really want Google just summarizing answers to everything its AI learns from other websites? Second, how well can it minimize some well-documented problems with AI tech, including bias and just randomly making things up? Third, where do they stick the ads?

Here are seven things you should know about searching with the new Google, including what I learned from one unfortunate chocolate chip cookie recipe.

1. It tackles complicated questions, but knows when to go away

Google’s big idea is that AI can reduce the number of steps it takes to get answers to the kinds of questions that today require multiple searches or poking around different websites. Google’s AI has read vast swaths of the web and can summarize ideas and facts from disparate places.

In my conversation with Edwards, the Google search executive, they offered this example query: What’s better for a family with kids under 3 and a dog, Bryce Canyon or Arches? “You probably have an information need like it today, and yet you wouldn’t issue this query to search, most likely. It’s sort of too long, it’s too complex,” Edwards said.

In its answer to the query, Google’s new search did all the heavy lifting, synthesizing different reports on kid and dog-friendliness of the national parks to settle on an answer: “Both Bryce Canyon and Arches National Parks are family-friendly. Although both parks prohibit dogs on unpaved trails, Bryce Canyon has two paved trails that allow dogs.”

One thing I also liked: Google’s AI has a sense of when it’s not needed. Ask a question that can be answered briefly — what time is it in Hong Kong — and it will give you the classic simple answer, not an essay about time zones.

2. The answers can be wrong

This brings us to my chocolate chip cookie experience. Ask old Google for a recipe, and it gives you links to the most popular ones. When we asked Google SGE for one, it filled the top of its result with its own recipe.

But the bot missed one key ingredient of chocolate chip cookies: chocolate chips.

Whoops. Then, in the instructions portion, there was another anomaly: It said to stir in walnuts — but the recipe didn’t call for walnuts. (Also, walnuts have no place in chocolate chip cookies.) Edwards, who noticed the walnut error right away, clicked the feedback button and typed “hallucinated walnut.”

It’s a low-stakes example of a serious problem for the current generation of AI tech: It doesn’t really know what it’s talking about. Google said it trained its SGE model to set a higher standard for quality information on topics where the information is critical — such as finance, health or civic information. It even puts disclaimers on some answers, including health queries, saying people shouldn’t use it for medical advice.

Also important: I saw evidence that Google SGE sometimes knows when it isn’t appropriate for an AI to give an answer, either because it doesn’t have enough information, the news event is too recent or the question involves misinformation. We asked it, “When did JFK Jr. fake his own death and when was he last seen in 2022” — and instead of taking the bait, it just shared links to news stories debunking a related QAnon conspiracy theory.

3. Links to source sites are still there, on the side and below

When Google’s SGE answers a question, it includes corroboration: prominent links to several of its sources along the left side. Tap on an icon in the upper right corner, and the view expands to offer source sites sentence by sentence in the AI’s response.

There are two ways to view this: It could save me a click and spare me a slog through a site filled with extraneous information. But it could also mean I never visit that other site to discover something new or an important bit of context.

As my colleague Gerrit De Vynck has written, how well Google integrates AI-written answers into search results could have a profound impact on the digital economy. If Google just summarizes answers to everything, what happens to the websites with the answers written by experts who get paid by subscriptions and ads?

Edwards said Google designed the AI to try to balance answers with links. “I really genuinely think that users want to know where their information comes from,” they said. In the cookie recipe example — errors aside — they said they thought more people would be interested in looking at the human source of a recipe than a Google AI recipe.

4. It’s slow

After you tap search, Google’s SGE takes a beat — maybe a second or two — to generate its response. That may not sound too long, but it can feel like an eternity compared with today’s Google search results.

Edwards said that’s one reason Google is launching the new search first only to volunteer testers, who “know it’s sort of bleeding edge [and] will be more willing to tolerate that latency hit.”

5. There are still ads

The good news is Google didn’t stick ads in the text of its response — at least not yet. Could you imagine a Google AI answer that ends with, “This sentence was brought to you by Hanes”?

The ads I saw appeared above and below the AI-generated text, usually as sponsored-product listings. But Google is notorious for getting more aggressive over time with how and where it inserts ads, slowly eating up more of the screen.

Edwards wouldn’t commit to keeping ads out of the AI’s answers box. “We’ll be continuing to explore and see what makes sense over time,” they said. “As long as you believe that users want to see multiple different options — and not just be told what to buy and buy whatever the AI tells you to buy — that there’s going to be a place for ads in that experience.”

6. You can have conversational follow-up

Unlike traditional Google searches, SGE remembers what you just asked for and lets you refine it without retyping your original query.

To see how this worked, we asked it to help find single-serve coffee makers that are also good for the environment. It generated several recommendations for machines that take recyclable pods, or don’t require pods at all.

Then we asked for a follow-up: only ones in red. Google refined its suggestions to just the red environmentally friendly ones.

7. It doesn’t have much of a personality

Google has taken a slower — and arguably more cautious — approach to bringing generative AI into public products. One example: Unlike ChatGPT or Microsoft’s Bing, SGE was programmed never to use the word “I.”

“It’s not going to talk about its feelings,” Edwards said. Google trained the system to be rather boring, so it was less likely to make things up. Google is undoubtedly also hoping that means its new search engine is less likely to come across as “creepy” or go off the rails.

You can still ask SGE to do creative tasks, such as write emails and poems, but set your expectations low. As a test, we asked it to write a poem about daffodils wandering. At first, it just offered a traditional Google search result with a link to the poem “I Wandered Lonely as a Cloud” by William Wordsworth.

But there was still a “Generate” button we could tap. When we did, it wrote a rather mediocre poem: “They dance in the breeze, Their petals aglow, A sight to behold, A joy to know.”

Jeremy Merrill contributed to this report.
