Google announced at its flagship developer conference that it's bringing AI to pretty much every part of the search process, a move that may ease fears that it's losing market share to the likes of OpenAI and Perplexity.
These new AI search features will be powered by its newly updated, more powerful AI models (also announced at the conference), Gemini Pro and Gemini Flash.
AI search summaries: Google has been testing its "AI Overviews" feature (formerly known as the Search Generative Experience, or SGE) for well over a year, but it announced that "hundreds of millions of users" in the US will now see an AI-generated summary at the top of most search results pages in response to their queries.
“What we see with generative AI is that Google can do more of the searching for you” – Liz Reid, Head of Search at Google
These overviews search the web for you and summarize the answer to your query, with links to sources you can visit for more information.
“What we hear again and again is that people like this combination of insights, mixed with the ability to dive deeper to hear from human perspectives and different authoritative sources” – Liz Reid, Head of Search at Google
The only time users won't see an AI Overview is when it's unnecessary. If you just want to visit a specific website, for example, a traditional search will give you the result you're after; the AI feature is reserved for complex queries whose answers are scattered across multiple sources.
AI planner: Google has built an AI agent that can help users complete tasks like planning a trip or their weekly meals. For example, a user can type a query such as "Plan a meal for a family of four for three days" and get customized links to recipes based on their previous interactions and preferences.
Contextualized search results pages: Results will be categorized and organized based on the context of a query so users don’t have to sift through pages and pages of irrelevant information. For example, if you're searching for a restaurant, Google will use AI to display the results in scenario-based categories, like “business meeting restaurants” and “date night restaurants.”
AI video search: Google has also expanded Google Lens (its visual search tool) to include video-based search capabilities. Users can now upload a video clip as their search query, and Google will use AI to answer their question based on what it sees in the footage. For example, if your car engine is making a strange sound, you could capture the engine running on video, upload it to Google Lens, and ask Google what the problem is. Google will analyze the video and give you a suggestion.
According to Liz Reid, Head of Search:
"If you search with a video, you'll still get relatively normal Google search results… the point is to make it easier to tell Google what you're looking for."
There's a growing concern among journalists, businesses, and publishers that AI search features that summarize content from across the web will let users bypass the original sources, sending less traffic to those publications.
Reid acknowledged that this is a tricky balance to get right and that Google is "trying to do the right thing" by activating AI Overviews only for complex queries whose information is scattered. She also shared that during testing of the feature, users were still clicking through to websites in search of human insight:
“Sure, it may undercut low-value content, but websites that do a great job of providing perspective or color or experience or expertise — people still want that.” – Liz Reid, Head of Search at Google