Google is still drip-feeding AI into Search, Maps, and Translate

Google announced a bunch of AI-enabled Search, Maps, and Translate updates as part of its “Google presents: Live from Paris” event on Wednesday.

While it showed off a brief demo of “Bard,” the ChatGPT-rivaling AI chatbot-style search powered by its LaMDA technology, most of the event focused on features we’ve already seen, like “visual search” implementations that expand Google Lens, a more immersive version of Google Maps, and a Google Translate that’s better at understanding context.

Of course, you can kinda, sorta mess around with Microsoft’s new ChatGPT-powered Bing already and access the already-available ChatGPT itself, which has been enough to trigger a “code red” over AI features within Mountain View.

Some of these features will be available soon; disappointingly, many are still a ways out.

Let’s start with the ones available in the near term.

- One of the biggest pieces of news is about multisearch, a tool that lets you start a search using an image along with a few words of text. As an example of how you might use it, let’s say you see a shirt you like, but you want it in a different color. With multisearch, you can snap a pic of the shirt and search for that other color to see where you might be able to buy it. It launched first in the US, but now multisearch is available globally on mobile wherever Google Lens is.
- Immersive View in Google Maps, which combines a 3D view of a certain area with specific information like traffic and weather, is beginning to roll out in five cities: London, Los Angeles, New York City, San Francisco, and Tokyo.
- A number of new features are designed to assist EV drivers, including suggested charge stops on shorter trips, a filter for “very fast” charging stations with 150kW or higher chargers, and chargers called out in search results for places like grocery stores or hotels.
- An AR feature in Google Lens that blends translated text into the image it came from is starting to roll out globally.

Other new features were also announced, but these don’t have specific release windows attached yet; Google says they’re shipping in the coming weeks and months.

- The ability for Android users to search using words or images on their screens without leaving the app they’re in.
- If you use Google Maps on your phone, Google will be able to show you ETAs and where to turn next right on your lock screen. (And yes, it will be compatible with iOS’s Live Activities.)
- “Multisearch near me,” which lets you search for things like where you can find a certain food dish nearby, will be expanded to wherever Lens is available. You can currently use it in the US.
- Multisearch on the mobile web will be available globally in the next few months.
- Immersive View in Google Maps will expand to Florence, Venice, Amsterdam, Dublin, and more.
- Indoor Live View, which overlays AR directions to help you navigate tricky buildings and places, will be expanding to more than 1,000 new airports, train stations, and malls.
- Search with Live View, which lets you learn more about specific places by viewing them through your phone’s camera, is expanding to Barcelona, Dublin, and Madrid.
- Google Translate will be able to show you additional context about certain words or phrases. Google shared an example of how the feature could be helpful if you’re trying to pin down the right translation for the word “novel,” which, in English, can refer to a book or something that’s original. This feature will come first to languages including English, French, German, Japanese, and Spanish in the coming weeks.
- The new Google Translate redesign that already launched on Android will be coming to iOS in “a few weeks.”