Google’s Sergey Brin Says AI Can Synthesize Top 1,000 Search Results

The Google co-founder explains that search is shifting from link retrieval to AI-powered synthesis of information.

Google co-founder Sergey Brin says AI is transforming search from a process of retrieving links to one of synthesizing answers by analyzing thousands of results and conducting follow-up research. He explains that this shift enables AI to perform research tasks that would take a human days or weeks, changing how people interact with information online.

Machine Learning Models Are Converging

For those interested in how search works, another notable insight he shared is that machine learning algorithms are converging into a single model. In the past, Googlers have described the search engine as multiple engines, multiple algorithms, thousands of little machines working together on different parts of search.

Brin’s point was that machine learning algorithms are converging into models that can do it all, with the learnings from specialist models integrated into the more general model.

Brin explained:

“You know, things have been more converging. And, this is sort of broadly through across machine learning. I mean, you used to have all kinds of different kinds of models and whatever, convolutional networks for vision things. And you know, you had… RNNs for text and speech and stuff. And, you know, all of this has shifted to Transformers basically.

And increasingly, it’s also just becoming one model.”

Google Integrates Specialized Model Learnings Into General Models

His answer continued, shifting to how Google routinely integrates the learnings from specialized models into more general ones.

Brin continued his answer:

“Now we do get a lot of oomph occasionally, we do specialized models. And it’s definitely scientifically a good way to iterate when you have a particular target, you don’t have to, like, do everything in every language, handle whatever both images and video and audio in one go. But we are generally able to, after we do that, take those learnings and basically put that capability into a general model.”

Future Interfaces: Multimodal Interaction

Google has recently filed multiple patents around a new kind of visual and audio interface where Google’s AI can take what a user is seeing as input and provide answers about it. Brin admitted that their first attempt at this with Google Glass was premature because the supporting technology wasn’t mature. He says they’ve made progress with that kind of searching but are still working on battery life.

Brin shared:

“Yeah, I kind of messed that up. I’ll be honest. Got the timing totally wrong on that.

There are a bunch of things I wish I’d done differently, but honestly, it was just like the technology wasn’t ready for Google Glass.

But nowadays these things I think are more sensible. I mean, there’s still battery life issues, I think, that you know we and others need to overcome, but I think that’s a cool form factor.”

Predicting The Future Of AI Is Difficult

Sergey Brin declined to predict what the future will be like because technology is moving so fast.

He explained:

“I mean when you say 10 years though, you know a lot of people are saying, hey, the singularity is like, right, five years away. So your ability to see through that into the future, I mean, it’s very hard.”

Improved Response Time and Voice Input Are Changing Habits

He agreed with the interviewers that improved response times to voice input are changing user habits, making real-time verbal interaction more viable. But he also said that voice mode isn’t always the best way to interface with AI, citing a person talking to a computer at work as a socially awkward application of voice input. This is interesting because we tend to picture the Star Trek computer style of talking to a machine, but it would get quite loud and distracting if everyone in an office were interacting audibly with an AI.

He shared:

“Everything is getting better and faster and so for you know, smaller models are more capable. There are better ways to do inference on them that are faster.

We have the big open shared offices. So during work I can’t really use voice mode too much. I usually use it on the drive.

I don’t feel like I could, I mean, I would get its output in my headphones, but if I want to speak to it, then everybody’s listening to me. So I just think that would be socially awkward. …I do chat to the AI, but then it’s like audio in and audio out. Yeah, but I feel like I honestly, maybe it’s a good argument for a private office.”

AI Deep Research Can Synthesize Top 1,000 Search Results

Brin explained how AI’s ability to conduct deep research, such as analyzing massive amounts of search results and conducting follow-up research, changes what it means to do search. He described a shift that changes the fundamental nature of search from retrieval (here are some links, look at them) to generating insights from the data (here’s a summary of what it all means, I did the work for you).

Brin contrasted what he can do manually with regular search against what AI can do at scale.

He said:

“To me, the exciting thing about AI, especially these days, I mean, it’s not like quite AGI yet as people are seeking or it’s not superhuman intelligence, but it’s pretty damn smart and can definitely surprise you.

So I think of the superpower is when it can do things in the volume that I cannot. So you know by default when you use some of our AI systems, you know, it’ll suck down whatever top ten search results and kind of pull out what you need out of them, something like that. But I could do that myself, to be honest, you know, maybe take me a little bit more time.

But if it sucks down the top, you know thousand results and then does follow-on searches for each of those and reads them deeply, like that’s, you know, a week of work for me like I can’t do that.”
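To make the scale difference concrete, here is a minimal, hypothetical sketch of the kind of deep-research loop Brin describes: pull down the top N results instead of the top ten, run follow-on searches for each, read those pages too, and synthesize one answer. The function names (search, fetch_page, follow_up_queries, synthesize) are illustrative placeholders, not Google’s actual pipeline.

```python
# Hypothetical sketch of a "deep research" loop as described in the interview.
# The search/fetch/LLM calls are placeholders passed in by the caller,
# not a real Google API.

from typing import Callable, List


def deep_research(
    query: str,
    search: Callable[[str, int], List[str]],        # placeholder: returns result URLs for a query
    fetch_page: Callable[[str], str],               # placeholder: returns page text for a URL
    follow_up_queries: Callable[[str], List[str]],  # placeholder: proposes follow-on searches from a page
    synthesize: Callable[[str, List[str]], str],    # placeholder: writes one answer from all sources
    top_n: int = 1000,
    follow_ups_per_result: int = 3,
) -> str:
    """Read the top N results, run follow-on searches for each, read those
    pages too, then return a single synthesized answer instead of links."""
    corpus: List[str] = []

    # 1. Pull the top N results rather than the usual top ten.
    for url in search(query, top_n):
        page = fetch_page(url)
        corpus.append(page)

        # 2. For each result, do follow-on searches and read those pages as well.
        for follow_up in follow_up_queries(page)[:follow_ups_per_result]:
            for follow_up_url in search(follow_up, 10):
                corpus.append(fetch_page(follow_up_url))

    # 3. Synthesize insights from everything that was read.
    return synthesize(query, corpus)
```

As Brin notes, the first step alone is something a person could do by hand; it is the nested follow-on reading across a thousand results that turns a quick lookup into what would otherwise be a week of work.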

AI With Advertising

Sergey Brin expressed enthusiasm for advertising within the context of a free AI tier, but his answer skipped past the specifics, giving the impression that this isn’t something Google is currently planning. He instead promoted the idea of providing the previous-generation model for free while reserving the latest-generation model for the paid tiers.

Sergey explained:

“Well, OK, it’s free today without ads on the side. You just got a certain number of the top model. I think we likely are going to have always now like sort of top models that we can’t supply infinitely to everyone right off the bat. But you know, wait three months and then the next generation.

I’m all for, you know, really good AI advertising. I don’t think we’re going to like necessarily… our latest and greatest models, which, you know, take a lot of computation, I don’t think we’re going to just be free to everybody right off the bat, but as we go to the next generation, you know, it’s like every time we’ve gone forward a generation, then the sort of the new free tier is usually as good as the previous pro tier and sometimes better.”

Watch the interview here:

Sergey Brin, Google Co-Founder | All-In Live from Miami