Google needs to stop throwing good money after Bard

Google has had a rough six months. Since ChatGPT launched last November — followed by the new Bing in February and GPT-4 in March — the company has failed to establish its AI credentials. Its own offering, the “experimental” chatbot Bard, compares poorly to rivals, and insider reports have portrayed a company in panic and disarray. Today, at its annual I/O conference, the company needs to convince the public (and shareholders) that it has a meaningful response. But to do that, it needs a new playbook.

Google is undoubtedly a leader in AI research. As its executives like to point out, it was Googlers who created the transformer architecture that powers chatbots like ChatGPT. Just as significantly, it was Googlers who drew attention to these systems’ failings (and, as thanks, were fired). But Google has failed at making AI products; it’s failed to take this labor and mold it into tools that engage the public imagination. In short, it’s missed out on the AI zeitgeist, which — for all the discussions of existential risk and economic threat — is also defined by a sense of exploration, experimentation, and creative, chaotic fun. 

AI art and tools increasingly define the current cultural moment

This feeling springs from two main sources. The first is a technical ecosystem that’s iterative and comparatively open. A number of important AI models are open source (like Stable Diffusion); many more are shared or leaked (like Meta’s LLaMA language model). Even companies that are pretty closed up, like OpenAI, push through updates with impressive speed and offer enticing hooks for developers to build on.

This leads to the second source: the outputs of these systems, which increasingly define the current cultural moment. Whether that's Balenciaga Harry Potter, the swagged-out pope, deepfakes of President Joe Biden playing CS:GO, singers licensing AI voice clones to the public, or fans modeling chatbots after their favorite anime characters, there are thousands of instances of AI weirdness that entertain and sometimes enrage.

It goes without saying that not all of these experiments are good. Many are malicious (like deepfake pornography), and many more are simply irresponsible and poorly designed (like chatbot therapists). But the sum total of this work, good and bad, contributes to the sense of a roiling, protean technological ecosystem of change, experimentation, and cultural significance. It's a tide that Google, for all its expertise, has completely missed.

This failure is best exemplified by Google's work on AI language models and its chatbot Bard, especially when compared to the launch and trajectory of Microsoft's rival Bing.

Today, talking to Bard feels like being trapped in an AI daycare. Stray too far from its index of acceptable questions, and you’ll be politely reprimanded. “I’m sorry, Dave. I’m afraid I can’t do that.” Even when the system is helpful, its answers are insufferably bland. “Today, trees are an essential part of the Earth’s ecosystems,” it told me in response to a question about the evolutionary history of trees. “They provide us with oxygen, food, and shelter.” Sure, Bard. I guess. But also why not shoot me in the head while you’re at it?

[Screenshot: Bard results for "what is AI?" Its answer begins, "Artificial intelligence (AI) is a branch of computer science that deals with the creation of intelligent agents, which are systems that can reason, learn, and act autonomously," then lists several bullet points defining subjects like machine learning and computer vision.]

Bing, by comparison, feels like the sidekick that helps you escape daycare. That’s not to say it’s some semi-sentient entity or seamlessly crafted NPC. But the unpredictable edge to its answers creates the illusion of personality (capturing hearts and headlines in the process), while its design encourages conversation rather than shutting it down. 

This difference can be seen just in basic UI choices for the two chatbots. Bing, for example, consistently offers clickable sources in its answers, which a) encourage exploration but also b) position the chatbot as something closer to a companion than an authority. It’s open and permissive; it makes you feel like the system is somehow on your side while you navigate the web’s vast churn of information. 

[Screenshot: Bing results for "evolutionary history of trees." Its answer begins, "The evolutionary history of trees is a fascinating topic that traces back to the origins of life on Earth. Trees are a type of vascular plant that have woody stems and branches that support their leaves," and continues for three more paragraphs with multiple bolded terms and seven words linked to footnotes.]

Bard's replies, by comparison, are much more self-contained. The system does occasionally offer links and citations, but the feeling is that Bard only offers access to its own domain rather than functioning as a portal to the wider internet. It may not sound like a big criticism, but the result is a deadened user experience, a conversation killer that has me crawling up the featureless walls of Google's smooth Material You design. It's just not fun.

This comparison is symptomatic of bigger differences in Google’s and Microsoft’s approaches to AI. While Bard has been idling along (its update page shows just three changes since launch), Microsoft has been rapidly iterating, stuffing chatbots into more and more of its products, and rushing out new features for Bing, from image generation to (coming soon) integration with apps like WolframAlpha and OpenTable. In short, it’s been experimenting, and though its efforts may prove to be misguided, it is at least in tune with the moment. 

I’m not sure what the answer for Google is here. Personally, I don’t think chatbots in their current form are a good replacement for search, full stop. As I’ve written before, issues like “hallucinations” are just too persistent and damning to be ignored. But at I/O, the company needs to prove that it at least sees the potential — the excitement — of this technology. In the past, CEO Sundar Pichai has tried to talk the talk, comparing AI to electricity or fire (a silly thing to say, in my opinion), but such empty chatter should be left to the bots. Instead, let’s see what the humans can actually make.