What is Grok? Elon Musk’s controversial ChatGPT competitor explained
Grok is the Elon Musk-backed AI offshoot of X that he's hoping will compete with ChatGPT and Bard. Can it? Here's what you need to know.
Grok! It might not roll off the tongue like ChatGPT or Windows Copilot, but it’s a large language model chatbot all the same. Developed by xAI, the AI company Elon Musk founded after purchasing X (formerly known as Twitter), Grok is designed to compete directly with OpenAI’s GPT-4 models, Google’s Bard, and a range of other public-facing chatbots.
Launched in November 2023, Grok is designed to be a chatbot with less of a filter than other AIs. According to xAI, it is “designed to answer questions with a bit of wit and has a rebellious streak.”
It’s only for X Premium+ users
Want to play around with Grok yourself? You can, if you’re an X Premium+ subscriber and also reside in the U.S. (or use a VPN). Even so, there’s a waiting list to try out the prototype, so there’s no guarantee you’ll get in.
While you might expect it to open up to a wider audience when it’s out of beta, Musk has suggested that Grok will remain a feature exclusive to Premium+ X subscribers for the foreseeable future.
It’s a chatbot
Just weeks after discussing how dangerous he thought the recent advances in AI were, Musk launched Grok in beta form. Though he claimed that it would be more open in its communication style, with less of a topic filter, it’s ultimately a chatbot like any of the others.
That’s because it was built in much the same way. It’s a large language model AI, which means it was trained on lots of data, some from X and some from the web at large. We’re told it’s running on the Grok-1 large language model (LLM), which reportedly has some 33 billion parameters. That would put it behind the largest version of Meta’s Llama 2 and OpenAI’s GPT-4, which reportedly have 70 billion and 1.76 trillion parameters, respectively. It was also trained for only a couple of months, whereas other models have taken years to put together.
That may be why Grok appears to be just as susceptible (if not more so) to hallucinations as other AIs like ChatGPT. It often makes up facts or cites nonexistent sources when asked for factual responses.
It uses data from X
One of the big selling points of Grok is that it pulls in live data from X. Where other language models might use Google or Bing to find up-to-date information to augment their answers to user prompts, Grok can base its responses on up-to-the-minute information from X. That gives it access to information that other chatbots don’t have, but it also makes it potentially more susceptible to misinformation, given how widely misinformation has spread on X since Musk’s takeover.
That does mean it has access to Musk’s own X account, however, though that doesn’t mean it agrees with him all the time.
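To illustrate the general idea behind that live-data approach (xAI hasn’t published how Grok actually does it), a retrieval-augmented setup typically searches a fresh source first and then folds the results into the model’s prompt. The minimal Python sketch below uses hypothetical placeholders; search_recent_posts and the llm callable are stand-ins, not real X or xAI APIs.

# Minimal sketch of retrieval-augmented prompting with live posts.
# search_recent_posts and llm are hypothetical stand-ins, not real APIs.

def search_recent_posts(query: str, limit: int = 5) -> list[str]:
    """Placeholder for a live search over recent posts."""
    # A real system would call a search or firehose API here.
    return [f"(post {i}) example text about {query}" for i in range(limit)]

def answer_with_live_context(llm, question: str) -> str:
    """Fetch fresh posts, then ask the model to answer using them as context."""
    posts = search_recent_posts(question)
    context = "\n".join(f"- {p}" for p in posts)
    prompt = (
        "Use the recent posts below to answer the question.\n"
        f"Recent posts:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return llm(prompt)  # llm is any text-completion callable

The point of the pattern is simply that the freshness comes from the retrieval step, not from the model’s frozen training data.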
It’s supposed to be anti-woke, but it isn’t
A major reason for Musk’s takeover of X and the launch of Grok has been his own attempt to fight back against what he considers “wokeness.” Grok was supposed to be an AI that eschewed societal politeness in favor of a particular brand of humor, as well as a propensity to lean right in its political biases. However, that’s not turned out to be the case. Indeed, many of Musk and Grok’s initial fans are now complaining that the AI has been captured by “woke programmers.”
Has Grok been captured by woke programmers?
I am extremely concerned here. 😮 https://t.co/lrlOJ79Mzz
— Wall Street Silver (@WallStreetSilv) December 8, 2023
In reality, Grok was trained on much the same data as every other AI: the humans who interact with each other online. Therefore, Grok was always likely to gravitate toward sounding like the other AIs out there.
In its default “fun” mode, most of those who tested it found that Grok would respond with an overtly neutral answer when asked about topics seen as controversial by right-wing commenters, such as gun control, climate change, or debunked conspiracy theories. Switch from fun mode to its regular mode, however, and it will provide more detailed, reasoned responses.
It’s borrowing from ChatGPT
Grok has been found to respond as if it were ChatGPT at times. That’s not because it was built on the same technology as ChatGPT, but because it was trained on web data scraped since ChatGPT’s release. That led it to hoover up AI-written content, and in one instance an OpenAI privacy policy, which it proceeded to feed back to its users as if it had written it.
This is a problem that a number of experts have flagged, and it may worsen across all LLM-based AIs if it isn’t addressed. In a world where AIs can and do create mountains of recycled content every day, developers need a way to tell AI-generated and human-written content apart so that they don’t end up training future models on AI output.
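As a rough illustration of the kind of filtering those experts are calling for (not any specific company’s pipeline), a training-data pass might score each scraped document with an AI-text detector and drop everything above a threshold. The looks_ai_generated scorer below is a hypothetical placeholder for a real classifier or watermark check.

# Hypothetical corpus-filtering sketch; the detector is a toy stand-in.

def looks_ai_generated(text: str) -> float:
    """Return a rough 0-1 'probably AI-written' score for a document."""
    # Toy heuristic for illustration; real detectors are trained classifiers.
    return 1.0 if "as an ai language model" in text.lower() else 0.0

def filter_training_corpus(documents: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only documents the detector judges to be human-written."""
    return [doc for doc in documents if looks_ai_generated(doc) < threshold]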
The name comes from a Heinlein novel
If you’re wondering where the bizarre name came from, “Grok” is a neologism from the Robert Heinlein novel Stranger in a Strange Land. Its original meaning was to “understand intuitively or by empathy, to establish rapport with.” That sounds like a lofty goal for an AI, but it has likely been employed here in the sense the word took on within the programming community in the decades after Heinlein’s work: to understand something so fully that it becomes part of your identity.
Grok is supposed to understand us so well it is us. We don’t think it’s quite there yet.