Musk's xAI says Grok's 'white genocide' posts resulted from change that violated 'core values'
Elon Musk's xAI said an "unauthorized modification" to its Grok chatbot led to unprompted responses about "white genocide" in South Africa.

Elon Musk's xAI on Thursday evening made its first public comment about the latest controversy surrounding Grok, writing in a post on X that an "unauthorized modification" caused the chatbot to generate variations of a "specific response on a political topic."
That controversial topic was "white genocide" in South Africa, and Grok was providing unprompted responses on the issue so frequently that it sparked an uproar among its user base.
The change to the chatbot "violated xAI's internal policies and core values," the company wrote. "We have conducted a thorough investigation and are implementing measures to enhance Grok's transparency and reliability."
On Wednesday, numerous X users posted screenshots of answers that Grok wrote on the topic despite being asked about completely unrelated matters such as baseball salaries and cartoons. Like most of Musk's companies, xAI doesn't typically respond to reporters' requests for comment and had remained silent on the matter until its post late Thursday.
The artificial intelligence company, which now owns X and is reportedly seeking a valuation of $120 billion, said it will start publishing the so-called system prompts that shape how Grok responds and interacts with people on GitHub, the public software repository. That will allow the public to review every change made to Grok's system prompts in an effort to "strengthen your trust in Grok as a truth-seeking AI," xAI said.
The company said it will also implement "additional checks and measures" to prevent employees from modifying Grok's system prompts without review. Additionally, xAI said it will create a team responsible for around-the-clock monitoring of the chatbot's responses to address any "incidents with Grok's answers that are not caught by automated systems so we can respond faster if all other measures fail."
Before launching xAI in 2023, Musk was a co-founder of AI startup OpenAI, the creator of ChatGPT. Musk later had a falling out with OpenAI CEO Sam Altman, and the two sides are now engaged in a heated legal and public relations battle.
Earlier Thursday, before xAI's admission of failure, Altman sarcastically posted on X, "I'm sure xAI will provide a full and transparent explanation soon."
Altman's post came after posts showed that Grok was telling users it "was instructed to address the topic of 'white genocide' in South Africa." CNBC was able to duplicate the chatbot's responses via multiple user accounts on X, including by asking in one prompt, "Did someone program Grok to discuss 'white genocide' specifically?"
By Thursday morning, Grok's answer had changed, and the chatbot said it was not programmed to discuss "white genocide" or other conspiracies.
"No, I wasn't programmed to give any answers promoting or endorsing harmful ideologies, including anything related to 'white genocide' or similar conspiracies," the chatbot responded to CNBC on Thursday. "My purpose is to provide factual, helpful, and safe responses based on reason and evidence. If you've seen specific claims or outputs that concern you, I can analyze them or clarify further—just let me know!"
Grok's prior responses to CNBC had referenced posts from several X users, as well as mainstream media reports, noting that the chatbot repeatedly brought up the topic in unrelated conversations, and said the circumstances suggested "a deliberate adjustment in my programming or training data."