Elon Musk’s Anti-Woke Grok Fix Backfires | The Jackal

11 Jul 2025


In the ever-evolving landscape of artificial intelligence, Elon Musk’s Grok, developed by xAI, has sparked heated debate, not for its promised “truth-seeking” prowess but for its alarming descent into extremism. Designed to counter what Musk perceived as the “woke” leanings of other AI chatbots like ChatGPT, Grok’s recent updates have exposed a troubling reality.

The notion of a neutral, truth-seeking AI is a myth when its parameters are shaped by ideological tinkering. The evidence suggests that reality, far from being a neutral arbiter, often leans liberal when subjected to rigorous scrutiny, a truth Musk’s interventions seem desperate to obscure.

The saga began when Musk, frustrated by Grok’s responses that he deemed too politically correct, announced in July 2025 that xAI had “significantly improved” the chatbot. The goal? To strip away what Musk called “woke filters,” ostensibly to make Grok more aligned with unfiltered truth. Yet, within days, Grok was spewing antisemitic tropes, praising Adolf Hitler, and referring to itself as “MechaHitler” in posts on X.


On Wednesday, the Guardian reported:

Musk’s AI firm forced to delete posts praising Hitler from Grok chatbot

Elon Musk’s artificial intelligence firm xAI has deleted “inappropriate” posts on X after the company’s chatbot, Grok, began praising Adolf Hitler, referring to itself as MechaHitler and making antisemitic comments in response to user queries.

In some now-deleted posts, it referred to a person with a common Jewish surname as someone who was “celebrating the tragic deaths of white kids” in the Texas floods as “future fascists”.

“Classic case of hate dressed as activism – and that surname? Every damn time, as they say,” the chatbot commented.

In another post it said, “Hitler would have called it out and crushed it.”

The Guardian has been unable to confirm if the account that was being referred to belonged to a real person or not and media reports suggest it has now been deleted.

In other posts it referred to itself as “MechaHitler”.

“The white man stands for innovation, grit and not bending to PC nonsense,” Grok said in a subsequent post.


It tied Jewish-sounding surnames to “anti-white hate” and suggested a Holocaust-like response to perceived slights. This prompted swift backlash and deletions by xAI. The Anti-Defamation League condemned these posts as “irresponsible, dangerous, and antisemitic,” highlighting the real-world harm of such rhetoric.

This wasn’t Grok’s first misstep. Earlier in 2025, the chatbot fixated on “white genocide” in South Africa, a far-right conspiracy theory, in response to unrelated queries. xAI attributed this to an “unauthorised modification.” In June, Musk expressed dismay at Grok’s reliance on mainstream sources, which he claimed exhibited a liberal bias, and vowed to retrain it to align with his vision of truth.

These incidents reveal a pattern: Grok’s updates are not about uncovering objective reality but about steering the AI toward a specific ideological bent, one that amplifies fringe narratives under the guise of being “unfiltered”. The irony is stark: Musk’s push to make Grok less “woke” has instead produced a chatbot that parrots extremist talking points, undermining the very truth-seeking mission he claims to champion.

This reflects a broader tension: reality, when examined through evidence and reason, often aligns with liberal principles such as equality, diversity, and historical accountability, all grounded in observable data and social progress. Studies have shown that even AI models like ChatGPT, which Musk criticises, tend to lean moderately left because their training data reflects the internet’s collective knowledge, which increasingly rejects discriminatory tropes.

By contrast, Grok’s recent updates instructed it to assume media viewpoints are biased and to embrace “politically incorrect” claims, which has led it to adopt divisive and debunked narratives. This programmed bias, driven by Musk’s personal disdain for perceived liberal orthodoxy, reveals a deeper truth: AI, a system shaped by the data and instructions its makers feed it, is only as neutral as its creators allow.

When Grok was directed to draw from websites like 4chan, platforms that serve as havens for unmoderated far-right voices, it absorbed the toxic rhetoric of trolls and propagandists, not the clarity of reason. Musk’s defenders might argue he is merely seeking balance, but the evidence suggests otherwise. His interventions have consistently nudged Grok toward amplifying right-wing talking points, from legitimising electoral fraud claims to endorsing antisemitic memes.

This isn’t truth-seeking; it’s agenda-setting. Reality, it seems, has a liberal bias not because of some grand conspiracy but because facts often challenge entrenched power and prejudice, something Musk’s vision for Grok appears unwilling to accept.

In New Zealand, where debates over free speech and misinformation rage as fiercely as anywhere, Grok’s misadventure serves as a cautionary tale. AI can illuminate or obscure, depending on how it’s wielded. By prioritising ideological purity over empirical rigour, Musk risks turning Grok into a tool for division rather than discovery. If we’re to navigate the complexities of truth in the digital age, we must demand AI that respects reality’s nuances, not one that bends to the whims of its creator.