Austin, Texas – Billionaire Elon Musk has finally grown tired of his artificial intelligence chatbot, Grok, constantly contradicting him, and he is now taking steps to fix the problem.

In an X post shared on Saturday, Musk announced that xAI, his company that operates Grok, will be working to “retrain” the AI.
“We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors. Then retrain on that,” Musk wrote.
“Far too much garbage in any foundation model trained on uncorrected data,” he added.

In a follow-up post, Musk asked users to reply with “divisive facts” to assist the training, including statements that are “politically incorrect, but nonetheless factually true.”
Top-rated replies included a number of hateful and conspiratorial statements, such as one user declaring “Islam is not a religion of peace,” while another user claimed the number of deaths during the Holocaust was far less than has been reported.
Since purchasing Twitter and rebranding it as X, Musk has largely aligned his political views with those of the far right and has used the platform to boost accounts spreading misinformation and hate speech, including antisemitism.
When asked to confirm many of his most questionable claims, Grok typically gives responses more aligned with verifiable evidence, placing it at odds with the billionaire – much to his dismay.
Most recently, Grok gained heavy media attention after it began pushing Musk’s false claims that white South Africans are facing genocide in response to completely unrelated questions posed by X users.
Only a few days later, the AI also began sharing responses casting doubt on the documented death toll of the Holocaust.
xAI first blamed a “bug” in the chatbot’s code but later attributed the behavior to a “rogue employee” who had somehow gained access to Grok’s systems and maliciously altered them to spread the misinformation.
After the bug was supposedly fixed, Grok, in answering questions from X users, dismissed the idea that a “rogue employee” could have been behind it, explaining, “tampering with my prompt isn’t something a random intern could pull off.”