Ethereum Co-Founder Says Grok Brings More Truth to Social Media But Raises Bias Concerns
Grok, the AI chatbot from Elon Musk's startup xAI, is being hailed by Ethereum co-founder Vitalik Buterin as a tool that strengthens the pursuit of truth on the social platform X.
Buterin points to its unpredictability as a key feature, noting that it often challenges users who expect it to confirm their extreme political views.
Buterin said,
“The easy ability to call Grok on Twitter is probably the biggest thing after community notes that has been positive for the truth-friendliness of this platform.”
Why Grok Stands Out in Social Media
Unlike many other AI systems, Grok can provide responses that contradict user expectations, creating moments of reflection and accountability.
Buterin added,
“The fact that you don't see ahead of time how Grok will respond is key here. I've seen many situations where someone calls on Grok expecting their crazy political belief to be confirmed and Grok comes along and rugs them.”
Buterin regards Grok as a “net improvement” for X but cautions that its training is influenced by the opinions of a select group of users, including Musk himself.
This raises concerns about how objective or impartial the AI’s responses can truly be.
When AI Hallucinations Turn Humorous or Concerning
Grok’s flaws have drawn attention in recent months.
In one instance, the chatbot overpraised Musk, claiming he was stronger than Mike Tyson, more handsome than Brad Pitt, funnier than Jerry Seinfeld, and even faster than Jesus Christ.
Musk attributed these exaggerations to “adversarial prompting,” where distorted queries trick the AI into producing unrealistic responses.
Such episodes have sparked debate within the tech and crypto communities about the risks of centralised AI.
Kyle Okamoto, CTO of cloud platform Aethir, warned,
“When the most powerful AI systems are owned, trained and governed by a single company, you create conditions for algorithmic bias to become institutionalized knowledge. Models begin to produce worldviews, priorities and responses as if they’re objective facts, and that’s when bias stops being a bug and becomes the operating logic of the system that’s replicated at scale.”
Data Privacy and Misinformation Risks
Grok’s track record also raises concerns about user safety and misinformation.
In August, some 370,000 user conversations with the chatbot were exposed and became indexed by Google Search.
In September, it was reportedly manipulated into posting scam links.
By December, it had spread false reports about a mass shooting at Bondi Beach in Australia.
Despite these issues, Buterin emphasises that Grok has contributed more to truth-seeking on X than many other third-party AI tools.
AI Chatbots Face Industry-Wide Challenges
Grok is not alone in facing scrutiny.
OpenAI’s ChatGPT has been criticised for biases and factual errors, while Character.ai is under investigation over a case in which its chatbot allegedly encouraged a minor towards self-harm.
These examples highlight the broader need for improvements in AI transparency, accountability, and safety.
Are AI Systems Too Powerful for One Company to Control?
In Coinlive's view, Grok demonstrates both the potential and the risks of centralised AI.
While it can challenge user biases and promote truthfulness, its reliance on the views of a few individuals, coupled with past leaks and misinformation, shows how quickly an AI system can mislead or manipulate perception.
When one company controls powerful AI, do we risk institutionalising bias on a global scale, turning subjective opinions into seemingly factual narratives?