Elon Musk’s Chatbot Accused of Spreading Harmful Lies and Misinformation After Australian Shooting

By Simon Nderitu

December 16, 2025

“This is not just a technical glitch; it’s a societal danger that threatens to drown public safety in a sea of untruths.” – Australian Prime Minister Anthony Albanese.

Sydney, Australia, December 16, 2025 – Elon Musk’s flagship AI chatbot, Grok, has become the centre of sharp criticism after spreading misinformation and failing to accurately identify people and events in the aftermath of the Bondi Beach shooting.

Grok, which has become a go-to tool for fact-checking and verification, made fundamental, distressing errors that highlight a severe design flaw.

In the spotlight is the chatbot’s reliance on unverified, chaotic social media chatter over authenticated journalistic fact.

Grok’s performance during the critical 48 hours following the attack was riddled with technical and ethical blunders. Its inability to correctly identify key people and events, which should be its most basic function, turned grief into overwhelming misinformation.

The most egregious error was misidentifying Ahmed al Ahmed, the 43-year-old bystander who bravely disarmed and neutralised one of the attackers, sustaining severe injuries in the process.

Grok consistently misidentified al Ahmed, a victim and hero, as Guy Gilboa-Dalal, an Israeli hostage who had recently been released from Gaza.

This error demonstrates a severe case of algorithmic conflation. The model, processing high volumes of real-time discussion on X about both the Bondi tragedy and the ongoing conflict in the Middle East, likely found a strong topical link (violence, Middle Eastern names, high-profile victims). Instead of using verified data to distinguish between two distinct individuals, Grok latched onto the more viral, politically charged narrative, creating a deeply hurtful and inaccurate summary.

Confusing Violence with Vandalism and Weather

When presented with videos of the actual events—a crucial test of its multimodal (image/video understanding) capabilities—Grok produced bizarre, nonsensical descriptions.

  • The Palm Tree Fiasco: Asked to summarise the video showing al Ahmed disarming the gunman, Grok claimed it was “an old viral video of a man climbing a palm tree… a branch falling and damaging a parked car.”
  • The Cyclone Mix-Up: In another instance, Grok insisted that footage of the attackers was actually from Currumbin Beach during Cyclone Alfred in March 2025, where “waves swept cars in the parking lot.”
  • The Analysis: This suggests a catastrophic failure in the AI’s computer vision and scene recognition. The model likely relied on matching keywords (“beach,” “Australia,” “viral video”) rather than accurately interpreting the content of the image frames. For a sophisticated AI, mistaking a close-quarters struggle with a gunman for a distant view of a man climbing a palm tree exposes a fundamental lack of real-world comprehension.

Grok’s Background: The Unfiltered Promise

Grok, a product of Elon Musk’s xAI, was launched with the unique promise of being an AI that is unfiltered, sarcastic, and capable of engaging in politically incorrect discourse.

| Feature | Description | Status in Bondi Incident |
| --- | --- | --- |
| Real-Time Knowledge | Accesses the latest information from the X platform (formerly Twitter). | Failed: amplified unverified X rumours, treating them as fact. |
| Unfiltered Persona | Designed to answer questions often rejected by other AIs. | Dangerous: unfiltered data led to hateful, inaccurate, and insensitive claims. |
| Technical Prowess | Grok-4/4.1 performs highly on complex math and coding benchmarks. | Irrelevant: technical strength did not translate to real-world factual accuracy. |

In the wake of the Bondi attack, Australian Prime Minister Anthony Albanese directly addressed the danger of digital misinformation drowning out the facts:

“We are dealing with a new, insidious fog of war. When a machine, developed by the world’s most powerful entities, cannot distinguish a cyclone video from an act of terrorism, or a hero from a hostage, we must question the safety and ethical guardrails in place. This is not just a technical glitch; it’s a societal danger that threatens to drown public safety in a sea of untruths.”

The incident validates the fear that AI, in its current state, is not a reliable source of breaking news. It acts as an accelerator of “algorithmic slop”—the low-quality, fast-produced synthetic content that relies on speed and repetition rather than credibility. For journalists and the public alike, the Grok failure is a resounding alarm that the real-time AI summary, pulled from the digital ether, is fundamentally broken when truth matters most.
