On October 10, 2024, Tesla unveiled its much-anticipated Cybercab, a fully autonomous robotaxi without a steering wheel or pedals, at the “We, Robot” event in Los Angeles. Billed as a cornerstone of Tesla’s vision to redefine transportation, the Cybercab promises to leverage the company’s Full Self-Driving (FSD) technology to deliver a safer, more efficient future. CEO Elon Musk projected volume production to begin in 2026, with a price tag under $30,000, and announced a pilot robotaxi service in Austin, Texas, set to start taking fares in June 2025. Yet, despite this bold innovation, legacy media outlets are quick to spotlight Tesla’s setbacks, often framing accidents involving FSD as evidence of its dangers. This raises a critical question: is the media’s scrutiny rooted in genuine safety concerns, or is it fueled by a narrative that paints Elon Musk as the villain?
The Cybercab is Tesla’s boldest step toward a world where driving is fully automated. With butterfly-wing doors, wireless inductive charging, and an AI-driven system relying solely on cameras, the two-seater vehicle embodies Tesla’s commitment to cutting-edge technology. Starting in June 2025, Cybercabs and other Tesla models, like the Model Y, will offer driverless rides in Austin through a Tesla app, marking the debut of the company’s autonomous ride-hailing network. Musk envisions a future where Cybercabs operate as a robotaxi fleet, allowing owners to earn income by deploying their vehicles when not in use, with plans to scale to millions of units by late 2026.
At the heart of this vision is Tesla’s FSD software, which uses advanced artificial intelligence trained on driving data from millions of Tesla vehicles, enabling cars to navigate complex environments without human intervention. Musk claims autonomous vehicles will eventually be 10-20 times safer than human-driven cars, and while Tesla’s own crash data falls short of that mark, it already compares favorably to human drivers.
Tesla’s safety metrics provide compelling evidence that FSD and its precursor, Autopilot, outperform human drivers in crash prevention. In late 2024, Tesla recorded one crash per 5.94 million miles driven with Autopilot engaged, compared to the U.S. national average of one crash per 700,000 miles. Even without Autopilot, Tesla vehicles are safer, with one crash per 1.29 million miles, thanks to their robust design and passive safety features like low rollover risk and rigid structures.
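Taking the figures above at face value, simple arithmetic shows the size of the gap. The sketch below uses only the numbers quoted in this article; the variable names are illustrative, not from any Tesla dataset.

```python
# Quick check of the crash-rate figures quoted above (all numbers from the text).
miles_per_crash_autopilot = 5_940_000   # Tesla, late 2024, Autopilot engaged
miles_per_crash_tesla_only = 1_290_000  # Tesla vehicles without Autopilot
miles_per_crash_us_avg = 700_000        # cited U.S. national average

# Ratio: how many times more miles between crashes than the U.S. average.
autopilot_vs_avg = miles_per_crash_autopilot / miles_per_crash_us_avg
tesla_vs_avg = miles_per_crash_tesla_only / miles_per_crash_us_avg

print(f"Autopilot vs. average: {autopilot_vs_avg:.1f}x more miles per crash")
print(f"Tesla without Autopilot vs. average: {tesla_vs_avg:.1f}x")
```

By these numbers, Autopilot miles see roughly 8.5 times more miles between crashes than the national average, and Teslas without Autopilot about 1.8 times more. One caveat worth keeping in mind is that these are Tesla’s own reported metrics, and crash-rate comparisons depend heavily on where and how the miles are driven.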
These statistics suggest Tesla’s technology significantly reduces accident rates. For context, human drivers in the U.S. cause approximately 6 million crashes annually, with over 40,000 fatalities. If FSD achieves even a fraction of Musk’s projected safety gains, it could save thousands of lives yearly. The Cybercab, built from the ground up for autonomy, aims to push these numbers further by eliminating human error entirely.
Despite the data, Tesla’s FSD has faced intense scrutiny, particularly when accidents occur. Federal investigations have followed incidents like a fatal pedestrian crash in 2024 and a motorcyclist’s death in 2023, often citing driver inattention or technology limitations in low-visibility conditions. Media outlets frequently amplify these incidents, framing FSD as a risky experiment. Headlines emphasize crashes over context, rarely comparing Tesla’s safety metrics to the broader epidemic of human-driven accidents, which include over 7,000 pedestrian fatalities annually in the U.S.
This selective focus fuels a narrative that Tesla’s technology is inherently unsafe, overshadowing its potential to revolutionize transportation. When a Tesla struck a pedestrian in 2024, reports noted sun glare as a factor but gave little weight to the fact that the human driver was never charged, a sign that investigators saw contributing circumstances beyond the software. Yet the coverage often implies Tesla’s technology is uniquely flawed, ignoring similar challenges faced by competitors like Waymo or Cruise.
Why does legacy media seem so eager to pounce on Tesla? A recurring theme is the personal animosity toward Elon Musk, often encapsulated in the phrase “Elon bad.” Musk’s polarizing persona—marked by his controversial statements and unorthodox leadership—has made him a lightning rod for criticism. Media outlets frequently tie Tesla’s challenges to Musk’s behavior, portraying his ambitious timelines, like the 2019 promise of robotaxis by 2020, as evidence of unreliability, even though delays are common in cutting-edge tech.
This bias manifests in several ways. Tesla’s marketing of FSD as “Full Self-Driving” has been called misleading, with critics arguing it overstates the system’s capabilities. Tesla’s camera-only approach, while controversial compared to lidar-based systems, is often dismissed outright, ignoring the potential for data-driven improvements from Tesla’s vast fleet. And the media’s focus on Tesla’s accidents contrasts with its relative silence on human-driven crashes, which are far more frequent. This double standard suggests a preconceived narrative, in which Musk’s leadership and Tesla’s disruption of traditional automakers make the company an easy target.
To be clear, Tesla is not above scrutiny. Investigations into FSD’s performance in edge cases, like complex intersections or low-visibility conditions, are warranted given the stakes of autonomous driving. Tesla’s recall of thousands of Cybertrucks in 2024 for a stuck accelerator pedal shows that even its hardware isn’t infallible. However, the media’s disproportionate focus on Tesla’s missteps risks distorting public perception. Autonomous driving is a nascent field, and competitors have faced similar setbacks, yet they receive less vitriol.
The Cybercab’s success hinges on Tesla overcoming regulatory hurdles and proving FSD’s reliability in real-world conditions. Musk’s claim of a 2026 production launch, following the 2025 pilot, is ambitious, and some question its feasibility given the leap to unsupervised FSD. But writing off Tesla’s vision because of Musk’s bravado or isolated incidents is shortsighted. The company’s safety data suggests FSD is already reducing crashes, and the Cybercab could push that further.
The Tesla Cybercab represents a daring step toward a future where transportation is safer, cheaper, and more sustainable. Its reliance on FSD, backed by data showing crash rates far below human drivers, offers hope for reducing the carnage of human error on roads. Yet, legacy media’s fixation on Tesla’s accidents, often framed through an “Elon bad” lens, risks undermining this potential. By cherry-picking incidents and downplaying context, outlets perpetuate a narrative that serves clicks more than truth.
As Tesla moves toward its June 2025 pilot and 2026 Cybercab launch, the public deserves a balanced discourse—one that holds Tesla accountable for its flaws but also recognizes its strides in safety and innovation. The media should compare FSD’s crash rates to human-driven vehicles, scrutinize competitors with equal rigor, and question whether their coverage is swayed by Musk’s polarizing persona. Only then can we have an honest conversation about the future of autonomous driving and whether Tesla’s Cybercab will deliver on its promise to transform our world.