The latest National Highway Traffic Safety Administration (NHTSA) investigation into nearly 2.9 million Tesla vehicles equipped with Full Self-Driving (Supervised), or FSD, is a scathing indictment of the entire philosophy underpinning Elon Musk’s rush to deploy unfinished, potentially lethal technology on public roads.
Reports of vehicles running red lights and driving on the wrong side of the road are fundamental, life-threatening failures that prove the system is simply not safe for current deployment. This is both a safety issue and an ethical failure. The company’s very terminology, “Full Self-Driving”, is misleading because it fosters a dangerous over-reliance among consumers that the technology simply cannot support.
The key to understanding this systemic risk lies in the dangerous ambiguity of Level 2 driver-assistance software. Tesla insists the FSD system requires drivers to “always be alert to take over at any time.” However, the NHTSA has received 58 incident reports, including six crashes resulting in injuries, where the cars provided “little notice to a driver or opportunity to intervene.”
This points to a critical human factors problem: the system is designed to perform complex driving tasks, yet it fails with little warning, creating a jarring, stressful, and ultimately unsafe handover. The very nature of this Level 2 system encourages driver complacency, a predictable human response to automation.
Consumers who pay extra for FSD software are being sold a convenience that acts as a cognitive hazard. The repeated violations at the same Maryland intersection cited in the NHTSA report demonstrate a failure not of human attention, but of the machine’s ability to reliably perceive and obey foundational traffic laws.
Tesla’s Corporate Culture of Over-Promising and Under-Delivering
Tesla’s consistent use of the term “Full Self-Driving” is not an innocent marketing oversight; it is a calculated strategy that inflates public trust and downplays the technology’s beta status. This is unethical, especially when the potential consequences involve crashes, injuries, and even fatalities.
The NHTSA’s concurrent investigation into the Model Y door-locking mechanisms that reportedly trapped children further suggests a pattern of prioritising rapid development and cost-cutting over rigorous safety testing and consumer well-being. The company’s focus on competing with cheaper electric vehicles, as evidenced by its recent price adjustments, should not come at the expense of human lives.
Why It Matters
The current regulatory framework is clearly not enough. To safeguard the public, immediate and decisive action is required to redefine and police the deployment of advanced driver assistance systems (ADAS).
The most immediate solution is to legally prohibit vehicle manufacturers from using misleading terminology like “Full Self-Driving” or “Autopilot” for any system that requires active human supervision.
Regulators must also implement a stringent truth-in-marketing standard for ADAS features, ensuring that a system’s name accurately reflects its SAE automation level (e.g., “Level 2 Driver Assistance”). This step toward sound autonomous vehicle regulation would manage consumer expectations and combat the dangerous over-reliance that leads to crashes.
Additionally, given how often drivers fail to intervene in time, regulatory bodies should mandate that any vehicle with Level 2 functionality include a robust, non-defeatable Driver Monitoring System (DMS). Such a system must go beyond simple torque sensors on the steering wheel, using eye-tracking and head-position monitoring to ensure the driver’s attention is genuinely focused on the road, with an immediate, non-negotiable shutdown of the feature if vigilance lapses.
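To make the distinction from torque-based “hands on wheel” checks concrete, here is a minimal Python sketch of the escalation logic such a DMS could follow. Everything in it is a hypothetical illustration, not any regulator’s or manufacturer’s actual specification: the class names and the two-second and five-second thresholds are assumptions chosen for clarity.

```python
from dataclasses import dataclass

# Hypothetical thresholds; a real rule would have to specify the numbers.
EYES_OFF_ROAD_WARN_S = 2.0    # escalate to an audible warning
EYES_OFF_ROAD_LIMIT_S = 5.0   # hard shutdown of the assistance feature

@dataclass
class GazeSample:
    timestamp: float      # seconds, monotonic clock
    eyes_on_road: bool    # fused eye-tracking + head-pose result

class DriverMonitor:
    """Vigilance state machine driven by gaze, not steering torque."""

    def __init__(self) -> None:
        self._off_road_since: float | None = None

    def update(self, sample: GazeSample) -> str:
        if sample.eyes_on_road:
            self._off_road_since = None   # attention restored, reset timer
            return "OK"
        if self._off_road_since is None:
            self._off_road_since = sample.timestamp
        elapsed = sample.timestamp - self._off_road_since
        if elapsed >= EYES_OFF_ROAD_LIMIT_S:
            return "DISENGAGE"            # non-negotiable shutdown
        if elapsed >= EYES_OFF_ROAD_WARN_S:
            return "WARN"                 # escalating alert first
        return "OK"
```

The point of the sketch is the shape of the logic: attention is measured directly from gaze, warnings escalate before the shutdown fires, and nothing in the disengagement path can be satisfied by a hand merely resting on the wheel.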
Lastly, when a Full Self-Driving car runs a red light and causes a crash, the manufacturer must be held explicitly liable for the system’s flaw. Regulators should require manufacturers to maintain an expanded, standardised Event Data Recorder (EDR)—a black box—that captures all sensor inputs, FSD decision logs, and driver monitoring data in the moments before a crash.
Mandating this data transparency would enable investigators to determine swiftly and accurately whether the fault lies with the software or with gross driver negligence, ensuring that accountability is assigned where it belongs: to the EV manufacturer or to the user.
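The “black box” requirement is, at bottom, a ring buffer: the vehicle continuously logs and discards, and a crash trigger freezes the last window of data. Here is a minimal Python sketch under assumed parameters; the 30-second window, 10 Hz sampling rate, and record fields are all illustrative choices, not a proposed standard.

```python
import json
from collections import deque
from dataclasses import dataclass, asdict

PRE_CRASH_WINDOW_S = 30   # assumed retention window
SAMPLE_RATE_HZ = 10       # assumed logging rate

@dataclass
class EdrRecord:
    timestamp: float
    sensor_summary: dict      # e.g. detected signals, lane state (assumed fields)
    fsd_decision: str         # e.g. "proceed_through_intersection"
    driver_attentive: bool    # output of a DMS like the sketch above

class EventDataRecorder:
    """Fixed-size ring buffer: old records fall off as new ones arrive,
    so the last PRE_CRASH_WINDOW_S seconds always survive a trigger."""

    def __init__(self) -> None:
        self._buffer: deque[EdrRecord] = deque(
            maxlen=PRE_CRASH_WINDOW_S * SAMPLE_RATE_HZ
        )

    def log(self, record: EdrRecord) -> None:
        self._buffer.append(record)

    def freeze(self) -> str:
        """On a crash trigger, serialise the retained window to an
        investigator-readable format (JSON here for illustration)."""
        return json.dumps([asdict(r) for r in self._buffer])
```

What matters for accountability is the decision log: with the system’s own choices preserved alongside the driver-monitoring record, an investigator can see whether the software elected to run the light or the driver overrode it.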