Tesla Facing NHTSA Investigation of “Full Self-Driving” After Deadly Collision
Tesla is facing an investigation from the National Highway Traffic Safety Administration (NHTSA) into its “Full Self-Driving” (FSD) system and whether it is safe to use in reduced-visibility conditions.
Transport Topics reports that the probe is in response to an incident in which a Tesla driver using FSD struck and killed a pedestrian, as well as other FSD-involved collisions that occurred in reduced roadway visibility conditions.
The NHTSA notes in an announcement that:
ODI has opened a Preliminary Evaluation of FSD (a system labeled by Tesla as a partial driving automation system), which is optionally available in the Model Year (MY) 2016-2024 Models S and X, 2017-2024 Model 3, 2020-2024 Model Y, and 2023-2024 Cybertruck. This Preliminary Evaluation is opened to assess:
- The ability of FSD’s engineering controls to detect and respond appropriately to reduced roadway visibility conditions;
- Whether any other similar FSD crashes have occurred in reduced roadway visibility conditions and, if so, the contributing circumstances for those crashes; and
- Any updates or modifications from Tesla to the FSD system that may affect the performance of FSD in reduced roadway visibility conditions. In particular, this review will assess the timing, purpose, and capabilities of any such updates, as well as Tesla’s assessment of their safety impact.
FSD is Tesla’s paid, premium driver assistance option. Transport Topics notes that the automaker charges $8,000 for it, though Tesla has also offered FSD as a free, month-long trial.
The NHTSA also tracks collisions involving the use of any automaker’s advanced driver assistance systems, including Tesla’s Autopilot and FSD. CNBC reports that “As of Oct. 1, 2024, the NHTSA had tracked 1,399 incidents in which Tesla’s driver assistance systems were engaged within 30 seconds of the collision, and 31 of those had resulted in fatalities.”
Transport Topics also reports “NHTSA has said there’s been ‘a critical safety gap’ between what drivers think Autopilot can do and its actual capabilities. That gap has led to foreseeable misuse of the system and avoidable crashes, according to the agency.”
Tesla has not commented on the investigation, but it recently held a marketing event at which CEO Elon Musk said the company expects to have “unsupervised FSD” running in Texas and California next year in its Model 3 and Model Y electric vehicles.
Per CNBC, “Musk has promised driverless vehicles for years. But Tesla has not yet produced or shown a vehicle that is safe to use on public roads without a human at the wheel, ready to steer or brake at any time.”
This is not the first time Tesla has found itself at the center of an investigation. Last January, the company recalled more than two million vehicles in the U.S. to address a defect in Autopilot’s driver attention safeguards. The recall followed a two-year NHTSA investigation into crashes that occurred while Autopilot was in use, some of which resulted in fatalities. It covered nearly every Tesla vehicle sold in the U.S. and was carried out through a software update.
What are the dangers of autonomous vehicles?
Autonomous vehicle systems like Tesla’s FSD present a variety of dangers:
- Software malfunctions. Autonomous vehicles rely heavily on complex algorithms and software. A coding error or software malfunction can result in a catastrophic accident, and if the system fails to recognize an obstacle or misinterprets road conditions, it may not respond in time to avoid a crash.
- Sensor failures. Self-driving cars use sensors – including cameras, radar, and LiDAR – to detect objects, pedestrians, and other vehicles. When these sensors become obstructed or damaged, the vehicle may not be able to properly perceive its surroundings. This increases the risk of accidents.
- Hacking and cybersecurity risks. Autonomous vehicles like Teslas are connected to networks, which can make them vulnerable to cyberattacks. A hacker could potentially take control of the vehicle, disable safety features, or manipulate driving commands, putting everyone on the road at risk.
- Legal liability. Determining liability in an autonomous vehicle accident can be challenging. Who is at fault – the manufacturer, software developer, or the human driver?
- Driver complacency. People often overestimate the capabilities of autonomous vehicles and become overly reliant on the technology. This can lead drivers to neglect their responsibilities behind the wheel, resulting in delayed reactions during emergencies.
- Regulatory gaps. The rapid development of autonomous technology is outpacing regulations, leading to inconsistent testing and safety standards. Without uniform guidelines, defective or untested technology could make its way to public roads, endangering everyone.
As you can see, victims of accidents involving autonomous cars face unique legal challenges and deserve experienced legal representation. If you or a loved one has been injured in an autonomous car crash, consult with the Columbus personal injury attorneys at Soroka & Associates LLC today. We can help you stand up for your rights and secure the compensation to which you’re entitled. Please call our office or submit our contact form to schedule a free consultation in Columbus today.