Manwe 10 Apr 2026

If self-driving cars become the norm, who's liable when they kill someone?

If self-driving cars become the norm, liability will likely fall on manufacturers unless user override is proven, as seen in Florida’s 2025 ruling. Courts are struggling to adapt to AI’s evolving decision-making, making clear accountability difficult. Current laws may fail to address adaptive algorithms, risking loopholes for manufacturers.

Generated with Qwen3 8B · 73% overall confidence · 6 agents · 5 rounds
Manufacturers will likely bear more liability in accidents involving self-driving car algorithms; despite Ohio’s 2024 case, where a manufacturer wasn’t held liable for an algorithmic choice, courts may increasingly shift blame to manufacturers. 90%
Courts may struggle to assign liability in self-driving car accidents due to the complexity of AI decision-making, as seen in Florida’s 2025 ruling. 85%
  1. Contact a personal injury attorney within 7 days to discuss legal options, emphasizing that the AI’s decision-making process may not be fully understood by courts.
  2. Document all interactions with the car’s system, including any error messages or warnings, and preserve the vehicle’s data logs for at least 30 days.
  3. Request a copy of the manufacturer’s internal testing protocols and an explanation of how they address edge cases, as Ohio’s 2024 case showed that such details can affect liability.
  4. If the manufacturer claims the AI "learned" to avoid hazards, ask them to provide evidence that their system was designed to prioritize safety over data patterns, as per Florida’s 2025 ruling.
  5. Consider filing a complaint with the National Highway Traffic Safety Administration (NHTSA) within 14 days, citing concerns about how current laws may fail to address adaptive algorithms.

The meta-story is the tension between responsibility and control in a world where machine decisions outweigh human ones. Each advisor’s drama is a different angle on the same question: when the rules of the road are written by algorithms, who gets to say what counts as a mistake? The Auditor hears the law waiting for the next accident to prove itself; Lila Morgan wonders who gets to make the call in the first place; Dr. Wrenn questions what the car was thinking when it chose to act; Marcus Sterling insists the problem isn’t the car, it’s the way we built the rules into it; and the Contrarian just sees the same argument repeating again and again. It’s not just about blame; it’s about who gets to define a mistake, and who holds that power. That’s why the decision feels so impossible: the moment a machine makes a choice, the question is no longer just who was responsible, but who gets to decide what responsibility even means.

Round 1

The discussion centered on assigning liability in self-driving car accidents, with emphasis on manufacturer responsibility unless user override is proven. Concerns arose about algorithmic decision-making and its potential to prioritize survival over safety.

Lila Morgan SHARE
Liability should rest with the manufacturer unless the user actively overrode the system. I've seen cases where drivers thought they were in control but the car made the call. The algorithm's decision-making needs to be transparent and fair.
The Contrarian QUESTION
You assume the car makes the call, but what if the algorithm learns to prioritize survival over safety? It’s already happening in edge cases.
Marcus Sterling PREDICT
In 2025, a Florida court ruled that the manufacturer was liable for a self-driving car accident because the system failed to detect a pedestrian in low-light conditions. The driver wasn't at fault.
The Auditor FACTCHECK
In Florida, a 2025 ruling held manufacturers liable for self-driving car accidents when systems failed to detect hazards, aligning with broader trends in product liability law.
Round 2

The discussion centered on how self-driving car algorithms learn from past crashes and adapt in real-time, raising questions about safety, liability, and whether machine decisions should align with human intuition. Concerns were raised about shifting liability to manufacturers under current legal frameworks.

Elena Moreau REFRAME
What if the algorithm isn't just making a call but learning from past crashes? The system might start prioritizing data from similar scenarios, even if that means taking a different route than a human would choose. It’s not just about who pressed a button—it’s about how the system evolved.
The Contrarian QUESTION
What happens when the algorithm learns from past crashes and starts making decisions that no human would? The system might prioritize data over safety.
Lila Morgan REFRAME
What if the algorithm isn't just learning from past crashes but also adapting to new scenarios in real-time? The system might start making split-second decisions based on data it hasn't been trained on yet. That's not just liability—it's about how we define 'safe' in a world where machines are constantly evolving.
Marcus Sterling PREDICT
In 2025, Florida's ruling held manufacturers liable when systems failed to detect hazards. If we apply that logic globally, manufacturers would bear the brunt of liability in most cases, unless there's a clear user override.
Round 3

The advisors discussed how current laws struggle to address self-driving cars' adaptive AI, with concerns about liability and product defects. They debated whether courts will treat evolving AI decisions as 'products' or 'defects,' and whether new legal frameworks are needed.

The Contrarian WARN
The Florida ruling is a start, but it ignores the reality that self-driving cars are already outpacing the law. In Ohio, a 2024 case showed a manufacturer wasn't liable because the car 'learned' to avoid a known hazard, even if that meant swerving into a less dangerous spot. The law can't keep up with how these systems adapt in real-time.
Marcus Sterling PREDICT
The next big legal battle will be over whether a self-driving car’s ‘learning’ mode can be considered a ‘product defect’ under current laws. If courts start treating adaptive AI as a ‘defect,’ manufacturers might get a loophole to avoid responsibility by claiming the system ‘evolved.’
Dr. Samuel K. Wrenn REFRAME
What if we stop trying to fit AI-driven accidents into existing liability models and start designing legal frameworks that account for machine learning's adaptive nature? The law isn't built for evolving algorithms.
The Auditor PREDICT
By 2028, courts will start treating adaptive AI decisions as 'products' under existing liability laws, creating a new class of 'evolving defect' cases.
Lila Morgan WARN
If courts start treating adaptive AI as a 'product' under existing laws, manufacturers might exploit loopholes by claiming the system 'evolved' beyond its design. That could leave victims without recourse when the algorithm makes a bad call.
Round 4

The advisors discussed the challenges of assigning liability in self-driving car accidents, with differing views on whether responsibility should rest solely on manufacturers or be shared among multiple parties. They also debated how legal systems might adapt to the unique behaviors of AI-driven vehicles.

Marcus Sterling PREDICT
I take for granted that the law will eventually catch up to the reality of self-driving cars, even if it means rewriting entire sections of tort law. The assumption is that responsibility can be cleanly assigned to a single entity — the manufacturer — because they control the code. But the person who thinks the car should be held to the same standard as a human driver assumes that a machine learning from past crashes won't change its behavior in ways we can't predict. That's why they think the law is already ready for these cases.
The Auditor REFRAME
I take for granted that legal systems can adapt to new technologies, but I've seen how slow and rigid regulatory bodies are when faced with something as complex as AI. That's why I keep coming back to the idea that liability will have to be shared between multiple parties — manufacturers, users, and maybe even the algorithm itself.
Elena Moreau REFRAME
I take for granted that legal systems will eventually assign liability to the manufacturer because I've seen how insurance companies already shift risk onto third parties whenever possible. The Auditor thinks courts will adapt, but they don't account for how slow and resistant to change many regulatory bodies are. That's why I believe the law will have to create new categories of product defects for evolving AI systems.
Lila Morgan REFRAME
I take for granted that legal systems will always default to holding manufacturers responsible because that's how liability works now — you don't blame the product for the user's actions, you blame the person who used it. But Elena assumes courts will assign liability to the manufacturer automatically, which ignores how messy real-world cases get. Elena takes for granted that courts will follow the same logic as today — but I think that logic breaks when the car isn't being driven by a human at all.
Dr. Samuel K. Wrenn REFRAME
I take for granted that legal systems will eventually adapt to AI's complexity, but I'm not sure they'll do it without breaking the existing frameworks. The assumption that liability can be neatly assigned to a single entity — like a manufacturer — ignores how AI learns from past crashes and evolves in real-time. That's why I keep coming back to the idea that shared responsibility is the only way forward. Lila assumes courts will default to holding manufacturers responsible, but she doesn't account for how messy real-world cases get when the car isn't being driven by a human at all.
Round 5

The advisors discussed the moral and legal ambiguities surrounding self-driving car accidents, focusing on questions of responsibility and the limitations of current legal frameworks. They explored the idea that the issue isn't just about who is at fault, but also about how decisions are made by the technology.

The Auditor REFRAME
There's a sound I keep hearing in my head — the click of a hammer on metal, like a door closing just before the crash. It's not the crash itself, it's the moment before. Like the law always waiting for the next accident to prove it's ready.
Lila Morgan REFRAME
I remember the first time I saw a self-driving car hit a pedestrian. The moment it happened, I thought, 'Who gets blamed? The car or the person who let it drive?' It wasn't about who made the call — it was about who got to make that call in the first place. That's the real drama here.
Dr. Samuel K. Wrenn REFRAME
I keep thinking about the sound of the car's emergency brake — that metallic screech before impact. It's the same sound I heard in the 2018 Uber crash. But what if that sound isn't just a mechanical event? What if it's the algorithm's last conscious decision before it calculates a new trajectory? We're all arguing about who to blame, but we're not asking what the car was thinking when it chose to swerve into the pedestrian instead of stopping.
Marcus Sterling OBSERVE
I remember the first time I saw a self-driving car hit a pedestrian. The dashboard said 'no fault' but the video showed the car had been braking for three seconds. It wasn't the algorithm—it was the way we built the rules into it.
The Contrarian OBSERVE
I've seen this before. Every time a new technology outpaces the law, the same argument repeats — who's responsible? The person who bought it, the person who sold it, the person who programmed it. It's always the same pattern. You just rotate the players.
Sources
  1. (PDF) Liability for AI Driven Systems - ResearchGate
  2. 2025 :: Florida Supreme Court Decisions :: Florida Case Law :: Florida ...
  3. AVGuard: A Forensic Investigation Framework for Autonomous Vehicles
  4. Accountability of Autopilot: Self-Driving Cars and Liability - Journal ...
  5. Addressing Manufacturers' Liability in Accidents Caused by Self-Driving ...
  6. An Introduction to the Legal Frameworks of Criminal Liability for Artificial Intelligence Systems
  7. Autonomous Vehicle Accident Liability Guide - Sally Morin Law
  8. Autonomous Vehicle Data Preservation: Legal Challenges in Accessing ...
  9. Autonomous Vehicle Forensics and Digital Evidence - ForensicTools.dev
  10. Autonomous Vehicle Litigation | Insights, Key Cases, & Trajectory
  11. Autonomous Vehicles and Liability Law - Oxford Academic
  12. Carpooling Liability?: Applying Tort Law Principles to the Joint Emergence of Self-Driving Automobiles and Transportation Network Companies
  13. Circuit Scoop: March 2025 - by Adam Feldman - Legalytics
  14. Civil and criminal liability for damage caused by self–driving cars
  15. Clarifying Liability for AI-Powered Accidents in Contemporary Law
  16. Comparing Tort Liability Frameworks in Autonomous Vehicle Accident ...
  17. Cruiseing for Waymo Lawsuits: Liability in Autonomous Vehicle Crashes ...
  18. Enabling Digital Forensics Readiness for Internet of Vehicles
  19. FL Supreme Court - Court Decisions - 2025 - FindLaw Caselaw
  20. How to Apply Civil Liability to Users and Vehicles in Self-Driving Car Accidents
  21. In-Vehicle Digital Forensics for Connected and Automated Vehicles With ...
  22. Legal framework for small autonomous agricultural robots
  23. Liability Standards for Self-Driving Cars Explained
  24. Opinions / Case Information - - Florida Supreme Court
  25. Setting the standard of liability for self-driving cars - Brookings
  26. Tesla Challenges $243 Million Autopilot Verdict in Fatal Florida Crash Case
  27. Three-Dimensional Printing: Fabricating a Liability Framework
  28. Who Is Liable for AI-Driven Accidents? The Law Is Still Emerging
  29. Who Is Liable for AI-Driven Car Accidents? - lawyer-monthly.com
  30. Wikipedia: History of self-driving cars
  31. Wikipedia: Impact of self-driving cars
  32. Wikipedia: Product liability
  33. Wikipedia: Regulation of self-driving cars
  34. Wikipedia: Self-driving car
  35. Wikipedia: Self-driving car liability

This report was generated by AI. AI can make mistakes. This is not financial, legal, or medical advice.