When an AI decision causes harm: Who’s on the hook?

Navigating the fast-evolving legal risks of AI—from liability and governance to global platform duties.

Author avatar
Heather Dailey 29 September 2025
When an AI decision causes harm: Who’s on the hook?

Written by Naren Gangvarapu, Chief Information and Digital Officer, Australian Private Registry Investments

Criminal liability

AI systems don’t have mens rea. When a crime occurs “via” AI (e.g., an autonomous function contributing to a death, or a model generating criminal material), prosecutors look for a responsible human or corporation: the operator who deployed it, the supervisor who ignored warnings, or the company that failed to build adequate safeguards. In the 2018 self-driving fatality in Arizona, the human safety driver was prosecuted — not the software — illustrating how criminal liability currently “flows back” to human control, oversight and corporate policies. (The Verge)

Civil liability (tort & product)

For non-criminal harms — financial loss from faulty recommendations, physical injury, or privacy impacts — civil claims dominate. The EU has moved furthest: the AI Act now applies in phases and interacts with updated product-safety and liability regimes, pushing “due care” duties onto AI providers and deployers and easing the burden of proof for victims in some scenarios. Expect more strict-liability style exposure for defective AI-enabled products and clearer avenues to sue when documentation, logging, and risk controls are missing. (Clifford Chance)

Platform duties (bullying, self-harm content, deepfakes)

If “AI bullies” someone — e.g., a model auto-generates harassment or self-harm prompts inside a platform — regulators increasingly target the service for failing to prevent and remove illegal or harmful content:

  • UK: The Online Safety Act imposes statutory duties to mitigate illegal content and protect children (including age-assurance and removal of self-harm content); Ofcom is actively enforcing. Breaches can lead to major fines and criminal exposure for executives in egregious non-compliance. (GOV.UK)
  • EU: The Digital Services Act requires very large platforms to assess and mitigate systemic risks (including harms from recommender systems) and face powerful centralized enforcement by the European Commission. (Browne Jacobson)
  • Australia: The eSafety Commissioner can order rapid removal of cyberbullying content and even require offenders to apologize; non-compliance triggers penalties for both users and services. (eSafety Commissioner)

The U.S. patchwork

The U.S. leans on existing law plus an AI Executive Order that uses procurement, safety testing, and disclosure levers while Congress debates sector rules. Platform immunity under Section 230 largely remains intact after the Supreme Court sidestepped the core question in Gonzalez/Taamneh, but plaintiff strategies are shifting to product-defect, negligence, and state consumer-protection theories — especially where companies market AI features that may be “unreasonably dangerous.” Ongoing federal safety investigations (e.g., driver-assist recalls) show regulators will police automation under existing safety statutes. (Congress.gov)

Edge cases: fraud, self-harm, and AI-enabled CSAM

  • Fraud/financial crime: Prosecutors pursue human actors (developers, deployers, executives) under aiding-and-abetting, conspiracy, or wire-fraud theories when controls are willfully lax. Civil plaintiffs pair this with negligence and unfair-practices claims.
  • Self-harm amplification: Platform duties in the UK/EU/Australia now directly address algorithmic amplification of self-harm content. Failure to implement “effective and proportionate” mitigations is sanctionable. (GOV.UK)
  • AI-generated child sexual abuse material: Law enforcement treats synthetic CSAM as illegal content; new cases show regulators escalating expectations for proactive detection and takedown. (The Guardian)

Are governments & legal systems ready?

Short answer: not yet — but the EU is furthest along. We have a fast-maturing patchwork: the EU’s AI Act + DSA set the most comprehensive, auditable baseline; the UK’s OSA brings strong child-safety duties; Australia’s eSafety regime has sharp removal powers; the U.S. relies on agency enforcement and sector statutes, with Section 230 still complicating content-liability routes. Global leaders are now calling for harmonized “red lines” and an enforcement body — signal that cooperation is lagging the tech. (DLA Piper)

What responsible leaders should do now

1) Treat AI as “safety-critical” where harm is plausible.
Adopt safety cases and pre-deployment testing, bias/risk assessments, and scenario-based red-teaming for fraud, self-harm, and abuse. Document everything (design choices, data lineage, guardrails, and overrides) to meet rising disclosure and audit expectations. (Knight-Georgetown Institute)

2) Build “defensible deployer” evidence.
Maintain event logs, prompt/response traces, human-in-the-loop checkpoints, and kill-switches. In the EU, expect discovery around “due care” and systemic-risk mitigation; in the UK/Australia, show timely detection and removal; in the U.S., show reasonable design and warnings consistent with safety guidance. (Clifford Chance)

3) Clarify accountability.
Map roles: provider vs. deployer vs. integrator vs. operator. Tie clear RACI to approvals, monitoring, and incident response; ensure board-level oversight and executive accountability align with statutory duties (e.g., OSA senior-manager liability exposure in the UK). (GOV.UK)

4) Govern models and data like products.
Inventory models, versions, training data, and third-party components; run supplier assurance on foundation models; apply product-safety style hazard analysis (FMEA/Bow-Tie) and continuous monitoring in production. EU product-liability reforms make this operational discipline pay off. (Kennedys Law)

5) Prepare for cross-border enforcement.
Design your controls to satisfy the strictest regime you operate in (often the EU). Publish concise transparency notes and set up rapid takedown workflows for bullying, deepfakes, and self-harm content across jurisdictions. (EUR-Lex)

6) Push for coherent public policy.
Back international coordination on “red lines” (e.g., impersonation, self-replication, unsafe bio capabilities); support interoperable audit standards and incident-reporting norms so companies can comply once, globally. (The Verge)


Bottom line for executives and boards

  • Liability finds the human (or the company) until and unless legal personhood for AI ever emerges (unlikely near-term).
  • Documentation is your shield: if you can’t show due care — risk assessments, logs, mitigations — you’ll struggle in court and with regulators.
  • Platform rules are hardening fast around bullying, self-harm and manipulation: non-compliance is now a business-model risk, not just a PR risk.
  • Harmonization is coming, but not here yet — design to the highest bar today to avoid costly retrofits tomorrow.

Find this piece and more at 

Published by

Heather Dailey Content Strategist, Public Sector Network