February 2, 2026
The UK Jurisdiction Taskforce (UKJT) has launched a consultation on its draft Legal Statement on Liability for AI Harms, which considers in what circumstances, and on what legal bases, English common law will impose liability for loss that results from the use of AI.
The UKJT describes its role not as proposing policy-based reforms, but as explaining, through the publication of its Legal Statements, how existing common law is likely to contend with new private law matters arising from emerging technology. It has already, for example, issued a Statement on the legal status of crypto assets and smart contracts (which we discussed here).
The latest draft Statement concerns liability for harms caused by AI, an issue which the UKJT has previously identified as needing clarification, against a backdrop of “genuine market uncertainty about how and when developers of AI tools and those that utilise them might incur legal liability when things go wrong”.
While the draft Statement is detailed (running to over 80 pages), it is nonetheless deliberately limited in scope, excluding, for example, considerations of criminal, competition, regulatory, and intellectual property law. As the UKJT explains, such omissions are necessary as “a Legal Statement on private law liability for physical and economic harm caused by use of AI is more pressing and should not be delayed to allow for a more wide-ranging analysis”.
Similarly, the Statement largely does not address what are acknowledged to be the majority of cases involving potential liability for loss caused by the use of AI, namely those governed by the terms of a contract. In such cases, liability will turn on how the contract is drafted and how risk is allocated between the parties.
Instead, the UKJT’s emphasis is on circumstances where AI harm occurs and no contract exists between the parties. This means that much of its focus is on the law of negligence, as the Statement explores how duties of care are likely to arise, their scope, the standards that courts are likely to apply to various parties, and how principles of causation can apply in the context of autonomous and opaque systems. It does so across a range of causes of action, from professional negligence to defamation and liability for negligent misstatement arising from chatbots making false statements. It also considers the role of vicarious liability and the place of product liability, noting the Law Commission’s review of this entire area (discussed here), prompted in large part by emerging technology.
Ultimately, whilst the Statement includes interesting discussions from experts in the field about, for example, how likely it is that various actors along the AI supply chain will be held liable in certain circumstances, or what constitutes “reasonable care and skill” when a professional uses AI (including whether a failure to use AI could itself result in liability), the overall message is one of cautious reassurance: that, although AI throws up new factual questions, existing legal frameworks are well equipped to handle cases involving AI-related harms.
The consultation, which closes on 13 February, invites views on whether the subjects addressed in the Statement are appropriate and useful, as well as whether other issues within the scope of the project ought to be addressed. It can be read in full here.