A “bi-partite” AI Liability framework: Compensatory measures to enforce compliance with preventive measures

Redefining Liability

 

Current regimes and traditional notions of liability, in Europe and elsewhere, have been challenged by the emergence of Artificial Intelligence (AI) and its specific features. On the one hand, the combination of openness, data-drivenness and vulnerability exposes further categories of protected interests, such as privacy, confidential information and cybersecurity, to harm, which in turn challenges the notion of damage. On the other hand, the characteristics of autonomy, unpredictability, opacity and complexity affect the notions of causation and duty of care. All these features render liability assessments difficult unless AI systems are adequately governed.

 

So far, the EU’s liability framework has only been partially harmonized. For instance, the current Product Liability Directive (Directive 85/374/EEC, “PLD”), implemented into Member State law, dates back to 1985 and fails to encompass AI-related harm. Fragmentation in the EU’s existing liability regime calls for its revision, to catch up with the rapid changes brought by AI. This is what the proposed framework on AI liability, which can be defined as “bi-partite”, aims to achieve.

 

The proposed AI Act, part of the Commission’s 2021 AI package and coupled with the proposed Machinery Regulation, constitutes a set of preventive, ex-ante measures adopting a risk-based approach to govern AI systems. Gaps in the redress mechanisms under the AI Act, and doubts surrounding its surveillance authority system and AI auditing ecosystem, raise questions regarding the regulation’s enforcement. To address the scenario in which non-compliance results in damage, the ex-ante legislation has been complemented by two proposals for compensatory, ex-post measures: the revised Product Liability Directive (“revised PLD”) and the AI Liability Directive (“AILD”). We look into how the revised PLD and the AILD contribute to the enforcement of the preventive measures by pushing for compliance with the obligations the latter introduce.

 

Enforcement issues of AI Governance

 

The AI Act and the Machinery Regulation set out obligations allocated between the different economic operators of certain AI systems. The AI Act proposes a risk-based approach to these obligations, differentiating between AI systems that create an unacceptable risk (which are prohibited), a high risk (“HRAIS”), a limited risk or a minimal risk. It imposes substantive and procedural requirements on HRAIS aimed at enhancing accountability, transparency, accuracy, fairness, safety and robustness. Machinery products integrating AI systems, for their part, need to fulfil the essential requirements of both the AI Act and the Machinery Regulation.

 

Both regulations introduce penalties (administrative fines and criminal sanctions, respectively) for infringing the risk-management obligations, viewed as a way to hold parties responsible for their deployment of AI systems. Yet the question remains as to who should compensate the AI-caused harm that materializes when the risk is not managed. The AI Act and the Machinery Regulation grant neither status recognition nor procedural rights, and doubts surround the complaint mechanism introduced by the Council’s proposal for the AI Act. Holding economic operators liable for AI-generated harm therefore requires going a step beyond the proposed rules on AI governance: this gap in private enforcement is partly filled by the proposed liability directives.

 

Compensatory measures to enforce compliance with preventive measures

 

The revised PLD and the new AILD offer distinct yet overlapping means of promoting transparency, alleviating the burden of proof, and compensating victims of AI-related damage occurring despite the preventive measures.

 

The proposal for a revised PLD introduces several changes relevant to AI, concerning the directive’s subjective scope (extended to any person that modifies a product ‘already placed on the market or put into service’, Article 7(4)) and its objective scope. The notion of product now covers intangible items such as digital manufacturing files and software, and thus AI systems (Article 4(1)); damage encompasses “loss or corruption of data[…]” (Article 4(6)(c)); and the defectiveness assessment considers “the effect on the product of any ability to continue to learn after deployment” (Article 6(1)(c)). Software upgrades or updates, or the lack thereof, can render a product defective even if it was not defective when put into circulation (Article 10(2)).

The new AILD complements the revised PLD by creating a mechanism for national fault-based liability claims for AI-caused damage, and by introducing provisions linked to the AI Act’s preventive measures.

 

The directives each propose two rebuttable legal presumptions and an evidence disclosure mechanism, aimed at overcoming opacity.

At the request of (potential) claimants, national courts are empowered to order evidence disclosure from defendants, subject to safeguards and within the limits of what is “necessary and proportionate to support a claim” (revised PLD, Article 8; AILD, Article 3). Non-compliance with a disclosure order activates, under certain circumstances, a rebuttable presumption of defectiveness (revised PLD, Article 9(2)) or of non-compliance with a duty of care (AILD, Article 3(5)).

The revised PLD and the AILD both introduce rebuttable presumptions of causality: between the product’s defectiveness and the damage (revised PLD, Article 9(3)), and between the defendant’s fault and the output produced by the AI system (or its failure to produce an output) that gave rise to the damage (AILD, Article 4). The AILD’s presumption applies only if fault (e.g. non-compliance with AI Act obligations) is established. To further incentivize disclosure, the AILD provides that the defendant may rebut the causality presumption by demonstrating, however difficult this may be, that “sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link”, namely through the AI Act’s obligations of transparency, documentation, logging and recording.

 

 

Remaining gaps in enforcement and harmonization

 

The rules on AI governance and on AI liability constitute “two sides of the same coin”, creating an AI liability framework in the EU. The proposals for the AILD and the revised PLD reinforce the importance of compliance with the ex-ante obligations, particularly those concerning HRAIS, and provide tools for consumers to obtain compensation when compliance is not enough to prevent harm. By doing so, the compensatory measures aim to fill the enforcement gaps of the preventive measures.

 

Yet criticism still arises regarding the enforcement of this much-needed AI liability framework, first of all due to uncertainty about the coordination between the various provisions. This concerns the potential overlap between the AILD and the PLD (which is addressed only with respect to the current PLD, not the revised one), the delimitation of the two directives’ respective scopes (for instance as regards complex algorithms that do not fall under the strict definition of AI), and the legal uncertainty arising from the choice of directives as legal instruments (which also calls for a greater understanding of how AI functions by the authorities and courts enforcing the legislation).

 

The approach adopted by the framework is also criticized. First, transposing the AI Act’s risk-based approach to the AILD (and to its core mechanisms such as the disclosure of evidence and part of the burden-of-proof alleviation) risks being under-inclusive with respect to risks that are pronounced in individual cases but do not fall within the high-risk categories. Second, a horizontal approach to AI liability (a “one-size-fits-all” solution across sectors) does not account for the intrinsic differences between distinct AI applications and the issues they raise. Third, the approach to alleviating the claimants’ burden of proof is deemed insufficiently effective, as claimants still have to prove numerous elements before the directives’ presumptions and disclosure mechanism apply.

 

The coordination issues and general criticism bear on the effectiveness and enforcement of the proposed European AI liability framework. The effect of the redress mechanisms remains uncertain, and individual redress might need to be complemented by other means to ensure enforcement. Although obligations and rights related to explanation and to overcoming opacity have been introduced, difficulties persist in understanding whether one was harmed because of an AI system. Consumers would still find it difficult to prove fault when it comes to complex systems, while the PLD’s no-fault mechanism applies only to material harm. Effectively facilitating redress might require the framework to address victims’ need for specific resources, both technical and financial, to support their claims, as well as the availability of legal assistance in AI liability cases. Additionally, organizations investing in AI should start taking steps towards compliance with the new liability framework despite the lack of legal certainty. The overall harmonization of the AI liability framework, which appears limited, will need to be evaluated in light of the national implementation of the directives vis-à-vis the direct application of the ex-ante provisions.


Author: Maria Lillà Montagnani and Marie-Claire Najjar

Maria Lillà Montagnani is Professor of Commercial Law (teaching and researching in the field of Law and Technology) and Director of the LL.M in European Business and Social Law at Bocconi University. Marie-Claire Najjar, LL.M., is Academic Fellow (in the field of Law and Technology) and co-coordinator of the LL.M. in European Business and Social Law at Bocconi University.
