As recently described by The New England Journal of Medicine, the liability risks associated with using artificial intelligence (AI) in a health care setting are substantial and have caused consternation among sector participants. To illustrate that point:

“Some attorneys counsel health care organizations with dire warnings about liability and dauntingly long lists of legal concerns. Unfortunately, liability concern can lead to overly conservative decisions, including reluctance to try new things.”

“… in most states, plaintiffs alleging that complex products were defectively designed must show that there is a reasonable alternative design that would be safer, but it is difficult to apply that concept to AI. … Plaintiffs can suggest better training data or validation processes but may struggle to prove that these would have changed the patterns enough to eliminate the ‘defect.’”

Accordingly, the article’s key recommendations include (1) a diligence recommendation to assess each AI tool individually and (2) a negotiation recommendation for buyers to use their current leverage to negotiate for tools with lower (or easier-to-manage) risks.

Creating Risk Frameworks

Building on these considerations, we would guide health care providers to implement a comprehensive framework that maps each type of AI tool to specific risks and identifies how to manage those risks. Key factors that such a framework could include are outlined in the table below:

| Factor | Details | Risks/Principles Addressed |
| --- | --- | --- |
| Training Data Transparency | How easy is it to identify the demographic characteristics of the data distribution used to train the model, and can the user filter the data to more closely match the subject for whom the tool is being used? | Bias, Explainability, Distinguishing Defects from User Error |
| Output Transparency | Does the tool explain (a) the data that supports its recommendations, (b) its confidence in a given recommendation, and (c) other outputs that were not selected? | Bias, Explainability, Distinguishing Defects from User Error |
| Data Governance | Are appropriate data governance processes built into the tool and the agreement to protect the personally identifiable information (PII) used both to train the model and at runtime to generate predictions/recommendations? | Privacy, Confidentiality, Freedom to Operate |
| Data Usage | Have appropriate consents been received (1) by the provider for inputting patient data into the tool at runtime and (2) by the software developer for the use of any underlying patient data for model training? | Privacy/Consent, Confidentiality |
| Notice Provisions | Is appropriate notice given to users/consumers/patients that AI tools are being used (and for what purpose)? | Privacy/Consent, Notice Requirement Compliance |
| User(s) in the Loop | Is the end user (i.e., the clinician) the only person evaluating the model's outputs, doing so case by case with limited visibility into how the model performs under other conditions, or is there a more systematic way of surfacing outputs to a risk manager who can maintain a global view of the model's performance? | Bias, Distinguishing Defects from User Error |
| Indemnity Negotiation | Are indemnities appropriate for the health care context in which the tool is being used, rather than a conventional software context? | Liability Allocation |
| Insurance Policies | Does current insurance coverage address only software-type concerns or only malpractice-type concerns, or does it bridge the gap between the two? | Liability Allocation, Increasing Certainty of Costs Relative to Benefits of Tools |

As both AI tools and the litigation landscape mature, it will become easier to build a robust risk management process. In the meantime, thinking through these kinds of considerations can help both developers and buyers of AI tools manage novel risks while realizing the patient-care benefits these tools offer.