Legal and ethical questions temper excitement about AI
By: Cynthia Saver, MS, RN
Applications of artificial intelligence (AI) to healthcare are rapidly growing, and with that growth comes big questions related to data management and analysis, ethical issues, legal and regulatory concerns, and user impact.
Data challenges
What is the role of data privacy and protection in the age of AI? Experts are calling for a dialogue to answer that question. “These algorithms have the potential to be very high performing, can provide public good, and can provide [significant] health benefits,” says Michael Matheny, MD, MS, MPH, associate professor of medicine, biomedical informatics, and biostatistics at Vanderbilt University Medical Center, Nashville, Tennessee. “People will have to decide where the balance is between allowing that data to be more accessible for the public good versus keeping it private and personal.”
Dr Matheny, who cochairs the National Academy of Medicine (NAM) Artificial Intelligence in Healthcare Working Group, notes that a country’s culture determines the balance of public good versus privacy and data protection. For example, people in the US and China have very different views on data usage.
Some US institutions are already taking steps to address data protection. Johns Hopkins in Baltimore, for example, has created a secure platform for storing research data in the cloud.
Researchers can access a variety of data, including electronic health record (EHR) data, images, genomics, and physiological monitoring data.
Wendell Wallach, senior advisor to the Hastings Center in Garrison, New York, acknowledges that data de-identification, traditionally viewed as sufficient for protecting a person’s privacy, has become difficult to achieve as information technology advances.
“In reality, if you have enough pieces of information about a person, you can probably reconstruct who that individual is, and that would violate their rights,” says Wallach, who is also chair of technology and ethics studies at the Yale Interdisciplinary Center for Bioethics in New Haven, Connecticut, the author of A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control, and principal investigator for the Hastings Center’s Control and Responsible Innovation in the Development of Autonomous Machines Project.
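To make that point concrete, the Python sketch below joins a hypothetical “de-identified” clinical extract to a hypothetical public record on a few shared quasi-identifiers. Every name, column, and record here is invented for illustration; it simply shows how linkage can undo de-identification.

```python
# Hypothetical illustration of re-identification by linkage (all data invented).
# A "de-identified" clinical extract and a public record that both carry a few
# quasi-identifiers can be joined to recover likely identities.
import pandas as pd

# De-identified clinical data: direct identifiers removed, quasi-identifiers kept.
clinical = pd.DataFrame({
    "zip_code":   ["21205", "21205", "90210"],
    "birth_date": ["1961-03-02", "1974-08-19", "1961-03-02"],
    "sex":        ["F", "M", "F"],
    "diagnosis":  ["CKD stage 3", "AFib", "Type 2 diabetes"],
})

# Public record (e.g., a voter-roll-style file) with names attached.
public = pd.DataFrame({
    "name":       ["J. Doe", "R. Roe"],
    "zip_code":   ["21205", "90210"],
    "birth_date": ["1961-03-02", "1961-03-02"],
    "sex":        ["F", "F"],
})

# Join on the shared quasi-identifiers; unique matches re-attach names to diagnoses.
linked = clinical.merge(public, on=["zip_code", "birth_date", "sex"], how="inner")
print(linked[["name", "zip_code", "birth_date", "diagnosis"]])
```

The more quasi-identifiers the two tables share, the more often such a join resolves to a single named person, which is the concern Wallach raises.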
Sound AI algorithms depend on sound analysis. “The problem with many published algorithms is that when we evaluate the performance of that algorithm on a new set of data or a new set of patients, it doesn’t do as well,” says Ferdinand Hui, MD, associate professor of radiology and radiologic science, director of interventional stroke, and co-director of the Radiology Artificial Intelligence Lab at Johns Hopkins.
“We don’t know whether the patients and decision points that got programmed into an algorithmic system to provide care align with the patient populations in a different area of the country from where the system was programmed,” says Danton Char, MD, assistant professor of anesthesiology, perioperative and pain medicine at Stanford University Medical Center in Stanford, California. “What works in Palo Alto might not work in Akron [Ohio],” he says.
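A minimal sketch with synthetic data and scikit-learn illustrates the problem these researchers describe; the model, features, and “sites” below are invented for illustration and are not drawn from any actual study.

```python
# Sketch of the external-validation problem: a model developed on one site's
# patients can perform worse on another site's. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_site(n, age_mean, effect):
    """Simulate one site: age and comorbidity count drive a binary outcome,
    but the strength of the age-outcome relationship differs between sites."""
    age = rng.normal(age_mean, 10, n)
    comorbidities = rng.poisson(2, n)
    logits = effect * (age - 60) / 10 + 0.5 * comorbidities - 1.0
    outcome = rng.random(n) < 1 / (1 + np.exp(-logits))
    return np.column_stack([age, comorbidities]), outcome.astype(int)

# "Site A" (development cohort) and "Site B" (older patients, weaker age effect).
X_a, y_a = make_site(2000, age_mean=55, effect=1.5)
X_b, y_b = make_site(2000, age_mean=70, effect=0.5)

model = LogisticRegression(max_iter=1000).fit(X_a, y_a)

print("AUC on development site:", round(roc_auc_score(y_a, model.predict_proba(X_a)[:, 1]), 3))
print("AUC on external site:   ", round(roc_auc_score(y_b, model.predict_proba(X_b)[:, 1]), 3))
```

Because the relationship between age and outcome differs between the two simulated sites, the measured AUC drops on the external cohort, the same pattern Dr Hui describes when published algorithms are tested on new patients.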
Dr Char adds that external pressures could lead to the development of inappropriate algorithms. “Profit-driven pressures and regulatory compliance-driven pressures could cause people to create algorithms that skew toward providing the kind of data that regulators want to hear, or that maximize profits, but at the expense of delivering quality healthcare.”
The current emphasis on basing reimbursement on outcomes could lead to the creation of algorithms that guide users toward clinical actions that would improve quality metrics but not necessarily improve patient care. For example, there might be indicators that encourage the ordering of unnecessary testing.
Clinical decision-support systems could also be programmed to boost profits for stakeholders without clinicians’ knowledge, for example by recommending medications or devices in which the creator or purchaser of the AI algorithm holds a stake.
“Healthcare exists in this tension between maximizing profit and maximizing health, and those two things don’t always line up,” Dr Char says.
Addressing these data challenges is key to ensuring that AI researchers have what they need to build better algorithms.
Ethical issues
Ethical issues include bias, monetizing patient data, and access.
Bias
“So much of our medical research is based on the average, 50-year-old Caucasian male,” says Sonoo Thadaney, MBA, executive director for presence and executive director for the program in bedside medicine at Stanford University School of Medicine in Stanford, California. “We don’t have access to large data sets that represent the populations they aim to serve—with sufficient breadth and depth of diversity in gender, race, and age.”
Thadaney adds that focusing too much on currently available data can lead to unexpected consequences. For example, in the last century there was a well-meaning focus on addressing world hunger by increasing yield per acre, without considering nutrition per acre. “Fast forward to today, and we see that the yields per acre on our planet have certainly gone up, but there are [significant] nutritional differences [among zip codes and countries],” she says.
Some have access to high-nutrition food, but others do not. “We have a food apartheid thanks to our food deserts; we can’t end up with a healthcare apartheid because we only focus on metrics such as efficiency and costs, ignoring criteria such as inclusivity and equity.”
Monetizing patient data
Thadaney notes that patients give permission for a healthcare system to use their data for treatment, billing, and academic research. “Patients have not explicitly given permission to use that data to monetize it for either one institution or a number of institutions,” she says.
Dr Char says that many people will need to volunteer their health data so there is sufficient information to develop AI. “What they should get in return for giving up their data is not clear,” he says.
He notes that in 2017, London’s Royal Free Hospital was found to have breached the Data Protection Act when it gave data for 1.6 million patients to DeepMind, a Google subsidiary. The data transfer was part of a partnership to create Streams, a healthcare app for diagnosing and detecting acute kidney injury. Patients were not told that their data would be used for ongoing testing of the app.
Wallach notes that the European Union’s General Data Protection Regulation gives individuals many rights related to who owns data about them, but that’s not the case in the United States. “The rules [in the US] are looser in terms of what businesses can and cannot do with data,” he says. “There’s a lot of concern that the data is being used in unethical ways or inappropriate ways, and that we should be clarifying the norms on the use of that data.”
Access
Access to AI could be an issue, particularly for smaller hospitals with fewer financial resources. “If I’m in a rural area or a small community hospital, what are the ethical implications of not being able to get the benefits from AI because of the financial outlays?” Dr Char asks.
AI algorithms require regular updating to ensure they are operating safely and accurately, and those updates add to the cost.
Dr Matheny says a way to mitigate the financial disparity is to reduce implementation costs through transparent best practices. “That needs to be a conscious effort by stakeholders to encourage national discussion going forward in order to promote standardization and to lower costs of implementation, or only large medical centers will be able to offer the benefits from these technologies,” he says.
Legal questions
Jennifer Geetter, JD, a healthcare attorney with McDermott Will & Emery in Washington, DC, says examples of legal issues associated with AI include:
- Privacy
- Product liability
- Malpractice (the provider listened to the AI product, but it gave wrong advice, or the provider didn’t listen to the AI product when the product gave good advice)
- Informed consent
- Cybersecurity (hacking into medical devices and manipulating the data).
Product liability
When AI performs a simple, automated task, the liability issues are the same as for any other technology, says Dale Van Demark, JD, who also works at McDermott Will & Emery. “The technology has been developed by a company, distributed by a company, and cleared or not cleared by the FDA [Food and Drug Administration] for marketing purposes, so it has already gone through potentially a lot of review processes before it gets into the hands of the people who deliver the service,” he says. “In that context, it’s product liability like any other product liability issue.”
Malpractice
As AI continues to mature, liability issues become more complex. “In the future and not-so-distant future, AI tools will start to perform [more] functions that traditionally have been handled by individuals,” Van Demark says. “When you start down that path, the questions of liability get interesting and difficult.”
Part of the difficulty is pinpointing how and why a computer reaches a particular decision, which may make it hard for the clinician to respond appropriately. For example, if during surgery AI tells the surgeon that there is a 58% chance that harm will occur, the surgeon, who is probably not an expert in statistics, has to decide whether to stop or move forward. What if the surgeon fails to listen to what AI says to do and the patient is harmed? Who is liable?
Dr Char says another issue is the ability to override AI recommendations when the clinician feels the recommendation is incorrect based on the patient’s clinical situation. He notes that EHRs already make it difficult for clinicians to override alerts that aren’t in the patient’s best interest. For example, an EHR recommends a mammogram even though the patient has had bilateral mastectomies.
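A hypothetical sketch of that scenario shows how an exclusion criterion and an explicit override path could keep a screening reminder from firing inappropriately; the rule, patient record, and override flag below are invented and not drawn from any actual EHR.

```python
# Hypothetical decision-support rule (all data and logic invented for illustration).
# A screening reminder that ignores patient history fires inappropriately; an
# exclusion check and an override flag give the clinician a way to suppress it.
from dataclasses import dataclass, field

@dataclass
class Patient:
    age: int
    sex: str
    history: set = field(default_factory=set)

def mammogram_reminder(patient: Patient, clinician_override: bool = False) -> str:
    """Return a screening recommendation, honoring exclusions and overrides."""
    if clinician_override:
        return "Alert dismissed: clinician override documented."
    if "bilateral mastectomy" in patient.history:
        return "No alert: screening mammography not applicable."
    if patient.sex == "F" and patient.age >= 40:
        return "Alert: screening mammogram due."
    return "No alert."

print(mammogram_reminder(Patient(age=57, sex="F", history={"bilateral mastectomy"})))
print(mammogram_reminder(Patient(age=57, sex="F")))  # alert fires for an eligible patient
```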
Informed consent
“A rudimentary principle of the US legal system is that once you disclose sufficiently to a purchaser of a product, you’re pretty much able to wipe your hands of any liability,” says Van Demark, who questions whether the current system is sufficient in the era of AI.
“I’m not convinced the general public and even very knowledgeable clinicians are experienced and educated enough to really understand how these systems work, and to understand the risks associated with them,” he says.
Geetter says that an emerging question is whether patients should specifically be told that an AI-enabled tool will be used in connection with their care before they provide consent.
Cybersecurity
Geetter notes that the Health Insurance Portability and Accountability Act of 1996 (HIPAA) addresses security (for example, data integrity and data availability) in addition to privacy. “There are concerns that the data will be corrupted or held in a ransomware attack,” she says. These concerns apply to any digital tool, but Geetter says, “The AI overlay is whether the cyber risk would corrupt the learning mechanism itself.” The AI platform could then begin learning improperly, with errors proliferating.
Regulatory considerations
How should AI be regulated? How should products using AI be evaluated by the FDA? The agency has already cleared several tools using AI algorithms through both the De Novo and 510(k) pathways, and it is working to speed up the review process for AI products. The voluntary precertification (PreCert 1.0) pilot program targets low- to moderate-risk software as a medical device (SaMD).
The program will help determine processes for clearance of first-of-its-kind SaMD. The nine participating companies selected by the FDA for the pilot will be evaluated on five excellence principles: product quality, patient safety, clinical responsibility, cybersecurity responsibility, and proactive culture. Criteria and key performance indicators will be developed for each principle.
Software products from precertified companies will likely undergo a streamlined review process. For example, a precertified company might be allowed to submit less information in a marketing submission for a new digital health product.
In April, the FDA released a proposed regulatory framework for modifications to AI-based SaMD in the form of a discussion paper for comment. The FDA notes that AI products approved to date have been those with “locked” algorithms, which don’t adapt and learn each time the algorithm is used, although the manufacturer provides periodic updates. “Adaptive” or “continuously learning” algorithms do not require manual updating, so they hold significant promise.
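The distinction can be illustrated with a small scikit-learn sketch using synthetic data; the model and records below are invented and do not represent how any cleared product works. A locked model is fitted once and then frozen until its manufacturer ships an update, while a continuously learning one keeps adjusting as new cases arrive.

```python
# Locked vs. continuously learning algorithms, illustrated with synthetic noise.
# The data are meaningless; the point is the update mechanism, not the predictions.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
X0, y0 = rng.normal(size=(500, 3)), rng.integers(0, 2, 500)

# Locked: fitted once; its behavior is fixed until a new version is released.
locked = SGDClassifier(random_state=0).fit(X0, y0)

# Continuously learning: the same model class, but updated with each new batch
# of cases seen in routine use.
adaptive = SGDClassifier(random_state=0)
adaptive.partial_fit(X0, y0, classes=np.array([0, 1]))
for _ in range(5):  # new batches arriving over time
    X_new, y_new = rng.normal(size=(100, 3)), rng.integers(0, 2, 100)
    adaptive.partial_fit(X_new, y_new)

print("Locked coefficients:  ", locked.coef_.round(2))
print("Adaptive coefficients:", adaptive.coef_.round(2))
```

The adaptive model’s coefficients keep shifting with each new batch, which is the kind of post-clearance change the proposed framework for modifications is meant to address.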
How should the field of AI be governed? Wallach says many different entities are creating different standards and best practices. “It would be helpful to start underscoring where there’s a consensus and where there is not a consensus,” he says. “It’s very important to have coordination.” That could include an international governance coordinating committee. In the United Kingdom, the House of Lords Select Committee on Artificial Intelligence published a 2018 report, “AI in the UK: Ready, willing and able?” that included recommended reforms to balance innovation and corporate responsibility.
“A barrier to widespread use of AI is [a lack of understanding of] what technology can and can’t do right now,” Dr Matheny says. “Users need to learn how to critically evaluate these tools in the context of the data that they were derived from, the performance characteristics, and the targets of how they’re being used.”
AI is intended to support clinicians, but it may create challenges. “There is a risk that integrating AI into clinical workflow could significantly increase the cognitive load facing clinical teams and lead to higher stress, lower efficiency, and poorer clinical care,” say the authors of a JAMA opinion article.