How will artificial intelligence impact surgical patient care?

By: Cynthia Saver, MS, RN

Artificial intelligence (AI) may be coming to your OR sooner than you think. AI is already being used to analyze surgical workflow, communication patterns, and errors that went unnoticed during a procedure. OR leaders need to understand AI and participate in its development and application so that patients and organizations can reap the most benefits.

AI programs include:

  • the Predictive OpTimal Trees in Emergency Surgery Risk (POTTER) calculator, which uses AI to assess a patient’s risk of death and of 17 postoperative complications
  • the Smart Tissue Autonomous Robot (STAR), which has outperformed expert surgeons at suturing, though so far only in animal studies
  • the DASH Analytics High-Definition Care Platform, which predicts risk of surgical site infections (SSIs) and suggests ways to reduce them in real time, as the surgeon is closing. The platform reduced SSIs by 74% in 3 years at the University of Iowa Hospitals & Clinics.

Many legal and ethical questions—such as who is responsible when an AI device is wrong, and how to manage the massive amounts of high-quality data required for developing AI programs—have yet to be answered.

What is AI?

AI is the ability of a computer to respond and act in ways similar to humans. “The computer is able to perceive, think, plan, learn, and manipulate objects,” says Whende Carroll, MSN, RN-BC, founder of Nurse Evolution, a company that looks at how healthcare technology, data analytics, and innovation concepts can be used to improve health and how healthcare is delivered.

Whende Carroll, MSN, RN-BC

“The backbone of AI is the algorithm,” says Carroll. “In AI, an algorithm is a well-constructed set of rules given to a program, whether it’s used to predict a diagnosis or chronic disease progression, or a medical device such as a robot. It’s the rules a machine is given so that it can plan, learn, or be able to move something.”

Types of AI

Machine learning (ML): This enables machines to learn from experience, just as humans do. That learning depends on tensors, multidimensional arrays of data that are processed through networks loosely modeled on the neural connections in humans. ML has been used to enable a computer to predict bispectral index based on the infusion rates of propofol and remifentanil—and to predict it more accurately than traditional pharmacokinetic models.
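To make the prediction idea concrete, here is a minimal sketch of ML-style regression: fitting a simple linear model that maps an infusion rate to a predicted bispectral index (BIS) value. The study cited used a far richer model and real patient data; the rates and readings below are invented toy numbers.

```python
# Toy illustration of learning a predictive model from data: ordinary
# least-squares fit of BIS against a single propofol infusion rate.
# All numbers are invented for illustration, not clinical values.

def fit_line(xs, ys):
    """Closed-form least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

rates = [0.0, 2.0, 4.0, 6.0, 8.0]       # toy propofol rates (mg/kg/h)
bis   = [95.0, 80.0, 65.0, 50.0, 35.0]  # toy BIS readings
slope, intercept = fit_line(rates, bis)
predicted_bis = slope * 5.0 + intercept  # predict BIS at a new rate
```

A real system would learn from many inputs at once (both drugs, patient covariates) and validate against held-out cases, but the principle is the same: the model's parameters come from data rather than hand-written rules.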

ML can be supervised or unsupervised.

Supervised learning focuses on training a machine to predict a known result or outcome. For example, feeding in large amounts of labeled imaging data showing what a stroke looks like on a CT angiogram trains the computer to recognize a stroke. The product Viz.ai identifies suspected large vessel occlusion strokes from a patient’s angiogram and, if evidence of stroke is found, notifies the stroke team. The program also sends the results and the location of the closest stroke center to the radiologist and the physicians caring for the patient.
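The essence of supervised learning can be sketched in a few lines: train on labeled examples, then predict labels for new cases. This toy nearest-centroid classifier stands in for the much more sophisticated models used on angiograms; the two-number "feature vectors" and labels are invented.

```python
# Minimal supervised learning sketch: a nearest-centroid classifier.
# Training data are (features, label) pairs; prediction assigns the
# label whose centroid is closest. Feature values are invented toy data.

def train(examples):
    """Return the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(centroids, features):
    """Pick the label with the smallest squared distance to its centroid."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

# Toy "labeled scans": two numeric features per case.
training_data = [
    ([0.9, 0.8], "occlusion"), ([0.8, 0.9], "occlusion"),
    ([0.1, 0.2], "normal"),    ([0.2, 0.1], "normal"),
]
model = train(training_data)
print(predict(model, [0.85, 0.75]))  # → occlusion
```

The key point is that the correct answers ("occlusion" vs "normal") are supplied during training; the machine's job is to generalize them to unseen cases.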

Unsupervised learning involves looking at unlabeled data to detect patterns or structure. For example, a machine can be taught to identify bleeding from non-bleeding tissue. This could help with early identification of abnormal bleeding during a laparoscopic procedure.
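By contrast, unsupervised learning is given no labels at all. The sketch below runs a one-dimensional k-means clustering over unlabeled "redness" values and lets two groups emerge on their own, loosely analogous to separating bleeding from non-bleeding pixels. The intensity values are invented.

```python
# Minimal unsupervised learning sketch: 1-D k-means with two clusters.
# No labels are supplied; the grouping emerges from the data itself.

def kmeans_1d(values, iterations=10):
    """Split values into two clusters; return the two cluster centers."""
    lo, hi = min(values), max(values)  # initialize centers at the extremes
    for _ in range(iterations):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)  # recompute centers
    return lo, hi

intensities = [0.1, 0.15, 0.2, 0.12, 0.85, 0.9, 0.8]  # toy "redness" values
low_center, high_center = kmeans_1d(intensities)
```

A system built this way might flag any frame whose values fall in the high cluster, without ever having been told what "bleeding" looks like.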

Reinforcement learning: This falls somewhere between supervised and unsupervised learning. In essence, the machine tries to accomplish a task, such as coming to a medical decision, while learning from its own successes. Controlling an artificial pancreas system is an example of reinforcement learning in action.
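Reinforcement learning can be caricatured with a tiny Q-learning agent: it is told only whether its last action earned a reward, and it gradually learns which action to prefer. The "glucose simulator" below is a deliberate toy, not physiology, and the dose levels are invented.

```python
import random

# Toy reinforcement learning: an agent learns, from reward alone, which
# "dose" action keeps a simulated value in range. Deliberately simplistic.

random.seed(0)               # fixed seed for reproducibility
ACTIONS = [0, 1, 2]          # dose levels
TARGET = 1                   # in this toy world, dose 1 is always best
q = {a: 0.0 for a in ACTIONS}  # estimated value of each action

def reward(action):
    """+1 when the toy system stays in range, -1 otherwise."""
    return 1.0 if action == TARGET else -1.0

for step in range(200):
    # epsilon-greedy: usually exploit the best-known action, sometimes explore
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    # incremental update toward the observed reward
    q[action] += 0.1 * (reward(action) - q[action])

best = max(q, key=q.get)  # the agent converges on the in-range dose
```

A real artificial pancreas controller faces delayed, noisy feedback and hard safety constraints, but the trial-reward-update loop is the same idea.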

With artificial neural networks, input “cells” of the artificial intelligence receive data, which are then processed by hidden cells that make adjustments and connections based on an algorithm; output cells then perform a task. The processing enables the computer to develop responses based on pattern recognition and data classification, in the same way that the brain responds to external stimuli.

Artificial neural networks (ANNs): These process signals in a way similar to what occurs in humans. But humans use their knowledge to extrapolate and apply what they know to new situations, whereas neural networks have to be fed large amounts of data until extrapolation isn’t necessary.
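The input-hidden-output flow described above can be sketched as a single forward pass through a tiny network with fixed weights. In a real ANN the weights are learned from data; the weight values and the interpretation of the inputs here are invented for illustration.

```python
import math

# Sketch of an artificial neural network's forward pass: input cells
# receive data, hidden cells apply weighted connections and a nonlinear
# activation, and an output cell produces a result.

def sigmoid(x):
    """Squash any value into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # each hidden cell: weighted sum of inputs, then nonlinear activation
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in hidden_weights]
    # output cell: weighted sum of hidden activations
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Two inputs (imagine scaled blood pressure and age), three hidden cells,
# one output read as a risk score between 0 and 1.
risk = forward([0.7, 0.4],
               hidden_weights=[[1.5, -0.5], [0.8, 0.8], [-1.0, 2.0]],
               output_weights=[1.2, 0.9, -0.4])
```

Training replaces these hand-picked weights with values tuned against thousands of labeled cases, which is why the mortality-prediction studies above needed large datasets.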

One study showed ANNs could analyze data such as patient history, blood pressure, and medications to help predict in-hospital mortality after open repair of an abdominal aortic aneurysm with an accuracy of 95.4%. Other researchers used an ANN and intraoperative electronic medical record data to help predict in-hospital mortality of surgical patients.

Natural language processing (NLP): This focuses on building a computer’s ability to understand human language. NLP lets the machine infer meaning from unstructured data such as providers’ comments in the electronic health record (EHR). In one study, NLP enabled the computer to scan EHRs to identify words and phrases in operative reports and progress notes that predicted anastomotic leak after colorectal resections.
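At its simplest, the NLP idea is scanning free text for cues a model has associated with a complication. The sketch below does only literal phrase matching; the phrase list and the sample note are invented, and a real system learns richer cues (negation, context, synonyms) from data.

```python
# Minimal NLP sketch: flag unstructured notes containing phrases
# associated with a complication. Phrase list is illustrative only.

RISK_PHRASES = ["anastomotic leak", "purulent drainage", "fecal discharge"]

def flag_note(note_text):
    """Return the risk phrases found in a free-text note (case-insensitive)."""
    text = note_text.lower()
    return [p for p in RISK_PHRASES if p in text]

note = "POD 4: Purulent drainage noted at incision site; patient febrile."
print(flag_note(note))  # → ['purulent drainage']
```

Real clinical NLP must also handle negation ("no purulent drainage"), abbreviations, and misspellings, which is where the learned models cited in the study come in.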

Computer vision (CV): This enables machines to learn from images and videos that are fed into them. CV is being used for analysis of patient cohorts, longitudinal studies, and decision-making in surgery.

“We used video data and AI to build a model that identified with 93% accuracy the steps of a sleeve gastrectomy procedure in real time, noting any missing or unexpected steps,” says Daniel Hashimoto, MD, MS, surgical artificial intelligence and innovation fellow at Massachusetts General Hospital in Boston. Instead of manually evaluating videos of procedures, he and his team trained a computer to classify segments of the videos into operative steps. Deviation from the expected operative path identifies possible undesirable events.

Daniel Hashimoto, MD, MS

The ultimate goal is to be able to warn surgeons of missing or unexpected steps in real time, so that an adverse event can be avoided. “AI can learn from hundreds or thousands of surgical procedures simultaneously, but a surgeon learns from one surgery at a time,” Dr Hashimoto says.
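Downstream of the video model, the "missing or unexpected steps" check is straightforward to sketch: compare the steps recognized so far against the expected sequence. The step names below are illustrative, not the actual sleeve gastrectomy annotations used by Dr Hashimoto's team.

```python
# Sketch of step tracking: given steps recognized so far, flag expected
# steps that were skipped and steps that were not in the plan.

EXPECTED = ["port placement", "liver retraction", "dissection",
            "stapling", "inspection", "closure"]

def check_steps(observed):
    """Return (missing_so_far, unexpected) relative to EXPECTED order."""
    missing, cursor = [], 0
    for step in observed:
        if step in EXPECTED[cursor:]:
            idx = EXPECTED.index(step, cursor)
            missing.extend(EXPECTED[cursor:idx])  # skipped expected steps
            cursor = idx + 1
    unexpected = [s for s in observed if s not in EXPECTED]
    return missing, unexpected

missing, unexpected = check_steps(
    ["port placement", "dissection", "stapling", "irrigation"])
# "liver retraction" was skipped; "irrigation" was not in the plan.
```

In a live system, the hard part is the recognition itself (the 93%-accuracy video model); once steps are recognized, a rule like this can raise the real-time warning.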

The image below illustrates the process of computer vision. Note how the visual image is “read” and then translated into quantitative data. Source: Daniel Hashimoto, MD, MS. Used with permission.

A booming market

By one estimate, the healthcare AI market will grow from $2.1 billion in 2018 to $36.1 billion by 2025. Allied Market Research predicts that the global healthcare AI market will reach $22.79 billion by 2023, up from $1.44 billion in 2016, a compound annual growth rate of 48.7% from 2017 to 2023.
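As a rough sanity check on the Allied Market Research arithmetic: growing the 2016 figure at roughly 48.7% per year through 2023 should land near the 2023 projection. Small differences are expected from rounding in the report.

```python
# Sanity-check the reported compound annual growth rate (CAGR):
# $1,441M in 2016 growing to a projected $22,790M by 2023.

start, end, years = 1441.0, 22790.0, 2023 - 2016  # 7 growth years

# CAGR implied by the endpoints
implied_cagr = (end / start) ** (1 / years) - 1

# Forward projection at the reported 48.7% rate
projected = start * (1 + 0.487) ** years
```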

Big companies are taking a big interest in AI. IBM, Microsoft, Google, LinkedIn, Facebook, Intel, and Fujitsu were the seven biggest machine learning patent producers in 2017, according to IFI CLAIMS Patent Services.

What will be the impact of large companies entering the AI market?

John Beard, MD, MBA

“Those companies that are really delivering the value are going to succeed,” says John Beard, MD, MBA, medical director at ICU Medical, Inc, in San Clemente, California. He expects that within the next 5 years, there will be a consolidation in the market, with some companies becoming known as the best providers of AI resources.

Hospitals are already leveraging AI for competitive advantage. Carolinas HealthCare is developing self-service applications that provide tools patients can use for self-diagnosis and self-treatment in selected scenarios. Leaders there hope AI will help capture additional patient information for their databases.

Insurers are using AI to better assess patient risk so that proactive wellness actions can be taken. AI can also be used to better establish premiums and to reduce time and costs associated with manual health record reviews.

“AI can take all these disparate sources of data and filter them in a way to provide the clinically meaningful components almost in a dashboard type view of a patient,” Dr Beard says, adding that each provider would have access to the data. “That saves a tremendous amount of time and human resources, and also provides improved data for decision making in matching that patient to the right treatment.” Treating the patient more promptly and more effectively will reduce costs.

Nearly half (47%) of hospitals are in the early stages of implementing AI, and 88% of hospital leaders are confident or somewhat confident they will see a return on investment for AI, although they believe it will take 3 to 5 years. The top health AI application in terms of value is robot-assisted surgery, according to an Accenture report, which estimates that health AI applications could save the US healthcare economy $150 billion annually by 2026.

But the authors of an opinion piece in the Journal of the American Medical Association are less optimistic about AI’s ability to reduce costs because it needs data storage, data curation, model maintenance and updating, and data visualization. “These tools and related needs may simply replace current costs with different, and potentially higher, costs,” they say.

The image above shows how the steps of a procedure can be tracked in real time, along with an estimate of the remaining surgical time. Courtesy of CAMMA (Computational Analysis and Modeling of Medical Activities), a research group that aims to develop new tools and methods to perceive, model, and analyze clinician and staff activities (http://camma.u-strasbg.fr).

AI and surgery

There are specific uses for AI at every stage of the surgical continuum, Dr Beard says.

For example, Frederick Memorial Hospital in Frederick, Maryland, has a scheduling system that uses AI to help forecast how long a case will take based on a computation of the surgeon’s recent case activity. “The software reviews the last 10 bookings of a specific surgeon performing a specific surgery and drops the high and low case times to determine the average of the remaining eight cases,” says Cynthia Russell, MSN, RN-BC, perioperative services information systems liaison. The recommended case length is based on that average.

“The caveat is that the prediction is based on the surgery that is booked,” Russell says, noting that a surgeon may not perform the exact booked case because of a clinical situation encountered during the procedure. Using ICD-10 or CPT codes helps improve forecasting accuracy by providing a structured language for comparison purposes.
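The booking heuristic Russell describes is a trimmed mean, and it can be sketched directly: take the surgeon's last 10 case times for the booked procedure, drop the single highest and lowest, and average the remaining eight. The case durations below are invented sample data in minutes.

```python
# Sketch of the case-length forecast: trimmed mean of the surgeon's
# last 10 durations for the booked procedure (drop high and low).

def recommended_case_length(recent_times):
    """Return the trimmed mean, in minutes, of up to the last 10 cases."""
    if len(recent_times) < 3:
        raise ValueError("need at least 3 historical case times")
    times = sorted(recent_times[-10:])
    trimmed = times[1:-1]  # drop the single high and low outliers
    return sum(trimmed) / len(trimmed)

history = [95, 110, 102, 98, 180, 105, 99, 101, 60, 104]  # minutes
print(recommended_case_length(history))  # → 101.75
```

Dropping the extremes keeps one unusually fast or complicated case (the 60- and 180-minute outliers here) from skewing the recommendation, which matters precisely because, as Russell notes, the booked case is not always the case actually performed.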

Nicolas Padoy, PhD, associate professor on a chair of excellence research program in medical robotics at the University of Strasbourg in Strasbourg, France, has extensively studied AI’s role in surgical workflow with his clinical partners at IHU Strasbourg, IRCAD, and University Hospital of Strasbourg.

Nicolas Padoy, PhD

“We believe that with the digitalization of the OR, the digital information coming from the different information systems, electronic equipment, and sensors can be used to develop an AI system that can understand the surgical processes taking place in the room—namely, recognize in real time the current status of the surgery and of the OR,” Padoy says. The current status would include, for example, the surgical step performed by the surgeon and the actions performed by the circulating nurse.

“When AI techniques are sufficiently mature, I envision that user interfaces in the OR will provide much more contextual support, for example, by showing the right information, instruction, and buttons at the right location, at the right time, and to the right person,” Padoy says.

Padoy and his colleagues used AI and human pose estimation—computing the locations of the people in the room and their body parts from video data—to reduce exposure to radiation during procedures. “The positions of the persons are used to compute the x-ray exposure risk for each person present in the room,” he says. “The radiation exposure risk can then also be displayed on the person, per body part.”

A 2018 study by Padoy and his colleagues showed that AI outperformed traditional methods for estimating remaining surgical time for laparoscopic cholecystectomy (based on 120 videos) and gastric bypass (based on 170 videos) procedures. Dr Hashimoto believes AI will be able to predict case time relatively soon, but its ability to augment surgical decisions is farther in the future.

Padoy says AI could compare data from the past hundred or thousand procedures with data from the current procedure to detect a possible anomaly and notify the surgeon. In a related application, the Triton system uses AI and infrared camera technology to analyze photos of sponges taken with an iPad in an operating or delivery room and quantify blood loss.

AI techniques are being used to help improve surgical robots’ control accuracy and expand their dexterity through automation or semi-automation. “AI can detect ‘no-go’ zones and prevent instruments from entering sensitive anatomical areas,” Padoy notes, adding that AI could analyze data to improve robot design.

“AI will help ensure use of interventions that, based on data and a patient’s particular condition, will maximize recovery and reduce the probability of complications, and then get the patient home to recover sooner,” Dr Beard says. Interoperable infusion pumps in continuous communication with the EHR are already in use. Infusion pump-EHR interoperability ensures that a clinician’s order is transmitted directly to the pump. In addition, an alarm notification can be forwarded to a handheld device used by a nurse. The nurse can either act on the alarm or pause it if immediate action is not needed.

Other surgical applications of AI include:

Training. AI could be used to identify steps in a surgical procedure and display instructions for staff, such as which instruments are needed next, Padoy says. Dr Hashimoto’s team at Massachusetts General Hospital is looking at extrapolating performance data from videos to help create virtual education simulations.

Diagnosing. In one study, AI was used to diagnose a brain tumor in a tissue sample obtained during surgery in just 3 to 4 minutes. In another study, researchers developed a proof-of-concept model in which AI found that 30% of procedures to biopsy high-risk breast lesions could have been avoided.

Analyzing. AI successes to date have been centered mostly in specialties that are image intensive, such as radiology and pathology.

Optimizing supply and instrument use. Dr Hashimoto notes that AI-driven video analysis is being used to identify the surgical instruments used during a procedure, which can help with inventory and future purchasing decisions.

Staffing. Staffing is partially predicated on workflow, so the deeper understanding of complexity, numbers, and lengths of cases that AI can bring could help OR leaders better match staffing needs.

“It’s really critically important that everybody on the clinical care team is willing to work with developers to provide the context of what actually matters when it comes to improving patient outcomes, logistics, and day-to-day workflow,” Dr Hashimoto says. He adds that much of the data generated by AI will be of critical importance to OR leaders.