Editor's Note
Machine learning (ML) models designed to predict patient mortality are falling short when it comes to identifying severe injuries that could lead to death, according to a March 27 report in TechTarget.
The article focuses on research published in Nature Communications Medicine, which found that ML mortality prediction models failed to detect about 66% of critical injuries in hospital settings when trained solely on patient data.
As detailed in the article, researchers tested the accuracy of in-hospital mortality prediction models using multiple ML testing methods, including gradient ascent and neural activation maps, along with publicly available datasets from intensive care units and cancer patients. Despite these advanced testing techniques, the models failed to generate alerts for life-threatening conditions such as bradypnea (abnormally low respiratory rate) and hypoglycemia. Additionally, neural network models produced inconsistent results, such as assigning higher mortality risk to moderate injuries while underestimating the risk associated with severe injuries.
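To illustrate the kind of gradient-based probing described above, the sketch below shows how gradient ascent can be used to search for patient inputs that a mortality model scores as low risk, so that the resulting vital-sign values can be checked against clinical safety ranges. The model architecture, feature choices, and standardized values here are illustrative assumptions, not details from the study.

```python
# Hypothetical sketch: gradient-based probing of a mortality prediction model.
# The model, features, and starting values are illustrative, not from the study.
import torch
import torch.nn as nn

# Toy mortality model over two vital signs: respiratory rate and blood glucose.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

# Start from a roughly normal patient (features standardized for simplicity).
x = torch.tensor([[0.0, 0.0]], requires_grad=True)
optimizer = torch.optim.SGD([x], lr=0.1)

for step in range(100):
    optimizer.zero_grad()
    risk = model(x)          # predicted mortality risk in [0, 1]
    # Minimizing predicted mortality is gradient ascent on predicted survival:
    # the probe drifts toward inputs the model considers safe.
    risk.backward()
    optimizer.step()

# If the probe lands on clinically dangerous vitals (e.g., a very low
# respiratory rate) while the model still reports low risk, the model
# has a blind spot of the kind the researchers describe.
print("probe input:", x.detach().numpy(), "predicted mortality:", model(x).item())
```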
As reported by TechTarget, the findings underscore the need to incorporate medical knowledge into ML model design. Researchers also called for new testing methods that account for clinical responsiveness, which measures how well a model adapts to significant changes in a patient's condition, rather than focusing solely on robustness, which measures a model's ability to withstand noisy data and small perturbations. Current models optimized for robustness may inadvertently become less sensitive to critical shifts in vital signs.
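The distinction between robustness and clinical responsiveness can be expressed as two simple checks, sketched below. The stand-in model, baseline vitals, and thresholds are assumptions for illustration only; the point is that a score which barely moves under measurement noise may also barely move when vitals shift into life-threatening ranges.

```python
# Hypothetical sketch contrasting a robustness check with a responsiveness check.
# predict_mortality, the vitals, and the thresholds are illustrative assumptions.
import numpy as np

def predict_mortality(vitals: np.ndarray) -> float:
    """Stand-in for a trained model: returns a mortality risk in [0, 1]."""
    score = 0.02 * abs(vitals[0] - 16) + 0.005 * abs(vitals[1] - 90)
    return float(1 / (1 + np.exp(-(score - 1))))

baseline = np.array([16.0, 90.0])                     # respiratory rate, glucose
noisy = baseline + np.random.normal(0, 0.5, size=2)   # small measurement noise
critical = np.array([6.0, 40.0])                      # bradypnea + hypoglycemia

# Robustness: clinically meaningless noise should barely change the score.
robust = abs(predict_mortality(noisy) - predict_mortality(baseline)) < 0.05

# Clinical responsiveness: a shift into life-threatening ranges should change it a lot.
responsive = predict_mortality(critical) - predict_mortality(baseline) > 0.2

# This toy score typically passes the robustness check but fails the
# responsiveness check, mirroring the failure mode the researchers flag.
print(f"robust to noise: {robust}, responsive to critical change: {responsive}")
```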
Read More >>