More than algorithms: an analysis of safety events involving ML-enabled medical devices reported to the FDA
Presenters
Moderator
Statement of Purpose
As machine learning (ML)-enabled technologies are introduced into clinical care, it is crucial to evaluate their safety and efficacy. While the benefits are well documented in the literature, far less is known about safety. Initial safety research centers on the limitations of machine learning, such as its black-box nature and susceptibility to data biases, and on case studies of particular events.
The present study sought to extend this safety research with a systematic analysis of adverse events involving ML-enabled medical devices captured as part of the Food and Drug Administration's post-market surveillance program. By analyzing these adverse events as artefacts of the real-world implementation of ML, we demonstrate the need to broaden our perspective of safety beyond algorithms. Most safety events involved the data used by medical devices, whereas problems with the use of ML devices were four times more likely to result in harm.
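For context on the data source, the sketch below illustrates one way adverse event reports of this kind could be retrieved programmatically from the FDA's openFDA device adverse event endpoint, which serves post-market surveillance (MAUDE) reports. This is a minimal sketch under stated assumptions: the search expression and device name are illustrative placeholders, and the study itself is not described as using this particular query.

```python
# Illustrative sketch: pulling device adverse event reports from openFDA.
# The device name in the query is a hypothetical placeholder, not the set of
# ML-enabled devices analysed in the study.
import requests

OPENFDA_DEVICE_EVENT_URL = "https://api.fda.gov/device/event.json"


def fetch_device_events(search_query, limit=100):
    """Return adverse event reports matching an openFDA search expression."""
    response = requests.get(
        OPENFDA_DEVICE_EVENT_URL,
        params={"search": search_query, "limit": limit},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("results", [])


if __name__ == "__main__":
    # Placeholder phrase search on the device name field of the device/event schema.
    query = 'device.openfda.device_name:"radiological image processing"'
    for report in fetch_device_events(query, limit=10):
        print(report.get("report_number"), report.get("event_type"))
```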
Learning Objectives
- Understand the types of safety problems associated with the use of machine learning (ML)-based decision support systems, and the consequences of those problems.
- Identify how and where safety problems can occur when ML systems are used by clinicians and consumers.
- Formulate approaches to improve safe design, implementation and use of ML systems.