Published on October 21st, 2019 | by Emergent Enterprise
Adopting AI in Healthcare Will Be Slow and Difficult
In all industries, not just in healthcare, the adoption of artificial intelligence is running into obstacles. In this post from the Harvard Business Review, Roger Kahn explains that the challenges in healthcare lie not so much in the technology itself as in regulation and in doctors' and patients' trust in the tech. This is a reminder that technology is not a free pass to implement new processes without being accountable to the checks and balances within an industry. For instance, insurance carriers and legal representatives will need to be able to establish culpability when an AI diagnosis goes wrong. These challenges are surmountable, but they need to be solved before any adoption.
Artificial intelligence, including machine learning, presents exciting opportunities to transform the health and life sciences spaces. It offers tantalizing prospects for swifter, more accurate clinical decision making and amplified R&D capabilities. However, open issues around regulation and clinical relevance remain, causing both technology developers and potential investors to grapple with how to overcome today’s barriers to adoption, compliance, and implementation.
Here are key obstacles to consider and how to handle them:
Developing regulatory frameworks. Over the past few years, the U.S. Food and Drug Administration (FDA) has been taking incremental steps to update its regulatory framework to keep up with the rapidly advancing digital health market. In 2017, the FDA released its Digital Health Innovation Action Plan to offer clarity about the agency’s role in advancing safe and effective digital health technologies, and addressing key provisions of the 21st Century Cures Act.
The FDA has also been enrolling select software-as-a-medical-device (SaMD) developers in its Digital Health Software Precertification (Pre-Cert) Pilot Program. The goal of the Pre-Cert pilot is to help the FDA determine the key metrics and performance indicators required for product precertification, while also identifying ways to make the approval process easier for developers and help advance healthcare innovation.
Most recently, in September the FDA released its “Policy for Device Software Functions and Mobile Medical Applications” — a series of guidance documents that describe how the agency plans to regulate software that aids in clinical decision support (CDS), including software that utilizes machine-learning-based algorithms.
In a related statement from the FDA, Amy Abernethy, its principal deputy commissioner, explained that the agency plans to focus regulatory oversight on “higher-risk software functions,” such as those used for more serious or critical health circumstances. This also includes software that utilizes machine-learning-based algorithms, where users might not readily understand the program’s “logic and inputs” without further explanation.
An example of CDS software that would fall under the FDA’s “higher-risk” oversight category would be one that identifies a patient at risk for a potentially serious medical condition — such as a postoperative cardiovascular event — but does not explain why the software made that identification.
Achieving FDA approval. To account for the shifting FDA oversight and approval processes, software developers must carefully think through how to best design and roll out their product so it’s well positioned for FDA approval, especially if the software falls under the agency’s “higher risk” category.