AI in Healthcare: New Report into Regulation

Healthcare research company Paragon Health Institute has issued a significant new report on artificial intelligence (AI) in healthcare in the United States. The paper’s title is Healthcare AI Regulation: Guidelines for Maintaining Public Protections & Innovation. It comes at the end of a year in which state legislatures proposed hundreds of AI-related bills and the federal government promoted expansive cross-agency AI regulation.

The central concern of Paragon’s paper is the prevention of AI misregulation that fails to improve public protections while increasing costs and reducing the medical advances policymakers desire most from AI.

Background

AI is rapidly becoming the focus of conversation in medical device technology. Our pages are not yet peppered with examples, but it’s only a matter of time. Examples such as “Heart Attack Detection Better with AI” paint an obvious picture of the potential for AI in healthcare.

Paragon Health Institute’s newly issued report illuminates the complexities of AI regulation in healthcare. It goes on to propose guidelines that balance public protections with the need for healthcare innovation.

Paragon is a leader in healthcare research and market-based policy proposals, with the objective of improving outcomes in both the public and private sectors. Journalists and healthcare analysts can review Paragon’s latest studies and commentary at paragoninstitute.org/research/.

AI in Healthcare: Report Proposals

Among the more noteworthy proposals is the recommendation that AI regulation specify both the type of AI technology and the healthcare context to which each rule applies. In other words, rules would use granular descriptions such as “artificial neural networks in medical image analysis” instead of vague language like “healthcare AI.” This emphasis on pairing technology definition with medical context recognizes that AI risk is closely tied to these two factors.

Furthermore, the paper recommends that policymakers use existing regulatory agencies to govern AI in healthcare, drawing on established industry experience in preference to a centralized AI office or AI “czar”. The paper argues that such centralization would decrease regulatory awareness of industry-specific considerations and risk duplicative rulemaking between the AI office and other government agencies. Instead, the guidelines advocate ensuring that regulatory agencies have the personnel, internal AI expertise, and resources necessary to perform their duties properly.

The report’s guidelines also touch on matters such as:

  • aligning regulatory efforts with the FDA’s historic work of evaluating software as a medical device;
  • handling scenarios where AI-enabled devices have empirically confirmed benefits but whose mechanism of effect is hard to explain; and
  • concerns around algorithmic discrimination and data privacy in AI-enabled systems.

Author comments

Author and Visiting Research Fellow Kev Coleman comments on the report:

“AI is poised to make remarkable contributions to American healthcare, but these contributions can be jeopardized by a suboptimal regulatory framework. What we need is an approach that preserves safety standards while discouraging rules that benefit the biggest AI vendors while impeding innovative startups from entering the market.”

SOURCE: Paragon Health Institute, PR Newswire

published: December 11, 2024 in: AI, Clinical/Educational, Healthcare, News, Technology, USA
