Priority Update:

The Limits of AI in Healthcare: Exploring Ethical and Practical Challenges


Modern conveniences, from washing machines to GPS, simplify our lives. In healthcare, however, new technology has not always streamlined delivery. A prime example is the introduction of Electronic Health Records (EHRs) over the past three decades, which, contrary to early expectations, greatly increased the complexity of medical practice. A consistent complaint among physicians was that EHR systems saddled them with a second job as data-entry clerks, fueling frustration and burnout. Meanwhile, software companies and non-clinical technicians had to manage system maintenance, troubleshoot interoperability problems, fix software glitches, and handle cloud storage access. EHRs also required providers to spend significant money and effort on compliance with HIPAA, HITECH, and other regulations. Ironically, protecting sensitive patient health data was a simpler matter when doctors relied on paper, pens, and manila files.


When it comes to AI, there is undoubtedly ample room for efficiency and simplification in administrative tasks across healthcare. Oracle, for example, is hyping its ‘Clinical Digital Assistant’, an AI-powered aide for ambulatory clinics that automates the documentation process, allowing providers to focus on patient interaction. The software captures comprehensive notes, drafts referrals, and schedules follow-up actions, all while integrating with the Oracle Health EHR platform. Meanwhile, Kaiser Permanente began testing an “augmented intelligence” system that transcribes physician-patient encounters, an innovation that appears to be reducing administrative burdens, improving face-to-face interaction, and leaving doctors happier. One start-up, Akasa, aims to optimize operational efficiency in hospitals by using generative AI to automate tasks related to revenue cycle management.


As AI intersects with clinical decision-making, questions arise about whether generative AI may ultimately complicate healthcare rather than simplify it. What responsibilities do physicians hold when AI influences the differential diagnosis? Will they be obligated to use such AI resources, and what risks do they face when these tools lead to errors? Companies like RealtimeMed are using advanced algorithms that draw on thousands of data points to make lightning-quick treatment recommendations. Aidance and others are improving diagnostics for particular types of diseases (such as lung cancer), helping physicians make more accurate diagnoses. More recently, Eureka Health launched a social media campaign presenting “the world’s first AI doctor,” a platform that offers comprehensive medical consultations, diagnostics, and treatment recommendations. In May, Chinese researchers announced a project dubbed ‘Agent Hospital,’ which aims to train AI doctors alongside human medical students in simulations that mimic real-life scenarios. The goal is to provide ‘on-call robot-docs’ at the virtual hospital capable of treating 3,000 patients a day, with the potential to scale further.

These AI initiatives prompt critical reflection on the role of human healthcare providers. Will standards of care increasingly depend on cutting-edge technologies? The issue gains complexity when the traditional model, in which a human doctor treats one patient at a time, is set against the emerging reality in healthcare systems, which now expect physicians to manage significantly higher patient volumes by leveraging AI and machine learning technologies. While the classic one-to-one doctor-patient dynamic confines the ramifications of errors like misdiagnoses to individual cases, AI technologies expose healthcare to substantially greater risks. Increasingly, physicians are asked merely to “rubber stamp” the conclusions of AI analyses for a growing number of patients. Furthermore, these technologies introduce complex legal considerations regarding fraud and abuse: should cursory physician involvement in the care process continue to be billed as if it were the usual (AI-free) physician-patient service or interaction?


The questions above reflect the extent to which AI technologies are transforming the roles and responsibilities of human health professionals. While AI-empowered ‘conveyor-belt’ healthcare promises new levels of efficiency, it also brings risks. Without adequate human oversight, minor issues and errors within these new frameworks can be replicated across thousands of patient encounters, escalating their impact by orders of magnitude.


AI Regulation Is Still Nascent

AI platforms currently operate in a regulatory gray zone, as healthcare regulators at the state level are ill-equipped to confront the way these technologies are transforming healthcare. Several AI-related federal bills have emerged over the past year, including the Health Technology Act of 2023, sponsored by Rep. David Schweikert, which proposes to amend the Federal Food, Drug, and Cosmetic Act (FDCA) to allow AI and machine learning technologies to prescribe medications, contingent on state authorization and FDA approval. Even if enacted, the bill’s impact is likely to be limited, given that prescribing drugs and devices is ultimately a matter of state law. And although the FDA provides guidance on its regulation of software as a medical device, much of AI in healthcare lies beyond the agency’s reach precisely because it is embedded in physician practice.


States are beginning to awaken to the thorny issues in AI-driven healthcare. The State of Georgia stands alone in having laws specific to AI in healthcare: its House Bill 203, passed in 2023, limits the use of AI in optometric diagnostics and ensures that AI assessments are not the sole basis for prescriptions, emphasizing that they do not replace comprehensive eye exams. H.B. 887, currently pending, seeks further oversight by mandating human review of AI decisions. A handful of other states, including California (S.B. 1120), are considering similar regulations.


Given the limited regulatory framework, companies introducing AI healthcare tools bear the burden of ensuring their products’ safety, efficacy, and ethical use. The healthcare industry would benefit from adopting agreed-upon protocols to ensure safety during this period of under-regulation. It is essential for companies to implement rigorous quality-control protocols that effectively address risks such as coding biases and errors. Additionally, establishing robust internal policies for managing AI discrepancies and ensuring comprehensive training for healthcare professionals are critical steps toward the safe integration of AI technologies into healthcare systems. While AI is exciting in its transformative potential, we would be well served to remember that not all innovations make life simpler.


Authored By:

Harry Nelson, Managing Partner, Nelson Hardiman

Yehuda Hausman, Law Clerk, Nelson Hardiman


Nelson Hardiman LLP

Healthcare Law for Tomorrow

Nelson Hardiman regularly advises clients on new healthcare law and compliance. We offer legal services to businesses at every point in the commercial stream of medicine, healthcare, and the life sciences. For more information, please contact us.