Collaboration is how we get from AI promise to healthcare practice 

A patient arrives at the emergency department unable to move the right side of their body. A CT scan appears mostly normal at first glance, but a faint irregularity along a blood vessel may indicate the early stages of a stroke. An AI system reviewing the same image can analyze those patterns within minutes and notify the clinical team for further evaluation.

Yet hospitals often cannot use systems like this at scale because Medicare’s payment rules were built for traditional procedures rather than software that provides real-time computational analysis. It can cost developers millions of dollars and take years for Medicare to grant reimbursement for this kind of service.

AdvaMed engaged with members of Congress and other policymakers to help shape the Health Tech Investment Act, which would establish a clearer, more stable payment pathway for FDA-cleared AI-enabled medical devices, so hospitals can consider adopting tools that provide clinically significant information that would otherwise be unavailable. This initiative reflects our larger focus on collaboration between the public and private sectors. To move from vision to routine practice, Congress will need to help establish clear and predictable payment pathways for AI-enabled services, help create a stable regulatory and reimbursement framework for algorithm-based health care services, and support implementation funding and technical assistance. That foundation will allow hospitals, medtech companies, startups, and technology players to collaborate and bring validated tools into everyday care for patients and clinicians.

Shaping the next generation of healthcare is an ambitious undertaking, and experience shows that breakthroughs at this scale almost never emerge from a single field. For example, the smartphones in our pockets required advances in telecommunications, computing, and user-centered design, while MRI evolved through collaboration among physicists, engineers, and radiologists.

Over the past two years, as I have served as chair of AdvaMed Digital Health Tech, cross-disciplinary collaboration has been our focus. The group brings together organizations across technology and medtech to help transition AI pilots into dependable infrastructure, linking companies such as GE HealthCare, Johnson & Johnson, Medtronic, Stryker, and ResMed with platforms like Apple, Google, Microsoft, Amazon, and Verily.

Four areas illustrate how this collaboration has translated into meaningful progress.

  1. Regulation: Traditional medical device frameworks assume a product remains essentially fixed after clearance. Modern AI systems, however, are designed to evolve. If every model update required a full new regulatory review, useful enhancements could be delayed. If updates occurred without oversight, unverified changes could introduce risk.

    Congress created and the U.S. Food and Drug Administration implemented the Predetermined Change Control Plan (PCCP) to help address this balance. A PCCP allows a company to define in advance which elements of an AI model may change, how those changes will be implemented and tested, and what performance thresholds will apply. The agency can authorize that plan as part of the original device authorization so that updates meeting approved criteria may proceed without a new submission.

    AdvaMed’s digital health community treated PCCPs as an operational test case, with board members sharing examples of how they structure plans, which changes are most significant, and where clarity is still needed. This input supported comment letters, technical discussions, and public workshops with FDA focused on making adaptive algorithms workable within the current regulatory structure. This collaboration is helping build a future in which patients and clinicians can rely on AI tools that are updated within a regulated, transparent, and predictable framework.

  2. Reimbursement and health care delivery: Medicare’s existing payment system was built for procedures, visits, and devices, not algorithm-based health care services (ABHS), and it requires modernization. The Centers for Medicare and Medicaid Services (CMS) has not formalized a payment pathway for ABHS and instead provides separate payment for a limited number of ABHS on a case-by-case basis. As a result, developers of ABHS face an uncertain reimbursement landscape, and providers considering these services must weigh unpredictable payment policies against the benefits of tools that improve patient care. This lack of predictability makes it harder to deploy innovative, clinically significant services.

    The AdvaMed board responded by focusing on AI tools that are supported by strong clinical evidence and that allow physicians to develop treatment plans that might not have been possible using earlier methods. Our community helped establish the Congressional Digital Health Caucus and used case studies to describe how AI can integrate into healthcare workflows. This is critical because, for providers and patients, appropriate alignment of reimbursement can determine whether tools remain small pilots or become part of routine practice.

  3. Data stewardship and privacy: AI systems rely on data, but privacy, transparency, and trust are essential in healthcare. Hospitals must navigate overlapping privacy laws such as HIPAA and emerging state statutes, complex data-use agreements, and legacy systems that do not share consistent formats. As a result, AI projects often require custom negotiations around privacy, liability, and integration.

    AdvaMed, working with Accenture and member companies, grouped these challenges into four categories: regulatory and legal requirements, trust and risk considerations, technical interoperability gaps, and incentive misalignment. This structure guided the development of AdvaMed’s Principles for Data Access and Privacy, AI data-access principles, and a data-driven technologies section within the AdvaMed Code of Ethics.

    These resources are designed for routine use. They encourage organizations to specify who can access which data, for what purpose, under what safeguards, and what rights apply to deidentified or aggregated data. They also emphasize practical approaches, such as running models at the edge so protected health information remains on the device, and treating privacy and transparency as design requirements.

    For clinicians and patients, this means hospitals can more clearly explain how imaging or waveform data will be used, how it is protected, and how models are assessed. This helps reduce uncertainty when adopting tools intended to support tasks such as early physiologic pattern recognition or chronic-condition monitoring.

  4. Cybersecurity: As devices and AI systems become more interconnected, cybersecurity risks tend to increase. AdvaMed’s cybersecurity work addresses this across the product lifecycle by convening regulators, manufacturers, and hospital security leaders through regular meetings and an annual Cybersecurity Summit.

    From these discussions, AdvaMed developed a cyber roadmap that integrates security earlier in design, improves information-sharing with health delivery organizations, and supports long-term device-management planning. AdvaMed’s AI and data principles likewise emphasize robust controls, continuous monitoring, and clear accountability among vendors and partners, supported by audit trails, rollback capabilities, and shared threat intelligence rather than one-time certifications.

    The core idea is that cybersecurity is fundamental to patient care environments and requires shared responsibility. Well-coordinated practices help ensure that critical digital systems and connected devices remain available and trustworthy during technical issues or cyber events, giving hospitals greater confidence that AI-enabled tools can perform reliably over time.

As we close our final meeting of the year, and as I step down as chair of AdvaMed’s Digital Health Tech community, one conclusion stands out: the most important shift in healthcare AI is not a single innovation but the broader move toward tools that are governed, monitored, and evaluated with the same discipline that applies to other elements of care.

Throughout my career, including my time as Chief Health Informatics Officer at the U.S. Food and Drug Administration, I have seen how government agencies can move quickly to harness advances in technology. To give just one example, the precisionFDA platform has scaled over the last decade to help validate genomic tests, increase confidence in next-generation sequencing methods used to guide cancer therapy and rare disease diagnosis, and support a global community of researchers.

Now we have a similar opportunity: to bring that same discipline of governance, monitoring, and evaluation to the tools we build, so that AI feels less like a disruption and more like part of the essential infrastructure of care delivery. When this happens, the collaborative model developed by medtech and technology companies, clinicians, and policymakers can become a template for any sector learning to make AI both powerful and dependable.