Announcing research on multi-agent AI architecture for clinical decision support in perinatal care

A nurse helping a mother in a delivery room

Perinatal care sits at the intersection of profound human connection and clinical complexity. Many decisions carry the weight of two lives, mother and child, bound together in one of healthcare's most fundamental relationships. Yet clinicians must shoulder this responsibility while managing torrents of fragmented data, where critical insights may be scattered across systems and every minute counts toward outcomes that echo through generations.

Electronic medical records (EMRs) capture structured data such as vitals and labs, fetal monitoring systems generate continuous waveforms, and clinical guidelines are buried within PDFs or institutional protocols. Yet in the middle of a busy shift, a labor and delivery physician or nurse must synthesize all of this, alongside personal judgment and team knowledge, into rapid decisions.

The cognitive burden is enormous. Onboarding a new physician at shift change can take more than half an hour, while handoffs risk missing critical details. The result is an environment that demands precision from clinicians who are too often provided disjointed and inefficient tools.

GE HealthCare is conducting research1 on whether AI prototypes may be able to generate near-real-time patient summaries, present protocol excerpts, and provide informational support in simulated research settings. These prototypes are at the concept stage and are not cleared or approved for sale as a product in any market. The interface, a conversational chatbot, is being explored as a research prototype intended to feel more like a team huddle than a software interaction. The research described here uses de-identified or simulated data where applicable; outputs may be inaccurate or incomplete and require clinician review.

Consider the kinds of questions that arise during a shift. A physician may ask, “How is my patient’s blood pressure trending?” In demonstrations, the prototype generates example plots of recent values and displays thresholds cited from guidelines.

For example, a nurse might ask, “What is uterine tachysystole?” The system returns a definition with an inline citation and a link to the relevant protocol for verification.

These grounded, context-aware exchanges are being studied to explore whether they can reduce the effort of searching across systems and help present clinical knowledge in a more accessible way during simulations. Rather than requiring clinicians to adapt to disparate systems, the research seeks to adapt to the clinician, providing information in the flow of conversation, in the language of care. At all times, outputs require clinician confirmation and do not constitute recommendations.

A conversational chatbot is being explored as a research prototype intended to feel more like a team huddle than a software interaction in perinatal settings

Inside the multi-agent architecture

At its core, the research uses a multi-agent architecture built on a central reasoning loop. Each query begins with preprocessing agents that build the prompt and bind it to patient context. A primary reasoning agent evaluates whether the information at hand is sufficient or whether specialized agents must be called. These include protocol search agents operating over vector databases created from hospital PDFs, patient data query agents across perinatal and EMR sources, time-series extraction agents for vitals, and plotting agents for waveforms and trends.

In demonstrations, outputs are returned for another round of reasoning before post-processing, which is designed to validate citations, apply quality checks, and assemble citation-verified, formatted responses. This approach aims to support a structured path from clinician query to citation-backed output.
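The loop described above can be sketched in a few dozen lines. This is a minimal illustration, not GE HealthCare's implementation: the agent names, keyword heuristics, and return shapes are all hypothetical stand-ins for the LLM reasoning, vector retrieval, and EMR queries the research describes.

```python
# Illustrative sketch of a central reasoning loop with specialized agents.
# Every name and heuristic here is hypothetical.

def protocol_search(query, context):
    # Stand-in for vector-database retrieval over hospital protocol PDFs.
    return ("Tachysystole: more than 5 contractions in 10 minutes.",
            ["protocols.pdf#section-3"])

def patient_data_query(query, context):
    # Stand-in for a query against perinatal and EMR data sources.
    return (f"Latest BP for {context['patient_id']}: 142/95 mmHg",
            ["emr:vitals"])

AGENTS = {"protocol": protocol_search, "patient_data": patient_data_query}

def primary_reasoning_agent(prompt):
    """Decide which specialized agents to call (a keyword stub standing in
    for an LLM's reasoning step)."""
    needed = []
    if "what is" in prompt.lower():
        needed.append("protocol")
    if "blood pressure" in prompt.lower():
        needed.append("patient_data")
    return needed

def handle_query(query, context):
    # Preprocessing: build the prompt and bind it to patient context.
    prompt = f"[patient={context['patient_id']}] {query}"
    answers, citations = [], []
    for name in primary_reasoning_agent(prompt):
        text, cites = AGENTS[name](query, context)
        answers.append(text)
        citations.extend(cites)
    # Post-processing: refuse to emit uncited text.
    if not citations:
        return {"answer": "Out of scope for this research prototype.",
                "citations": []}
    return {"answer": " ".join(answers), "citations": citations}
```

The post-processing guard mirrors the design goal stated above: no response leaves the loop without a citation trail a clinician can verify.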

Four primary concept demonstrations show how state-of-the-art AI research techniques may work together in practice.

1.    Real-time patient summarization (concept demonstration)

In a labor and delivery unit, each patient generates multiple simultaneous data streams: vital signs updating continuously, fetal monitoring waveforms, clinician annotations, and clinical events occurring throughout the shift. The real-time patient summarization prototype is being studied to explore whether it can process multiple data streams and generate draft overviews.

This would operate through a summary endpoint, a central processing point where streaming data is consumed and role-specific outputs could be generated. The system is designed to tailor summaries for different clinical roles: broad overviews for general staff who need situational awareness across the unit, focused handoff summaries for nurses taking over patient care, and detailed views for physicians making treatment decisions.
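A role-tailored summary endpoint might be sketched as below. The class name, event schema, and role rules are illustrative assumptions, not the research prototype's actual design; a real system would generate prose with an LLM rather than format lines.

```python
from collections import deque

class SummaryEndpoint:
    """Illustrative summary endpoint: buffers recent streaming events and
    renders role-specific draft overviews (all names hypothetical)."""

    def __init__(self, window=20):
        self.events = deque(maxlen=window)  # keep only the newest events

    def ingest(self, event):
        # event: {"time": str, "kind": "vital" | "event", "detail": str}
        self.events.append(event)

    def summarize(self, role):
        recent = list(self.events)
        if role == "physician":
            # Detailed view: every recent data point and event.
            lines = [f"{e['time']} {e['kind']}: {e['detail']}" for e in recent]
        elif role == "nurse":
            # Handoff view: clinical events only, not raw vitals.
            lines = [f"{e['time']} {e['detail']}"
                     for e in recent if e["kind"] == "event"]
        else:
            # Unit-wide awareness: just the latest development.
            lines = [f"latest: {recent[-1]['detail']}"] if recent else []
        return "\n".join(lines)
```

One endpoint, three renderings of the same buffer: the role parameter, not a separate pipeline, decides how much detail each reader sees.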

2.    Protocol-aware question answering (concept demonstration)

Hospital protocols for managing obstetric emergencies exist as long, static PDFs that are difficult to navigate during time-sensitive situations. We’re exploring research on protocol-aware question answering to see if AI could make these guidelines easier to use by breaking the information into smaller sections and turning them into a searchable knowledge base.

When a clinician asks a question, the system is designed to retrieve the relevant sections, display their sources, and filter out non-relevant details. For step-by-step guidance, it could lay out instructions in a clear sequence, with the original reference included. As an example of how this might play out in a labor and delivery ward, a nurse could receive cited steps for managing hypertension without needing to dig through a lengthy document.2
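The chunk-and-retrieve idea can be shown with a small sketch. For simplicity this uses word-overlap scoring where the research describes a vector database over embedded PDF chunks; the document text, function names, and citation format are invented for illustration.

```python
def chunk_protocol(doc_name, text, size=40):
    """Split a protocol document into word-window chunks, each tagged
    with a citation back to its position in the source."""
    words = text.split()
    return [
        {"cite": f"{doc_name}#chunk{i // size}",
         "text": " ".join(words[i:i + size])}
        for i in range(0, len(words), size)
    ]

def retrieve(question, chunks, top_k=1):
    """Rank chunks by word overlap with the question -- a lexical
    stand-in for embedding similarity search."""
    q = set(question.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(q & set(c["text"].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

# Hypothetical protocol text, for illustration only.
chunks = chunk_protocol(
    "hypertension.pdf",
    "Severe hypertension: systolic >= 160. Administer labetalol per order set. "
    "Postpartum hemorrhage: activate massive transfusion protocol.",
    size=10,
)
hit = retrieve("steps for severe hypertension management", chunks)[0]
```

Each retrieved chunk carries its `cite` tag, which is what lets the interface show an inline citation and link back to the source section for verification.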

3.    Decision support for delivery timing (concept demonstration)

When an obstetric team faces a patient showing signs of preeclampsia, they must decide whether to induce labor or continue monitoring. Decision support for delivery timing is being studied as a way to show how multiple reasoning steps can work together.

In simple terms, a planning component would look at patient data alongside potential delivery dates, while separate risk assessments check the health of the mother and baby on their own. These pieces of information then feed into a synthesis step that is designed to assemble data and cited guidelines in a structured view for clinician review. In demonstrations, rule checks compare inputs to guideline thresholds. As noted above, outputs require clinician confirmation and do not constitute recommendations, and queries outside the scope of the existing research are declined to maintain the integrity of the study’s framework.
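The rule-check-plus-synthesis pattern might look like the sketch below. The threshold values, field names, and guideline citations are placeholders invented for illustration; real thresholds come from cited institutional protocols, and the output is a review aid, not a recommendation.

```python
# Hypothetical guideline thresholds, for illustration only.
THRESHOLDS = {"systolic_max": 160, "diastolic_max": 110, "proteinuria_min": 300}

def risk_flags(patient):
    """Rule checks: compare patient inputs to guideline thresholds and
    return (flag, citation) pairs."""
    flags = []
    if patient["systolic"] >= THRESHOLDS["systolic_max"]:
        flags.append(("severe systolic hypertension", "guideline#4.2"))
    if patient["diastolic"] >= THRESHOLDS["diastolic_max"]:
        flags.append(("severe diastolic hypertension", "guideline#4.2"))
    if patient["proteinuria_mg"] >= THRESHOLDS["proteinuria_min"]:
        flags.append(("significant proteinuria", "guideline#5.1"))
    return flags

def synthesize(patient, candidate_dates):
    """Synthesis step: assemble flags, citations, and candidate delivery
    dates into a structured view for clinician review."""
    flags = risk_flags(patient)
    return {
        "flags": [f for f, _ in flags],
        "citations": sorted({c for _, c in flags}),
        "candidate_dates": candidate_dates,
        "disposition": "clinician review required",
    }
```

Note that the synthesis never emits a decision: the `disposition` field is fixed, reflecting the constraint that outputs require clinician confirmation.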

4.    Multimodal integration (concept demonstration)

Throughout a labor and delivery unit, clinical data exists in different formats: structured vitals in monitoring systems, continuous waveforms from fetal monitors, and text in EMR notes and protocols. Multimodal integration research highlights the orchestration of these diverse data types.

Data ingestion modules are designed to handle structured vitals, essentially the organized numbers from monitors like heart rate and oxygen levels. Waveform processing is designed to manage continuous monitoring streams, the constantly flowing lines of data like those wavy patterns on a fetal heart monitor. Text processing is designed to parse EMR notes and protocols, reading and understanding the written medical records and hospital guidelines.

If a clinician were to request “plot blood pressure,” task routing, like a smart switchboard operator, would activate time-series extraction and visualization, pulling out the blood pressure readings over time and turning them into a visual graph. Protocol queries might trigger document retrieval and formatting, finding the right medical guidelines and presenting them clearly. The chat endpoint graph, the system’s conversation interface, includes a coordinator process intended to orchestrate modules, helping ensure that the different parts of the system work smoothly together.
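The switchboard analogy reduces to a small routing function. This sketch uses keyword matching where the chat endpoint graph would use a learned coordinator; the handler names, request shapes, and sample data are assumptions made for illustration.

```python
import re

def plot_vitals(request, data):
    # Stand-in for the time-series extraction + visualization path.
    series = data["vitals"][request["signal"]]
    return {"type": "plot", "signal": request["signal"], "points": len(series)}

def fetch_protocol(request, data):
    # Stand-in for document retrieval + formatting.
    return {"type": "document",
            "match": data["protocols"].get(request["topic"], "not found")}

def route(message):
    """Coordinator step: map a free-text request to a handler module
    (a keyword stub for the chat endpoint's routing graph)."""
    if re.search(r"\bplot\b", message, re.I):
        signal = ("blood_pressure" if "blood pressure" in message.lower()
                  else "heart_rate")
        return plot_vitals, {"signal": signal}
    return fetch_protocol, {"topic": message.lower().strip("? ")}

def chat_endpoint(message, data):
    handler, request = route(message)
    return handler(request, data)
```

The point of the coordinator is that modalities stay decoupled: the plotting path never parses documents, and the retrieval path never touches waveforms, yet both answer through the same conversational entry point.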

This conversational AI research represents GE HealthCare’s exploration of intelligent systems intended to assist clinical teams. By transforming scattered medical data into clearer, citation-backed insights that clinicians can review, the research explores conversational AI prototypes for perinatal care that are grounded in source material during demonstrations. The ultimate vision is a world where mothers and infants receive the best possible care at this moment of profound transformation in their lives.


  1. All research and data in this article are “concept only” or technology in development, as indicated. May never become a product. Not for sale. Not cleared or approved by the U.S. FDA or any other global regulator for commercial availability.
  2. For informational research use only. This is not a substitute for institutional protocols or clinician judgment.