Artificial intelligence is already embedded across healthcare, from reading medical scans to predicting life-threatening infections hours before symptoms appear. As of December 2025, the U.S. Food and Drug Administration has cleared over 1,350 AI-enabled medical devices, and the applications extend well beyond diagnostics. Here are the most significant ways AI is being used in healthcare today, along with the real-world results behind them.
Medical Imaging and Diagnostics
The largest concentration of FDA-cleared AI tools falls in radiology, where algorithms analyze X-rays, CT scans, and MRIs to flag abnormalities that a human eye might miss. These systems don’t replace radiologists. They work alongside them, essentially giving clinicians a second opinion in real time.
The performance boost is measurable. A systematic review of cancer imaging found that when clinicians used AI assistance, their ability to correctly identify cancers (what statisticians call sensitivity) rose from 67% to 79%, and their accuracy in ruling out false alarms (specificity) improved from 82% to 87%. For lung cancer specifically, AI-assisted CT reading caught 89% of cancers compared to 78% without AI help. On MRI studies, sensitivity jumped from 71% to 87% when AI was involved. These aren’t dramatic, Hollywood-style numbers, but in a field where catching a tumor a few months earlier can change outcomes, they matter.
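To make those percentages concrete, here is a minimal sketch of how the two metrics are computed from a confusion matrix. The patient counts are invented purely for illustration; only the 67%/79% rates come from the review cited above.

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of actual cancers that were correctly flagged."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of healthy scans that were correctly cleared."""
    return true_neg / (true_neg + false_pos)

# Hypothetical cohort of 100 scans with cancer (counts are illustrative).
# Without AI assistance: 67 of 100 cancers caught.
print(sensitivity(67, 33))   # 0.67
# With AI assistance: 79 of 100 caught.
print(sensitivity(79, 21))   # 0.79
```

The 82%-to-87% improvement in ruling out false alarms is the same calculation applied to the healthy scans instead of the cancerous ones.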
Predicting Sepsis and Other Emergencies
Sepsis kills roughly 270,000 Americans each year, and every hour of delayed treatment raises the risk of death. AI systems now monitor patients’ electronic health records continuously, scanning for subtle combinations of changes in vital signs, lab results, and clinical notes that signal sepsis is developing.
These tools can flag the condition hours before a clinician would recognize it on their own. In one evaluation across nine hospitals over two years, implementing an AI sepsis prediction algorithm led to a 39.5% reduction in in-hospital mortality, a 32% shorter hospital stay, and a 23% drop in 30-day readmissions. In neonatal units, similar models identified sepsis in infants hours before clinical recognition was possible. The core value here is time: AI buys clinicians a window to intervene before organ damage sets in.
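The "subtle combinations" idea can be sketched in miniature. Real sepsis models are machine-learned from thousands of EHR variables, so the hand-written score below is only a toy, with thresholds loosely modeled on the classic SIRS screening criteria, but it shows the basic pattern: re-score every new reading and alert when enough warning signs stack up.

```python
def sepsis_risk_flags(heart_rate: float, resp_rate: float,
                      temp_c: float, systolic_bp: float) -> int:
    """Count how many vital signs fall outside rough normal ranges.

    Toy illustration only; production systems learn these patterns
    from data rather than using fixed hand-picked thresholds.
    """
    flags = 0
    if heart_rate > 90:                # tachycardia
        flags += 1
    if resp_rate > 20:                 # rapid breathing
        flags += 1
    if temp_c > 38 or temp_c < 36:     # fever or hypothermia
        flags += 1
    if systolic_bp < 100:              # falling blood pressure
        flags += 1
    return flags

# A continuous monitor would run this on every new reading and alert
# clinicians once the count crosses a threshold, say 2 or more flags.
reading = {"heart_rate": 112, "resp_rate": 24, "temp_c": 38.6, "systolic_bp": 95}
print(sepsis_risk_flags(**reading))  # 4
```

No single reading here is alarming on its own; it is the combination, tracked continuously, that buys clinicians the early-warning window described above.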
Robot-Assisted Surgery
Surgical robots guided by AI are now used in urology, oncology, orthopedics, and general surgery. The surgeon remains in control, but the AI provides enhanced visualization, tremor filtering, and real-time guidance that improves precision during delicate procedures like tumor removal or implant placement.
Across studies, AI-assisted robotic surgeries showed a 25% reduction in operative time and a 30% decrease in complications during the procedure itself. Surgical precision improved by roughly 40% in tasks like targeting tumors and placing spinal screws. For patients, the practical difference is a shorter hospital stay, typically one to three days less than with conventional surgery, along with lower pain scores and faster recovery. In urology and oncology procedures specifically, complication rates dropped from about 10% with manual techniques to around 4% with robotic assistance.
Drug Discovery
Developing a new drug traditionally takes 10 to 15 years and costs over $1 billion. AI is compressing the earliest stages of that process dramatically. Instead of testing thousands of chemical compounds one at a time, AI models can simultaneously analyze genomic, protein, and chemical data to identify promising drug candidates in a fraction of the time.
The most striking example so far comes from Insilico Medicine, which used AI to identify a new drug target for a serious lung disease called idiopathic pulmonary fibrosis and advance a candidate into preclinical trials in just 18 months, a process that normally takes four to six years. The computational cost was around $150,000, not counting laboratory validation. AI protein-folding tools have also transformed the field. The AlphaFold database now contains predicted structures for over 200 million proteins, giving researchers a detailed map of the molecular shapes that drugs need to latch onto. A related database, the ESM Metagenomic Atlas, holds predictions for over 700 million protein structures from microorganisms. These resources let scientists identify and validate drug targets that previously had no known structure, which is particularly valuable for cancers driven by poorly understood proteins.
Wearable Devices and Remote Monitoring
AI algorithms running on smartwatches and wearable sensors can now continuously track heart rate, blood pressure, oxygen levels, and heart rhythm patterns. The goal isn’t just passive data collection. These systems look for abnormal patterns that suggest cardiovascular stress or an approaching event like a heart attack, cardiac arrest, or stroke.
The concept is straightforward: instead of waiting for a patient to arrive at an emergency room, the AI detects warning signs in real time and sends an alert during the critical window when intervention can make the biggest difference. These platforms work in homes, nursing facilities, and hospitals, using data from electrocardiogram sensors, light-based pulse monitors, and motion trackers. The field is still maturing, and many real-world deployments haven’t yet reported detailed accuracy metrics, but the infrastructure is already in place and expanding rapidly.
Mental Health Chatbots
AI-powered chatbots that deliver cognitive behavioral therapy techniques are being used as accessible, low-cost mental health tools, particularly among college students and people on waitlists for a therapist. These aren’t replacements for professional care, but they provide structured support between sessions or when no therapist is available.
Clinical trials show real, if modest, results. Woebot, one of the most studied chatbots, reduced depression scores by 22% in just two weeks among college students, while a control group showed no significant change. In another trial, a chatbot group saw anxiety drop by 16% while the control group’s anxiety actually increased by 6%. Across nine studies reviewed, 89% reported statistically significant improvement in at least one mental health outcome. Daily check-ins with chatbots produced greater symptom reduction than biweekly interactions, suggesting that consistency is part of what makes them effective.
Clinical Documentation
Physicians spend a staggering amount of time on paperwork. AI “ambient scribes” now listen to doctor-patient conversations and automatically generate clinical notes, freeing clinicians to focus on the person in front of them.
Studies show these tools cut documentation time by 20% to 30% on average. In a study of 45 clinicians across 17 specialties, an ambient AI scribe reduced documentation time by about 2.6 minutes per appointment and cut after-hours charting work by 29%. Allied health professionals like physiotherapists and occupational therapists saw a 33% reduction in documentation time along with higher productivity and satisfaction, with no negative effect on patient experience. The results aren’t uniform, though. One study found AI scribes saved only 34 seconds per note on average, with wide variation between individual doctors. Some clinicians benefit enormously, while others barely notice the difference.
Bias Remains a Serious Problem
AI in healthcare carries real risks, and algorithmic bias is the most documented. These systems learn from historical data, and that data reflects existing inequities. Skin cancer detection algorithms, for example, are often trained on datasets where only 5% to 10% of images come from Black patients. The result: diagnostic accuracy for Black patients drops to roughly half of what developers originally reported.
In one widely cited case, an AI algorithm used healthcare spending as a proxy for how sick patients were. Because the healthcare system historically spends less on Black patients, the algorithm falsely concluded that Black patients were healthier than equally sick white patients. It then assigned higher priority to white patients for treatment of conditions like diabetes and kidney disease, even though Black patients had higher severity scores. These aren’t hypothetical concerns. They are documented failures with direct consequences for patient care, and they underscore why transparency in training data and ongoing auditing of AI systems is essential as adoption accelerates.
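The spending-as-proxy failure is easy to reproduce in miniature. In the sketch below (entirely synthetic numbers, assumed for illustration), patient B is measurably sicker than patient A, but the system historically spent less on B; ranking by spending then prioritizes the wrong patient, while ranking by measured severity does not.

```python
# Synthetic illustration of proxy bias: spending is a flawed stand-in
# for how sick a patient actually is.
patients = [
    {"id": "A", "severity": 5, "past_spending": 12_000},
    {"id": "B", "severity": 8, "past_spending": 6_000},  # sicker, less spent
]

# Biased approach: rank by historical spending.
by_spending = sorted(patients, key=lambda p: p["past_spending"], reverse=True)
print([p["id"] for p in by_spending])  # ['A', 'B'] -> less-sick patient prioritized

# Corrected label: rank by measured illness severity instead.
by_severity = sorted(patients, key=lambda p: p["severity"], reverse=True)
print([p["id"] for p in by_severity])  # ['B', 'A']
```

The bug isn't in the sorting logic; it's in the choice of label. That is why auditing what an algorithm was trained to predict matters as much as auditing its accuracy.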