Mayo Clinic AI Spots Pancreatic Cancer 3 Years Early
REDMOD, Mayo Clinic's radiomics AI, detects 73% of pancreatic cancers in CT scans that look normal to radiologists - nearly double the rate specialists achieve.

A CT scan from two years ago shows a normal pancreas. A radiologist reviewed it and found nothing. The patient was diagnosed with pancreatic cancer six months ago. According to a study published in Gut in April 2026, REDMOD - Mayo Clinic's radiomics AI - would have flagged that scan. Tested against scans originally interpreted as clean by specialists, the model identifies 73% of pre-diagnostic cases. Radiologists reviewing the same images catch 39%.
TL;DR
- REDMOD detects 73% of pancreatic cancers on CT scans taken at a median of 475 days (about 16 months) before clinical diagnosis
- Specialist radiologists catch only 39% of the same cases on the same scans
- For scans taken more than two years before diagnosis, REDMOD detects 68% vs 23% for radiologists
- Study used nearly 2,000 CT scans across multiple institutions; a prospective clinical trial (AI-PACED) is now running
| Metric | REDMOD | Radiologist |
|---|---|---|
| Detection rate (pre-diagnostic scans) | 73% | 39% |
| Detection rate (scans 2+ years before diagnosis) | 68% | 23% |
| Specificity (correct non-cancer calls) | 88% | not reported |
| Longitudinal consistency on repeat scans | 90-92% | N/A |
| Median lead time before clinical diagnosis | 475 days | N/A |
What Radiomics Sees That Radiologists Don't
The problem with pancreatic cancer staging
Pancreatic cancer kills because it's invisible when it's curable. The five-year survival rate is around 13% - not because surgery and chemotherapy haven't improved, but because most patients reach diagnosis at stage III or IV. About 67,530 Americans are expected to receive a pancreatic ductal adenocarcinoma (PDAC) diagnosis in 2026. By the time the disease causes symptoms, it has almost always spread.
The pancreas looks morphologically normal during the pre-clinical phase. No mass, no duct dilation, nothing to draw a radiologist's eye on an abdominal CT obtained for unrelated reasons - a kidney stone workup, a post-surgical follow-up. That's the practical reality REDMOD is designed to change.
"The greatest barrier to saving lives from pancreatic cancer has been our inability to see the disease when it is still curable."
- Dr. Ajit Goenka, radiologist, Mayo Clinic
Measuring what's invisible
REDMOD stands for Radiomics-based Early Detection MODel. Radiomics is the practice of extracting quantitative features from medical images - density distributions, texture gradients, heterogeneity patterns - that exist in the pixel data but don't register as visual anomalies to a human reader. The model extracts hundreds of these features from each CT scan and converts them into a high-dimensional signature.
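To make that concrete, here is a toy sketch of the kind of features radiomics computes. This is not REDMOD's actual feature set or pipeline - just an illustration, using NumPy on a synthetic patch standing in for a CT region of interest, of first-order density statistics and a simple texture measure pulled straight from pixel data:

```python
import numpy as np

def radiomic_features(roi: np.ndarray) -> dict:
    """Illustrative radiomics-style features from a 2D region of interest.
    Real pipelines compute hundreds of features; these four show the idea:
    quantitative signal that exists in the pixels but isn't a visible mass."""
    # Histogram-based disorder of the density distribution
    hist, _ = np.histogram(roi, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    # Local gradients as a crude texture / heterogeneity measure
    gx, gy = np.gradient(roi.astype(float))
    return {
        "mean_hu": float(roi.mean()),                       # average density
        "std_hu": float(roi.std()),                         # density spread
        "entropy": float(-(p * np.log2(p)).sum()),          # histogram disorder
        "gradient_energy": float((gx**2 + gy**2).mean()),   # texture roughness
    }

# Synthetic 64x64 patch with soft-tissue-like Hounsfield values
rng = np.random.default_rng(0)
roi = rng.normal(40, 10, size=(64, 64))
features = radiomic_features(roi)
```

Production toolkits (PyRadiomics is a widely used open-source example) add shape descriptors, gray-level co-occurrence textures, and wavelet-filtered variants; the model then learns which combinations of such features separate pre-diagnostic scans from controls.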
The key training challenge was deliberately difficult: the model was built on CT scans from patients who were healthy at the time of scanning but later received a PDAC diagnosis. Every training case was, by definition, a scan where no expert could see cancer - and yet REDMOD had to learn what distinguished those scans from controls. The study used nearly 2,000 CT scans sourced across multiple institutions, tested against varying hardware and imaging protocols to verify the model doesn't just fit one scanner type.
The Study Numbers in Detail
Detection rates and timing
The headline comparison - 73% vs 39% - comes from scans obtained at a median of 475 days before eventual diagnosis. That's roughly 16 months. Both REDMOD and the radiologists were working on images that, at the time they were taken, weren't flagged for concern.
The gap is wider when the time horizon extends. For CT scans obtained more than two years before clinical diagnosis, REDMOD's detection rate holds at 68%. Radiologist detection drops to 23%. That specific window - catching cancer three years out - is where the survival benefit would be largest, because surgery is still viable, the disease hasn't metastasized, and chemotherapy works better on earlier stages.
Specificity sits at 88%, meaning 12% of healthy patients in the study were incorrectly flagged. For a test set of 430 controls, that's roughly 52 false-positive calls.
Consistency across repeat scans
One design criterion the study tested explicitly was longitudinal stability: would REDMOD produce consistent predictions if the same patient had two CT scans months apart? The answer is yes, at 90-92% consistency. That matters for any surveillance program where high-risk patients - people with new-onset diabetes, or familial predisposition - get periodic scans. A model that flips its prediction between visits is useless in clinical practice, however good its population-level detection rate.
REDMOD processes abdominal CT scans already obtained for other reasons, without requiring specialized imaging or annotation.
Why This Differs From Prior AI Cancer Detection Claims
AI cancer detection announcements appear regularly, and many don't survive contact with independent replication. REDMOD's multi-institutional design - confirmed across scanners with different manufacturers, imaging protocols, and patient populations - is one of the things that sets this apart from single-center studies that perform well in development and poorly everywhere else.
The model also targets a specific gap in existing screening infrastructure. Mammography and colonoscopy work because there's a visible target and a high-risk population willing to undergo dedicated screening. Pancreatic cancer has neither: the organ is anatomically inaccessible to endoscopy, and mass CT screening for pancreatic cancer isn't currently cost-justified at population scale. REDMOD's value proposition is different - it's a second-pass layer on scans that are already happening, not a new screening modality requiring new workflows.
Standard abdominal CT scanners serve as REDMOD's input. No specialized imaging protocol is required.
What It Does Not Tell You
The test set is small by clinical standards
63 pre-diagnostic cases is a reasonable research sample but a thin clinical basis. Small test sets tend to produce optimistic performance numbers. The 73% figure was computed on cases where ground truth was known in advance. Real-world deployment means working with patients whose eventual diagnosis is uncertain, scan quality that varies more than institutional datasets capture, and comorbidities that shift how the pancreas looks on CT.
Retrospective studies have structural limits
In a retrospective study, you know who got cancer and who didn't before you analyze the scans. You feed the model cases where the answer exists. This establishes that the signal is detectable in the imaging data - a genuinely important result - but it doesn't tell you how the model performs when it operates blind, in real time, on a stream of incoming patients.
The AI-PACED trial (Artificial Intelligence for Pancreatic Cancer Early Detection), currently enrolling patients, is the prospective test. It measures early detection rates, false-positive frequency, time to follow-up, and downstream clinical outcomes. Those numbers will matter more than anything in the Gut paper.
The false-positive burden needs context
An 88% specificity sounds high until you think about deployment scale. If REDMOD is integrated into abdominal CT review for a high-risk population of even 500,000 patients per year in the US, a 12% false-positive rate means 60,000 unnecessary follow-ups annually. Each one involves additional imaging, patient anxiety, and potentially an endoscopic ultrasound or biopsy for a healthy pancreas. Whether that cost is acceptable depends on how many cancers are caught early enough to save lives - which is exactly what AI-PACED is measuring.
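The arithmetic is easy to sanity-check. A minimal sketch, assuming the study's 88% specificity carries over to deployment (a big assumption a retrospective study can't guarantee) and using the hypothetical 500,000-patient screening volume from the text:

```python
# Back-of-envelope false-positive burden from the reported specificity.
# Assumes the retrospective 88% figure holds prospectively - untested.
specificity = 0.88
false_positive_rate = 1 - specificity

study_controls = 430            # non-cancer scans in the study's test set
screened_per_year = 500_000     # hypothetical US high-risk volume

study_false_positives = round(study_controls * false_positive_rate)
annual_false_positives = round(screened_per_year * false_positive_rate)

print(study_false_positives)    # ~52 false alarms in the study's controls
print(annual_false_positives)   # 60,000 unnecessary follow-ups per year
```

The point of the exercise: a specificity that looks excellent on a 430-patient control set translates into tens of thousands of workups at population scale, which is why the prospective trial tracks false-positive frequency directly.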
For a broader look at where AI is being deployed in clinical settings, our best AI tools for healthcare 2026 roundup covers what's available today.
REDMOD's numbers are real and the multi-institutional validation is solid. A 73% detection rate against 39% for specialists, on scans that look normal, is a meaningful gap - and the 68% vs 23% comparison for the two-year window is where the survival impact lives. The AI-PACED trial results will show whether the retrospective signal translates into prospective clinical benefit. Until then, the Gut study is the most rigorous public evidence that this detection window exists and that AI can read it from standard CT.
Sources:
- Mayo Clinic AI Detects Pancreatic Cancer Up to 3 Years Early - Decrypt
- AI Model Can Detect Very Early Pancreatic Cancer from CT Scans - Inside Precision Medicine
- Mayo Clinic AI Detects Pancreatic Cancer Up to 3 Years Early - bioengineer.org
- AI detects pancreatic cancer up to 3 years earlier, Mayo study shows - FOX 9
