
AI thought X-rays were connected to eating refried beans or drinking beer

Instead of finding true medical insights, these algorithms sometimes rely on irrelevant factors — leading to misleading results.

by Mihai Andrei
December 13, 2024
Edited and reviewed by Zoe Gordon

Medical imaging is a cornerstone of diagnosis, and artificial intelligence (AI) promises to revolutionize it. With the power to detect features and trends invisible to the human eye, AI could deliver faster and more accurate diagnoses.

But beneath this promise lies a troubling flaw: AI’s tendency to take shortcuts and jump to conclusions.

These shortcuts can lead to misleading and sometimes dangerous conclusions. Take, for instance, algorithms that appear to “predict” from a knee X-ray whether or not someone drinks beer.

An X-ray of a knee joint. Does this look like a beer drinker to you? Image via Wiki Commons.

In a new study, researchers trained convolutional neural networks (CNNs) — one of the most popular types of deep learning algorithms — to perform a bizarre task: predicting whether a patient avoided eating refried beans or drinking beer simply by looking at their knee X-rays. The models did just that, achieving 63% accuracy for predicting bean avoidance and 73% for beer avoidance.
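The article doesn't reproduce the study's training code, but the setup it describes is standard deep learning practice. A minimal PyTorch sketch of this kind of experiment might look like this (the folder layout and the beer/no-beer class names are hypothetical stand-ins, not the study's actual code or data; the real OAI images require a data-use application):

```python
# Minimal sketch: fine-tune a standard CNN to predict a binary,
# medically irrelevant label ("drinks beer") from knee X-rays.
# Hypothetical layout: xrays/train/{beer,no_beer}/*.png
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = ImageFolder("xrays/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")  # ImageNet-pretrained CNN
model.fc = nn.Linear(model.fc.in_features, 2)     # two classes: beer / no beer

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point is precisely that nothing in this pipeline is exotic: a boilerplate classifier, given an absurd target, will still find something to latch onto.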

Obviously, this defies logic: there is no connection between knee anatomy and dietary preferences. Yet the models produced statistically significant results. This strange outcome wasn't due to some hidden medical insight. Instead, it was a textbook example of shortcut learning.

Shortcut learning and confounding variables

This study used the Osteoarthritis Initiative (OAI) dataset, a vast collection of over 25,000 knee X-rays. The dataset included various confounding factors — variables that could distort the model’s learning. The researchers found that AI models could predict patient sex, race, clinical site, and even the manufacturer of the X-ray machine with striking accuracy. For example:

  • Sex Prediction: 98.7% accuracy
  • Clinical Site Prediction: 98.2% accuracy
  • Race Prediction: 92.1% accuracy

Here's the thing: the AI may be using these confounding factors as shortcuts. If a particular clinical site has more patients of a specific demographic, for example, the AI might associate that demographic with certain diagnoses — a shortcut that reflects bias rather than medical reality.
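One quick way to audit for this, assuming you have the model's per-image predictions alongside the metadata, is to test how much of the model's output can be recovered from a confounder alone. A rough sketch (the file and column names are hypothetical):

```python
# If a trivial classifier can recover the model's output from the clinical
# site alone, the model is probably leaning on site-specific cues
# rather than anatomy.
import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("predictions.csv")  # columns: image_id, site, model_pred

X = pd.get_dummies(df["site"])       # one-hot encode the clinical site
y = df["model_pred"]

baseline = cross_val_score(DummyClassifier(), X, y, cv=5).mean()
site_only = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

print(f"majority-class baseline: {baseline:.2f}")
print(f"site-only predictor:     {site_only:.2f}")
# If site_only is well above baseline, the model's predictions track the
# site, which is a red flag for shortcut learning.
```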


Shortcut learning occurs when AI models exploit superficial patterns in data rather than learning meaningful relationships. In medical imaging, shortcut learning means the model isn’t recognizing medical conditions but instead latching onto irrelevant clues.
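A standard way to see which clues a model is latching onto is occlusion sensitivity (Zeiler & Fergus, 2014): slide a blank patch across the image and record how much the predicted probability drops. If the sensitive regions turn out to be label markers or image corners rather than anatomy, that is shortcut learning in action. A minimal sketch, assuming model is any trained PyTorch image classifier:

```python
import torch

def occlusion_map(model, image, target_class, patch=32, stride=16):
    """Heatmap of how much each image region drives the prediction.

    image: tensor of shape (C, H, W), already preprocessed for the model.
    """
    model.eval()
    _, H, W = image.shape
    with torch.no_grad():
        base = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
    heat = torch.zeros((H - patch) // stride + 1, (W - patch) // stride + 1)
    for i, y in enumerate(range(0, H - patch + 1, stride)):
        for j, x in enumerate(range(0, W - patch + 1, stride)):
            occluded = image.clone()
            occluded[:, y:y+patch, x:x+patch] = 0.5  # cover region with gray
            with torch.no_grad():
                p = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target_class]
            heat[i, j] = base - p  # probability drop = region importance
    return heat
```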

“While AI has the potential to transform medical imaging, we must be cautious,” says the study’s senior author, Dr. Peter Schilling, an orthopaedic surgeon at Dartmouth Health’s Dartmouth Hitchcock Medical Center and an assistant professor of orthopaedics in Dartmouth’s Geisel School of Medicine.

“These models can see patterns humans cannot, but not all patterns they identify are meaningful or reliable,” Schilling says. “It’s crucial to recognize these risks to prevent misleading conclusions and ensure scientific integrity.”

It could become a bigger problem

Society in general is still deciding what the acceptable way to use AI in healthcare is. Practitioners agree that AI shouldn't be left to interpret medical imaging on its own; at most, it should serve as a supporting tool, with the results and interpretation still reviewed by an expert. But with AI usage becoming more and more widespread, and with large-scale workforce shortages, AI may take on a more central role.

This is why the findings are so concerning.

For instance, the AI might identify a particular clinical site based on unique markers in the X-ray image, such as the placement of labels or blacked-out sections used to obscure patient information. These markers can correlate with patient demographics or other latent variables like age, race, or diet — factors that shouldn’t affect the diagnosis but can skew the AI’s predictions.

Imagine an AI trained to detect a disease in chest X-rays. If the AI learns to associate a particular hospital’s labeling style with disease prevalence, its predictions will be unreliable when applied to images from other hospitals. This kind of bias can result in misdiagnoses and flawed research findings.
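This failure mode is easy to reproduce with synthetic data. In the toy sketch below, training images from “hospital A” carry a bright corner tag that correlates with the disease label, while images from “hospital B” don't. A simple classifier aces hospital A and collapses to coin-flipping on hospital B. Everything here is simulated:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_images(n, with_tag):
    X = rng.normal(size=(n, 32, 32))   # pure noise standing in for X-rays
    y = rng.integers(0, 2, size=n)     # disease label
    if with_tag:
        X[y == 1, :4, :4] += 5.0       # bright corner tag only on positives
    return X.reshape(n, -1), y

X_a, y_a = make_images(1000, with_tag=True)     # hospital A (training)
X_a2, y_a2 = make_images(500, with_tag=True)    # hospital A (held out)
X_b, y_b = make_images(1000, with_tag=False)    # hospital B (external test)

clf = LogisticRegression(max_iter=1000).fit(X_a, y_a)
print("hospital A, held out:", clf.score(X_a2, y_a2))  # ~1.0, via the tag
print("hospital B, external:", clf.score(X_b, y_b))    # ~0.5, chance level
```

Internal validation on hospital A would look stellar; only external testing reveals that the model learned the tag, not the disease.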

Shortcut learning also undermines the credibility of AI-driven discoveries. Researchers and clinicians may be misled into thinking the AI has identified a groundbreaking medical insight when, in fact, it has merely exploited a meaningless pattern.

“This goes beyond bias from clues of race or gender,” says Brandon Hill, a co-author of the study and a machine learning scientist at Dartmouth Hitchcock. “We found the algorithm could even learn to predict the year an X-ray was taken. It’s pernicious — when you prevent it from learning one of these elements, it will instead learn another it previously ignored. This danger can lead to some really dodgy claims, and researchers need to be aware of how readily this happens when using this technique.”

Can we fix it?

Shortcut learning is very difficult to eliminate. Even with extensive preprocessing and normalization of the images, the AI still identified patterns that humans couldn't see and based its interpretations on them. This ability to “cheat” by finding irrelevant but statistically significant correlations poses a serious risk for medical applications.

The challenge of shortcut learning has no easy fix. Researchers have proposed various methods to reduce bias, such as balancing datasets or removing confounding variables. But this study shows these solutions often fall short. Shortcut learning can involve multiple, intertwined factors, making it difficult to isolate and correct for each one.
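As an illustration, one commonly proposed mitigation is to balance the training labels within each clinical site, so that site no longer predicts the label. The pandas sketch below does exactly that (the file and column names are hypothetical); per the study's findings, a fix like this tends to neutralize one confounder while leaving others entangled:

```python
import pandas as pd

df = pd.read_csv("train_metadata.csv")  # columns: image_id, site, label

def balance_within_site(group):
    # Within one site, downsample every label to the rarest class's size.
    n = group["label"].value_counts().min()
    return group.groupby("label", group_keys=False).sample(n, random_state=0)

balanced = df.groupby("site", group_keys=False).apply(balance_within_site)
print(balanced.groupby(["site", "label"]).size())  # equal counts per cell
```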

The authors of the study argue that AI in medical imaging needs greater scrutiny. Deep learning algorithms are not hypothesis tests — they are powerful pattern recognition tools. When used for scientific discovery, their results must be rigorously validated to ensure they reflect true medical insights rather than statistical artifacts.
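One concrete form that validation can take is a label-permutation test: refit the model on randomly shuffled labels many times and check whether the real score stands far outside the resulting null distribution. scikit-learn ships this as permutation_test_score; the synthetic dataset below merely stands in for real image features:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import permutation_test_score

X, y = make_classification(n_samples=300, n_features=50, random_state=0)

# Compare the real cross-validated score against scores obtained after
# shuffling the labels 100 times (the null distribution).
score, perm_scores, p_value = permutation_test_score(
    LogisticRegression(max_iter=1000), X, y,
    n_permutations=100, cv=5, random_state=0,
)
print(f"real score: {score:.2f}, "
      f"null mean: {perm_scores.mean():.2f}, p={p_value:.3f}")
```

A real audit would use the actual images and labels, but the logic is identical: a score that sits inside the shuffled-label distribution is an artifact, not a discovery.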

Essentially, we need to subject AIs to much greater scrutiny, especially in a medical context.

“The burden of proof just goes way up when it comes to using models for the discovery of new patterns in medicine,” Hill says. “Part of the problem is our own bias. It is incredibly easy to fall into the trap of presuming that the model ‘sees’ the same way we do. In the end, it doesn’t.”

Researchers also caution against treating AI like a fellow expert.

“AI is almost like dealing with an alien intelligence,” Hill continues. “You want to say the model is ‘cheating,’ but that anthropomorphizes the technology. It learned a way to solve the task given to it, but not necessarily how a person would. It doesn’t have logic or reasoning as we typically understand it.”

Journal Reference: Ravi Aggarwal et al, Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis, npj Digital Medicine (2021). DOI: 10.1038/s41746-021-00438-z


