
Society for Pediatric Radiology – Poster Archive


Informatics Workflow


Final Pr. ID: Poster #: EDU-002

Artificial intelligence (AI) chatbots powered by large language models (LLMs) are beginning to appear in the pediatric radiology workspace as assistants for reporting, learning, and patient communication. Their fluency, speed, and apparent intelligence have sparked enthusiasm, yet beneath their polished prose lie subtle but consequential pitfalls that can mislead radiologists if unrecognized.

This educational exhibit highlights the cognitive, behavioral, and system-level risks of using AI chatbots in pediatric radiology practice. Key reliability issues include hallucinations, where models fabricate confident but false information, and sycophantic agreement, where they align with a user’s incorrect assumptions (“yes-man” behavior). These errors are often cloaked in convincing medical language, amplifying risk for trainees and non-experts. Bias propagation from skewed or adult-dominated training data may reinforce inequities, while emergent misalignment can produce unpredictable or unsafe outputs following system updates or fine-tuning.

Human-AI interaction adds another layer of concern. The ELIZA effect refers to our instinct to anthropomorphize machines, creating misplaced trust, as users perceive the chatbot as a knowledgeable colleague rather than a probability engine. This illusion of “seeming consciousness” can breed overconfidence and automation bias, where clinicians accept AI outputs uncritically. Over time, over-reliance can contribute to deskilling, as repetitive dependence on automated summaries erodes critical reasoning and vigilance.

Beyond technical flaws, chatbots also lack true creativity and problem-solving ability. Their responses mirror patterns from prior data, limiting originality and leading to formulaic, conventional outputs. In pediatric imaging education, this can hinder the cultivation of innovative clinical thinking.

Educational goals of this poster:
1. Illustrate common and emerging pitfalls of radiology chatbot use, including hallucination, bias, and misalignment.
2. Explain cognitive effects such as the ELIZA effect, automation bias, and deskilling.
3. Present real-world studies and simulated examples where chatbot errors could influence pediatric imaging decision-making.
4. Offer practical guidelines for safe, critical, and educationally constructive chatbot use.

Authors:  Gupta Amit

Keywords:  Artificial Intelligence, Informatics Workflow, Radiology Education