Introduction: the fear that fuels curiosity
Dr. Sakib Mostafa’s journey into artificial intelligence is shaped by a mix of fascination and apprehension. As a child in Bangladesh, he was captivated by how machines might transform the world, yet unsettled by the dangers imagined in films and science fiction. His approach to risk has always been proactive: face the fear, understand it, and solve the problem rather than ignore it. That mindset has propelled him from graduate work at the University of Saskatchewan (USask) to a prestigious post-doctoral fellowship at Stanford University, where he is now applying AI to detect cancer.
From curiosity to responsibility: the problem with black boxes
Today’s AI tools often operate as “black boxes”—systems whose internal reasoning is opaque. Even the developers who build them may not fully understand how a model arrives at a given conclusion. Mostafa explains that once data passes through a deep learning model, tracing the trail back to its input becomes nearly impossible because the information is fragmented across myriad processing steps. This lack of transparency is more than a theoretical worry; it directly affects trust, especially in high-stakes domains like medicine and law enforcement.
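A toy example (not one of Mostafa’s models) makes the point concrete: push an input through just two randomly weighted layers, and a nudge to a single input feature is already smeared across the entire internal representation, so no one internal value corresponds to any one input.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network with random weights: 8 inputs -> 16 -> 16 units.
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(16, 16))

def forward(x):
    h1 = np.tanh(W1 @ x)   # first hidden layer mixes all 8 inputs
    h2 = np.tanh(W2 @ h1)  # second layer mixes the mixtures again
    return h2

x = rng.normal(size=8)
x_nudged = x.copy()
x_nudged[0] += 0.1  # perturb a single input feature

delta = forward(x_nudged) - forward(x)
changed = np.count_nonzero(np.abs(delta) > 1e-9)
print(f"{changed} of {delta.size} units shifted")  # typically all 16
```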
A small but powerful reminder
During his PhD at USask, Mostafa worked with an AI model trained to classify plant-leaf diseases. The team initially assumed the model was weighing many different image features. A deeper inspection revealed that it was relying almost exclusively on the edges of the leaves. The lesson was stark: accuracy alone isn’t enough. To trust AI, users must know how and why it makes decisions, not merely what result it produces.
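The article does not say which inspection method exposed the edge shortcut, but a standard check of this kind is occlusion sensitivity: mask one image patch at a time and watch how the model’s confidence changes. The sketch below assumes a hypothetical Keras-style classifier with a `predict` method; if the resulting heat map lights up only along the leaf’s silhouette, the model is classifying by edges rather than by disease symptoms.

```python
import numpy as np

def occlusion_map(model, image, patch=16, fill=0.0):
    """Slide a blank patch across the image and record how much the
    model's confidence in its predicted class drops at each position.
    Large drops mark the regions the model actually relies on."""
    h, w, _ = image.shape
    probs = model.predict(image[None])[0]
    cls = probs.argmax()   # the class we want to explain
    base = probs[cls]      # baseline confidence
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch, :] = fill
            drop = base - model.predict(occluded[None])[0][cls]
            heat[i // patch, j // patch] = drop
    return heat
```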
Explainable AI: turning black boxes into trustworthy tools
Mostafa’s research centers on explainable AI (XAI), a field that seeks to illuminate the decision-making processes of AI systems. By creating explanations of model behavior, developers can identify data quirks that lead to mistakes and then refine the models accordingly. This approach shifts AI from a mysterious predictor to a tool that can be interrogated, repaired, and improved over time.
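One of the simplest explanations of this kind is vanilla gradient saliency, which asks how sensitive the model’s output is to each input pixel. A minimal sketch for a hypothetical PyTorch image classifier (the model and its input shape are assumptions, not the team’s code):

```python
import torch

def gradient_saliency(model, image, target_class):
    """Vanilla gradient saliency: how sensitive is the score for
    target_class to each input pixel? image has shape (C, H, W)."""
    model.eval()
    image = image.clone().requires_grad_(True)  # track input gradients
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    # Collapse the channel dimension into a per-pixel importance map.
    return image.grad.abs().max(dim=0).values   # shape (H, W)
```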
Applying explainability to cancer detection
At Stanford’s Department of Radiation Oncology, Mostafa and his team are building AI systems capable of analyzing diverse data streams—genomics, medical images, and patient histories—to detect cancer. The goal is not merely to flag malignant tissue but to stage cancer, classify its type, and uncover connections between different data types that traditional methods might miss. By developing explanations for the model’s conclusions, the team hopes to ensure clinicians can trust and act on AI-driven insights.
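The team’s actual architecture is not described here, but a common pattern for combining such heterogeneous inputs is late fusion: encode each modality separately, then let a shared head reason over the joint embedding. A minimal sketch with invented dimensions, assuming image features have already been extracted by an upstream network:

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Toy late-fusion model: one encoder per modality, then a shared
    head over the concatenated embeddings."""
    def __init__(self, genomics_dim, image_feat_dim, history_dim, n_classes):
        super().__init__()
        self.genomics = nn.Sequential(nn.Linear(genomics_dim, 64), nn.ReLU())
        self.imaging = nn.Sequential(nn.Linear(image_feat_dim, 64), nn.ReLU())
        self.history = nn.Sequential(nn.Linear(history_dim, 64), nn.ReLU())
        self.head = nn.Linear(3 * 64, n_classes)  # e.g. stage or cancer type

    def forward(self, g, x, h):
        z = torch.cat([self.genomics(g), self.imaging(x), self.history(h)], dim=-1)
        return self.head(z)
```

Keeping each encoder separate also keeps it inspectable on its own, which fits the explainability agenda.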
Why transparency accelerates life-saving breakthroughs
Trust in AI hinges on understanding its reasoning. If clinicians can see why a model highlights a particular region on a scan or how it weighs genomics against imaging data, they can verify results against medical knowledge, adjust inputs, or challenge outputs when necessary. Mostafa emphasizes that explaining a model is not a distraction from performance—it’s a pathway to better accuracy. When problematic data cues are identified, they can be corrected, leading to more reliable diagnoses and improved patient outcomes.
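Continuing the hypothetical late-fusion sketch above, one crude but readable way to show how a model weighs genomics against imaging is ablation: zero out one modality at a time and measure how much the prediction drops.

```python
import torch

def modality_contributions(model, g, x, h, target_class):
    """Crude per-modality attribution: zero out one modality at a time
    and measure how much the target-class score drops."""
    with torch.no_grad():
        base = model(g, x, h)[:, target_class]
        return {
            "genomics": base - model(torch.zeros_like(g), x, h)[:, target_class],
            "imaging": base - model(g, torch.zeros_like(x), h)[:, target_class],
            "history": base - model(g, x, torch.zeros_like(h))[:, target_class],
        }  # larger drop => the prediction leans harder on that modality
```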
Towards real-world impact: from lab to bedside
The ultimate ambition is to pilot the explainable AI cancer-detection system at Stanford Hospital. If successful, it could serve as a diagnostic tool that supports physicians in determining cancer presence, stage, and type with greater confidence. The vision is pragmatic and patient-centered: to enhance diagnostic precision while maintaining human oversight, ensuring that technology augments, rather than replaces, medical judgment.
Conclusion: embracing intelligent, transparent AI
Mostafa’s work embodies a forward-looking philosophy: confront the unknown, demand clarity, and build systems that improve with feedback. In an era where AI is increasingly embedded in critical decisions, explainable AI offers a prudent path forward. The black box becomes a transparent instrument—one that can be trusted to guide life-saving diagnoses and, ultimately, save lives.