Medical mysteries have long captivated society, both on television and in real life. The integration of artificial intelligence (AI) tools into medical contexts is becoming increasingly common, offering new possibilities for diagnosis and treatment. Elisabeth Hildt, a professor of philosophy and director of the Center for the Study of Ethics in the Professions at Illinois Tech, emphasizes the importance of explainability in determining AI's role in medicine.
"Explainability is where the tool provides some sort of explanation about how it came up with its output," explains Hildt. This concept is particularly significant when AI tools are used to support medical decisions through clinical decision support systems (CDSS).
In her paper published in Bioengineering titled “What Is the Role of Explainability in Medical Artificial Intelligence? A Case-Based Approach,” Hildt examines different types of explanations provided by AI-based CDSS. Her study highlights four cases: one without explainability, another with post-hoc explanations, a hybrid model offering knowledge-based explanations, and a causal model addressing complex moral decision-making.
Hildt's research shows that explainability plays a crucial role in building clinicians' and patients' trust in these tools. She notes that black-box CDSS can hinder autonomous decision-making because their inner workings are not transparent. "It’s difficult for clinicians and medical doctors to make autonomous decisions...if it’s based on a black-box tool," says Hildt.
While black-box tools such as neural networks may produce accurate results, users often struggle to trust them because their reasoning remains opaque. "That’s a dilemma," says Hildt, noting that some argue high accuracy might suffice without explainability.
Hildt suggests that explainability may matter most when a tool's clinical validity is unproven or when users do not yet know whether it is reliable. "I think when tools are introduced in medicine...people don’t know whether they can trust them," she says.
Further research is needed to establish trust in CDSS within the medical field. "From an ethics perspective...what’s really needed is empirical data," states Hildt, who advocates for studies involving medical professionals and patients to explore perspectives on AI-based tools and their implications for doctor-patient relationships.
Despite unanswered questions, Hildt believes these inquiries will guide future research efforts. If explainability enhances healthcare quality and supports effective communication between doctors and patients, it may become an integral part of medical practice.