Trustworthy AI for Medical Image Analysis
Abstract: The clinical translation of AI for medical image analysis is impeded by fundamental challenges to its trustworthiness. Prevailing deep learning models are frequently susceptible to dataset bias, produce unreliable uncertainty estimates, and lack sufficient interpretability. These systemic limitations are significant barriers to widespread clinical adoption, as they can lead to diagnostic errors and undermine clinician confidence in automated systems. This research program addresses these deficiencies through a sustained focus on designing novel algorithms for explainable AI (XAI) and robust uncertainty quantification. The core of our agenda is the development of foundational computational methods for a diverse range of imaging modalities, including MRI, CT, X-ray, mammography, and digital breast tomosynthesis. A key characteristic of our work is the architectural flexibility of these algorithms, which are designed to perform well both in single-modality analysis and in multi-modal integration with auxiliary clinical data. The overarching objective is to produce ethical, equitable, and clinically robust AI systems. Through these efforts in algorithmic innovation, we aim to improve diagnostic precision, strengthen clinician trust, and ultimately advance patient outcomes across a variety of healthcare applications.
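To make the calibration problem referenced above concrete, the sketch below computes the Expected Calibration Error (ECE), a standard measure of the gap between a model's confidence and its actual accuracy. This is a minimal, generic illustration of the concept; the function name, binning scheme, and toy data are assumptions for exposition and are not the DCA or probabilistic calibration methods from the papers listed below.

```python
# Minimal sketch: Expected Calibration Error (ECE) over equal-width
# confidence bins. Illustrative only; not the specific calibration
# methods developed in this research program.
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: frequency-weighted average of |accuracy - confidence| per bin.

    probs  : (N, C) array of predicted class probabilities.
    labels : (N,)   array of integer ground-truth labels.
    """
    confidences = probs.max(axis=1)        # top-1 confidence per case
    predictions = probs.argmax(axis=1)     # top-1 predicted class
    correct = (predictions == labels).astype(float)

    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap     # weight gap by bin frequency
    return ece

# Toy usage: an overconfident 3-class "model" on 5 cases (hypothetical data).
probs = np.array([[0.9, 0.05, 0.05],
                  [0.8, 0.10, 0.10],
                  [0.7, 0.20, 0.10],
                  [0.6, 0.30, 0.10],
                  [0.9, 0.05, 0.05]])
labels = np.array([0, 1, 0, 2, 0])         # only 3 of 5 top-1 predictions correct
print(f"ECE = {expected_calibration_error(probs, labels):.3f}")
```

A well-calibrated model drives this gap toward zero, so its confidence scores can be read as trustworthy probabilities at the point of care.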
Keywords: Calibration, Uncertainty Estimation, Explainability, Training Efficiency
Diseases: Breast Cancer, Alzheimer's Disease, Brain Tumor, Lung Nodule
Selected Papers: NN Inconsistency Performance (JACR'19), DCA for Calibration (BMVC'20), Dynamic Image for Alzheimer's (ECCV'20 Workshop), Text-Image Pre-Training (IEEE JBHI'21), Probabilistic Calibration (IEEE BigData'24), Benchmarking Robustness (AAAI'25 Workshop)