Project: Validating Explainable AI in Clinical Neuroimaging
Basic data
Title:
Validating Explainable AI in Clinical Neuroimaging
Duration:
01/01/2026 to 31/12/2028
Abstract / short description:
While deep learning methods achieve high predictive performance in neuroimaging, their clinical and scientific use is limited by the lack of validated approaches for model interpretation. This project develops a rigorous, domain-specific validation framework for explainable AI in neuroimaging: it generates anatomically localized ground-truth targets from imaging-derived phenotypes and systematically benchmarks explanation methods across model architectures and clinically relevant patterns. The project will deliver validated benchmarks, open tools, and best-practice guidelines to enable reliable, biologically meaningful interpretation of deep learning models in neuroscience and clinical research.
Involved staff
Managers
Faculty of Medicine
University of Tübingen
Local organizational units
Hertie Institute for Artificial Intelligence in Brain Health (HIAI)
Non-clinical institutes
Faculty of Medicine
Funders
Bonn, Nordrhein-Westfalen, Germany