Project AITE – Artificial Intelligence, Trustworthiness & Explainability

Basic data

Acronym: AITE
Title: Artificial Intelligence, Trustworthiness & Explainability
Duration: 01/11/2020 to 31/10/2023
Abstract / short description:
It is presently opaque why machine learning systems decide or answer as they do. When an image classifier says “this is a train”, does it ‘recognise’ the train itself, only the rails, or something else entirely? How can we be sure that it makes its decisions for the right reasons? This problem is at the heart of several debates: Can we trust artificially intelligent (AI) systems? And if so, on what basis? Would an explanation of the decision aid our understanding and ultimately foster trust? And if so, what kind of explanation?

The project is divided into three interrelated subprojects. In Subproject 1, we formulate epistemological and scientific norms of explanation to place constraints on explainable AI (XAI). In Subproject 2, we investigate moral norms for XAI, based on a classification of morally loaded cases of algorithmic decision-making. In Subproject 3, we analyse the notion of “trust” in AI systems and its relation to explainability.

Involved staff

Managers

University of Tübingen

Contact persons

International Center for Ethics in the Sciences and Humanities (IZEW)

Other staff

International Center for Ethics in the Sciences and Humanities (IZEW)

Local organizational units

Department of Informatics
Faculty of Science
University of Tübingen
International Center for Ethics in the Sciences and Humanities (IZEW)
University of Tübingen

Funders

Stuttgart, Baden-Württemberg, Germany