Project details
AITE
The project “Artificial Intelligence, Trustworthiness and Explainability” (AITE) investigates which epistemological and scientific standards AI must meet in order to be considered explainable, on the one hand, and which ethical norms it must fulfil in order to be called trustworthy, on the other. At the centre of the investigation stands the notion of “trust”, examined in its relation to AI and explainability.
The project consists of three interrelated subprojects. The first subproject formulates epistemic standards for explainable AI (XAI), drawing on epistemological and philosophy-of-science norms of explanation. The second subproject develops moral norms for XAI based on morally relevant cases. The third subproject examines the notion of “trust” in AI systems and its relation to explainability.
The project is a joint endeavour of the “Ethics and Philosophy Lab” (EPL) of the DFG Cluster of Excellence “Machine Learning: New Perspectives for Science” (ML-Cluster) and the “International Centre for Ethics in the Sciences and Humanities” (IZEW) at the University of Tübingen, and is funded by the Baden-Württemberg Stiftung.
Project period: 01.11.2020 - 31.10.2023