The thematic focus “New questions of trust? Digitization, Digitalization and Artificial Intelligence” of the Journal of Practical Philosophy, edited by Prof. Dr. Karoline Reinhardt and Johanna Sinn, has been published. It is dedicated to the question of whether digitalization and artificial intelligence raise new conceptual and ethical questions about trust. The introduction provides an overview of how trust has been understood in the philosophical debate so far and of the new questions raised by digital technologies. It highlights that although trust has long been a subject of philosophy, uncertainty, risk and vulnerability take on new forms in the context of digital technologies, with implications for trust relationships that call for philosophical inquiry. In addition to the introduction, the focus comprises six articles.
Christopher Koska, Julian Prugger, Sophie Jörg and Michael Reder investigate in their article “The Shift of Trust from Human to Machine: An Expansion of the Interpersonal Trust Paradigm in the Context of Artificial Intelligence” the transformation of the concept of trust through artificial intelligence. Traditionally, trust in technical artifacts is closely linked to their reliability. While this form of (technical) reliability is usually traced back to a human counterpart (e.g., manufacturers, operators, or auditors), the increasing presence of AI systems necessitates a reconceptualisation of this relationship. Particularly in self-learning systems (such as connectionist AI), which continuously modify their selection criteria and mechanisms through interaction with the environment, a gradual shift in trust becomes apparent. The thesis, following Hartmann (2022), is that trust shifts incrementally from humans to machines. The article aims to provide a problem-oriented description of the associated opportunities and challenges.
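To make the kind of system at issue concrete, the following minimal sketch (not from the article; all names and values are illustrative) shows an online learner whose selection criterion, a set of weights, is modified by every interaction with its environment, so that the rule it ends up applying is not one any human directly specified:

```python
import random

# Toy illustration (not from the article): a linear scorer whose weights,
# its "selection criteria", are updated after every interaction with the
# environment, so the decision rule drifts away from its initial state.
weights = [0.0, 0.0]

def select(features):
    """Apply the current selection criterion: threshold a weighted sum."""
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

def interact_and_learn(features, feedback, lr=0.1):
    """One environment interaction: decide, receive feedback, adjust."""
    error = feedback - select(features)   # -1, 0, or 1
    for i, x in enumerate(features):
        weights[i] += lr * error * x      # the criterion itself changes

random.seed(0)
for _ in range(100):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    label = 1 if x[0] + x[1] > 0 else 0   # a regularity in the environment
    interact_and_learn(x, label)

print(weights)  # no human set these values directly
```

It is this drift of the decision rule away from direct human specification that, on the article's thesis, invites trust to shift from the human counterpart to the machine itself.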
In his paper “Caveat Usor: Trust and Epistemic Vigilance Towards Artificial Intelligence”, Rico Hauswald addresses two opposing positions in the debate. On the one hand, the current discussion on artificial intelligence and trust is characterised by what might be called “trust enthusiasm”, which conceptualises trust as an attitude that we can, and should, have towards AI systems once they have reached a sufficient level of maturity. On the other hand, this use of the concept of trust is viewed with great scepticism in a significant part of the philosophical literature. Two of the relevant arguments in this context are, first, that trust in AI systems is incompatible with the opacity characteristic of these systems and, second, that saying one can “trust” such systems amounts to a kind of category mistake. The paper argues that both the enthusiastic and the sceptical positions are problematic.
Arne Sonar and Christian Herzog discuss in their paper “Trust in, Trustworthiness of and Trust Adjustment Towards Technology - Human-Machine-Interaction Within a Tension of Communicative Capabilities, Cooperation and Familiarity with Technology” the importance of the increasingly communicative and cooperative abilities of innovative technical applications, such as AI-based systems, for the triad of trust (in technology), trustworthiness (of technology) and trust adjustment (towards technology). The paper also raises the question of what role familiarity (with technology) plays in this context. Trust is essential both in interpersonal interaction and in human-technology interaction. In particular, new communicative potentials of technical applications in direct interaction with users can foster trust-based cooperation between people and technology in entirely new forms. Applications that can, for example, provide immediate feedback on their functions, as well as on possible uncertainties, e.g., in the case of diagnostic recommendations, could strengthen basic confidence in the technology itself.
Christian Budnik discusses in his paper “Artificial Intelligence and Trust in Medical Context” the question of how trust in the physician-patient relationship is affected by the use of AI algorithms. Already today, AI applications are an important part of medical practice. Their employment, however, is connected to a wide array of problems. As a first step, the paper reconstructs the philosophical challenge relating to trust before explaining what role a properly understood concept of trust plays in the relationship between physicians and patients. In its central section, the paper discusses the question of whether it is possible to trust AI technologies. This question is central: answered in the affirmative, the employment of AI technologies threatens to make the physician-patient relationship superfluous; answered in the negative, it could harm the physician-patient relationship, since we would have to suppose that our physicians use a technology that cannot itself be trusted. The paper discusses two important accounts that have recently defended the claim that we can trust medical AI: that of Ferrario et al. (2021) and that of Philip Nickel (2022).
Ingrid Becker examines in her article “Blockchain instead of trust? - The significance of blockchain technology for trust and reliance” trustlessness and trust in relation to the blockchain. Trustlessness and trust are considered essential categories that make blockchain and digital technologies theoretically interesting beyond questions of the well-known Bitcoin application. However, it proves difficult to pin down what (non-)trust means in the context of the blockchain. Isn’t it often reliability rather than trustworthiness that matters to Bitcoin actors? Is trust as we know it from interpersonal relationships, experienced concretely and often intuitively in everyday life and extendable to institutions and professions, relevant to the rigid blockchain technology at all? And if so, how? Or does the need for trust, as the Bitcoin whitepaper suggests at first glance, become redundant with Bitcoin? Should we speak of a technology trust that is incompatible with the interpersonal trust paradigm due to the immutability of what is stored in the blockchain, and that would thus imply a profound shift in social realities? The attempt made here to bring trust, in contrast to reliance, up to date as philosophical concepts in the blockchain context also means understanding technologies in their interconnectedness with socially interpreted realities (of trust).
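The immutability invoked in this debate can be made tangible with a short sketch (not from the article; a minimal, illustrative hash chain rather than a full blockchain): each block commits to the hash of its predecessor, so any retroactive change breaks every later link and is immediately detectable.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 digest of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    """Append a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def verify(chain: list) -> bool:
    """Re-check every link; a retroactive edit invalidates all later blocks."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
append_block(chain, "A pays B one coin")
append_block(chain, "B pays C one coin")
assert verify(chain)                      # intact chain verifies
chain[0]["data"] = "A pays M one coin"    # tamper with an old record
assert not verify(chain)                  # the forgery is detectable
```

On this picture, what the user relies on is the verifiability of the record rather than the goodwill of a counterpart, which is precisely the contrast between reliance and trust that the article probes.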
Katja Stoppenbrink and Eva Pöll argue in their article “‘Trustless Trust’? – On the Concept of Trust in the Context of Blockchain Applications” that trust in the context of blockchain applications is best understood as ‘institutional trust’. Against the thesis sometimes asserted in the literature on blockchain applications that they do not require user trust or that they generate a new type of “trustless trust”, the authors show that trust relationships also play a role in the use of blockchain applications. The classical bilateral interpersonal understanding of trust between trustor (trust subject) and trustee (trust object) remains structurally intact; the attribution of trustworthiness by the trust subject takes place in a default-and-challenge model. This is already evident for conceptual reasons: the ‘safer’ the system, the less it requires trust on the part of the user, for trust presupposes vulnerability. With regard to blockchain applications, however, most users retain a high degree of vulnerability, so trust continues to play a role.