This paper offers a theological-ethical analysis of the Artificial Intelligence policies of the European Union, the US federal government, and China. The analysis is concretized by relating it to two cases: the replacement of humans by AI in healthcare and the handling of lethal autonomous weapons.
The accelerating development of AI is not only summoning scientists and companies to moral reflection; governments, too, increasingly recognize its urgency. This is resulting in a series of juridical regulations. In Europe, the EU published guiding policy documents between 2018 and 2024 after years of preparation (European Commission 2018, 2020, 2021; Burri 2022; Von Hein 2022; AI Act 2024; Data Governance Act 2024). The US federal government initially left such reflection largely to the private sector but is currently catching up, resulting in a presidential document from October 2023 (OSTP 2022; White House 2023; EPRS 2024). China, too, has made substantial efforts towards developing a public stance on AI (Shen and Liu 2022; Kachra 2024). Interestingly, the documents of all three parties share central emphases. All require the use of AI to remain ‘human-centered’, to preserve transparency and human safety, and to respect basic human rights. Moreover, all display an unresolved tension between the necessity of ethical regulation on the one hand and the political-economic ambition to become a leading global player in the development of AI on the other. A more suspicious hermeneutic would, in addition, uncover diverse interpretations and sometimes not so genuinely moral interests behind comparable moral phrases.
This paper proceeds in three steps. First, it analyzes and compares the underlying ethics of the official political and legal documents of the three parties mentioned. Is the apparent convergence real or merely superficial? Second, it evaluates the sufficiency of these ethics by examining their usefulness for the imminent challenges AI poses in two domains: healthcare on the one hand (Elder 2017; Meacham and Studley 2017; Topol 2019; Blasimme and Vayena 2020; Kellmeyer 2022; Krönke 2022; Essman and Mueller 2022; Molnár-Gábor and Giesecke 2022) and warfare on the other (Geiss 2017; EU External Action 2018, 2023; Ekelhof 2019; Asaro 2020; Barbé and Badell 2020; Scholz and Galliott 2020; Afsah 2022; Department of Defense 2022, 2023; Leveringhaus 2022; Lewis 2022; Human Rights Watch 2023; REAIM 2023). Third, it evaluates the implied ethics of the three parties from a theological-ethical perspective, as proposed by the author at previous ETS Annual Meetings. This perspective combines a creational with an eschatological orientation and approaches anthropology in the light of the Christological transformation of humankind in God’s kingdom (Herzfeld 2009; Waters 2014; De Bruijne 2024). This yields additional insights into what could count as human-centered AI, especially with respect to healthcare and warfare.