AI social robots are a lesser-known technology with uncertain but seemingly compelling potential. For decades, modern culture has portrayed AI robots as threats to human primacy, and their deployment in healthcare now represents one of the most intriguing and morally challenging applications of emerging technology. Advances in robotics and AI designed to support and assist humans in various environments, such as homes, workplaces, and schools, have accelerated in recent years. The use of social robots in the healthcare system raises interdisciplinary debates that generally center on questions such as: “What are the social and ethical implications of using AI robots with vulnerable persons, especially children and the elderly?” and “Is there a medical-ethical framework consistent with the biblical model of the covenant of care, and how would that framework handle AI robots in medical care?” In short, the foundational question is this: “Are social robots a solution or a burden in direct patient care?” To attempt an answer to these pressing questions, this paper argues that, by the ethical criteria of neighbor love, using artificially intelligent robots in direct patient care is potentially harmful.
To support this thesis, the paper grounds its argument in a biblical account of a theology of covenantal care, compellingly illustrated in the capstone model of neighbor love, the parable of the Good Samaritan, and proposes theological and ethical criteria to inform the use of AI robots in healthcare settings. First, the paper synthesizes the neighbor-love model with covenantal care by developing a theological understanding of covenantal care ethics grounded in God’s loving nature, which presupposes an analogous responsiveness from people created in God’s image. Two key assertions are made: (1) neighbor love is fundamentally embodied, and (2) neighbor love functions within rightly ordered personal relationships. The second section then recommends practical guidelines, consistent with a biblical and covenantal view of care, for the ethical use of AI robots in non-fundamental, indirect care. Given the potential of carebots to restrict patients’ capabilities, freedom, autonomy, and dignity, as well as their likelihood of intruding on patients’ privacy and reducing patients’ contact with families and other caregivers, the neighbor-love ethical model excludes AI machines from direct care. The damaging effects of deception, over-attachment, detachment, and overestimation of a social robot’s functionality underscore the irreplaceable role of humans in caregiving. Replacing human care and misplacing trust in carebots’ ability to make decisions for which they are not qualified may violate the biblical moral imperative of neighbor love. In conclusion, from a neighbor-love perspective, AI robots should be deployed in social roles only with due precaution in indirect care, or not at all.