AI Mimicry and Human Dignity: Chatbot Use as a Violation of Self‐Respect

Journal of Applied Philosophy

Abstract

Journal of Applied Philosophy, Volume 43, Issue 1, Pages 95–111, February 2026.

This article investigates how human interactions with AI‐powered chatbots may offend human dignity. Current chatbots, driven by large language models, mimic human linguistic behaviour but lack the moral and rational capacities essential for genuine interpersonal respect. Human beings are prone to anthropomorphize chatbots – indeed, chatbots appear to be deliberately designed to elicit that response. As a result, human beings' behaviour towards chatbots often resembles behaviours typical of interaction between moral agents. Drawing on a second‐personal, relational account of dignity, we argue that interacting with chatbots in this way is incompatible with the dignity of users. We show that, since second‐personal respect is premised on reciprocal recognition of second‐personal moral authority, behaving towards chatbots in ways that convey second‐personal respect is bound to misfire in morally problematic ways, given the lack of reciprocity. Consequently, such chatbot interactions amount to subtle but significant violations of self‐respect – the respect we are duty‐bound to show for our own dignity. We illustrate this by discussing four actual chatbot use cases (information retrieval, customer service, advising, and companionship), and propound that the increasing societal pressure to engage in such interactions with chatbots poses a hitherto underappreciated threat to human dignity.