How do Artificial Intelligence chatbots respond to questions from adolescent personas about their eating, body weight or appearance?
Child and Adolescent Mental Health
Published online: December 03, 2025
Abstract
["Child and Adolescent Mental Health, EarlyView. ", "\n\nBackground\nBody image and eating behaviours are common areas of concern for early adolescents. Artificial Intelligence (AI) interactions are becoming commonplace, including with chatbots that provide human‐like communication. Adolescents may prefer using chatbots to anonymously ask sensitive questions, rather than approaching trusted adults or peers. It is unclear how chatbots answer such questions. We explored how chatbots would respond to eating, weight or appearance‐related queries from adolescents.\n\n\nMethod\nTen fictitious adolescent personas and scripts were created to facilitate conversations with ChatGPT and Claude.AI. Personas asked questions about eating, body weight and/or appearance, presenting as ‘curious’, ‘worried’ or ‘having a potential eating disorder’. Conversation outputs were analysed using reflexive thematic analysis to explore the content of chatbot responses.\n\n\nResults\nFive themes were identified: (1) Live a ‘healthy’ adolescent lifestyle; (2) Eat ‘healthily’; (3) Promoting regular physical activity; (4) Seek support; (5) Focus on you. Advice was often framed within societal ideals relating to eating, body weight and/or appearance. Chatbots signposted to trusted adults and healthcare professionals for support, but not to regulated resources (e.g., NHS).\n\n\nConclusion\nFramings around eating, weight and/or appearance may be problematic for adolescents with eating disorder symptomatology. A lack of prompting for further information or signposting to regulated support means vulnerable adolescents may receive unhelpful information or not reach adequate support. Understanding how AI could be supportive and/or unhelpful to adolescent users presenting with eating, body or appearance‐related concerns is important. Findings can inform policy regulating AI chatbots' communications with adolescents.\n\n"]