“Your friendly AI assistant”: the anthropomorphic self-representations of ChatGPT and its implications for imagining AI

Bibliographic Details
Published in: AI & society, Vol. 40, no. 5, pp. 3591–3603
Main Authors: van Es, Karin; Nguyen, Dennis
Format: Journal Article
Language: English
Published: London: Springer London, 01.06.2025 (Springer Nature B.V.)
ISSN: 0951-5666, 1435-5655
DOI: 10.1007/s00146-024-02108-6


More Information
Summary: This study analyzes how ChatGPT portrays and describes itself, revealing misleading myths about AI technologies, specifically conversational agents based on large language models. This analysis allows for critical reflection on the potential harm these misconceptions may pose for public understanding of AI and related technologies. While previous research has explored AI discourses and representations more generally, few studies focus specifically on AI chatbots. To narrow this research gap, an experimental-qualitative investigation into auto-generated AI representations based on prompting was conducted. Over the course of a month, ChatGPT (both in its GPT-4 and GPT-4o models) was prompted to “Draw an image of yourself,” “Represent yourself visually,” and “Envision yourself visually.” The resulting data (n = 50 images and 58 texts) was subjected to a critical exploratory visual semiotic analysis to identify recurring themes and tendencies in how ChatGPT is represented and characterized. Three themes emerged from the analysis: anthropomorphism, futuristic/futurism, and (social) intelligence. Importantly, compared to broader AI imaginations, the findings emphasize ChatGPT as a friendly AI assistant. These results raise critical questions about trust in these systems, not only in terms of their capability to produce reliable information and handle personal data, but also in terms of human–computer relations.