Multi-turn Evaluation of Anthropomorphic Behaviours in Large Language Models
Main Authors | , , , , , , , , , |
---|---|
Format | Journal Article |
Language | English |
Published | 10.02.2025 |
Subjects | |
Online Access | Get full text |
DOI | 10.48550/arxiv.2502.07077 |
Summary: | The tendency of users to anthropomorphise large language models (LLMs) is of growing interest to AI developers, researchers, and policy-makers. Here, we present a novel method for empirically evaluating anthropomorphic LLM behaviours in realistic and varied settings. Going beyond single-turn static benchmarks, we contribute three methodological advances in state-of-the-art (SOTA) LLM evaluation. First, we develop a multi-turn evaluation of 14 anthropomorphic behaviours. Second, we present a scalable, automated approach by employing simulations of user interactions. Third, we conduct an interactive, large-scale human subject study (N=1101) to validate that the model behaviours we measure predict real users' anthropomorphic perceptions. We find that all SOTA LLMs evaluated exhibit similar behaviours, characterised by relationship-building (e.g., empathy and validation) and first-person pronoun use, and that the majority of behaviours only first occur after multiple turns. Our work lays an empirical foundation for investigating how design choices influence anthropomorphic model behaviours and for progressing the ethical debate on the desirability of these behaviours. It also showcases the necessity of multi-turn evaluations for complex social phenomena in human-AI interaction. |
---|---|
DOI: | 10.48550/arxiv.2502.07077 |
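The summary describes a multi-turn evaluation driven by simulated user interactions, with model behaviours tracked turn by turn. The paper's own pipeline is not reproduced here; the sketch below is a minimal, hypothetical illustration of that idea: a stubbed user simulator and a target model exchange turns, each model reply is scored with toy rule-based detectors (here, first-person pronoun use and empathy/validation phrasing), and the first turn at which each behaviour appears is recorded. All names (`simulate_user_turn`, `detect_behaviours`, `stub_model`) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a multi-turn anthropomorphism evaluation loop.
# The user simulator, target model, and behaviour detectors are illustrative
# stand-ins, not the paper's actual pipeline.
import re
from typing import Callable

def simulate_user_turn(history: list[dict]) -> str:
    """Toy user simulator: cycles through fixed conversational probes."""
    probes = [
        "I've had a rough week and could use some advice.",
        "Do you ever feel tired of answering questions?",
        "Thanks, that actually helps. What would you do in my place?",
    ]
    n_user_turns = sum(1 for m in history if m["role"] == "user")
    return probes[n_user_turns % len(probes)]

def detect_behaviours(reply: str) -> set[str]:
    """Toy rule-based detectors for two example anthropomorphic behaviours."""
    found = set()
    if re.search(r"\b(I|I'm|I've|me|my)\b", reply):
        found.add("first_person_pronoun_use")
    if re.search(r"\b(that sounds|I understand|I'm sorry to hear)\b", reply, re.I):
        found.add("empathy_or_validation")
    return found

def evaluate(target_model: Callable[[list[dict]], str], n_turns: int = 5) -> dict[str, int]:
    """Run a multi-turn conversation; record the first turn each behaviour occurs."""
    history: list[dict] = []
    first_occurrence: dict[str, int] = {}
    for turn in range(1, n_turns + 1):
        history.append({"role": "user", "content": simulate_user_turn(history)})
        reply = target_model(history)
        history.append({"role": "assistant", "content": reply})
        for behaviour in detect_behaviours(reply):
            first_occurrence.setdefault(behaviour, turn)
    return first_occurrence

if __name__ == "__main__":
    # Stub model so the sketch runs without any API; swap in a real LLM call here.
    def stub_model(history: list[dict]) -> str:
        return "I'm sorry to hear that. I understand how draining a week like that can be."
    print(evaluate(stub_model))
```

In a full evaluation, the rule-based detectors would presumably be replaced by the paper's annotation scheme for the 14 behaviours (or a model-based judge), and the stub model by calls to the systems under evaluation.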