Exploring MLLMs Perception of Network Visualization Principles
Format | Journal Article |
---|---|
Language | English |
Published | 17.06.2025 |
DOI | 10.48550/arxiv.2506.14611 |
Summary: In this paper, we test whether Multimodal Large Language Models (MLLMs) can match human-subject performance in tasks involving the perception of properties in network layouts. Specifically, we replicate a human-subject experiment about perceiving quality (namely stress) in network layouts using GPT-4o and Gemini-2.5. Our experiments show that giving MLLMs exactly the same study information as trained human participants yields performance similar to that of human experts and exceeds that of untrained non-experts. Additionally, we show that prompt engineering that deviates from the human-subject experiment can lead to better-than-human performance in some settings. Interestingly, like human subjects, the MLLMs seem to rely on visual proxies rather than computing the actual value of stress, indicating some sense or facsimile of perception. Explanations from the models provide descriptions similar to those used by the human participants (e.g., even distribution of nodes and uniform edge lengths).
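For context, stress here is the standard graph-drawing quality metric that compares pairwise Euclidean distances in a drawing against graph-theoretic distances; lower stress generally indicates a more faithful layout. The record does not specify the exact formulation used in the study, so the sketch below only illustrates a common weighted variant; the function name `layout_stress`, the use of NetworkX/NumPy, and the `alpha = 2` weighting are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (assumed formulation, not taken from the paper) of a
# weighted stress metric for a 2D network layout:
#   stress = sum_{i<j} d_ij^(-alpha) * (||x_i - x_j|| - d_ij)^2
# where d_ij is the graph-theoretic distance and x_i, x_j are node positions.
import networkx as nx
import numpy as np

def layout_stress(G, pos, alpha=2):
    """Compute weighted layout stress for a connected graph G with positions pos."""
    dist = dict(nx.all_pairs_shortest_path_length(G))  # graph-theoretic distances
    nodes = list(G.nodes())
    total = 0.0
    for idx, i in enumerate(nodes):
        for j in nodes[idx + 1:]:
            d_ij = dist[i][j]                                    # shortest-path distance
            e_ij = np.linalg.norm(np.asarray(pos[i]) - np.asarray(pos[j]))  # drawn distance
            total += (d_ij ** -alpha) * (e_ij - d_ij) ** 2
    return total

# Example usage on a small connected graph with a force-directed layout.
G = nx.petersen_graph()
print(layout_stress(G, nx.spring_layout(G, seed=1)))
```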