Generated Intelligence: Decision-making and Identity in Health Crises

Bibliographic Details
Published in: Philosophy & Technology, Vol. 38, No. 2, p. 85
Main Author: Branda, Francesco
Format: Journal Article
Language: English
Published: Dordrecht: Springer Netherlands, 01.06.2025 (Springer Nature B.V.)
ISSN: 2210-5433, 2210-5441
DOI: 10.1007/s13347-025-00919-z

Summary: Fisher, Howard, and Kira's (2024) article proposes a technology-neutral approach to the moderation of synthetic content, arguing that generative AI does not require sui generis policies but rather the consistent application of existing rules focused on the harm, not the origin, of the content. This commentary aims to complement and deepen their perspective, shifting the focus from regulatory implications alone to the epistemic and subjective transformations produced by the adoption of large language models (LLMs), particularly in healthcare. Starting from the concept of the generated human (a subject co-constructed in interaction with generative intelligences), it explores how such technologies are redefining the conditions of judgment, identity, and responsibility in crisis contexts. Finally, it offers a critical reflection on the emerging ecology of knowledge, asking what form of subjectivity we are shaping through the increasing use of machine-generated language.