Generated Intelligence: Decision-making and Identity in Health Crises
Published in: Philosophy & Technology, Vol. 38, No. 2, p. 85
Format: Journal Article
Language: English
Published: Dordrecht: Springer Netherlands (Springer Nature B.V.), 01.06.2025
ISSN: 2210-5433, 2210-5441
DOI: 10.1007/s13347-025-00919-z
Summary: Fisher, Howard, and Kira’s (2024) article proposes a technology-neutral approach to the moderation of synthetic content, arguing that generative AI does not require *sui generis* policies but rather the consistent application of existing rules, focused on the harm rather than the origin of the content. This commentary aims to complement and deepen their perspective, shifting the focus from regulatory implications alone to the epistemic and subjective transformations produced by the adoption of generative large language models (LLMs), particularly in healthcare. Starting from the concept of the *generated human*, a subject co-constructed in interaction with generative intelligences, it explores how such technologies are redefining the conditions of judgment, identity, and responsibility in crisis contexts. Finally, it proposes a critical reflection on the emerging ecology of knowledge, questioning what form of subjectivity we are shaping through the increasing use of machine-generated language.