Human-in-the-Loop AI Reviewing: Feasibility, Opportunities, and Risks

Bibliographic Details
Published in: Journal of the Association for Information Systems, Vol. 25, No. 1, pp. 98-109
Main Authors: Drori, Iddo; Te'eni, Dov
Format: Journal Article
Language: English
Published: Atlanta: Association for Information Systems, 2024
ISSN: 1536-9323
DOI: 10.17705/1jais.00867

Summary: The promise of AI for academic work is bewitching and easy to envisage, but the risks involved are often hard to detect and are rarely made explicit. In this opinion piece, we explore the feasibility, opportunities, and risks of using large language models (LLMs) to review academic submissions while keeping the human in the loop. We experiment with GPT-4 in the role of a reviewer to demonstrate the opportunities and risks we encountered and ways to mitigate them. The reviews are structured according to a conference review form with the dual purpose of evaluating submissions for editorial decisions and providing authors with constructive feedback against predefined criteria, including contribution, soundness, and presentation. We demonstrate feasibility by evaluating and comparing LLM reviews with human reviews, concluding that current AI-augmented reviewing is accurate enough to alleviate the burden of reviewing, though not completely and not for all cases. We then enumerate the opportunities of AI-augmented reviewing and present open questions. Next, we identify its risks, highlighting bias, value misalignment, and misuse. We conclude with recommendations for managing these risks.
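
Illustrative sketch (not the authors' published code): one way the structured reviewing workflow described in the summary might be requested from GPT-4, using the criteria named in the abstract (contribution, soundness, presentation), with a human reviewer expected to check, correct, and complete the draft. The model name, prompt wording, and review-form fields below are assumptions for illustration only.

# Assumed setup: the openai Python package (>= 1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical review form modeled on the criteria mentioned in the abstract.
REVIEW_FORM = """You are acting as a reviewer for an academic conference.
Evaluate the submission below using this review form:
1. Summary of the submission (2-3 sentences)
2. Contribution (score 1-5, with justification)
3. Soundness (score 1-5, with justification)
4. Presentation (score 1-5, with justification)
5. Constructive feedback for the authors
6. Overall recommendation (accept / minor revision / major revision / reject)"""

def draft_review(submission_text: str) -> str:
    """Ask the model for a first-pass structured review; a human reviewer
    checks, corrects, and completes it (human in the loop)."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": REVIEW_FORM},
            {"role": "user", "content": submission_text},
        ],
        temperature=0.2,  # keep the draft review relatively deterministic
    )
    return response.choices[0].message.content

# Example usage with a hypothetical manuscript file:
# print(draft_review(open("submission.txt").read()))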