Toward automating GRADE classification: a proof-of-concept evaluation of an artificial intelligence-based tool for semiautomated evidence quality rating in systematic reviews

Bibliographic Details
Published in: BMJ Evidence-Based Medicine, p. bmjebm-2024-113123
Main Authors: Oliveira dos Santos, Alisson; Belo, Vinícius Silva; Mota Machado, Tales; Silva, Eduardo Sérgio da
Format: Journal Article
Language: English
Published: England: BMJ Publishing Group Ltd, 07.04.2025
ISSN: 2515-446X, 2515-4478
DOI: 10.1136/bmjebm-2024-113123


Summary:
Background: Evaluation of the quality of evidence in systematic reviews (SRs) is essential for sound decision-making. Although the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach affords a consolidated framework for rating the level of evidence, its application is complex and time-consuming. Artificial intelligence (AI) can be used to overcome these barriers.
Design: Analytical experimental study.
Objective: To develop and appraise a proof-of-concept AI-powered tool for the semiautomation of an adaptation of the GRADE classification system to determine levels of evidence in SRs with meta-analyses compiled from randomised clinical trials.
Methods: The URSE-automated system was based on an algorithm created to enhance the objectivity of the GRADE classification. It was developed in Python, with user-friendly interfaces built using the React library. The system was evaluated by analysing 115 SRs from the Cochrane Library and comparing the predicted levels of evidence with those generated by human evaluators.
Results: The open-source URSE code is available on GitHub (http://www.github.com/alisson-mfc/urse). Agreement between the URSE-automated GRADE system and human evaluators on the quality of evidence was 63.2%, with a Cohen's kappa coefficient of 0.44. Accuracy and F1-scores for the evaluated GRADE domains were, respectively, 0.97 and 0.94 for imprecision (number of participants), 0.73 and 0.7 for risk of bias, 0.9 and 0.9 for I² values (heterogeneity), and 0.98 and 0.99 for methodological quality (AMSTAR; A MeaSurement Tool to Assess systematic Reviews).
Conclusion: The results demonstrate the potential of AI for assessing the quality of evidence. However, given the GRADE approach's emphasis on subjectivity and on understanding the context of evidence production, full automation of the classification process is not advisable. Nevertheless, combining the URSE-automated system with human evaluation, or integrating the tool into other platforms, represents an interesting direction for future work.
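The headline result above pairs raw agreement (63.2%) with Cohen's kappa (0.44), which corrects raw agreement for the agreement two raters would reach by chance. A minimal sketch of how that statistic is computed from two sets of categorical ratings follows; the rating data here are hypothetical illustrations, not the study's actual 115-review dataset, and the function is a from-scratch implementation rather than the URSE code.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the raters labelled items independently,
    # each following their own marginal label distribution.
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[k] * counts_b.get(k, 0) for k in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical GRADE-level ratings for six reviews.
human = ["high", "moderate", "low", "low", "very low", "moderate"]
tool  = ["high", "moderate", "low", "moderate", "very low", "low"]
print(round(cohens_kappa(human, tool), 2))  # → 0.54
```

Note that kappa is always lower than raw agreement whenever chance agreement is nonzero, which is why the paper's 0.44 kappa reads as only moderate agreement despite 63.2% of labels matching.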