Mitigation of Hallucinations in Language Models in Education: A New Approach of Comparative and Cross-Verification

Bibliographic Details
Published in: Proceedings (IEEE International Conference on Advanced Learning Technologies), pp. 207-209
Main Authors: de Almeida da Silva, Wildemarkes; Costa Fonseca, Luis Carlos; Labidi, Sofiane; Lima Pacheco, Jose Chrystian
Format: Conference Proceeding
Language: English
Published: IEEE, 01.07.2024
ISSN: 2161-377X
DOI: 10.1109/ICALT61570.2024.00066

Summary: The rapidly growing application of large language models (LLMs) in education offers exciting prospects for personalized learning and interactive experiences. However, a critical challenge emerges: the risk of "hallucinations," where LLMs generate factually incorrect or misleading information. This paper proposes Comparative and Cross-Verification Prompting (CCVP), a novel technique specifically designed to mitigate hallucinations in educational LLMs. CCVP leverages the strengths of multiple LLMs, a Principal Language Model (PLM) and several Auxiliary Language Models (ALMs), to verify the accuracy and educational relevance of the PLM's response to a prompt. Through a series of prompts and assessments, CCVP harnesses the diverse perspectives of the participating LLMs and incorporates human expertise for intricate cases. This method addresses the limitations of relying on a single model and fosters critical thinking skills in learners. The authors detail the CCVP approach with examples applicable to educational settings, such as geography, and discuss its strengths and limitations, including computational cost, data reliance, and ethical considerations. They highlight potential applications across educational disciplines, including fact-checking content, detecting bias, and promoting responsible LLM use. CCVP presents a promising avenue for ensuring the accuracy and trustworthiness of LLM-generated educational content. Further research and development will refine its scalability, address potential biases, and solidify its position as a vital tool for harnessing the power of LLMs while fostering responsible knowledge dissemination in education.
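The abstract describes CCVP only at a high level, and the paper's implementation is not reproduced in this record, so the following Python sketch is an illustration under stated assumptions: a Principal Language Model answers the learner's prompt, each Auxiliary Language Model is asked to cross-verify the answer, and low agreement escalates the case to human review. Every name here (Model, run_ccvp, agreement_threshold) and the YES/NO verification protocol are hypothetical, not the authors' actual interface.

```python
from typing import Callable, List

# Hypothetical model interface: a callable mapping a prompt string to a
# response string. A real deployment would wrap an LLM API behind this.
Model = Callable[[str], str]

def run_ccvp(prompt: str,
             plm: Model,
             alms: List[Model],
             agreement_threshold: float = 0.5) -> dict:
    """Sketch of the Comparative and Cross-Verification Prompting loop.

    1. The Principal Language Model (PLM) answers the prompt.
    2. Each Auxiliary Language Model (ALM) verifies that answer.
    3. If too few ALMs agree, the case is flagged for human review,
       mirroring the paper's escalation of intricate cases to experts.
    """
    answer = plm(prompt)

    verification_prompt = (
        f"Question: {prompt}\n"
        f"Proposed answer: {answer}\n"
        "Is this answer factually correct and appropriate for an "
        "educational setting? Reply with exactly YES or NO."
    )
    # Collect one boolean vote per ALM and compute the agreement rate.
    votes = [alm(verification_prompt).strip().upper().startswith("YES")
             for alm in alms]
    agreement = sum(votes) / len(votes) if votes else 0.0

    return {
        "answer": answer,
        "agreement": agreement,
        "needs_human_review": agreement < agreement_threshold,
    }

# Toy usage with stub models standing in for real LLMs, using a geography
# question in the spirit of the paper's illustrative domain.
if __name__ == "__main__":
    plm = lambda p: "The capital of Australia is Canberra."
    alms = [lambda p: "YES", lambda p: "YES", lambda p: "NO"]
    print(run_ccvp("What is the capital of Australia?", plm, alms))
```

The threshold-based escalation is one plausible way to combine ALM votes; the paper may weight models or use richer assessments than a binary verdict.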