
Hallucination Mitigation Techniques in Large Language Models

Article

Last updated: 01 Feb 2025

Subjects

-

Tags

-

Abstract

Large language models (LLMs) have demonstrated impressive natural language understanding and generation capabilities, enabling advances in diverse fields such as customer support, healthcare, and content creation. However, a significant challenge with LLMs is their tendency to produce factually inaccurate or nonsensical information, commonly known as hallucination. Hallucinations not only compromise the reliability of these models but can also lead to serious ethical and practical problems, particularly in high-stakes applications. This survey comprehensively reviews recent advances in hallucination mitigation strategies for LLMs. We explore retrieval-augmented models, which improve factual grounding by integrating external knowledge sources; human feedback mechanisms, such as reinforcement learning from human feedback (RLHF), which improve accuracy by aligning model responses with human evaluations; knowledge augmentation techniques, which embed structured knowledge bases for greater consistency; and controlled generation, which constrains outputs to satisfy factual requirements. We also examine the challenges of integrating these techniques and the limitations of current methods, including scalability, resource intensity, and dependence on high-quality data. Finally, we discuss future research directions for improving factual reliability in LLMs and explore hybrid solutions for building accurate, adaptable models suited to a wider range of real-world applications.
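
To make the retrieval-augmentation strategy surveyed above concrete, the following is a minimal Python sketch of the idea: retrieve the passages most relevant to a query from an external knowledge store and prepend them to the prompt, so the model answers from evidence rather than from parametric memory alone. The corpus, the term-frequency scoring, and the helper names (tf_vector, cosine, retrieve, build_grounded_prompt) are illustrative assumptions, not the paper's implementation; a production system would use dense embeddings, a vector index, and a real LLM call.

    from collections import Counter
    import math

    # Toy in-memory knowledge store. In practice this would be a vector
    # database of embedded documents (an assumption for illustration).
    CORPUS = [
        "The Nile is the longest river in Africa.",
        "Cairo is the capital of Egypt.",
        "Ain Shams University is located in Cairo, Egypt.",
    ]

    def tf_vector(text):
        # Bag-of-words term frequencies: a stand-in for a dense embedding.
        return Counter(text.lower().split())

    def cosine(a, b):
        # Cosine similarity between two sparse term-frequency vectors.
        dot = sum(a[t] * b[t] for t in a)
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    def retrieve(query, k=2):
        # Rank all passages by similarity to the query; keep the top k.
        q = tf_vector(query)
        ranked = sorted(CORPUS, key=lambda d: cosine(q, tf_vector(d)),
                        reverse=True)
        return ranked[:k]

    def build_grounded_prompt(query):
        # Prepending retrieved evidence constrains the answer to cited
        # context, which is how retrieval reduces unsupported claims.
        evidence = "\n".join("- " + p for p in retrieve(query))
        return ("Answer using only the context below.\n"
                "Context:\n" + evidence + "\n\n"
                "Question: " + query + "\nAnswer:")

    print(build_grounded_prompt("Where is Ain Shams University located?"))

Swapping tf_vector for a learned embedding model and passing the grounded prompt to an LLM yields the standard retrieval-augmented generation pipeline the survey discusses.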

DOI

10.21608/ijicis.2024.336135.1365

Keywords

Large Language Models, Hallucinations, Retrieval Augmentation, Knowledge Augmentation, Human Feedback

Authors

First Name

Mohamed

Last Name

Abdelghafour

Middle Name

Ali Mohamed

Affiliation

Computer Science Department, Computer and Information Science, Ain Shams University, Cairo, Egypt

Email

mohamed.abdelghafour@cis.asu.edu.eg

City

Cairo

Orcid

0000-0001-9714-4438

First Name

Mohammed

Last Name

Mabrouk

Middle Name

-

Affiliation

Computer Science Department, Computer and Information Science, Ain Shams University, Cairo, Egypt

Email

mohamed.mabrouk@cis.asu.edu.eg

City

-

Orcid

-

First Name

Zaki

Last Name

Taha

Middle Name

-

Affiliation

Computer Science Department, Computer and Information Science, Ain Shams University, Cairo, Egypt

Email

ztfayed@hotmail.com

City

-

Orcid

-

Volume

24

Article Issue

4

Related Issue

52576

Issue Date

2024-12-01

Receive Date

2024-11-13

Publish Date

2024-12-01

Page Start

73

Page End

81

Print ISSN

1687-109X

Online ISSN

2535-1710

Link

https://ijicis.journals.ekb.eg/article_406701.html

Detail API

http://journals.ekb.eg?_action=service&article_code=406701

Order

406701

Type

Original Article

Type Code

494

Publication Type

Journal

Publication Title

International Journal of Intelligent Computing and Information Sciences

Publication Link

https://ijicis.journals.ekb.eg/

Main Title

Hallucination Mitigation Techniques in Large Language Models
