Awesome MLLM Hallucination

⭐ News! The Latest & Most Comprehensive Survey on MLLM Hallucination Just Got Better! 

Thrilled to unveil a major update to our landmark survey on MLLM (LVLM) Hallucination, packed with groundbreaking insights from 2024-2025! 🔥🔥🔥

Paper (40 pages, 228 references). This is your ultimate guide to navigating the fast-evolving landscape of MLLM hallucination. Whether you're a researcher or practitioner, bookmark this handbook!

🔍 Key observations in the update:

✅ There are deeper and more diverse analyses of the root causes of hallucination.

✅ Training-free mitigation methods are gaining popularity, probably thanks to their resource-friendly nature.

✅ Contrastive decoding has become a cornerstone technique, with various ideas developed on top of it (see the sketch after this list).

✅ RL-based methods are gaining momentum, and we expect more work on this track in the near future.

✅ More fresh angles for mitigating hallucination: visual prompting, RAG, rationale reasoning, generative feedback, and so on.
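
As a rough illustration of the contrastive decoding idea mentioned above, here is a minimal sketch in the spirit of visual contrastive decoding, not the exact recipe of any paper in this list; the function name `contrastive_decode_step` and the hyperparameters `alpha` and `beta` are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def contrastive_decode_step(logits_with_image, logits_without_image, alpha=1.0, beta=0.1):
    """One greedy decoding step of (visual) contrastive decoding.

    logits_with_image:    [vocab] next-token logits conditioned on the real image
    logits_without_image: [vocab] next-token logits conditioned on a distorted/absent image
    alpha: contrast strength; beta: plausibility cutoff (both assumed hyperparameters)
    """
    # Boost tokens whose likelihood actually depends on seeing the image,
    # and penalize tokens the model would produce from language priors alone.
    contrastive = (1 + alpha) * logits_with_image - alpha * logits_without_image

    # Plausibility constraint: only keep tokens that the fully conditioned
    # model already considers reasonably likely.
    probs = F.softmax(logits_with_image, dim=-1)
    keep = probs >= beta * probs.max()
    contrastive = contrastive.masked_fill(~keep, float("-inf"))

    return torch.argmax(contrastive)  # greedy pick; sampling also works


if __name__ == "__main__":
    vocab_size = 32000
    logits_img = torch.randn(vocab_size)    # stand-in for model(image, prompt) logits
    logits_noimg = torch.randn(vocab_size)  # stand-in for model(distorted image, prompt) logits
    print(contrastive_decode_step(logits_img, logits_noimg))
```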

Taxonomy overview (figure)


This is a repository for organizing papers, code, and other resources related to hallucination in Multimodal Large Language Models (MLLM), also known as Large Vision-Language Models (LVLM).

Hallucination in LLMs usually refers to the phenomenon that the generated content is nonsensical or unfaithful to the provided source content, e.g., violating the input instruction or containing factual errors. In the context of MLLMs, hallucination refers to the phenomenon that the generated text is semantically coherent but inconsistent with the given visual content. The community has been constantly making progress on analyzing, detecting, and mitigating hallucination in MLLMs.

📚 How to read?

The main contribution of a specific paper is either a new hallucination benchmark (metric) or a new hallucination mitigation method. The analysis and detection of hallucination are usually only part of a paper, serving as the basis for evaluation and mitigation. Therefore, we divide the papers into two categories: **hallucination evaluation & analysis** and **hallucination mitigation**. In each category, the papers are listed from newest to oldest. Note that some papers appear in both categories; those papers contain both an evaluation benchmark and a mitigation method.

🔆 This project is still ongoing, and pull requests are welcome!

If you have any suggestions (missing papers, new papers, key researchers, or typos), please feel free to edit the list and submit a pull request. Simply letting us know the titles of relevant papers is also a great contribution. You can do this by opening an issue or contacting us directly via email.

⭐ If you find this repo useful, please star it!!!

Table of Contents

Hallucination Survey

Hallucination Evaluation & Analysis

Hallucination Mitigation
