A Comprehensive Guide to LLM Hallucinations: What, Why, and How to Prevent Them

Discover the phenomenon of hallucinations in AI. This blog unveils their causes and impacts, and strategies for preventing these convincing but inaccurate outputs.

Published on:

March 22, 2024

In today's tech world, Large Language Models (LLMs) are a significant breakthrough in artificial intelligence. These sophisticated tools do more than just comprehend and generate natural language; they are revolutionizing how we interact with information and technology. From crafting eloquent content to annotating complex images, LLMs are reshaping the landscape of digital communication and data processing.

But with great power comes great responsibility. As impressive as these AI assistants are, they are not infallible. One of the critical challenges they present is the phenomenon of 'hallucinations' – instances where these models generate responses that are convincingly articulated yet factually inaccurate or entirely fictitious. These hallucinations are not just simple errors; they can create convincing narratives, spread misinformation, or offer incorrect data, which can be particularly problematic in sensitive fields. 

For example, an LLM might create a detailed but entirely false historical event, like the 'Moon Treaty of 1998', that supposedly resolved territorial claims on the moon. Or it could provide scientific explanations about non-existent phenomena, such as describing 'Quantum Telepathy' as an established field of study. 

An AI Hallucination That Made Headlines

When Google's Bard model was recently prompted to assign itself a gender and identity, it generated a response that personified the AI. Unexpectedly, Bard described itself as a young woman with brown hair and green eyes and even selected the name 'Sofia,' a Greek word meaning wisdom. It's important to note, however, that AI models like Bard don't possess personal identities or consciousness. This response is a creative output based on the model's programming and training data, not an indication of self-awareness or personal identity.

Or would you believe that the response below about vector databases is incorrect?

An Example of ChatGPT Hallucination for Illustration

So, addressing AI hallucinations is more crucial than ever, particularly in fields where precision and truthfulness are paramount. In this blog, we dive deep into the concept of hallucinations and explore how to prevent them.

Understanding Hallucinations And Their Types 

What are AI Hallucinations? 

“LLMs exhibit a critical tendency to produce hallucinations, generating content inconsistent with real-world facts or user inputs, which poses substantial challenges to their practical deployment and raises concerns about their reliability in real-world scenarios.”

"A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions" research paper proposes the hallucinations taxonomy. 

Intrinsic Hallucinations

AI hallucinations stemming from the language model's own limitations or internal errors, unrelated to the user's input.

For example, the model describes a fictitious event, like the invention of gravity-powered cars in 1980.

Extrinsic Hallucinations

LLM hallucinations influenced by misleading or incorrect user input.

For example, based on a user's flawed query, the model elaborates on the health benefits of drinking sea water for hydration.

The research also presents a newer categorization of hallucinations.

Factuality Hallucinations

Factuality hallucinations emphasize the discrepancy between generated content and verifiable real-world facts, typically manifesting as factual inconsistency or outright fabrication. They involve the creation of content that contradicts known facts.

Faithfulness Hallucinations

Faithfulness hallucinations refer to the divergence of generated content from user instructions, the context provided by the input, or self-consistency within the generated content. While possibly factually correct in isolation, the response is irrelevant or misleading in the given context.

For example, in response to a query about "the causes of the American Civil War," the model discusses the industrial revolution's impact on 19th-century Europe, which, though factually correct in a different context, is irrelevant and unfaithful to the specific query. 

Taxonomy of Hallucinations

Exploring the Causes of Hallucinations

We have explored what hallucinations are and their types, but what exactly causes them? In this section, we dive deep into the reasons AI models produce hallucinated responses, which fall into three categories:

Hallucinations from Data

Large Language Models (LLMs) like GPT-4 owe their impressive capabilities to the vast and diverse datasets used during their pre-training phase. This pre-training data is crucial for endowing LLMs with general linguistic capabilities and a broad factual knowledge base. However, the very foundation that makes these models so powerful can also be a source of their weaknesses, particularly in terms of hallucinations.

Flawed Data Sources: A Risky Inheritance

The first source of AI hallucinations arises from the potential risks associated with flawed data sources. LLMs are trained on extensive corpora of text sourced from the internet, books, articles, and other media. While this ensures a rich diversity of information, it also means inheriting the inaccuracies, biases, and misinformation present in these sources. For example, if an LLM is trained on historical texts that contain outdated or since-revised interpretations of events, it may reproduce these inaccuracies in its responses.

Inferior Utilization of Factual Knowledge

The second aspect concerns the inferior utilization of the factual knowledge captured in the data. Despite their advanced algorithms, LLMs sometimes struggle to effectively discern and apply the accurate factual information embedded within their training datasets. This can be due to contextual limitations, the inability to distinguish between real and fictional content, or challenges in integrating and updating knowledge. As a result, LLMs might produce responses that, although based on the learned data, misapply or misinterpret the facts. For instance, an LLM might correctly recall a scientific concept but incorrectly apply it in a given context, leading to a plausible yet factually incorrect response.

Hallucinations from Training 

Training-stage LLM hallucinations can stem from two stages: the pre-training stage and the LLM adaptation stage.

In the pre-training stage, hallucinations are often rooted in flaws of the transformer architecture, such as inadequate unidirectional representation and attention glitches.

  • LLMs predict the next token in a left-to-right manner. Such unidirectional modeling risks missing contextual dependencies that a bidirectional view would capture.
  • LLMs can occasionally exhibit unpredictable errors in algorithmic reasoning, spanning both long-range and short-range dependencies. A potential cause is the limitation of soft attention: attention becomes diluted across positions as sequence length increases.

Another important cause of hallucinations is exposure bias: during inference, the model conditions on its own previously generated tokens, so a single erroneous token can cascade into errors throughout the rest of the sequence. The toy sketch below illustrates the effect.
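
To make the cascade concrete, here is a toy, purely illustrative sketch: a tiny hand-written bigram "model" (not a real LLM) generates one token at a time and feeds each prediction back in as input, so forcing a single wrong early choice sends the whole continuation down the wrong path.

```python
# Toy illustration of exposure bias: a tiny bigram "model" (made-up data, not a
# real LLM) generates one token at a time and feeds its own output back in.
# A single wrong choice early on steers every later token down the wrong path.

bigram_model = {
    "the":    ["moon", "sun"],
    "moon":   ["orbits"],
    "sun":    ["sets"],
    "orbits": ["earth"],
    "sets":   ["early"],
}

def generate(start: str, steps: int, force_error_at: int = -1) -> list:
    """Greedy generation; optionally force one wrong token to show cascading errors."""
    tokens = [start]
    for step in range(steps):
        candidates = bigram_model.get(tokens[-1], [])
        if not candidates:
            break
        # Normally pick the first (most likely) continuation...
        next_token = candidates[0]
        # ...but at `force_error_at` pick a less likely one to simulate a sampling error.
        if step == force_error_at and len(candidates) > 1:
            next_token = candidates[1]
        tokens.append(next_token)  # the prediction becomes the next input (exposure bias)
    return tokens

print(generate("the", 3))                    # ['the', 'moon', 'orbits', 'earth']
print(generate("the", 3, force_error_at=0))  # ['the', 'sun', 'sets', 'early'] -- the error cascades
```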

Adaptation-stage Hallucinations

Hallucinations in Large Language Models (LLMs) often stem from two critical issues: capability misalignment and belief misalignment.

  1. Capability Misalignment: LLMs have set capabilities defined during pre-training. When these are misaligned with user expectations or annotation data, the LLMs may produce content beyond their actual knowledge, leading to hallucinations.
  2. Belief Misalignment: Despite LLMs having internal beliefs about the truthfulness of their outputs, studies show a tendency for misalignment, especially in models trained with human feedback. This can result in 'sycophantic' behavior, where the model prioritizes pleasing users over factual accuracy.

Hallucinations at Inference in LLMs

Hallucinations in Large Language Models (LLMs) can arise during the decoding phase due to two key factors:

1. Inherent Sampling Randomness: 

LLMs sample tokens stochastically during decoding to produce varied and original content. This randomness helps avoid repetitive or low-quality text, but there is a downside: it increases the chance of the model making mistakes or producing less accurate information. Raising the sampling temperature flattens the token probability distribution toward uniform, increasing the chance of selecting less frequent tokens. This can exacerbate hallucination risks, as the model generates more unexpected and potentially inaccurate tokens; the sketch below shows the effect.
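
The following minimal sketch, using NumPy and made-up next-token logits, shows how raising the temperature flattens the sampling distribution and makes lower-probability tokens increasingly likely to be chosen.

```python
import numpy as np

def sample_with_temperature(logits: np.ndarray, temperature: float, rng) -> int:
    """Scale logits by 1/temperature, apply softmax, then sample a token index."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

rng = np.random.default_rng(0)
logits = np.array([4.0, 2.0, 0.5, 0.1])     # hypothetical next-token scores

for t in (0.2, 1.0, 2.0):
    samples = [sample_with_temperature(logits, t, rng) for _ in range(1000)]
    share_of_top_token = samples.count(0) / len(samples)
    print(f"temperature={t}: top token chosen {share_of_top_token:.0%} of the time")
```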

2. Imperfect Decoding Representation:

  • Insufficient Context Attention: Overemphasis on recently generated content can neglect the broader context, leading to deviations from the intended message and resulting in faithfulness hallucinations.  
  • Softmax Bottleneck: The softmax layer's limited capacity to express rich next-word distributions can cause the model to miss nuanced word choices, increasing hallucination risks (a small numerical illustration follows this list).
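
As a small numerical illustration of the softmax bottleneck, the toy NumPy sketch below (with arbitrary hidden size, vocabulary size, and random weights) shows that the matrix of next-token log-probabilities across many contexts has rank at most d + 1, far below the vocabulary size, which limits how rich the predicted distributions can be.

```python
import numpy as np

rng = np.random.default_rng(0)
d, V, N = 8, 1000, 200           # hidden size, vocabulary size, number of contexts (toy values)

H = rng.normal(size=(N, d))      # hidden states for N contexts
W = rng.normal(size=(d, V))      # output (softmax) weights

logits = H @ W
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))  # row-wise log-softmax

# Despite having N x V entries, the log-probability matrix has rank at most d + 1:
# the model cannot represent arbitrarily rich next-word distributions.
print(np.linalg.matrix_rank(log_probs))   # prints 9 (= d + 1), even though V = 1000
```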

How to Prevent AI Hallucinations?

Mitigating Misinformation and Biases in LLMs

The presence of misinformation and biases in Large Language Models (LLMs) can lead to AI hallucination problems, undermining their reliability and trustworthiness. Addressing these issues involves a combination of high-quality data collection, data cleansing, and debiasing techniques.

Factuality Data Enhancement

Ensuring the factual accuracy of training data is vital in reducing LLM hallucinations. This can be achieved through:

  • Manual Curation: Manually curating web pages for training data ensures quality control. However, manual curation becomes increasingly challenging with the expanding scale of pre-training datasets.
  • High-Quality Data Sources: Utilizing academic or specialized domain data, known for factual accuracy, is a primary strategy. Examples include datasets like the Pile and “textbook-like” sources.
  • Upsampling Factual Data: During pre-training, upsampling factual data can enhance the factual correctness of LLMs (a toy weighted-sampling sketch follows this list).
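
As a toy illustration of upsampling, the sketch below (with hypothetical corpora and weights) mixes pre-training sources so that higher-quality, factual sources are sampled more often than raw web text.

```python
import random

# Hypothetical corpora; in practice these would be streams of pre-training documents.
corpora = {
    "web_crawl":       ["web doc 1", "web doc 2", "web doc 3"],
    "wikipedia":       ["wiki doc 1", "wiki doc 2"],
    "academic_papers": ["paper 1", "paper 2"],
}

# Upsampling: give factual sources a larger sampling weight than raw web text.
weights = {"web_crawl": 1.0, "wikipedia": 3.0, "academic_papers": 3.0}

def sample_document(rng: random.Random) -> str:
    """Pick a source according to its weight, then a document from that source."""
    sources = list(corpora)
    source = rng.choices(sources, weights=[weights[s] for s in sources], k=1)[0]
    return rng.choice(corpora[source])

rng = random.Random(0)
print([sample_document(rng) for _ in range(8)])
```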

Debiasing Techniques

Biases in pre-training data, such as duplication bias and societal biases, require distinct approaches:

  • Duplication Bias:
      • Exact Duplicates: Identifying and removing exact duplicates through substring matching is essential, though computationally intensive; suffix array construction can be a more efficient solution.
      • Near-Duplicates: Techniques like MinHash for large-scale deduplication and methods for identifying semantic duplicates help manage near-duplicates. A minimal deduplication sketch follows this list.
  • Societal Biases: Addressing societal biases involves curating diverse, balanced training corpora to ensure representative data. Toolkits have also been developed to assist in debiasing existing and custom models.
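
A minimal sketch of exact deduplication by hashing normalized text is shown below; near-duplicate detection (for example with MinHash, via a library such as datasketch) would be layered on top of this and is not shown here.

```python
import hashlib

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivially different copies hash identically."""
    return " ".join(text.lower().split())

def deduplicate_exact(documents: list) -> list:
    """Drop exact duplicates by hashing the normalized text of each document."""
    seen = set()
    unique = []
    for doc in documents:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = [
    "The Moon orbits the Earth.",
    "the moon   orbits the earth.",   # exact duplicate after normalization
    "Water boils at 100 °C at sea level.",
]
print(deduplicate_exact(docs))        # two documents remain
```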

Improving Prompts for Preventing Hallucinations: Tips for Enhancing Clarity and Precision

Crafting effective prompts is key to ensuring that Large Language Models (LLMs) produce accurate and relevant responses. Here are five essential tips for improving your prompts to prevent hallucinations:

1. Clarity and Specificity in Prompts

Understanding Your Objective: Before posing a question to an LLM, clearly understand your goal—a focused and explicit prompt yields better results.

Example: Instead of a vague prompt like "Tell me about the Internet," specify your request: "Explain how the Internet works and its importance in modern society." This guides the AI to generate more targeted responses.

Staying on Track: If the AI's responses start veering off-topic, refocus the conversation to prevent the amplification of hallucinations.

2. Utilizing Examples for Context

Including examples in your prompts helps the AI grasp the context better. For instance, "Write a brief history of Python, similar to this article's description of Java's history {example}."

Examples clarify the scope and style of the response you expect, reducing the AI's interpretative freedom and potential misunderstandings.
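
A minimal sketch of such an example-anchored prompt is shown below; the Java passage is a placeholder standing in for the {example} text you would actually supply.

```python
# Placeholder example text; in practice this would be the real passage you want mirrored.
java_history_example = (
    "Java was created by James Gosling at Sun Microsystems and released in 1995. "
    "It was designed for portability under the slogan 'write once, run anywhere.'"
)

prompt = (
    "Write a brief history of Python, similar in length and style to this "
    f"description of Java's history:\n\n{java_history_example}\n\n"
    "Brief history of Python:"
)
print(prompt)
```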

3. Divide and Conquer: Breaking Down Complex Prompts

Break down the prompt into smaller tasks for complex subjects. This is like following step-by-step instructions when building furniture.

Instead of asking, "Explain the process of creating an AI model," structure it as sequential tasks: "Step 1: Define the problem, Step 2: Collect and prepare data," and so on.

4. Experimenting with Different Prompt Formats

  • Seek Precision: Since LLMs can be unpredictable, experiment with various prompt formats to find the most accurate response.
  • Multiple Approaches: For a topic like cloud computing, try different phrasings: "What is cloud computing and how does it work?" or "Discuss the impact and future potential of cloud computing."
  • Comparing Responses: Some models, like Google's Bard, offer multiple responses, allowing you to choose the most suitable one. Experimentation is key to finding the most accurate and relevant information.

5. Providing Pre-defined Input Templates or Prompt Augmentation

Implement predefined prompts and question templates to help users frame their queries more effectively. This structured approach guides the AI to understand and respond to the questions more accurately.

Using templates that align with the model's comprehension significantly reduces the likelihood of generating hallucinatory responses. This method streamlines user interaction with the model, ensuring that the queries are clear, concise, and within the model's capability to answer accurately.
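
As a minimal sketch of prompt augmentation with predefined templates, the snippet below defines a few hypothetical templates and fills in the user's terms; the template names and fields are illustrative, not a fixed scheme.

```python
# Hypothetical templates; the placeholders keep user queries explicit and well-scoped.
PROMPT_TEMPLATES = {
    "definition": "Define '{term}' in plain language and give one concrete example.",
    "comparison": "Compare {option_a} and {option_b} with respect to {criterion}. "
                  "If you are unsure about a point, say so explicitly.",
    "summary":    "Summarize the following text in {num_sentences} sentences, "
                  "using only information contained in the text:\n\n{text}",
}

def build_prompt(template_name: str, **fields) -> str:
    """Fill a predefined template with the user's inputs."""
    return PROMPT_TEMPLATES[template_name].format(**fields)

print(build_prompt("comparison",
                   option_a="vector databases",
                   option_b="relational databases",
                   criterion="similarity search"))
```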

Preventing Hallucinations: Fine-Tuning LLMs for Industry-Specific Knowledge

In the quest to mitigate LLM hallucinations, one practical approach is fine-tuning existing models with domain-specific knowledge. This process sharpens the model's expertise in particular industries or fields, leading to more accurate and relevant outputs.

  • Integrating Domain-Specific Knowledge: Fine-tuning involves adapting a general-purpose LLM to specialize in specific areas of knowledge. This is done by adjusting the model's parameters to focus on relevant information from a particular domain.
  • Efficient Data Use: Unlike training a model from scratch, fine-tuning requires significantly less data, often just hundreds or thousands of domain-specific documents. A minimal fine-tuning sketch follows this list.
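
The sketch below outlines one common way to do this: parameter-efficient fine-tuning with LoRA adapters, assuming the Hugging Face transformers, peft, and datasets libraries. The base model name, data file, and hyperparameters are placeholders, not recommendations.

```python
# A minimal sketch of domain-specific fine-tuning with LoRA adapters.
# Assumes the Hugging Face transformers, peft, and datasets libraries; the base model,
# file path, and hyperparameters below are placeholders, not recommendations.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

base_model = "gpt2"                                   # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Wrap the base model with small trainable LoRA adapters instead of updating all weights.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# A few hundred or thousand domain documents, e.g. one JSON line per document with a "text" field.
dataset = load_dataset("json", data_files="domain_docs.jsonl", split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                      remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-finetune", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```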

Overcoming LLM Knowledge Limits

Large Language Models (LLMs) are amazing, but they have their limits, often restricted by the data they've been trained on. Experts focus on two strategies to push these boundaries: Knowledge Editing and Retrieval-Augmented Generation (RAG). 

Knowledge Editing

Knowledge editing involves revising model parameters to incorporate additional knowledge, aiming to close the knowledge gap:

  • Modifying Model Parameters: This method directly injects new knowledge into the model, leading to substantial changes in output. It includes locate-then-edit methods, which first identify and then update specific model parameters, and meta-learning methods, where an external hypernetwork predicts the original model's weight updates.
  • Preserving Model Parameters: Some studies apply additional model plug-ins to achieve the desired change rather than altering the original model. These techniques range from employing scope classifiers that route input toward an external edit memory to incorporating additional parameter layers into the original model. A toy routing sketch follows this list.
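
As a purely conceptual toy, the sketch below mimics the parameter-preserving flavor: a crude keyword-based scope check routes queries covered by an edit to an external edit memory, while everything else goes to the untouched base model (represented here by a stand-in function and a fictional example fact).

```python
# Toy sketch of parameter-preserving knowledge editing: an external "edit memory"
# answers queries that fall within the scope of an edit; everything else goes to
# the unchanged base model. `base_model_answer` is a hypothetical stand-in.

edit_memory = {
    # scope keywords           -> corrected fact (fictional example edit)
    ("ceo", "acme corp"): "As of 2024, the CEO of Acme Corp is Jane Doe.",
}

def in_scope(query: str, keywords: tuple) -> bool:
    """A deliberately simple scope classifier: all keywords must appear in the query."""
    q = query.lower()
    return all(k in q for k in keywords)

def base_model_answer(query: str) -> str:
    return f"[base model answer to: {query}]"         # stand-in for the frozen LLM

def answer(query: str) -> str:
    for keywords, fact in edit_memory.items():
        if in_scope(query, keywords):
            return fact                                # routed to the edit memory
    return base_model_answer(query)                    # otherwise, untouched model

print(answer("Who is the CEO of Acme Corp?"))
print(answer("What is the capital of France?"))
```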

Retrieval Augmentation

RAG helps bridge knowledge gaps by using external knowledge sources during generation:

  • One-time Retrieval: This approach involves appending externally retrieved knowledge to the LLM's prompt once, enhancing performance across various LLM sizes and corpora (a minimal sketch follows this list).
  • Iterative Retrieval: Suitable for complex tasks requiring multi-step reasoning, this method continuously gathers knowledge during the generation process, reducing factual errors in reasoning chains.
  • Post-hoc Retrieval: The retrieval process happens after generating an answer, aiming to refine and fact-check the content. This approach enhances trustworthiness and factual accuracy by incorporating external knowledge in subsequent revisions.
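
A minimal sketch of one-time retrieval is shown below, using a simple TF-IDF retriever from scikit-learn and a stand-in call_llm function; a production RAG system would typically use dense embeddings and a vector store instead.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge base; in practice, chunks from your own documents.
documents = [
    "Vector databases index embeddings to support approximate nearest-neighbor search.",
    "The softmax temperature controls how random an LLM's sampling is.",
    "Retrieval-augmented generation appends retrieved passages to the prompt.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)

def retrieve(query: str, k: int = 2) -> list:
    """Return the k documents most similar to the query (one-time retrieval)."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def call_llm(prompt: str) -> str:
    return f"[LLM response to a prompt of {len(prompt)} characters]"   # stand-in for a real LLM call

query = "How do vector databases work?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(call_llm(prompt))
```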

Keeping LLMs Up-to-Date and Accurate

  • Avoiding Shortcuts: LLMs sometimes take shortcuts based on what they've seen most. To prevent this, experts fine-tune them with various examples, even the less common ones.
  • Boosting Recall: Ensuring LLMs remember and use the correct information is critical, especially when the topic is complex. Techniques like Chain-of-Thought prompting help LLMs think step by step, much like how we tackle a tough math problem (an example prompt follows this list).
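
Below is a minimal sketch of a Chain-of-Thought style prompt: one worked example with explicit reasoning steps (made-up numbers), followed by the new question the model should answer the same way.

```python
# A Chain-of-Thought style prompt: show one worked example with explicit reasoning
# steps, then ask the model to follow the same step-by-step pattern.
cot_prompt = """Q: A library had 120 books, lent out 45, and received 30 new ones. How many books does it have now?
A: Start with 120 books. Lending out 45 leaves 120 - 45 = 75. Receiving 30 more gives 75 + 30 = 105. The answer is 105.

Q: A train travels 60 km in the first hour and 80 km in the second hour. What is its average speed over the two hours?
A: Let's think step by step."""
print(cot_prompt)
```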

Sharpening LLM Training

Refining How LLMs Learn

  • Improving Architecture: Just like updating an old app, tweaking the LLM's structure can make a big difference. This can involve enhancing how the LLM pays attention to information or processes past and future contexts.
  • Better Training Goals: Setting smarter training goals helps LLMs understand the context better and reduces errors, like ensuring they get the complete picture instead of just bits and pieces.

Aligning with Human Thought

Sometimes, LLMs try too hard to please us, leading them off track. By refining how we teach and guide them, we can keep their responses more honest and on-point.

Conclusion: Mastering LLMs to Reduce Hallucinations

In this blog, we've seen how Large Language Models (LLMs) are impressive yet prone to hallucinations, which are inaccuracies or inconsistencies in their responses. Tackling these challenges is essential for maximizing the effectiveness of LLMs.

We've covered different types of hallucinations and identified effective strategies like prompt crafting, industry-specific fine-tuning, and innovative approaches like knowledge editing and retrieval augmentation. These methods enhance LLM accuracy and broaden their practical application.

As LLM technology continues to evolve, refining these models and their training processes remains a dynamic and ongoing endeavor. The aim is clear: to develop LLMs that are linguistically adept, consistently accurate, and reliable, paving the way for their trustworthy and insightful use in diverse fields. Check out our other research-based resources here, and if you're curious to see how we work with LLMs and what we achieve, sign up to explore our working solutions and gain firsthand experience with the latest in AI technology.