The Hallucinations of Large Language Models: Can They Be Overcome?

By Daniel Detlaf

One-man flea circus, writer, sci-fi nerd, news junkie and AI tinkerer.

Have you ever wondered how accurate AI-generated text really is? While ChatGPT has amazed us all with its depth of knowledge and fluency in responses, there’s one major issue holding it back: hallucinations.

Hallucinations, a term coined by Google AI researchers in 2018, are mistakes in AI-generated text that read as plausible but are incorrect or nonsensical. The AI system produces content that looks great but cannot be trusted, which creates real problems for code-generation tools such as OpenAI’s Codex and GitHub Copilot. Even high school students have to be cautious when using ChatGPT for book reports or essays, since the output may contain erroneous “facts.”

OpenAI’s Chief Scientist, Ilya Sutskever, believes that hallucinations can be eliminated over time by improving reinforcement learning from human feedback (RLHF), a technique pioneered by OpenAI and Google’s DeepMind. In RLHF, human evaluators rate or rank the model’s responses, those judgments are used to train a reward model, and the reward model then guides further fine-tuning so the system favors the kinds of answers humans preferred.
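To make that loop concrete, here is a minimal toy sketch in Python. It is not OpenAI’s pipeline: the candidate answers, the human_prefers labeler, and the policy_sample function are all invented for illustration, and the “reward model” is nothing more than a table of win rates computed from simulated human comparisons.

import random

random.seed(0)

# Toy candidate answers the "model" can produce, each paired with a hidden
# factuality score that stands in for ground truth (1.0 = fully faithful).
CANDIDATES = {
    "The Eiffel Tower is in Paris.": 1.0,    # faithful
    "The Eiffel Tower is in Rome.": 0.0,     # hallucination
    "The Eiffel Tower is 330 m tall.": 0.8,  # mostly right
}

def human_prefers(a: str, b: str) -> str:
    # Stand-in for a human evaluator: pick the more factual of two answers.
    return a if CANDIDATES[a] >= CANDIDATES[b] else b

# Step 1: collect pairwise human preferences over the candidate answers.
wins = {c: 0 for c in CANDIDATES}
for a in CANDIDATES:
    for b in CANDIDATES:
        if a != b:
            wins[human_prefers(a, b)] += 1

# Step 2: fit a crude "reward model": each answer's win rate across comparisons.
comparisons_per_answer = 2 * (len(CANDIDATES) - 1)
reward = {c: wins[c] / comparisons_per_answer for c in CANDIDATES}

# Step 3: "update the policy" by sampling answers in proportion to reward,
# so the less hallucinatory answers become more likely to be produced.
def policy_sample() -> str:
    answers = list(reward)
    weights = [reward[c] for c in answers]
    return random.choices(answers, weights=weights, k=1)[0]

print("Learned rewards:", reward)
print("Sampled answer:", policy_sample())

In a production RLHF system the reward model is a neural network trained on many thousands of human comparisons, and the “sample in proportion to reward” step is replaced by a policy-gradient update such as PPO, but the shape of the loop is the same: generate answers, collect human judgments, learn a reward, and push the model toward the answers humans trust.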

However, deep learning pioneer Yann LeCun argues that a more fundamental flaw is behind hallucinations. In his view, large language models (LLMs) need to learn from observation of the world to acquire nonlinguistic knowledge, which is essential for grasping the underlying reality that language describes.

In contrast, Sutskever argues that text already contains all the necessary knowledge about the world. He believes abstract ideas can still be learned from text, given the billions of words used to train LLMs like ChatGPT.

Mathew Lodge, CEO of Diffblue, points out that reinforcement learning systems can be more accurate than LLMs at a fraction of the cost, especially on complex, error-prone tasks. He suggests that LLMs are best used where errors and hallucinations have low impact.

It remains to be seen whether RLHF can eliminate hallucinations in LLMs. In the meantime, the usefulness of these models for tasks that demand precise output is still limited. Sutskever remains optimistic, however, arguing that improved generative models will develop a deep understanding of the world as seen through the lens of text.

Partially sourced from: Hallucinations Could Blunt ChatGPT’s Success
