
Why AI sometimes gets it wrong — and big strides to address it

by Vanessa Ho

Around the time GPT-4 was making headlines for acing standardized tests, Microsoft researchers and collaborators were putting other AI models through a different type of test — one designed to make the models fabricate information.

To target this phenomenon, known as “hallucinations,” they created a text-retrieval task that would give most humans a headache and then tracked and improved the models’ responses. The study led to a new way to reduce the instances in which large language models (LLMs) deviate from the data they’re given.

It’s also one example of how Microsoft is creating solutions to measure, detect and mitigate hallucinations, and part of the company’s efforts to develop AI in a safe, trustworthy and ethical way.

“Microsoft wants to ensure that every AI system it builds is something you trust and can use effectively,” says Sarah Bird, chief product officer for Responsible AI at the company. “We’re in a position of having many experts and the resources to invest in this space, so we see ourselves as helping to light the way on figuring out how to use new AI technologies responsibly — and then enabling everyone else to do it too.”

Technically, hallucinations are “ungrounded” content, which means a model has changed the data it’s been given or added information not contained in it.

There are times when hallucinations are beneficial, like when users want AI to create a science fiction story or provide unconventional ideas on everything from architecture to coding. But many organizations building AI assistants need them to deliver reliable, grounded information in scenarios like medical summarization and education, where accuracy is critical.

That’s why Microsoft has created a comprehensive array of tools to help address ungroundedness, drawing on expertise from developing its own AI products like Microsoft Copilot.

Company engineers spent months grounding Copilot’s model with Bing search data through retrieval-augmented generation (RAG), a technique that adds extra knowledge to a model without having to retrain it. Bing’s answers, index and ranking data help Copilot deliver more accurate and relevant responses, along with citations that allow users to look up and verify information.
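
Copilot’s actual retrieval pipeline isn’t spelled out here, but the pattern it follows is simple to sketch: fetch passages relevant to the question, then place them in the prompt so the model answers from that data and can cite it. The minimal Python sketch below is illustrative only; the toy document list, the keyword-overlap retriever and the prompt wording are assumptions for illustration, not Bing or Copilot internals.

```python
# Minimal retrieval-augmented generation (RAG) sketch. The document list,
# the keyword-overlap retriever and the prompt wording are illustrative
# stand-ins, not Bing or Copilot internals.

DOCUMENTS = [
    "Retrieval-augmented generation adds external knowledge to a prompt.",
    "Copilot cites the web sources its answers draw on.",
    "Grounded responses stay within the retrieved source data.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Put retrieved passages ahead of the question so the model answers
    from them rather than from its parametric memory."""
    sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using only the numbered sources below and cite them.\n"
        f"Sources:\n{sources}\n\nQuestion: {query}\nAnswer:"
    )

question = "What does retrieval-augmented generation do?"
prompt = build_prompt(question, retrieve(question, DOCUMENTS))
print(prompt)  # this prompt would then be sent to the language model
```

The key design choice is that fresh data arrives through the prompt at query time, which is why the model never needs retraining to stay current.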

“The model is amazing at reasoning over information, but we don’t think it should be the source of the answer,” says Bird. “We think data should be the source of the answer, so the first step for us in solving the problem was to bring fresh, high-quality, accurate data to the model.”

Microsoft is now helping customers do the same with advanced tools. The On Your Data feature in Azure OpenAI Service helps organizations ground their generative AI applications with their own data in an enterprise-grade secure environment. Other tools available in Azure AI help customers safeguard their apps across the generative AI lifecycle. An evaluation service helps customers measure the groundedness of apps in production against pre-built groundedness metrics. Safety system message templates make it easier for engineers to instruct a model to stay focused on the source data.
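
The templates Microsoft ships aren’t reproduced in this story, but a safety system message of the kind described is easy to illustrate. The wording and the grounded_chat_messages helper below are hypothetical examples of constraining a model to supplied source data, not the Azure AI templates themselves.

```python
# Hypothetical safety system message that keeps a model focused on the
# source data it is given; the wording is illustrative, not the template
# that ships with Azure AI.
SAFETY_SYSTEM_MESSAGE = (
    "You answer questions using only the documents provided in the "
    "conversation. If the documents do not contain the answer, say so "
    "instead of guessing. Do not add facts from outside the documents."
)

def grounded_chat_messages(question: str, documents: str) -> list[dict]:
    """Assemble a chat request whose system message constrains the model
    to the supplied source data."""
    return [
        {"role": "system", "content": SAFETY_SYSTEM_MESSAGE},
        {"role": "user", "content": f"Documents:\n{documents}\n\nQuestion: {question}"},
    ]
```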

The company also announced a real-time tool to detect groundedness at scale in applications that access enterprise data, such as customer service chat assistants and document summarization tools. The Azure AI Studio tool is powered by a language model fine-tuned to evaluate responses against source documents.
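
Azure AI’s detector uses a fine-tuned language model as the judge, and its internals aren’t described here. The sketch below only illustrates the shape of such a check, substituting a crude keyword-overlap heuristic for the fine-tuned model; the function names and the example data are made up for illustration.

```python
# Sketch of a groundedness check: flag response sentences that have no
# support in the source documents. The real Azure AI feature uses a
# fine-tuned language model as the judge; this keyword-overlap heuristic
# only illustrates the input/output shape of such a check.
import re

def sentence_supported(sentence: str, sources: list[str], threshold: float = 0.5) -> bool:
    """Treat a sentence as grounded if enough of its content words
    appear somewhere in the source documents."""
    words = {w for w in re.findall(r"[a-z']+", sentence.lower()) if len(w) > 3}
    if not words:
        return True
    source_text = " ".join(sources).lower()
    covered = sum(1 for w in words if w in source_text)
    return covered / len(words) >= threshold

def detect_ungrounded(response: str, sources: list[str]) -> list[str]:
    """Return the sentences of the response that look ungrounded."""
    sentences = re.split(r"(?<=[.!?])\s+", response.strip())
    return [s for s in sentences if s and not sentence_supported(s, sources)]

sources = ["The contract renews on January 1 and can be cancelled with 30 days notice."]
response = "The contract renews on January 1. Cancellation requires a $500 fee."
print(detect_ungrounded(response, sources))  # flags the invented fee claim
```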

Microsoft is also developing a new mitigation feature to block and correct ungrounded instances in real time. When a grounding error is detected, the feature will automatically rewrite the information based on the data.
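
The article doesn’t detail how that correction step works beyond saying flagged content is rewritten from the data. One way to picture it is a detect-then-rewrite loop like the sketch below, which reuses the detect_ungrounded function from the previous sketch; call_model is a hypothetical stand-in for whatever language model endpoint an application uses, not a Microsoft API.

```python
# Sketch of a detect-then-correct loop: sentences flagged as ungrounded
# are sent back to a model with the source data and replaced by the
# rewritten version. Reuses detect_ungrounded from the previous sketch;
# call_model is a hypothetical stand-in for a real language model call.

def call_model(prompt: str) -> str:
    raise NotImplementedError("replace with a real language model call")

def correct_response(response: str, sources: list[str]) -> str:
    corrected = response
    for claim in detect_ungrounded(response, sources):
        prompt = (
            "Rewrite the claim so it states only what the sources support, "
            "or return an empty string if the sources say nothing about it.\n"
            f"Sources: {' '.join(sources)}\nClaim: {claim}\nRewritten claim:"
        )
        corrected = corrected.replace(claim, call_model(prompt).strip())
    return corrected
```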

“Being on the cutting edge of generative AI means we have a responsibility and an opportunity to make our own products safer and more reliable, and to make our tools available for customers,” says Ken Archer, a Responsible AI principal product manager at Microsoft.

The technologies are supported by research from experts like Ece Kamar, managing director at Microsoft Research’s AI Frontiers lab. Guided by the company’s ethical AI principles, her team published the study that improved models’ responses and, in another study examining how models pay attention to user inputs, discovered a new way to predict hallucinations.

“There is a fundamental question: Why do they hallucinate? Are there ways we can open up the model and see when they happen?” she says. “We are looking at this from a scientific lens, because if you understand why they are happening, you can think about new architectures that enable a future generation of models where hallucinations may not be happening.”

Kamar says LLMs tend to hallucinate more around facts that are less available in internet training data, making the attention study an important step in understanding the mechanisms and impact of ungrounded content.

“As AI systems support people with critical tasks and information-sharing, we have to take every risk that these systems generate very seriously, because we are trying to build future AI systems that will do good things in the world,” she says.

Learn more about Microsoft’s Responsible AI work.

Lead illustration by Makeshift Studios / Rocio Galarza. Story published on June 20, 2024.