Microsoft Releases ‘Correction’ Tool to Address AI Hallucinations

Written by Matt Milano
  • Microsoft has released a new tool, called “Correction,” aimed at addressing one of the biggest issues facing the AI industry

    All AI models hallucinate, or manufacture details in response to queries. It’s unclear why the phenomenon occurs, but all AI firms are working on ways to address the problem. Microsoft’s solution is Correction, a tool that uses "Groundedness Detection" to check and correct AI-generated content.


    As Microsoft describes it, Groundedness Detection uses provided source documents to cross-check AI responses for accuracy.

    This feature automatically detects and corrects ungrounded text based on the provided source documents, ensuring that the generated content is aligned with factual or intended references. Below, we explore several common scenarios to help you understand how and when to apply these features to achieve the best outcomes.

    Groundedness Detection is available both with and without reasoning. Without reasoning, it uses a simple true-or-false mechanism.

    In the simple case without the reasoning feature, the Groundedness Detection API classifies the ungroundedness of the submitted content as true or false.
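
    In practice, a developer sends the model’s answer together with the grounding documents to the detection endpoint and inspects the verdict. The sketch below is a minimal Python illustration of the simple true-or-false mode; the endpoint path, API version, and field names reflect the preview documentation as best understood and should be treated as assumptions to verify against current Azure AI Content Safety docs.

    ```python
    # Minimal sketch of a Groundedness Detection request (assumed field names;
    # verify the endpoint path, API version, and payload against current
    # Azure AI Content Safety documentation).
    import requests

    ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
    API_KEY = "<content-safety-key>"                                  # placeholder

    def check_groundedness(query: str, answer: str, sources: list[str]) -> dict:
        """Ask the service whether `answer` is grounded in `sources`."""
        payload = {
            "domain": "Generic",
            "task": "QnA",
            "qna": {"query": query},
            "text": answer,               # the AI-generated response to verify
            "groundingSources": sources,  # the documents to cross-check against
            "reasoning": False,           # simple true/false classification mode
        }
        resp = requests.post(
            f"{ENDPOINT}/contentsafety/text:detectGroundedness",
            params={"api-version": "2024-09-15-preview"},  # assumed preview version
            headers={"Ocp-Apim-Subscription-Key": API_KEY},
            json=payload,
            timeout=30,
        )
        resp.raise_for_status()
        # Expected shape (illustrative): {"ungroundedDetected": true, ...}
        return resp.json()
    ```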

    In contrast, using the Groundedness Detection feature with reasoning enabled does a better job of correcting hallucinated content so it aligns with the provided sources.

    The Groundedness Detection API includes a correction feature that automatically corrects any detected ungroundedness in the text based on the provided grounding sources. When the correction feature is enabled, the response includes a “correction text” field that presents the corrected text aligned with the grounding sources.
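
    Continuing the sketch above, enabling reasoning and correction might look roughly like the following. The "correction" flag and the "correctionText" response field are assumptions inferred from Microsoft’s description rather than confirmed parameter names.

    ```python
    # Extends the earlier sketch: reasoning and correction enabled. The
    # "correction" flag and "correctionText" field are assumptions inferred
    # from Microsoft's description, not confirmed parameter names.
    def detect_and_correct(query: str, answer: str, sources: list[str]) -> str:
        """Return `answer`, rewritten against `sources` if ungrounded content is found."""
        payload = {
            "domain": "Generic",
            "task": "QnA",
            "qna": {"query": query},
            "text": answer,
            "groundingSources": sources,
            "reasoning": True,   # LLM-backed assessment of which sentences are ungrounded
            "correction": True,  # assumed flag: return text rewritten to match the sources
        }
        result = requests.post(
            f"{ENDPOINT}/contentsafety/text:detectGroundedness",
            params={"api-version": "2024-09-15-preview"},  # assumed preview version
            headers={"Ocp-Apim-Subscription-Key": API_KEY},
            json=payload,
            timeout=60,
        ).json()

        if result.get("ungroundedDetected"):
            # Per Microsoft's description, a corrected-text field accompanies the result.
            return result.get("correctionText", answer)
        return answer
    ```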

    Microsoft says its new Correction feature builds on Groundedness Detection, which was first introduced in March 2024, and gives customers far more control.

    Since we introduced Groundedness Detection in March of this year, our customers have asked us: “What else can we do with this information once it’s detected besides blocking?” This highlights a significant challenge in the rapidly evolving generative AI landscape, where traditional content filters often fall short in addressing the unique risks posed by Generative AI hallucinations.

    This is why we are introducing the correction capability. Empowering our customers to both understand and take action on ungrounded content and hallucinations is crucial, especially as the demand for reliability and accuracy in AI-generated content continues to rise.

    Building on our existing Groundedness Detection feature, this groundbreaking capability allows Azure AI Content Safety to both identify and correct hallucinations in real-time before users of generative AI applications encounter them.

    The company goes on to describe how the feature works, step by step; a conceptual sketch of the loop follows the list.

    • The developer of the application needs to enable the correction capability.
    • Then, when an ungrounded sentence is detected, this triggers a new request to the generative AI model for a correction.
    • The LLM then assesses the ungrounded sentence against the grounding document.
    • If the sentence lacks any content related to the grounding document, it may be filtered out completely.
    • However, if there is content sourced from the grounding document, the foundation model will rewrite the ungrounded sentence to help ensure it aligns with the grounding document.
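
    Put together, the loop Microsoft describes resembles the sketch below. The helper functions are naive placeholders standing in for the service’s internal detection and foundation-model rewrite steps, not Microsoft’s actual implementation.

    ```python
    # Conceptual sketch of the correction loop described in the steps above.
    # The helpers are naive placeholders, not Microsoft's implementation.
    import re

    def split_sentences(text: str) -> list[str]:
        """Very rough sentence splitter (placeholder)."""
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

    def is_grounded(sentence: str, source: str) -> bool:
        """Placeholder check: are the sentence's longer terms all present in the source?"""
        terms = [w for w in re.findall(r"\w+", sentence.lower()) if len(w) > 4]
        return all(t in source.lower() for t in terms) if terms else True

    def relates_to(sentence: str, source: str) -> bool:
        """Placeholder check: does the sentence share any substantive term with the source?"""
        terms = [w for w in re.findall(r"\w+", sentence.lower()) if len(w) > 4]
        return any(t in source.lower() for t in terms)

    def rewrite_with_llm(sentence: str, source: str) -> str:
        """Placeholder for the foundation-model rewrite that aligns a sentence with the source."""
        return f"[rewritten to match source] {sentence}"

    def correct_response(response: str, grounding_doc: str) -> str:
        corrected = []
        for sentence in split_sentences(response):
            if is_grounded(sentence, grounding_doc):
                corrected.append(sentence)                    # grounded: keep as-is
            elif relates_to(sentence, grounding_doc):
                corrected.append(rewrite_with_llm(sentence, grounding_doc))  # rewrite it
            # else: nothing in the source anchors the sentence, so drop it entirely
        return " ".join(corrected)
    ```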

    The Hallucination Problem

    It remains to be seen whether Groundedness Detection and Correction will completely solve the issue of AI hallucinations, but they appear to be a step in the right direction, at least until AI firms can better understand why hallucinations happen. Unfortunately, that has proved to be a difficult task, as Alphabet CEO Sundar Pichai pointed out.

    “No one in the field has yet solved the hallucination problems,” Pichai said. “All models do have this as an issue.”

    “There is an aspect of this which we call—all of us in the field—call it a ‘black box,’” he added. “And you can’t quite tell why it said this, or why it got it wrong.”

    Even Apple CEO Tim Cook has acknowledged the problem, saying he would never claim the company’s AI models are free of the issue.

    “It’s not 100 percent. But I think we have done everything that we know to do, including thinking very deeply about the readiness of the technology in the areas that we’re using it in,” Cook replied to Washington Post columnist Josh Tyrangiel. “So I am confident it will be very high quality. But I’d say in all honesty that’s short of 100 percent. I would never claim that it’s 100 percent.”

    https://youtu.be/odxAPb0uf34?feature=shared
