1 minute read

AI Hallucinations - Be Aware

Now that the public at large has experience with AI tools, we are starting to see more information about the risks of using them and how to prevent those risks.  One such risk is the “AI hallucination,” where an AI confidently produces a fabricated or nonsensical answer rather than admitting it does not know one.  This kind of outcome can reflect inadequate training data: too little of it, or data poorly suited to the questions the tool is asked.  Industry experts are now producing guidance on how to mitigate the risk of receiving a hallucination when using AI.

WHY IT MATTERS

In very rough terms, generative AI tools such as ChatGPT work by finding patterns and predicting likely outcomes and associations.  If an AI's training base has not prepared it for the use case it faces, it may not be able to draw appropriate conclusions.  Unfortunately, instead of telling the user that it cannot produce an answer, it may generate nonsense "answers."  The attached article offers tips for avoiding hallucinations, such as cross-checking results from multiple AIs and building in human review (a minimal sketch of such a cross-check appears below).  From a legal standpoint, the main things are to be aware of the potential for hallucinations and to have a way to account for them if your company uses AI in decision-making, or to warn users about the potential for hallucinations if your company gives users access to AI tools.  Careful drafting of internal policies and consumer-facing terms can help in this regard.
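To make the cross-checking tip concrete, here is a minimal sketch of what it might look like in practice.  The function names `ask_model_a` and `ask_model_b` are hypothetical stand-ins for whichever AI tools your organization actually uses, and the similarity threshold is an arbitrary illustration, not a recommended value.

```python
# Sketch of the "cross-check multiple AIs" tip: ask two independent models
# the same question and flag low agreement for human review.
from difflib import SequenceMatcher


def ask_model_a(question: str) -> str:
    # Placeholder: replace with a real call to your first AI tool.
    return "The contract renews automatically every 12 months."


def ask_model_b(question: str) -> str:
    # Placeholder: replace with a real call to your second AI tool.
    return "Renewal is manual and requires 60 days' written notice."


def cross_check(question: str, threshold: float = 0.6) -> dict:
    """Compare two independent answers; low similarity suggests a possible hallucination."""
    a, b = ask_model_a(question), ask_model_b(question)
    similarity = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    return {
        "answer_a": a,
        "answer_b": b,
        "similarity": round(similarity, 2),
        "needs_human_review": similarity < threshold,
    }


if __name__ == "__main__":
    # Disagreement between the two answers routes the question to a person.
    print(cross_check("How does the contract renew?"))
```

The point is not the string-matching itself (a real deployment would use something stronger) but the workflow: no single AI answer is treated as authoritative, and disagreement triggers human review.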

Yet even using multiple existing metrics may not fully guarantee hallucination detection. Further research is therefore needed to develop more effective metrics for detecting inaccuracies, Rallapalli says. "For example, comparing multiple AI outputs could detect if there are parts of the output that are inconsistent across different outputs or, in case of summarization, chunking up the summaries could better detect if the different chunks are aligned with facts within the original article." Such methods could help detect hallucinations better, she notes.
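A simplified illustration of the chunking idea quoted above: split a summary into sentence-sized chunks and score each chunk against the source article. The word-overlap scoring here is a crude stand-in for a real factual-consistency metric, and the threshold is illustrative only; it simply shows where a finer-grained check would plug in.

```python
# Sketch: flag summary chunks that are poorly supported by the source article.
import re


def chunk_sentences(text: str) -> list[str]:
    """Split text into rough sentence-sized chunks."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def support_score(chunk: str, source: str) -> float:
    """Fraction of the chunk's content words that also appear in the source."""
    source_words = set(re.findall(r"\w+", source.lower()))
    chunk_words = [w for w in re.findall(r"\w+", chunk.lower()) if len(w) > 3]
    if not chunk_words:
        return 1.0
    return sum(w in source_words for w in chunk_words) / len(chunk_words)


def flag_unsupported_chunks(summary: str, source: str, threshold: float = 0.5) -> list[str]:
    """Return summary chunks whose wording has little support in the source article."""
    return [c for c in chunk_sentences(summary) if support_score(c, source) < threshold]


if __name__ == "__main__":
    article = "The policy takes effect on 1 March and applies to all employees in the EU."
    summary = "The policy applies to EU employees. It was approved by the board in 2019."
    print(flag_unsupported_chunks(summary, article))
    # ['It was approved by the board in 2019.'] -- a claim the article never made
```

In a production setting the scoring function would be replaced by a proper consistency metric, but the chunk-by-chunk structure is what lets the check point to the specific sentence that may be hallucinated.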

Tags

data security and privacy, hill_mitzi, insights, ai and blockchain