Now that the public at large has experience with AI tools, we are starting to see more information about the risks involved in their use and how to prevent them. One such risk is the "AI hallucination," in which an AI invents a plausible-sounding but false answer rather than admitting it does not know one. This kind of outcome often reflects inadequate training data, whether insufficient in volume or poorly suited to the task at hand. Industry experts are now publishing guidance on how to reduce the risk of encountering hallucinations when using AI.
WHY IT MATTERS
In very rough terms, generative AI tools such as ChatGPT work by finding patterns in their training data and predicting likely outputs and associations. If a tool's training base has not prepared it for the use case it faces, it may be unable to draw appropriate conclusions. Unfortunately, instead of telling the user that it cannot produce an answer, it may generate fabricated "answers." The attached article offers tips for avoiding hallucinations, such as cross-checking results across multiple AI tools and building in human review. From a legal standpoint, the key is to recognize the potential for hallucinations and account for it: if your company uses AI in decision-making, have a process for catching hallucinated output; if your company gives users access to AI tools, warn them of the risk. Well-drafted internal policies and consumer-facing terms can help on both fronts.
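For technical teams asked to operationalize the cross-checking and human-review tips, the sketch below illustrates one possible approach. It is only a minimal example: the model-calling functions are hypothetical stubs standing in for calls to two independent AI services, and the similarity threshold is an arbitrary placeholder a real program would need to tune.

```python
# Minimal sketch: cross-check two AI answers and escalate disagreement
# to a human reviewer. Model calls are stubbed placeholders, not real APIs.
from difflib import SequenceMatcher


def ask_model_a(question: str) -> str:
    # Placeholder for a call to a first generative AI service.
    return "The contract renews automatically on June 1."


def ask_model_b(question: str) -> str:
    # Placeholder for a call to a second, independent AI service.
    return "Renewal is automatic, effective June 1."


def needs_human_review(answer_a: str, answer_b: str, threshold: float = 0.6) -> bool:
    # Rough text-similarity check; low agreement between the two answers
    # is treated as a possible hallucination and escalated to a person.
    similarity = SequenceMatcher(None, answer_a.lower(), answer_b.lower()).ratio()
    return similarity < threshold


question = "When does the contract renew?"
a, b = ask_model_a(question), ask_model_b(question)
if needs_human_review(a, b):
    print("Answers disagree -- route to a human reviewer before relying on them.")
else:
    print("Answers broadly agree:", a)
```

A check like this does not prove an answer is correct; it only flags obvious divergence so that a person, rather than the AI, makes the final call, which is the point of the human-review recommendation.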