
NIST Creates Generative AI Standards Usable as Best Practices

In late July, the government's standard-setting body, NIST, published non-binding guidelines on the use of AI that can serve as a reference for any company seeking guidance on how to deploy AI tools.  The framework is not law and is not mandatory.  Nonetheless, many large tech players have voluntarily agreed to follow the guidelines.  As a result, the guidelines are likely to find their way into the marketplace both through thought leadership and via mechanisms such as contracts with suppliers to those large tech companies.

WHY IT MATTERS

Although AI is still unregulated in much of the world, that will change before long.  In the meantime, NIST has done the heavy lifting of examining the issues and recommending specific ways to approach them, including common-sense recommendations such as testing AI tools, drafting internal policies governing their use, and more.  These can serve as a helpful reference for any company that lacks the resources to parse the many issues on its own.  In essence, NIST has given us all a CliffsNotes guide to generative AI and how to approach it.

The broader actions NIST suggested Friday include creating policies to document where training data and generated data originate, along with related retention policies, as well as using "well-defined contracts" and legal agreements that set clear terms on "content ownership, usage rights, quality standards, security requirements." Those working on generative AI should "integrate due diligence" on IP, data security, and other areas into acquiring generative AI, and should regularly "assess the accuracy, quality, reliability, and authenticity of GAI output" through methods such as comparing it to "a set of known ground truth data," according to NIST. The agency also recommended role-playing as potential bad actors to surface possible failures that might not otherwise be detected. Likewise, NIST called for identifying illegal ways the underlying system could be used.

Tags

data security and privacy, hill_mitzi, insights