As large tech vendors push AI tools and features into their platforms and applications, they may be setting the de facto standards for how AI will be handled commercially. AI's benefits appeal to businesses of all kinds. However, because of the way AI operates (by ingesting large amounts of material from public sources), it can raise issues of legal exposure, including the following:
- Intellectual property: who owns all the material that was fed to the AI? Who owns its output?
- Privacy: did the AI operator have permission to use the personal data that were fed to it?
- Liability: do decision-making or other AI-driven functions present any risk to consumer or commercial users?
- Security: do a customer's internal data remain internal, or are they being used to train the AI that will then be licensed to a provider's other customers?
Why It Matters
For the time being, companies that use AI in their technology offerings are gravitating toward transparency, up-front disclosure of AI assistance, and features that provide some assurance about privacy, security, provenance, and other matters. We don't yet know how downstream liability disputes will be decided. Is a disclosure enough to protect an operator? Does the customer bear any liability for using AI tools that turn out to infringe someone's IP or privacy rights, expose confidential data without the customer's consent, or give faulty advice? But the marketplace is starting to grapple with AI as a business and legal tool, and to think through how best to protect the parties, both through contract terms and through features that let customers understand how AI affects the tools they license and the ways they use them. If your business is considering licensing AI-assisted tools, we urge you to explore these questions with your vendor.