2025 is likely to bring a bumper crop of AI+legal developments in the headlines. The EU helpfully provided a kick-start to the year by issuing guidance in late 2024 about maintaining user privacy when personal data are included in the materials used to train an AI. The guidance is heavy on balancing potential harm against the benefits to be achieved, and gives local regulators discretion to decide how to strike that balance.
WHY IT MATTERS
The EU led the world with the first modern comprehensive consumer privacy law, and is now doing much the same in AI. This means that, by default, EU decisions are likely to create precedent and influence best practices as AI becomes more widespread. The fact that privacy regulators will have a say in the appropriateness of AI development is an important line in the sand. Indeed, the EU's guidance specifically says that failure to handle privacy correctly could affect the overall legal status of an AI.
This means that any company building or using an AI needs to understand how the model learns, and ensure that any ongoing collection of personal data to be processed by the AI is assessed and managed just like any other ongoing use of personal data. That may mean data collection and usage risk audits, re-drafting of consumer (or employee) notices about AI usage, upgrades to security measures, updates to vendor management policies and contracts, and development of internal guidance about procurement and deployment of AI tools.