California's governor has vetoed an AI bill that passed both houses of the state legislature by wide margins, saying it took the wrong approach to regulating AI. The bill was controversial, but would have been the first of its kind in the US. It would have required a “kill switch” allowing creators to shut down an AI engaged in massive public harms (e.g., biowarfare), required safety testing prior to release, and given the state's regulators the ability to sue AI creators for damages. Crucially, however, it applied only to very large AI systems and their creators, such as those built by Meta, Google, and Microsoft. The governor pointed to this size-based trigger in rejecting the bill, calling it “well-intentioned” but ultimately not the best way to protect the public. He encouraged an approach keyed to the harm an AI system can actually do to people rather than to the size of the system itself.
WHY IT MATTERS
The US has very little AI regulation. A handful of states regulate AI in limited settings (such as the use of AI-generated content in political ads or AI systems that make discriminatory consumer decisions), but there is no comprehensive regulation of AI at the federal or state level. In contrast, the EU has passed an AI regulatory scheme that is taking effect across the bloc this year and next. It applies to AIs of all sizes and imposes a sliding scale of restrictions that are most onerous in settings where an AI can do the greatest harm to humans. In many ways, this feels like a repeat of consumer privacy, where the EU's broad rules (the GDPR) came first and shaped the global marketplace. For the time being, most US companies using AI will have to consider whether they are subject to EU rules before worrying about anything closer to home.