California's legislature has passed legislation that, if signed, would directly regulate developers of large AI models. The bill would create liability for developers of AI systems above a certain size (measured by computing power, among other factors) whose systems are used in incidents that cause “critical harm,” such as mass casualties or widespread losses from a cyberattack.
WHY IT MATTERS
The bill takes a different tack than most existing AI regulatory frameworks. The EU and other jurisdictions that have passed AI laws so far rely on largely self-enforcing, risk-based models, with more internal risk controls required for higher-risk AI systems. The California bill would instead set a size threshold and require developers of covered AI systems to implement, measure, and report on the safety measures they employ. It would also create a dedicated AI safety regulator. In effect, an oversight agency would ensure that AI systems above a certain size operate safely, much as existing agencies promote physical workplace safety for workers in hazardous industries.
Beyond the oversight angle, the bill also contains a direct liability component. Liability for downstream AI systems built on open technology would attach to the original developer unless the developer of the downstream system spends more than $10M to develop its model. The bill would thus push large developers whose models are likely to serve as the basis for spin-off models to build safety measures into their systems that carry through to downstream derivatives.
As of early September, the governor had not signed the bill, and it is not clear whether he will.