As the incoming administration implements its agenda, it has rescinded the 2023 Executive Order relating to AI. The 2023 EO required that next-generation AI systems be assessed for safety and that high-risk systems follow disclosure and reporting requirements, including notice to the government. The 2023 EO also, according to a summary from the Wall Street Journal at the time, “will take steps to begin establishing new standards for AI safety and security, protect against fake AI-generated content, shield Americans’ privacy and civil rights and help workers whose jobs are threatened by AI….A central pillar of the order will be an effort to manage such national security risks as cybersecurity threats.”
Congress has not enacted any legislation regarding AI, but several states have considered or passed AI laws addressing deepfake content, consumer privacy, and other discrete AI issues. The EU, meanwhile, has passed a bloc-wide AI regulatory scheme that takes effect starting this year. The EU rules require a risk-based assessment of AI systems and impose a sliding scale of regulatory compliance tied to each system's level of risk. Certain AI systems might be outlawed entirely under the EU rules.
WHY IT MATTERS
As it has done with consumer privacy, Washington appears to be stepping back from new AI regulation even as Brussels steps forward. It remains to be seen how the new administration will address AI and such attendant risks as privacy, healthcare decision-making, IP theft, and cybersecurity.
Meanwhile, we can expect that states will continue to pass their own AI rules, leading to a patchwork of potentially confusing compliance requirements (much as has happened in the privacy arena). We can also expect that large US corporations will adopt AI governance policies to comply with the comprehensive EU laws, and that their smaller suppliers will be required by contract to adopt their own AI policies to keep pace with EU requirements.