Earlier this year, the EU enacted the world's first comprehensive AI law. Among other things, the AI Act defines a hierarchy of risk tiers and outlines how higher-risk AI models must be managed. A “Code of Practice” will accompany the AI Act, similar to the implementing regulations that often accompany US legislation. The first draft of the CoP has been released for review and comment by regulators; four rounds of drafting are planned in total. The CoP will have legal effect and can be enforced by the EU's AI Office.
Fines for violating the AI Act can reach up to 15 million euros or 3% of global annual revenue, whichever is higher.
WHY IT MATTERS
The EU has taken a leading, and systematic, approach to AI regulation that will create unified requirements across the bloc. This should, in theory, lead to a more predictable legal environment both for industries and for individuals in the EU. It should also make compliance easier, since there will be one law rather than a patchwork of separate laws enacted by individual member states. The model follows the EU's bloc-wide privacy law, the GDPR.
The Code of Practice is designed to supplement the AI Act and provide a reference set of best practices. Adherence to the CoP can be used to demonstrate compliance with the Act itself, and the CoP “may be enforceable” by the AI Office as well. Commitment to the CoP may also influence the size of a fine assessed for violating the AI Act. Thus, businesses operating in the EU have good reason to follow what the CoP eventually says: it could be binding in part, could provide helpful guidance, and may help protect against harsher penalties.
Interestingly, the European AI Office has outsourced drafting of the first round of the CoP to “independent practitioners,” who have presented this first draft for review by regulators and other stakeholders. The entire process is designed to be “iterative,” with four rounds of drafting and review between now and April 2025.