Senators Thune and Klobuchar apparently are tired of waiting on comprehensive AI legislation from Congress. (In this weariness, they may be thinking of how long Congress has dragged out, and so far failed at, producing privacy legislation that would set national standards.) Thune's office is announcing that they will release a "light touch" AI bill requiring covered companies to self-assess and certify that their use of AI is safe. Ironically, this mirrors the way privacy laws often work: covered companies in the EU, for example, are supposed to conduct risk assessments regarding their data use and adjust accordingly, and some US states are adding this model to their own privacy bills and laws.
Why It Matters
AI is the hottest, and perhaps most alarming, technological development in many years. Industry, consumer groups, and regulators alike have come out in force to point out ways in which AI could be used for nefarious purposes and to call for rules about its use. Congress's track record on tech issues is limited, however: it is tough to get an increasingly partisan (and frankly, not-young) body of people up to speed on complex tech issues and their impact. We have seen for years how this hangs up progress on any meaningful national privacy legislation, which is why states are stepping into the gap.
Having national rules on AI, especially if they rely on industry self-assessment rather than detailed Dos and Don'ts, could be a great boon to AI efforts as the technology gains traction in the marketplace. If Congress can give us some guardrails and then step back, there may be more room for enterprise innovation -- with the comfort that large policy goals are being addressed.