As part of its “Operation AI Comply,” the FTC will settle a complaint against a security firm that claimed to be able to detect weapons such as knives and guns based on AI assessment of common “signatures” such as composition and shape. The company sells its technology to schools, stadiums, and similar venues. As is common in such settlements, the company will be prohibited from making certain kinds of claims about its technology. More interestingly, it must also offer some customers the option to cancel their contracts.
The complaint itself was supported unanimously by the FTC's five commissioners, but one commissioner dissented from the portion of the proposed order requiring the company to allow customers to cancel their contracts, arguing that the agency lacks the authority to impose that remedy. The proposed settlement must be approved by a court before it becomes final.
WHY IT MATTERS
Two important things about this case:
- It is part of the FTC's efforts to be visibly out in front of AI technology and unfair business practices. The FTC has brought a raft of enforcement actions in the AI space, which it says are aimed at “crack[ing] down on overpromises and AI-related lies.” Among other things, the FTC has said explicitly that “A lie in robot’s clothing is still a lie: the same old advertising principles apply” to making claims about what AI can do for customers. Any company making promises to customers about AI-enabled services should take care to tailor its disclosures precisely and truthfully.
- It is extremely noteworthy that a federal agency thinks customers deserve the option to cancel their contracts with a company the agency has investigated. This is a very aggressive move and signals a willingness to come down hard on bad actors. We will see whether a court views it as overreach on the agency's part. It also remains to be seen whether the agency will stay similarly aggressive under a new administration, but the fact that it has taken this step signals a rock-ribbed new commitment to protecting the public from AI-related misconduct.