| 1 minute read

Employee Monitoring AI Tech Under Regulatory Scrutiny

Federal regulators are taking a stand against employee monitoring and profiling via AI technologies. The real concern appears to be the collection and use of personal information (including, in some settings, biometric information), rather than AI per se. But this is an issue for employers to watch. Without a general federal privacy law in place, regulators appear to be leveraging other privacy-adjacent tools, such as the Fair Credit Reporting Act (FCRA), to go after workplace productivity monitoring. The federal consumer financial protection watchdog has issued guidance on AI-assisted employee performance monitoring and has held joint hearings with the federal employment regulator.

WHY IT MATTERS

The fact that the Consumer Financial Protection Bureau and the Department of Labor have teamed up is relatively unusual. Each has its own defined area of jurisdiction, and consumer affairs matters do not always overlap with employee relations. The FCRA, however, gives both agencies an opening to weigh in on workplace surveillance. Although neither body has yet issued binding regulations on AI monitoring, the CFPB's guidance is very likely to be treated as a best-practice or model standard, and should be taken seriously.

As summarized in the attached article, the FCRA imposes the following requirements on workplace profiling:

Organizations must comply with the FCRA by providing transparency regarding any monitoring technologies or techniques, obtaining employees' consent, and ensuring employees can dispute inaccurate information collected by the device. Inaccurate information must also be deleted or modified to ensure it does not impact the employee's performance metrics.

For employers, this may mean building monitoring programs with employee knowledge and rights in mind. Building such a program may involve drafting notices, creating policies that spell out what will be monitored and how that information can be used in performance evaluations, and establishing processes for verifying the accuracy of the data collected.

Those FCRA standards are layered on top of emerging AI best practices, which also encourage clear notice to affected individuals, auditing the technology's performance, performing risk assessments about its use, and applying heightened standards of care when the risk of harm is high, as it is when a technology's decision-making capabilities can affect a person's employment.


Recent guidance released by the CFPB indicated that companies are monitoring workers through artificial intelligence-driven technologies, including "black box" algorithmic devices. These devices can collect employees' personal information and use it to score employees on overall "effectiveness." The CFPB asserted jurisdiction over the matter on the ground that the Fair Credit Reporting Act applies to organizational use of this technology, because the statute provides protections against unfair profiling of employees.

Tags

data security and privacy, hill_mitzi, current events, cybersecurity, insights, ai and blockchain