
LinkedIn Suffers Outcry for Not Disclosing AI Use of Data

In late September, LinkedIn faced widespread public criticism for failing to disclose that it was using member data to train AI systems.  The company has since updated its privacy policy to disclose the practice and to allow members to opt out of future use of their data for AI training.  The opt-out will not, however, affect LinkedIn's use of data collected before the new practices were announced.  Notably, the collection and use of data for AI training is not applied equally to all users: members in the EU are protected by local privacy and AI laws, which require explicit opt-in for such practices.

WHY IT MATTERS

So far, the public seems opposed to undisclosed use of data for AI training.  Several high-profile companies have had to change course after being caught using customer data to train their AI systems.  The lesson?  Be forthright.  If you are using data to train AI systems, let users know and give them a choice.  As AI becomes more heavily regulated, this will likely become an explicit user right anyway.  Better to adapt your own practices now and be seen as both consumer-friendly and a thought leader.

"When it comes to using members' data for generative AI training, we offer an opt-out setting," the LinkedIn post read. "Opting out means that LinkedIn and its affiliates won't use your personal data or content on LinkedIn to train models going forward, but does not affect training that has already taken place."
