In late September, LinkedIn faced a wave of public opposition and criticism for failing to disclose that it was using user data to train AI systems. The company has since updated its privacy policy to disclose the practice and to let users opt out of future use of their data for AI training. The opt-out does not, however, cover data LinkedIn collected before the new policy was announced. Notably, the collection and use of data for AI training is not being applied equally to all users: those in the EU are protected by local privacy and AI laws, which require explicit opt-in for such practices.
WHY IT MATTERS
So far, the public seems opposed to undisclosed use of data for AI training. Several high-profile companies have had to change course after being caught using user data to train their AI systems. The lesson? Be forthright. If you are using data to train AIs, let users know and give them a choice. As AI becomes more regulated, this is likely to become an explicit user right anyway. Better to adapt your own practices now and be seen as both consumer-friendly and a thought leader.