US users have discovered a toggle in their data privacy settings that is set to 'Opt-in' by default (EU, EEA, and Swiss users aren't affected, as they're covered by stronger data privacy rules). The setting allows LinkedIn to collect their personal data and use it to train the AI models that power its AI writing assistant and post recommendation features.
If users do stumble across this toggle, buried within their privacy settings, LinkedIn explains that by 'opting in' they are giving "LinkedIn and its affiliates" permission to "use personal data and content to train generative AI models."
But while LinkedIn had explained how and why it was using this data, it had failed to update its data privacy policy to inform users.
LinkedIn has since updated its policy and reassured users that opting out means it and its affiliates (presumably including Microsoft) won't use their personal data or content to train models going forward. Still, questions are being asked about the data collection and training that has already taken place without users' knowledge.
So far, nothing has been said about that.