LinkedIn introduces new AI data opt-out


LinkedIn recently introduced a privacy setting that lets users opt out of having their data used to train generative AI models. The professional networking platform updated its privacy policy to disclose that user data is used for AI training. According to LinkedIn, “We may use your personal data to improve, develop, and provide products and services, develop and train artificial intelligence (AI) models, and gain insights with the help of AI.” The company uses generative AI for features such as writing assistance and post recommendations.

To prevent their data from being used in future AI training, users need to navigate to the Data Privacy tab in their account settings and turn off the toggle labeled “Use my data for training content creation AI models.” LinkedIn clarified that opting out means user data won’t be used to train models going forward, though training that has already occurred cannot be undone. LinkedIn’s FAQ on AI training mentions the use of “privacy-enhancing technologies to redact or remove personal data” from its training sets. The platform also said it does not train models on data from users in the EU, EEA, or Switzerland, likely because of those regions’ stricter privacy regulations.

LinkedIn’s new AI data setting

While the new setting specifically addresses data used to train generative AI models, LinkedIn has other machine learning tools that utilize user data for personalization and moderation. Opting out of those requires a separate process.

Privacy activists have expressed concern over LinkedIn’s decision to opt users into training these AI models by default. Mariano delli Santi, legal and policy officer at the Open Rights Group, stated, “The opt-out model proves once again to be wholly inadequate to protect our rights: the public cannot be expected to monitor and chase every single online company that decides to use our data to train AI. Opt-in consent isn’t only legally mandated, but a common-sense requirement.”

LinkedIn’s move follows similar actions by other tech companies, such as Meta, which recently admitted to having scraped non-private user data for model training dating back to 2007.

As demand for data to train generative AI models grows, users who want to keep control of their personal information will increasingly need to review and update their privacy settings on a regular basis.