In a significant move that could reshape data strategies in the AI industry, the UK’s Information Commissioner’s Office (ICO) announced that Microsoft-owned LinkedIn has temporarily ceased processing user data from the United Kingdom for training AI models. This halt follows concerns raised by the ICO regarding LinkedIn's data practices.

Stephen Almond, the executive director of regulatory risk at the ICO, stated, “We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its U.K. users. We welcome LinkedIn’s confirmation that it has suspended such model training pending further engagement with the ICO.”

This development comes after privacy experts noticed LinkedIn quietly altered its privacy policy to include the United Kingdom among regions where it does not offer an opt-out for data processing, effectively meaning it is not utilizing local user data for AI training at this time.

Blake Lawit, LinkedIn’s general counsel, reiterated this position in a company blog post dated September 18, indicating, “At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not provide the setting to members in those regions until further notice.”

Previously, LinkedIn had assured users that it did not process data from the European Union, EEA, or Switzerland, regions governed by the General Data Protection Regulation (GDPR). Despite the UK's data protection laws mirroring GDPR standards post-Brexit, LinkedIn had not initially extended the same protections to UK users, sparking backlash from privacy advocates.

This intervention was notably driven by the Open Rights Group (ORG), a UK-based digital rights organization, which filed a new complaint against LinkedIn with the ICO. ORG criticized the platform for processing user data without explicit consent and took a swipe at the ICO for not preemptively preventing such issues.

The controversy around LinkedIn isn't isolated. Recently, Meta, the parent company of Facebook and Instagram, reversed a previous pause and resumed the collection and usage of local user data for AI model training, requiring users to actively opt out if they did not wish to participate.

Mariano delli Santi, ORG’s legal and policy officer, emphasized the need for an overhaul of consent mechanisms, suggesting that current opt-out models are insufficient for protecting user rights. He argued, “The opt-out model proves once again to be wholly inadequate to protect our rights: the public cannot be expected to monitor and chase every single online company that decides to use our data to train AI. Opt-in consent isn’t only legally mandated, but a common-sense requirement.”

This shift by LinkedIn might set a precedent that pressures other major platforms to reassess their data processing policies, especially in markets with stringent data protection laws. With increasing scrutiny and regulation, companies operating in the AI sector must navigate a complex landscape where user consent and privacy take on paramount importance.
The outcomes of LinkedIn's ongoing consultation with the ICO could potentially influence broader regulatory frameworks and industry standards.

For more updates, stay tuned as we continue to monitor this evolving story.

### Other Key Stories

#### Macklem Warns AI May Push Prices Higher Through Demand Boost

Bank of Canada Governor Tiff Macklem warned that artificial intelligence technologies could add to inflationary pressures, potentially affecting pricing strategies for e-commerce clients. In a speech in Toronto, Macklem highlighted that strong investment in AI is increasing demand, with rising equity prices and hiring contributing to higher consumption. Additionally, the massive computing requirements of