Meta’s new privacy policy, which allows the use of public information for AI training, has been met with legal challenges in the EU. Advocacy group None of Your Business has filed complaints in 11 countries, alleging violations of the GDPR and raising concerns about user consent and data processing.
Meta Alters Privacy Policy in the EU for AI Training, Faces Legal Challenges
Meta is set to implement a new privacy policy on June 26, 2024, allowing the use of public information shared by its users, such as posts, photos, and captions, to train its AI models. The company states that in some cases, this data will be de-identified or anonymized.
Private messages among users are excluded from this data collection. The updated policy also involves gathering other user information, including transaction data, metadata, and device information, to test and develop new products and features.
This policy change has prompted the advocacy group None of Your Business (NOYB) to file complaints in 11 EU countries. NOYB contends that the policy violates the General Data Protection Regulation (GDPR) because users are not told which future AI applications their data will support and cannot opt out once the policy takes effect. The group argues that Meta’s reliance on ‘legitimate interest’ as a legal basis for this data processing has previously been rejected by the Court of Justice of the European Union.
Meta maintains that it has a legitimate interest in processing the data to build AI services. NOYB counters that relying on opt-out mechanisms shifts undue responsibility onto users and makes withdrawing from data processing needlessly difficult.
In India, the same policy has been in effect since December 27, 2023, under India’s Digital Personal Data Protection Act, which does not require companies to obtain user consent before processing publicly available data.
The policy change and the ensuing legal disputes illustrate the complex landscape of data privacy and AI development, underscoring how regulatory requirements diverge across regions.