Meta has announced its intention to use data from European users to train its artificial intelligence models. The decision comes as the social media giant, which owns Facebook, Instagram, and WhatsApp, faces heightened scrutiny over data protection.
Meta's efforts are part of its strategy to keep pace with competitors like OpenAI and Google.
To better reflect the languages, geography, and cultural references of its European users, Meta plans to use public data from these users to train its Llama large language model.
Stefano Fratta, Meta's global engagement director for privacy policy, emphasized that without incorporating public content from Europeans, the AI models would fail to understand important regional languages, cultures, and trending topics on social media.
Fratta highlighted that other companies, including Google and OpenAI, have already trained their models on European data. Meta, however, has committed not to use private messages or content from users under 18 in Europe.
Meta's AI training efforts are complicated by the stringent data privacy laws of the European Union, which grant individuals control over their personal information.
Recently, the Vienna-based advocacy group NOYB, led by activist Max Schrems, lodged complaints with 11 national privacy watchdogs, urging them to halt Meta's plans before training of the next generation of Llama begins.
AI language models like Llama require vast amounts of data to improve their predictive capabilities, and newer versions are generally more capable than their predecessors.
While Meta has rolled out its AI assistant features on Facebook, Instagram, and WhatsApp for users in the U.S. and 13 other countries, those features remain unavailable in Europe.
In response to privacy concerns, Meta has taken steps to inform European users about its plans.
Since May 22, the company has sent out 2 billion notifications and emails, providing details about the AI training and offering an online form for users to opt out if they choose.
Fratta stressed that Meta believes Europeans would be disadvantaged by AI models not informed by Europe's rich cultural, social, and historical contributions.
The company's latest privacy policy, set to take effect on June 26, indicates that training for the next AI model will commence shortly after this date.
Conclusion
Meta's plan to utilize European user data for AI training highlights the ongoing tension between technological advancement and data privacy. While the company aims to enhance its AI capabilities to better serve its European user base, it must navigate the rigorous privacy regulations of the European Union and address the concerns of privacy advocates.
The outcome of this initiative will likely have significant implications for how AI models are developed and deployed in regions with strict data protection laws.