Meta says European data is essential for culturally relevant AI

An absolutely wild newsroom post from Meta about using European data for AI training purposes.

In Meta’s latest announcement, Stefano Fratta outlines a bold vision for AI technology tailored specifically for Europeans.

As we reported in May, Meta announced that it would train its AI on European user data. Initially, the announcement reached Meta users only through an in-app notification; no email was sent, yet the new Privacy Policy takes effect on June 26, 2024. In fact, the announcement was so low-key that it took other news outlets almost two weeks to write about it.

Meta says it wants to “push to develop AI that understands and reflects European cultures, languages, and humor,” which sounds promising on the surface. However, the approach of using publicly shared data from European users without explicit, proactive consent is troubling. In fact, there is no consent at all. You are simply automatically enrolled in this new “initiative”.

Meta asserts that this is necessary to ensure that AI services are relevant and competitive, but this justification skirts around deeper privacy concerns.

The Opt-Out Illusion

Meta claims to offer easy opt-out options, but the reality is more complex.

How many users truly understand that their public posts and comments are being harvested for AI training? How simple and accessible are these opt-out mechanisms in practice? As we pointed out in May, they are not that simple, even if Meta says in their latest blog post that:

“We are honouring all European objections. If an objection form is submitted before Llama training begins, then that person’s data won’t be used to train those models, either in the current training round or in the future.”

By and large, most Facebook and Instagram users will not bother with a multi-step opt-out experience:

  • You have to notice the notification about policy changes.
  • You then have to read the introductory statement about the changes to the privacy policy.
  • Next, you need to click on the updated privacy policy.
  • From this page, you need to spot the “right to object” hyperlink – which is not emphasized.
  • Once you click this link, you will land on the Right to Object landing page.
  • From this page, you need to provide details explaining to Meta why you don’t want your personal posts and photos used for AI training.
  • Once that is done, you must wait for Meta to approve it.

Does that look or sound like a fair process? Does that sound like something that each Facebook or Instagram user will go through for themselves?

Meta’s blog post suggests that privacy advocates are pushing extreme positions that could prevent Europeans from enjoying advanced AI technologies.

This framing is problematic. It creates a false dichotomy between privacy and technological progress, implying that Europeans must sacrifice one for the other. True innovation should not come at the expense of fundamental rights. Europe has long championed strong data protection laws, and bending these principles for the sake of AI advancement sets a dangerous precedent.

Meta’s reliance on the “Legitimate Interests” clause under GDPR to process public data for AI training raises eyebrows. This legal basis is intended to balance corporate interests with individual rights, but it often leans in favor of the former. The ambiguity surrounding “Legitimate Interests” can be exploited, allowing companies to sidestep more stringent consent requirements.

Meta argues that training AI on European data is essential for creating culturally relevant and effective AI systems. However, this rationale can easily be viewed as a form of data exploitation. Just because data is publicly available does not mean it should be freely used for corporate gain. Users’ public posts and comments might be accessible, but their consent for such use is assumed rather than explicitly given.

Meta should demonstrate how it is addressing these ethical challenges in concrete terms, beyond regulatory compliance and high-level statements.

To genuinely build AI technology for Europeans transparently and responsibly, Meta needs to prioritize user empowerment. This means going beyond minimum legal requirements and fostering a culture of true consent and control. Users should have a clear, easy-to-navigate path to understand how their data is being used and to opt out if they choose.

Meta has the nerve to say that Europe is at a crossroads, implying that activists advocating for stricter data privacy are essentially arguing against European access to cutting-edge AI. Meta contends that these positions misrepresent European law and unfairly limit Europeans’ benefit from AI advancements enjoyed by the rest of the world. Mind you, the rest of the world cannot opt out of this new privacy policy; that luxury is afforded only to Europeans precisely because the EU has strong data protection laws.

Asserting Europe’s potential to lead in AI innovation, Meta raises the question of whether Europeans will receive equal access to groundbreaking AI that reflects their unique cultural and historical context. They frame AI as the next frontier in technology, with limitless possibilities unfolding, and stress their desire for Europeans to actively participate in this technological revolution.

And yet, that participation is largely enforced by default.