Meta's AI Glasses Send Intimate Footage to Workers in Kenya
A Swedish investigation reveals Meta routes sensitive Ray-Ban smart glasses footage to data annotators in Kenya who see users undressing, having sex, and flashing bank cards - with broken anonymization and no real opt-out.

Meta's Ray-Ban smart glasses are sending users' most intimate footage to data annotators in Nairobi, Kenya, where workers watch people undress, use bathrooms, have sex, and flash bank cards on camera - all to train Meta's AI models. The anonymization meant to blur faces frequently fails. There's no real opt-out. And the workers describing what they see say they were never supposed to talk about it.
That's the finding of a joint investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, published this week and now drawing scrutiny from EU regulators, the OECD, and privacy organizations on both sides of the Atlantic.
Key Findings
- Footage from Meta Ray-Ban glasses is sent to Sama, a data annotation company in Nairobi, for AI training
- Workers report seeing users undressing, having sex, watching pornography, and displaying bank card details
- Facial blurring algorithms frequently fail, especially in poor lighting - leaving faces visible to annotators
- Meta's terms allow human review of all AI interactions, but users appear unaware that their footage is watched by human reviewers
- Retail staff in Sweden told customers "everything stays on your device" - which is false
- Over 7 million glasses were sold in 2025; EssilorLuxottica is targeting 10 million units per year by the end of 2026
- EPIC has asked the FTC to investigate; EU regulators are checking GDPR compliance
What the Workers See
Intimate Content at Scale
Swedish journalists traveled to Nairobi and interviewed more than 30 Sama employees across different levels. The accounts are consistent: workers spend their shifts labeling video content captured by Meta's Ray-Ban smart glasses to train computer vision models. The footage includes people using bathrooms, undressing, engaging in sexual activity, watching pornography while wearing the glasses, and making purchases with bank cards visible on screen.
"We see everything - from living rooms to naked bodies. Meta has that type of content in its databases."
One worker described a video where a wearer's wife undressed without knowing she was being recorded. Another noted the content "could trigger enormous scandals if leaked."
The Coercion Problem
Workers operate under strict non-disclosure agreements, office surveillance cameras, and a ban on bringing recording-capable devices into the facility. The atmosphere discourages questions.
"You understand that it is someone's private life you are looking at, but at the same time you are just expected to carry out the work. You are not supposed to question it. If you start asking questions, you are gone."
Sama has a documented history of labor controversies. In 2021, workers labeling sensitive content for OpenAI earned approximately $1.32 to $2 per hour. After criticism over worker welfare and alleged union-busting, Sama pivoted from content moderation to computer vision annotation in 2023 - which is how it ended up processing Meta glasses footage.
The Anonymization That Doesn't Work
Meta's policy says facial blurring is applied automatically before annotation. Former Meta employees confirmed this is the intent. It isn't what happens in practice.
A former employee told the Swedish journalists: "The algorithms sometimes miss. Especially in difficult lighting conditions, certain faces and bodies become visible."
This matters for two groups of people. First, the wearers - who didn't consent to human review of their intimate moments. Second, the bystanders - who didn't consent to being recorded at all. A colleague, a passerby, a partner in a private moment. None of them opted into Meta's AI training pipeline.
The glasses have a small LED that lights up while recording, but multiple reports have shown it is easy to miss or disable - making the recording functionally invisible to people nearby.
The Opt-Out That Isn't
Meta's AI product terms state the company may review user interactions through automated or manual processes by third-party vendors. Users can toggle off a setting to share data for product improvement.
But the investigation found this is misleading. Recordings tied to AI assistant queries appear to be sent for annotation regardless of the toggle setting. Meta's own terms advise users: "Do not share information that you don't want the AIs to use."
Retail staff made it worse. The Swedish journalists bought glasses at multiple stores and found salespeople consistently told customers that data stays locally on the device or in the app. That's factually wrong - footage is sent to external annotators who are contracted specifically to watch and label it.
7 Million Glasses, Two Regulators
The Scale
This isn't a niche product anymore. EssilorLuxottica, which manufactures the Ray-Ban glasses, sold over 7 million AI glasses in 2025 - more than triple the 2 million sold across 2023 and 2024 combined. The company is targeting 10 million units per year by the end of 2026 and discussing capacity for 20 million. Every pair is a potential source of footage that flows to Sama's annotation floor.
Apple is developing its own AI wearables lineup including camera-equipped glasses. How Meta handles this scandal will set the baseline expectation for the entire AI wearables category.
The GDPR Problem
Privacy lawyer Kleanthi Sardeli from NOYB (the European privacy organization behind multiple landmark GDPR cases) identified the core legal issue:
"Both transparency and a legal basis for the processing are lacking."
Under GDPR, processing intimate visual data requires explicit consent and a clear legal basis. Meta's current approach - burying consent in terms of service and routing footage to Kenya - may not meet either standard. There is no EU adequacy decision for Kenya, so Meta cannot rely on a blanket finding that Kenyan law adequately protects the data; transfers to Kenyan processors would instead need additional safeguards, such as standard contractual clauses. EU-Kenya data protection dialogue only began in May 2024.
The Irish Data Protection Commission, which oversees Meta's European operations, has been contacted about the investigation. Italy's Garante previously imposed conditions on an earlier version of Meta's glasses. Sweden's IMY data protection authority confirmed that GDPR protections must extend to third-country subcontractors.
The FTC and Facial Recognition
In the US, the Electronic Privacy Information Center (EPIC) sent letters to the FTC and state attorneys general on February 13 requesting an investigation into Meta's plans to add facial recognition - called "Name Tag" internally - to the glasses. EPIC argues this would violate Meta's existing FTC consent decree, which prohibits the company from misrepresenting how it maintains user privacy.
In 2024, Harvard students demonstrated a project called I-XRAY that paired Meta Ray-Bans with public facial recognition databases to identify strangers in real time - pulling up names, addresses, and phone numbers. Meta's "Name Tag" feature would build that capability directly into the product.
What Meta Said
After two months of questions from the Swedish journalists, Meta provided a generic written statement through a London spokesperson. The statement did not address specific questions about where the footage originates, where human review takes place, user consent mechanisms, safeguards against exposure of intimate content, or audit procedures.
The OECD AI Incident Database has cataloged the investigation as a documented AI incident.
What It Means
This isn't a theoretical privacy concern. It's a documented pipeline: a user records an intimate moment while wearing the glasses, the footage is sent to Kenya, a worker watches it through broken anonymization, and that worker risks termination for raising concerns.
The scale - 7 million devices, targeting 20 million - means this is a mass data collection system masquerading as a fashion accessory. The deanonymization risks we covered last week showed LLMs can unmask users from text alone. Combine that with a glasses-mounted camera that captures faces, locations, and intimate contexts, and you have a surveillance dataset that no user signed up for and no bystander can escape.
The investigation is still developing. The Irish DPC, the FTC, and Italy's Garante have all been contacted. Regulatory responses will determine whether this becomes a GDPR enforcement case or another privacy scandal that burns through a news cycle and changes nothing.
Sources:
- Help Net Security - Workers reviewing Meta Ray-Ban footage encounter users' intimate moments
- Gizmodo - Dear Meta Smart Glasses Wearers: You're Being Watched, Too
- The Decoder - Meta sends private AI glasses footage to Kenya with few safeguards
- Privacy Guides - Meta Smart Glasses Sending Sensitive Recordings to Workers to Annotate
- OECD AI Incident Database - Meta's AI Smart Glasses Expose Sensitive User Data
- Futurism - Meta Workers Say They're Seeing Disturbing Things Through Users' Smart Glasses
- Road to VR - Meta Sold Over 7 Million Smart Glasses Last Year
