Privacy

Meta Ray-Bans: A privacy risk?

Meta is refusing to confirm whether the images captured by its AI Ray-Ban smart glasses are used to train its AI models.

Martin Crowley
October 1, 2024

When asked whether it uses the images captured by its AI Ray-Ban smart glasses to train its AI models, Meta said, “We’re not saying either way,” claiming it “wasn’t publicly discussing it” because it wasn’t something the company typically shared externally.

The smart glasses have a discreet front-facing camera that takes photos of the user’s surroundings, either on request or automatically when it hears certain trigger words, such as “look.” The newest AI video feature takes a series of photos of those surroundings (sometimes without the user knowing) so it can offer nuanced, natural-sounding answers to questions. For example, if a user asks it to pick an outfit from a closet, it will take numerous photos of the user’s personal space in order to respond, and then send those images to an AI model in the cloud.

This is what people are questioning: How many images are being taken? What are they being used for? And what happens to those personal images once they’re in the cloud?

And Meta is refusing to answer these questions.

This comes as Meta recently admitted to having used public Facebook and Instagram posts from US users to train its AI models since 2007, claiming the posts were “publicly available information.”

Meta’s lack of transparency on this privacy issue is surprising, given the history here: Google’s smart glasses, Google Glass (since discontinued), provoked a strong backlash at launch because people were uncomfortable with being recorded without their consent. You’d expect Meta to issue strong messaging reassuring users that all images taken with the smart glasses will be kept private and secure, and remain siloed to the face camera.