What if the device that brings the Metaverse to life also captures every move you make? The data we generate through immersive technologies becomes the next frontier – data that is not just a reflection of our surroundings but a blueprint of who we are. AR and VR aren’t just about immersive experiences. They’re powerful data engines, recording how we move, where we look, and even how we feel. And it all flows through Passthrough APIs – the invisible conduit between physical and digital realms. Are we fully aware of the risks lurking behind the lenses?
The recently published OECD Immersive Technologies Policy Primer warns of the escalating risks posed by immersive technologies, especially AR and VR. The primer underscores the pressing need for comprehensive policy frameworks to address these risks and the widening deficit of trust around data governance. It highlights how data collection through AR and VR can evolve from benign mapping to invasive surveillance, especially when AI systems amplify these capabilities.
What is a Passthrough API?
During my tenure at Meta (formerly Facebook), I saw firsthand how third-party access to data could evolve from a simple convenience into a complex risk. Passthrough APIs act as the eyes of AR and VR systems, capturing and interpreting real-world data through device cameras and sensors. According to Meta, Passthrough is like "looking through a window": you see your physical surroundings through a live video feed that can be toggled on and off. But what happens when these lenses can see more than we realize? Imagine a digital mirror that not only reflects your surroundings but also maps your facial expressions, tracks your movements, and infers your emotional states. This is the power – and the peril – of Passthrough APIs.

How mixed reality capabilities come together via Passthrough API (source: Meta)
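To get a sense of how little code stands between an app and this "window", here is a minimal sketch using the browser's WebXR Device API, which exposes an analogous camera-backed `immersive-ar` mode. This is a web-platform illustration, not Meta's Passthrough API itself, and it assumes WebXR type definitions are available in the project:

```typescript
// Minimal WebXR sketch: requesting a camera-backed AR session in the browser.
// Illustrative only; Meta's native Passthrough API has its own SDK surface.
async function startAR(): Promise<XRSession | null> {
  if (!navigator.xr) return null;

  // Check whether the device can blend rendered content over the real world.
  const supported = await navigator.xr.isSessionSupported('immersive-ar');
  if (!supported) return null;

  // One call grants the app a live, tracked view of the user's surroundings.
  return navigator.xr.requestSession('immersive-ar', {
    optionalFeatures: ['hit-test', 'anchors'], // surface detection, persistent spatial points
  });
}
```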
| Platform | API Name | Functionality | Notable Risks |
| --- | --- | --- | --- |
| Meta | Meta Passthrough API | Captures real-world data for overlaying digital content | Inferred biometrics, spatial mapping |
| Apple | Apple ARKit | Allows apps to integrate AR experiences using device cameras | Facial recognition, spatial tracking |
| Google | Google ARCore | Provides tools for creating AR content and tracking surfaces | User location, movement patterns |
| Microsoft | Microsoft Spatial Anchors | Enables AR experiences that persist across devices | Location data, persistent spatial data |
| Snap | Snap Lens Studio | Develops AR lenses using real-world camera input | User data capture, behavioral tracking |
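Across all five platforms, the common denominator is camera access. The web analogue makes the point starkly: a single permission prompt is all that gates the raw feed these experiences are built on. A minimal sketch using the standard `getUserMedia` call:

```typescript
// One permission prompt gates the raw camera feed that AR experiences build on.
async function openCameraFeed(): Promise<MediaStream> {
  return navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'environment' }, // rear/world-facing camera
    audio: false,
  });
}
```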
Data Flow and Risks: From Sensors to Developers
Every step in the data flow of Passthrough APIs is a potential point of exploitation. When APIs access our personal spaces, they don't just capture spatial data – they capture us. From facial expressions to gait analysis, the granular data collected can paint a predictive portrait of our behaviors. This is not just about mapping our physical surroundings; it's about mapping our very identities. And once AI systems ingest this data, the implications become even more profound. The flow typically unfolds in four stages (a simplified sketch follows the list):
- Data Capture: Device sensors collect spatial and biometric data.
- Processing: Data is processed locally or transmitted to cloud servers.
- Access: Developers access processed data through API endpoints.
- Storage and Analysis: Data may be stored locally or remotely for further analysis.
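To make the flow concrete, here is a minimal TypeScript sketch of those four stages. Every type, function, and endpoint here (SensorFrame, processFrame, api.example-platform.com, and so on) is hypothetical, invented for illustration rather than drawn from any vendor's SDK; the point is how quickly raw sensor readings become developer-accessible profile material.

```typescript
// Hypothetical types and endpoints: an illustration of the four-stage flow,
// not any vendor's actual SDK.

// 1. Data Capture: what the headset's sensors emit every frame.
interface SensorFrame {
  timestamp: number;
  cameraFrame: Uint8Array; // raw passthrough video frame
  gazeDirection: [number, number, number];
  headPose: { position: [number, number, number]; rotation: [number, number, number, number] };
}

// 2. Processing: raw frames become structured, inference-ready data.
interface ProcessedFrame {
  roomMesh: Float32Array;  // spatial map of the user's environment
  detectedFaces: number;   // how many faces were in view
  gazeTarget: string | null; // what the user is looking at
}

function processFrame(frame: SensorFrame): ProcessedFrame {
  // Placeholder for on-device or cloud computer-vision processing.
  return { roomMesh: new Float32Array(0), detectedFaces: 0, gazeTarget: null };
}

// 3. Access: a third-party developer pulls processed data through an API endpoint.
async function fetchSpatialData(sessionId: string): Promise<ProcessedFrame[]> {
  const res = await fetch(`https://api.example-platform.com/v1/spatial-data/${sessionId}`);
  return res.json();
}

// 4. Storage and Analysis: once stored remotely, the data outlives the session.
async function archiveForAnalysis(frames: ProcessedFrame[]): Promise<void> {
  await fetch('https://api.example-platform.com/v1/archive', {
    method: 'POST',
    body: JSON.stringify(frames), // spatial + behavioral data, retained indefinitely?
  });
}
```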
The potential for misuse is vast, from unauthorized profiling to behavioral tracking. And as AI systems integrate with these datasets, the risks escalate exponentially.
Biometrically Inferred Data: The AI Risk Multiplier

Biometrically inferred data is not just about recognizing faces – it's about inferring health conditions, emotional states, and behavioral patterns. When AI algorithms access biometrically inferred data, they don't just see us; they predict us. The richer the dataset, the more powerful the AI's predictive capabilities. We're entering an era where even a casual glance can be cataloged and analyzed – and potentially weaponized in contexts ranging from targeted advertising to predictive policing.
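To illustrate how "seeing" becomes "predicting", consider a toy sketch: a few seconds of gaze telemetry, the kind a passthrough-enabled headset routinely produces, is enough to rank what holds a user's attention. Everything here (the GazeSample type, the dwell-time heuristic) is invented for illustration; real inference systems are far more sophisticated.

```typescript
// Toy illustration: raw gaze telemetry becomes an inferred "interest profile".
// Types and heuristic are hypothetical; real systems use far richer models.
interface GazeSample {
  timestamp: number; // milliseconds
  target: string;    // what the eye-tracker says the user is looking at
}

function inferInterests(samples: GazeSample[]): Map<string, number> {
  const dwellMs = new Map<string, number>();
  for (let i = 1; i < samples.length; i++) {
    const prev = samples[i - 1];
    // Credit the previous target with the elapsed time: longer dwell = more interest.
    const elapsed = samples[i].timestamp - prev.timestamp;
    dwellMs.set(prev.target, (dwellMs.get(prev.target) ?? 0) + elapsed);
  }
  return dwellMs; // a behavioral profile, inferred from glances alone
}
```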
Conclusion: The Next Cambridge Analytica?
Cambridge Analytica showed us how data could be weaponized to influence voter behavior. Now imagine a future where AR and VR data, captured through Passthrough APIs, becomes the next goldmine for psychographic profiling, targeted manipulation, or predictive surveillance. The same mechanisms that allowed Cambridge Analytica to influence elections could become exponentially more powerful when combined with AI-enhanced immersive datasets. This is not speculation; it's the reality we're heading towards.
AR and VR are redefining the boundaries between reality and data. But as Passthrough APIs transform our every move into digital assets, we must ask: Who is accountable for how companies use this data? Are current frameworks enough, or are we heading for the next Cambridge Analytica, but this time, in 3D? The race to build immersive worlds is on. But who will hold the keys to our most intimate data?
So, as immersive technologies continue to blur the line between our online and offline lives, we must ask ourselves: Who is defining the boundaries of ethical data use? Are we simply trusting tech giants to set the rules, or are we demanding accountability? In my work at XRSI, we strive to build frameworks that prioritize data integrity and user safety. However, frameworks mean nothing if the industry continues to play by its own rules. Until we hold those in power accountable, we're all just pixels in someone else's digital playground.