In “Virtual Reality Data and Its Privacy Regulatory Challenges: A Call to Move Beyond Text-Based Informed Consent,” Yeji Kim explains how virtual reality collects data from users and argues for more meaningful and customizable methods of obtaining informed consent than traditional text-based methods.
Author: Yeji Kim is a J.D. candidate at the University of California, Berkeley, School of Law.
Host: Carter Jansen
Technology Editors: NoahLani Litwinsella (Volume 110 Senior Technology Editor), Carter Jansen (Volume 110 Technology Editor), Hiep Nguyen (Volume 111 Senior Technology Editor), Taylor Graham (Volume 111 Technology Editor), Benji Martinez (Volume 111 Technology Editor)
Other Editors: Ximena Velazquez-Arenas (Volume 111 Senior Diversity Editor), Jacob Binder (Volume 111 Associate Editor), Michaela Park (Volume 111 Associate Editor), Kat King (Volume 111 Publishing Editor)
Soundtrack: Composed and performed by Carter Jansen
Article Abstract:
Oculus, a virtual reality company, recently announced that it will require all its users to have a personal Facebook account to access its full service. The announcement infuriated users around the world, who feared increased privacy risks from virtual reality, a technology that creates a computer-generated simulated world. The goal of virtual reality is to offer an immersive experience that appears as real as possible to its users. Providing such an experience necessitates the collection, processing, and use of extensive user data, which begets corresponding privacy risks. But how extensive are the risks?
This Note examines the unique capacities and purpose of virtual reality and analyzes whether virtual reality data presents fundamentally greater privacy risks than data from other internet-connected devices, such as Internet of Things (IoT) devices, and, if so, whether it poses any special challenges to data privacy regulation regimes, namely the European Union’s General Data Protection Regulation (GDPR), the world’s most stringent and influential data privacy law. Currently, one of the key criticisms of the GDPR is its low and ambiguous standard for obtaining users’ “informed consent,” the process by which a fully informed user participates in decisions about their personal data. For example, a user who checks off a simple box after reading a privacy policy gives informed consent under the GDPR. This Note argues that virtual reality exposes a more fundamental problem with the GDPR: the futility of text-based informed consent in the context of virtual reality.
This Note supports this claim by analyzing how virtual reality widens the gap between users’ understanding of the implications of their consent and the actual implications. It first illustrates how virtual reality service providers must collect and process x-ray-like data from each user, including physiological data such as eye movements and gait, to provide the customizations necessary to create an immersive experience. Based on this data, service providers can come to know more about each user than the user knows about themselves. Yet this knowledge shift is not obvious to users. For a virtual reality service to provide an immersive experience, customizations based on user data must be unnoticeable to users to avoid distractions. Using Oculus’s recent privacy policy as a case study, this Note shows how this hidden knowledge shift transforms the meaning of ordinary privacy policy phrases like “an experience unique and relevant to you.” What Oculus finds “relevant” to the user could go beyond what the user themselves would imagine to be “relevant.” As a result, text becomes an obsolete medium for communicating privacy risks to virtual reality users. This Note instead proposes other solutions, such as customizable privacy settings and visualization of privacy risks, so that users can more closely understand and consciously weigh the benefits and risks of using virtual reality.