As augmented reality and spatial computing become more pervasive, the ethical and privacy issues they raise will multiply. Some of these issues will be unique to AR: as digital reality blends with the physical world, the two realities will come into conflict.
Lessons learned from virtual environments might provide guidance for resolving some of these conflicts:
- The concept of ‘avatar rights’ could be extrapolated to the rights of our augmented selves (or how we represent ourselves in digital spaces).
- The concept of the “clean well-lit room” could help us navigate augmented realities.
The Ethical Challenges of Augmented Reality
Playing an AR game at home or positioning a digital AR couch in your living room presents a relatively simple challenge. The data collected, for now, isn’t that much different from browsing the Web or sending a tweet.
But the AR Cloud promises to create a digital mirror of reality. Times Square will be scanned, its contents annotated, and people will eventually walk through it wearing AR glasses.
With the increase in realism may come an increase in personal data acquisition by the XR system: for instance, to better articulate the movements of a virtual representation of the participant, to personalize advertising, or to enable features relevant to the participant’s geographical location. Traits including motor actions, patterns of eye movement, and reflexes (a person’s “kinematic fingerprint”), as well as information about preferences, habits and interests, may be recorded (Spiegel, 2018).
The Open AR Cloud Association, in their State of the AR Cloud 2019 report, note how rich and complex the data collection can become as we start to move augmented reality into ever-larger and more public spaces:
AR solutions are tightly linked to the environments in which they operate, and sense, process, store, and possibly expose a large range of important information related to personal and business facilities, locations, resources, communication, and activities related to planning, operations/production, maintenance, and more. To be effective, AR systems nearly constantly gather data while in use and even in standby mode. This data can include detailed spatial maps of user surroundings and captured audio, video, locational, and positional data. Some of this data can be accessed remotely without the user even being aware it is happening.
And if nothing else gives you pause, imagine going for dinner with a friend who’s wearing augmented reality glasses developed by Facebook. Would it bother you that those glasses are being used to “produce multi-layer representations of the world using crowdsourced data, traditional maps, and footage captured through phones and augmented reality glasses”?
Your friend will be the crowd. His glasses will be an always-on Facebook surveillance device. And your physical presence within his view will just be more data vacuumed up into the ad network which is Facebook.
As The Verge dryly notes: “It’s not totally clear how (or if) Facebook would protect privacy while collecting all of this data.”
What We Can Learn From the Early Metaverse
Fifteen years ago I was writing about virtual worlds. I saw them as test beds for how we’d increasingly interact with digital content. They allowed us to explore issues of identity, user-created worlds, avatar rights and the real-world value of digital property.
The Metaverse seemed like it was right around the corner.
(It has taken a bit longer than expected).
But the early glimpses of shared virtual realities gave us a lot of insight into how people felt, reacted and valued having a digital presence.
For many, how they were represented in World of Warcraft wasn’t just a character, it was an identity. It had a community. Educators flocked to the platform to teach collaboration and problem-solving. Researchers studied it like a newly discovered tribe.
Tom Boellstorff, a professor of anthropology at the University of California, Irvine, did field studies in virtual worlds and wrote about techne – the human practice of creating a new world as well as a new person: the homo cyber.
How The Early Metaverse Can Shape Augmented Reality
For me, a seminal post from this time was an interview conducted by Tish Shute (joined by David Levine, a researcher from IBM) with Eben Moglen, founder of the Software Freedom Law Center.
The interview covered a wide range of topics, focused primarily on privacy and the uneasy relationship between consumers and the platforms they participate in.
A lot of it seems quaint today. They had a long discussion about who owns digital photos and the role of Flickr. These discussions of ‘ownership’ can seem like an anachronism in an era when photos are more likely to be scanned by AI to determine who’s in them than they are to be ‘stolen’.
Disney evoked a shared sense of dread among them because of the amount of data mining done by the organization:
The average number of visits to Disney Land by people brought to Disney Land as children is four in a lifetime. Disney’s goal over the next 20 years is to make that six. And in order to make that six all they have to do is keep buying everything they can buy about all those people they opened a file on as children looking for opportunities by data mining to find an opportunity, like the oncoming 65th birthday of a parent or an oncoming retirement or a this or a that. They just have to play the siblings off against one another – you’ve never been to Disney Land but Johnny has. … They are going to do what they can do to create feelings around that person that says remember what it felt like being in Disney land when you are child – do that again.
Today, it would be called a business model.
The truce they hoped we’d arrive at, where individual privacy could be respected in an age of corporate data collection, never materialized.
Facebook was only three years old. It was still considered a social media company rather than what it truly is: a company that conducts mass surveillance in order to make money from behaviourally targeted advertising.
Augmented Reality and The Clean Well-Lit Room
And yet the interview stuck with me because of the concept Eben put forth of the clean well-lit room (emphasis added and lightly edited):
I think what we really want to say is something like this. If you are talking about a public space you’re talking about a thing that has not just a TOS contract but a social contract.
It’s a thing which has to do with what you get and what you give up in order to be there.
There ought to be two rules. One: avatars ought to exist independent of any individual social contract put forward by any particular space. And two: social contracts ought to be available in a machine-readable form which allows the avatar projection intelligence to know exactly what the rules are and to allow you to set effective guidelines. I don’t go to spaces where people don’t treat me in ways that I consider to be crucial in my treatment.
It’s one thing to say that the code is open source – let’s even say free software – it is another thing to say that that code has to behave in certain ways and it has to maintain certain rules of social integrity.
It has got to tell you what the rules are of the space where you are. It has to give you an opportunity to make an informed consent about what is going to happen given those rules. It has got to give you an opportunity to know those things in an automatic sort of way so I can set up my avatar to say, you know what, I don’t go to places where I am on video camera all the time. Self, if you are about to walk into a room where there are video cameras on all the time just don’t walk through that door. So I don’t have to sign up and click yes on 27 agreements, I have got an avatar that doesn’t go into places that aren’t clean and well lit.
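Eben’s two rules lend themselves to code. Here is a minimal TypeScript sketch of what a machine-readable social contract, and an avatar that reads it, might look like. Every type name and field below is my own illustrative assumption – no such standard exists:

```typescript
// Illustrative sketch only: a machine-readable "social contract" for a space,
// and an avatar policy that decides whether to enter.

interface SpaceContract {
  spaceName: string;
  alwaysOnVideoCapture: boolean;       // cameras recording at all times
  spatialMapping: boolean;             // SLAM scans of occupants
  sharesDataWithThirdParties: boolean;
}

interface AvatarPolicy {
  allowAlwaysOnVideo: boolean;
  allowSpatialMapping: boolean;
  allowThirdPartySharing: boolean;
}

// Automates Moglen's "just don't walk through that door" check: the avatar
// refuses any space whose declared rules exceed what its owner permits.
function mayEnter(space: SpaceContract, policy: AvatarPolicy): boolean {
  if (space.alwaysOnVideoCapture && !policy.allowAlwaysOnVideo) return false;
  if (space.spatialMapping && !policy.allowSpatialMapping) return false;
  if (space.sharesDataWithThirdParties && !policy.allowThirdPartySharing) return false;
  return true;
}

// "I don't go to places where I am on video camera all the time."
const myPolicy: AvatarPolicy = {
  allowAlwaysOnVideo: false,
  allowSpatialMapping: true,
  allowThirdPartySharing: false,
};

const arcade: SpaceContract = {
  spaceName: "VR Arcade",
  alwaysOnVideoCapture: true,
  spatialMapping: true,
  sharesDataWithThirdParties: false,
};

console.log(mayEnter(arcade, myPolicy)); // false – not a "clean well-lit room" by my rules
```

The point isn’t the particular fields; it’s that the refusal happens automatically, before the 27 click-through agreements ever appear.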
Choosing Your Level of Immersion (And Reducing Harm)
In augmented reality, this dovetails nicely with recommendations proposed in The Ethics of Realism in Virtual and Augmented Reality. Because superrealism can cause harm (for example, hyper-real violence), users should have the choice of toggling the degree of realism in a virtual or augmented simulation:
To reduce the level of realness, implementers (for example, researchers) and participants may be able to select a level of deception. For example, level 10 means that the XR should try its absolute best to completely convince participants that what they are experiencing is real. Level 1 might be “give me some experience, but do your best to keep reminding me that this is not happening, it is not real.”
They further propose that content-induced risk could be minimized:
In conventional media, there are clear warnings, for example, that what is being shown is a reconstruction, or that some material contains images that may be disturbing to some, so that people are not caught off-guard or misinterpret the veracity of what they observe…. XR has a different level of intensity—and often a different objective—that calls for the development of new conventions and sometimes the modification of existing ones (for example, clear depictions of violence may be necessary in military training with XR, and principles or guidelines intending to ameliorate the distress of the participants may not apply in this case). Clear warnings are always advisable and minimum age requirements may be adequate in some instances.
These concepts, which would reduce the ethical risks and harms of XR, could be paired with Eben’s idea of a programmatic, two-way negotiation between space and participant.
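As a thought experiment, here is what that pairing might look like in TypeScript. The 1–10 “level of deception” scale is taken from the quote above; everything else – type names, fields, the negotiation function – is an assumption of mine, not anything proposed in the paper:

```typescript
// Illustrative sketch only: a space advertises its realism and content, the
// participant carries limits, and the two are reconciled before entry.

interface RealismOffer {
  maxDeceptionLevel: number;   // 10 = fully convincing; 1 = constant reminders it isn't real
  contentWarnings: string[];   // e.g. ["violence", "flashing-lights"]
  minimumAge?: number;
}

interface ParticipantLimits {
  maxDeceptionLevel: number;
  refusedContent: string[];
  age: number;
}

interface SessionResult {
  admitted: boolean;
  deceptionLevel?: number;
}

// Refuse entry outright on age or refused content; otherwise clamp the
// simulation's realism to whichever ceiling is lower.
function negotiateSession(offer: RealismOffer, limits: ParticipantLimits): SessionResult {
  if (offer.minimumAge !== undefined && limits.age < offer.minimumAge) {
    return { admitted: false };
  }
  if (offer.contentWarnings.some(w => limits.refusedContent.includes(w))) {
    return { admitted: false };
  }
  return {
    admitted: true,
    deceptionLevel: Math.min(offer.maxDeceptionLevel, limits.maxDeceptionLevel),
  };
}
```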
My Glasses Talk Back
Here’s how we currently enter digital spaces: we’re presented with a Terms of Service and a EULA. We don’t read them. We accept and enter.
Sure, we’ll never get rid of the EULA.
With augmented reality we’ll enter physical spaces in which two things will happen:
- Sensors in the world around us will use SLAM (Simultaneous Localization and Mapping) to precisely locate us in the environment (they already do to a large degree)
- We’ll increasingly be able to see digital “overlays” of physical reality. These overlays will bind us to the ‘AR Cloud’ and to physical space
Social Contracts with Physical Space
Eben’s concept was to give us machine-readable agency as we enter these spaces. Instead of “read this EULA and click accept”, we would carry around a version of our preferences with us.
Instead of needing to decide whether we’re OK having our movements tracked so that we can be spatially positioned in the AR Cloud, the devices we use would carry those decisions for us.
We would be able to walk around knowing that we have not given permission to be spatially mapped (or be presented with violent content, or be shown content which could trigger our epilepsy) and that any permission we DO give will be an explicit contract with a specific space.
Because won’t your preferences change depending on where you are? You may have signed a EULA for an app, or for the operating system of your AR glasses, but you might also want to specify that you’re fine being spatially positioned or scanned in a public park so you can play Minecraft World but not when you’re at the local mall.
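One way to picture this is as a default-deny policy with per-place overrides. The following TypeScript sketch is purely illustrative – the place identifiers and permission names are assumptions, not a real scheme:

```typescript
// Illustrative sketch only: a default-deny permission set plus per-place
// overrides, so consent can differ between a park and a mall.

type Permission = "spatial-mapping" | "body-scanning" | "gaze-tracking";

interface ContextPolicy {
  base: Set<Permission>;                    // what you grant everywhere by default
  overrides: Map<string, Set<Permission>>;  // per-place grants, keyed by place id
}

function grantedIn(policy: ContextPolicy, placeId: string): Set<Permission> {
  return policy.overrides.get(placeId) ?? policy.base;
}

const policy: ContextPolicy = {
  base: new Set<Permission>(),  // grant nothing unless a place has an explicit contract
  overrides: new Map([
    // fine with being spatially positioned in the park to play an AR game...
    ["public-park", new Set<Permission>(["spatial-mapping"])],
    // ...but not at the local mall
    ["local-mall", new Set<Permission>()],
  ]),
};

console.log(grantedIn(policy, "public-park")); // Set { "spatial-mapping" }
console.log(grantedIn(policy, "local-mall"));  // Set {} – nothing granted
```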
The Clean Well-Lit Room
The second concept from the interview with Eben Moglen was that of the clean well-lit room.
Again, it perhaps seems naive today, but it builds on the idea that EULAs are impenetrable. And what we really want to know is how safe we are, how protected, as we enter digital domains.
These concepts may apply even more directly to augmented reality because they could apply not just to privacy but also to some of the concerns raised in the ethics report.
For example, the report’s authors worry about vulnerable populations:
An implicit assumption in the introduction was that participants in a virtual environment would typically be drawn from adult and non-patient groups, and generally non-vulnerable populations…For example, children or adolescents may not distinguish well between reality and virtual reality. This may also be the case for certain patient groups, such as those prone to psychosis. With such populations it is a reasonable assumption that even the device-gap would not necessarily operate, perhaps most especially for very young children.
What someone with epilepsy wants to know before entering any virtual space is whether they’re at risk. By ‘carrying around’ their contract, they could notify the places they visit. The “clean well-lit room” arrives when you know that the spaces you enter respect these contracts. It might look a lot like the lock symbol on an encrypted website.
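A minimal sketch of that handshake, again with invented names rather than any real protocol, might look like this:

```typescript
// Illustrative sketch only: the handshake behind that "lock symbol". The
// participant's device carries the sensitivities its contract protects; a
// compliant space answers with the hazards it contains, using a shared
// vocabulary of hazard tags.

interface HealthContract {
  sensitivities: string[];    // e.g. ["flashing-lights"] for photosensitive epilepsy
}

interface SpaceDeclaration {
  declaredHazards: string[];  // e.g. ["flashing-lights", "loud-audio"]
  attestsCompliance: boolean; // the space promises its declaration is complete
}

// Warn (or refuse entry) when the space can't attest compliance, or when any
// declared hazard matches a protected sensitivity.
function isSafeToEnter(contract: HealthContract, space: SpaceDeclaration): boolean {
  if (!space.attestsCompliance) return false; // no attestation, no lock symbol
  return !contract.sensitivities.some(s => space.declaredHazards.includes(s));
}
```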
We’re entering a brave new world. We will need new approaches to privacy and feelings of safety and we’ll need to create new interfaces and designs.
The Open AR Cloud Association, research on ethics, and general concerns about privacy, fake content, identity theft and the role of public spaces all indicate that we will need new ways of thinking.
Previous explorations on other platforms can be one source of hints for new designs. The idea of a portable ‘identity contract’ proposed by Eben Moglen could be one way to imagine new paradigms for augmented reality rather than just adapting what never really worked.
By doing so, we might end up not only with a transformative technology, but with one designed for the end user in ways that let us undo what we got wrong before.