Does Mixed Reality Have a Cassandra Complex?

Abstract
Cassandra Complex: from Greek mythology; someone whose valid warnings or concerns are disbelieved by others.

Recently, great steps have been taken in making virtual, augmented, and mixed reality technologies (we refer to the three realities collectively as XR) accessible to a broad and diverse end-user audience. The breadth of use cases for such technologies has grown as they have been embedded into affordable, widely accessible, on-the-go devices (e.g., the iPhone), often in combination with popular intellectual property (e.g., Pokémon Go). However, with this growth has come recognition of several ethical issues attached to the widespread application of XR technologies in everyday life. The XR domain raises concerns similar to those surrounding the development and adoption of AI technologies, with the addition that it provides immersive experiences that blur the line between what is real and what is not, with consequences for human behavior and psychology (Javornik, 2016; Ramirez, 2019). It is easy to write off concerns with XR technology as unfounded or premature. However, the current state of the art in XR already enables several use cases that we see as cause for concern: 1) Thanks to advances in computer vision, XR can generate realistic holograms of people. These hologram representations are lifelike and, thanks to advances in deepfake technology, can be made to say or do things, with video footage of a person generated in real time from large repositories of real captured footage (Westerlund, 2019). This can be used to promote disinformation. For example, a deepfake hologram could portray a movie celebrity sharing political propaganda that the celebrity does not endorse, targeting fans and spreading lies about the incumbent leader's political opponents. The hologram could be made to harass or provoke viewers (Aliman and Kester, 2020), goading them into acting irrationally.
This warrants ethical consideration when designing XR experiences for broadcasting and entertainment; 2) XR technology that can sense and interpret objects in the environment can be used to mask and/or delete recognized objects. This can be used to promote misleading and/or anticompetitive behavior in consumer goods marketing. For example, while a user is browsing an XR marketplace, a soft drink manufacturer may identify a competitor's can and make it look dented and/or undesirable, nudging the consumer to purchase their 'superior'-looking product instead. In an XR environment, consumers have a more direct interaction with a product than in traditional broadcast-based marketing, with XR providing powerful virtual affordances (Alcañiz et al., 2019) that can persuade consumers and shape their purchase intentions. When technology that can track our every move, and that has knowledge of our preferences and desires, is given the power to make decisions on our behalf, its widespread use may have unintended consequences (Neuhofer et al., 2020). Therefore, the use of XR in marketing should be subject to ethical consideration; 3) XR experiences may be so immersive that they distract from the user's surroundings, opening them up to harm. For example, there have been several reports of Pokémon Go users being hit by passing vehicles as they play the game, completely immersed in the experience and unaware of what is happening around them in the real world. As these experiences revolve around storytelling, creators bear an ethical responsibility to ensure safe passage through the experience for audiences and viewers (Millard et al., 2019), and as play takes place in real locations, one must consider the appropriateness of facilitating play in socio-historical or sacred locations (Carter and Egliston, 2020).
Likewise, there are similar calls for standards in the design of experiences for educational purposes (Steele et al., 2020); and 4) XR technology can be used to create realistic environments in which, though there may be no physical harm, certain experiences may expose participants to psychological trauma. Though there is no real threat in the environment, the participant perceives the virtual representation as such: it looks and 'feels' real. Because the graphical detail is staggering, participants may become overwhelmed with intense feelings of anxiety and fear (Reichenberger et al., 2017; Lavoie et al., 2021; Slater et al., 2020). Complex ethical situations therefore arise when using XR technologies for therapy and research applications. These four use cases alone demonstrate the potential harm XR technologies may introduce for users, whether intentional or not, physical or sociological. Social and political harms of emerging technologies, social media among them, are on the rise (Vaidhyanathan, 2018). The pace of emerging tools and technologies is so fast that, as soon as we figure out what to do about one problem, a new one arises. Searching the Association for Computing Machinery (ACM) digital library reveals a growing trend in the area of XR and ethics that is nowhere close to slowing down (Figure 1). The point is: XR is following the same trend in publication outputs as AI. Given the bumps in the road that the ethics of AI has encountered in the recent past, we expect similar issues to begin emerging soon in XR. Recent work has raised concerns over the practical utility of ethics documents written by governments, NGOs (non-governmental organisations), and private-sector agents. Schiff et al. (2020) identify several motivations, extracted through a coding process over more than 80 documents published between 2016 and 2020, regarding ethical approaches to AI.
They describe how motivations to publish such documents can interact with one another; some agents may be motivated to act in a responsible manner, while others may be motivated to signal responsibility by publishing documents that increase their brand authority or take a leadership position for...