Abstract
Robots, particularly those designed and deployed to communicate and interact with humans, are entering ever more domains of human life - from research laboratories and operating rooms to our kitchens, bedrooms, and offices. They can interact with humans through facial expressions, gaze direction, and voice, mimicking the affective dynamics of human relationships. As a result, they create new opportunities, but also new challenges and risks for people's privacy. The literature on privacy issues in the context of Social Companion Robots (SCRs) is sparse and focuses strongly on information privacy and data protection, while giving less attention to other dimensions of privacy, e.g. physical, emotional, or social privacy. This article argues for an "evolving" or "transformable" notion of privacy, as opposed to the "elusive" concept of privacy elaborated by leading privacy theorists such as Daniel J. Solove (2008) and Judith Jarvis Thomson (1975). In other words, rather than assuming that privacy has a single core or definition (as proposed, e.g., in Warren and Brandeis' 1890 paper), it maintains that privacy is best conceptualized as comprising various distinguishable aspects, including informational privacy, the privacy of thoughts and actions, and social privacy. This inductive approach makes it possible to identify new dimensions of privacy and thus to respond effectively to the rapid evolution of AI technologies, which is constantly introducing new spheres of privacy intrusion.