Abstract
“There is a demand for more and more sophisticated social robots. The ideal of many engineers is to produce machines indistinguishable from humans, on the level of behavior or appearance…” (Campa, 2016). Artificial intelligence and its companion technology, robotics, promise to revolutionize human–machine relations through their capabilities for analyzing, interpreting, and executing human action (Institute of Electrical and Electronics Engineers, 2017). While stimulating excitement as well as concern (Bostrom, 2014), these capabilities have also invited reflection on the ethics and values guiding technology development (Calo, 2016). Factors that induce value evolution are therefore of interest for the influence they exert on the forms the technology may adopt. In broad terms these are seen to operate at two levels: (1) by epistemological inference, often through neuroscientific observation—humans are like machines (McCulloch and Pitts, 1943; Fodor, 1975; Marr and Poggio, 1976; Marr, 1982; Piccinini, 2004; Yuste, 2010)—and (2) by ontological predication, that is, as an imputed analog of human meta properties—machines are like humans (Hornyak, 2006; Kitano, 2006; Sabanovic, 2014). Because they are designed to reduce the onus of human intervention, AI devices are increasingly given over to servicing a spectrum of human needs, from lower order motoric assistance to higher order computational and social functions, e.g., living assistance companions and work colleagues (Sabanovic, 2014); accordingly, they invite analogy at multiple levels. Simulation of higher order cognition, especially, is regarded as driving value attribution—here understood as an intrinsic ground for rights and ethical entitlement (Rothaar, 2010)—which flows from ontological inferences about the technology's operational semblance to human cognition.
That is, through replication of these uniquely human abilities, there is a growing ontological incursion in the technology, which propels value evolution under the guise of simulating ontological equivalence. Breazeal's Kismet robot, for instance, explores not merely the social gestures essential to promoting human–machine interactions but also the construction of human social intelligence and even what it means to be human (Breazeal, 2002; Calo, 2016). Simulation thus challenges the traditional value hierarchy that places human beings at the apex of organismal life and grounds ethical, bioethical, and neuroethical praxis, a prioritization that has promoted human flourishing while also restricting harmful intervention into the human being. Rather than emphasizing the centrality of human value, simulation promotes a value architecture that is more inclusive, democratic, and horizontal in scope, a trend recently taken up in ethical parity models (Clark and Chalmers, 1998; Levy, 2011; Chandler, 2013). Seen through the lens of ethical parity, however, simulation poses a multidimensional challenge to an ethical system where value is contingent on the human being: a challenge mediated at the level of the ethical subject, i.e., in the siting of value contingency (Clark and Chalmers, 1998; Levy, 2011); in its theory of ethics (Latour, 1993; Connolly, 2011), i.e., in how ethics is normatively anchored (Latour, 2007); and in ethical praxis (Sgreccia, 2012). In consequence, it modifies ethical mediation as an intentionalized moral enactment framed by a referential ontology. The pursuit of ethical parity between robotic technology and the human being has highlighted the symbiotic nature of human–machine relations (Haraway, 2003; Rae, 2014a).
Rather than the merely instrumentalist association identified in Aristotelian and scholastic philosophy, the appropriation of ontological parity motivates a physical reciprocity that lies at the intersection of the human and the machine; that is, behind the human lies hidden the machine, and behind the machine lies the human. Hence, symbiosis is understood to actuate an a priorism that is physically operative at the locus of intersection between the two (Waters, 2006; Onishi, 2011). Elucidating the philosophical roots of this a priorism is, nonetheless, infrequently undertaken (Rae, 2014b). While the detection of a physical a priorism can be expected to constitute a meta-valorization of the process of ontological appropriation distinguishing simulation, epistemological sources that may reveal consilience have yet to trace the physical reciprocity invoked by symbiosis to a meta-physical ground (Haraway, 2003; Rae, 2014a). Modern physics, for example, broadly views the world as consisting of individual entities embedded in space–time (Esfeld, 2004), a conception rarely considered in human–machine studies or the philosophy of science and apparently contravened by the sort of symbiosis proposed in their chimeras. This paper will argue that standard simulation accounts like computationalism trace their understanding of ontology to Heidegger's metaphysical deconstruction of subject/object dichotomies, which identified a constitutive a priorism of attribute sharing. Recent integrationist accounts of cognition, however, increasingly evidence a unity structured through the body's engagement in action (Fourneret et al., 2002; Kato et al., 2015; Noel et al., 2018; Wolpaw, 2018); that is, neural architectures reveal an a priorism grounded in the unity of their operation, a finding of relevance for ontology, where actionable behaviors qualify an emergent self.
“And, in spite of the victory of the new quantum theory, and the conversion of so many physicists to indeterminism, de La Mettrie's doctrine that man is a machine has perhaps more defenders than before among physicists, biologists and philosophers; especially in the form of the thesis that man is a computer.” (Popper, 1978). As Karl Popper notes, the thesis that human cognition simulates the...