While, for the time being, there are no robots on any couches talking to people about their traumas, concern arises about the well-being of their human interaction partners, as people and robots experience artificial intelligence differently. This perception differential raises the question of how robot design should respond in order to increase acceptance by humans amid fears of domination by technology. As these things go, it is always helpful to start with the argumentum e contrario[1] and figure out how not to use robots, so that they are not perceived as a threat but can safely be experienced as an enrichment. This presupposes research on optimal interfaces between man and machine to facilitate their coexistence, starting with classical usability research that makes user interfaces intuitive, but quickly progressing to social fears of certain uses of robots, the role of a user’s personality, and the subjective sense of security in communication involving, say, autonomous vehicles and pedestrians.
In 1970, the Japanese roboticist Masahiro Mori coined the term “uncanny valley” for a graph showing the relationship between a robot’s human likeness and the emotional reaction it elicits in humans. A robot may have a head and eyes, but it remains essential for acceptance that it be clearly recognizable as a machine. Once silicone skin and artificial hair blur that distinction, robots are perceived almost universally as sinister. New studies indicate that the same might be true of disembodied AI, suggesting an “uncanny valley of the mind” whenever online chatbots adopt human-like behavior by simulating emotionality or by making independent decisions.
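Mori’s article offered no formula, only the shape of the curve. The short Python sketch below reconstructs that shape as a visual aid; the dip’s position, depth, and width are illustrative assumptions, not empirical values from his paper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stylized affinity curve: emotional response rises with human likeness,
# collapses near "almost human," then recovers at full human likeness.
# The Gaussian dip (center, depth, width) is an illustrative assumption.
likeness = np.linspace(0.0, 1.0, 500)   # 0 = plainly a machine, 1 = human
affinity = likeness - 2.2 * np.exp(-((likeness - 0.8) ** 2) / 0.005)

plt.plot(likeness, affinity)
plt.axvspan(0.72, 0.88, alpha=0.15, label="uncanny valley")
plt.xlabel("human likeness")
plt.ylabel("emotional response")
plt.title("Stylized uncanny valley (illustrative only)")
plt.legend()
plt.show()
```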
Over time, people do get used to humanoid robots through habituation effects. Exploring those effects would require long-term studies that have yet to be carried out. But the fundamental question remains whether humans generally want to work and live together with humanoid robots. Thus far, polls show rejection growing as the impression solidifies that robots and people are becoming less and less distinguishable. This apparent similarity tends to cause deep and lasting insecurity in humans because it confuses their expectations. While that perceived risk is still far from a contemporary reality, the problem is far more immediate for software bots that already communicate with people online and simulate interaction. Researchers in the emerging fields of android science and affective computing expect the technologies that surround us to become emotion-aware over the course of the next five years, creating a vision of artificial emotional intelligence.
Autonomous machines in use today have, for the most part, not assumed a human form. Consider the example of driverless vehicles: fear of losing control to networked technology could also take hold of pedestrians, so it may be helpful for an autonomous machine to communicate with its surroundings and to make highly transparent what steps it will take next.
Does it increase the safety of people crossing the trajectory of an autonomous vehicle to be told by the robot that it has seen them? Traditionally, we make eye contact with a human driver in such situations. Exploring if and how this can be translated into technology may hold promise for increasing not only the safety but also the efficiency of human-robot interaction.
In one example, light signals increased the predictability of a vehicle’s actions: pedestrians crossed significantly faster and moved more confidently. Most effective were visual signals perceived only peripherally that still increased a person’s sense of security, for example lights mounted on the car’s grille that direct signals toward the pedestrian. The possibilities for conveying signals will evolve, but they also carry a risk of sensory and information overload. Perhaps in a decade or two, car bodies will serve as large displays, but this in turn increases the complexity of the visual processing required of pedestrians.
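To make the idea concrete, here is a minimal sketch of such a signaling policy in Python. The states, distance thresholds, and the `Pedestrian` fields are hypothetical; real external human-machine interfaces for vehicles are still the subject of ongoing research and standardization.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Signal(Enum):
    CRUISING = "steady light"       # no pedestrian relevant to the path
    DETECTED = "slow pulse"         # "I have seen you"
    YIELDING = "sweep toward you"   # grille lights sweep toward pedestrian

@dataclass
class Pedestrian:
    distance_m: float      # distance from the vehicle's trajectory
    crossing_intent: bool  # e.g., inferred from pose and heading

def choose_signal(ped: Optional[Pedestrian]) -> Signal:
    """Pick an external signal that announces the vehicle's next step."""
    if ped is None or ped.distance_m > 30.0:
        return Signal.CRUISING
    if ped.crossing_intent and ped.distance_m < 15.0:
        return Signal.YIELDING  # announce the braking before it is felt
    return Signal.DETECTED

print(choose_signal(Pedestrian(distance_m=10.0, crossing_intent=True)))
# -> Signal.YIELDING
```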
Predictability is equally important in industrial robotics. Here, too, machines need to be able to signal their intentions so that humans can anticipate as early as possible, for example, the spot a robot is reaching for and the path it will take to get there. Machine movement should not simply take the shortest distance from A to B when curved movements could indicate intentions earlier. What may be inefficient from a technical viewpoint can considerably increase predictability and workplace safety, improving the legibility of robot movements and accelerating the performance of the man-machine team as a whole.
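This trade-off can be sketched in a few lines: a straight segment is shortest, while a gently bowed path telegraphs the goal earlier at a measurable cost in length. The Bézier construction and the control point below are illustrative assumptions, not any particular planner’s method.

```python
import numpy as np

start, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])
t = np.linspace(0.0, 1.0, 50)[:, None]

# Shortest path: efficient, but ambiguous about the target until late.
straight = start + t * (goal - start)

# "Legible" path: bow outward early via a quadratic Bezier control point,
# so an observer can rule out competing reach targets sooner.
control = np.array([0.8, 0.4])  # hypothetical exaggeration point
legible = (1 - t) ** 2 * start + 2 * (1 - t) * t * control + t ** 2 * goal

def path_length(p: np.ndarray) -> float:
    return float(np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1)))

# The legible arc pays a small efficiency penalty for early readability.
print(f"straight: {path_length(straight):.3f}, legible: {path_length(legible):.3f}")
```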
As for the cultural acceptance of robotics, the research data are not clear. Some studies suggest that people in Japan are more open to autonomously acting machines – in part also to ease the country’s labor shortage – while others see no difference or even find the opposite. Explanations for the favorable findings may be rooted in the religious traditions of animism and Shinto, with their notion that objects can possess a soul, a thought foreign to the Judeo-Christian tradition. Media socialization is also frequently cited to explain the acceptance of autonomous machines: in popular anime and manga, robots are cooperation partners or even act as saviors of humanity. This is in stark contrast to the Western vision of a "Terminator" that puts an end to humanity.
Most developers of robotics are men, and they impose their male perspective on the technology. In fact, the entire tech industry is dominated by younger white males. It is difficult to investigate which distortions or biases this creates. As an example, a video that circulated widely on social media showed a black man who could not manage to get soap out of an automatic dispenser; only when he held a white sheet of paper under the dispenser did the soap come out. This illustrates what can go wrong when development teams are not diverse enough. As another example, service robots are typically designed with traditional female stereotypes in mind, including a cleaning robot with a female figure and an apron, at least in part realizing a gynoid fantasy. There is ample evidence that robots adopt prejudices and stereotypes from their creators since, at the present stage, AI is above all machine learning: programs process man-made data and draw their conclusions by pattern recognition. It is therefore not reasonable to expect value-neutral results, even if such neutrality is intuitively expected or presumed. Studies show that machine-learning systems adopt gender stereotypes, portraying women as close to their families while men are viewed as more career-oriented.
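One widely replicated demonstration of this effect uses pretrained word embeddings, as in the studies by Bolukbasi et al. (2016) and Caliskan et al. (2017). The sketch below, assuming the gensim library and its downloadable GloVe vectors, projects a few occupation words onto a she–he direction; the word list is an arbitrary illustration.

```python
import numpy as np
import gensim.downloader as api

# Load pretrained GloVe vectors (downloads ~130 MB on first use). Any
# biases measured here come from the training corpus, not from gensim.
model = api.load("glove-wiki-gigaword-100")

gender_axis = model["she"] - model["he"]
gender_axis /= np.linalg.norm(gender_axis)

for word in ["homemaker", "nurse", "engineer", "programmer"]:
    vec = model[word] / np.linalg.norm(model[word])
    proj = float(gender_axis @ vec)
    # Positive: leans toward "she"; negative: leans toward "he".
    print(f"{word:12s} {proj:+.3f}")
```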
Popular culture has further shaped perceptions of and biases about technology through sci-fi books and movies that cause people to overestimate its current state. Many promptly associate the term “robot” with the anthropomorphic androids they know from Star Trek or I, Robot. Developers who are still hard at work keeping a robot stable on two legs must be amused by such perceptions. It might not hurt the cause of realism to show actual contemporary robots in action more frequently – say, in autonomous transport or cleaning systems. Such a reality check promises to be one of the safest methods of banishing premature fears of man-machine coexistence.
[1] Polish scientist Maciej Koszowski has produced a remarkable body of logico-legal, algorithmic, and interpretive research on aspects of analogy. See Maciej Koszowski, Multiple Function of Analogical Reasoning in Science and Everyday Life, 197 Polish Soc. Rev., no. 1, 2017, at 3; Maciej Koszowski, The Scope of Application of Analogical Reasoning in Statutory Law, 7 Am. Int’l J. Contemp. Res., no. 1, Mar. 2017, at 16; Maciej Koszowski, Why Is Analogy in Empirical Science and Everyday Life Different from Analogy in Law?, 25 Studia Iuridica Lublinensia, no. 2, 2016, at 127; Maciej Koszowski, Perelman and Olbrechts-Tyteca’s Account of Analogy Applied to Law: The Proportional Model of Analogical Legal Reasoning, 13 Archiwum Filozofii Prawa i Filozofii Społecznej, no. 2, 2016, at 5; Maciej Koszowski, The Scope of Application of Analogical Reasoning in Precedential Law, 37 Liverpool L. Rev., no. 1–2, 2016, at 19. See also Richard A. Posner, Reasoning by Analogy, 91 Cornell L. Rev. 761 (2006), and Chaim Perelman & Lucie Olbrechts-Tyteca, The New Rhetoric: A Treatise on Argumentation (1971).