
2014-07-12

Artificial Intelligence or What does it mean to be human?



We live in an age where science has ceased to reside in a quaint ivory tower. Even analytic philosophy took a quantitative turn and became inextricably entwined with computer science, as Bertrand Russell diagnosed and predicted as early as 1945:

"Modern analytical empiricism [...] differs from that of Locke, Berkeley, and Hume by its incorporation of mathematics and its development of a powerful logical technique. It is thus able, in regard to certain problems, to achieve definite answers, which have the quality of science rather than of philosophy. It has the advantage, in comparison with the philosophies of the system-builders, of being able to tackle its problems one at a time, instead of having to invent at one stroke a block theory of the whole universe. Its methods, in this respect, resemble those of science. I have no doubt that, in so far as philosophical knowledge is possible, it is by such methods that it must be sought; I have also no doubt that, by these methods, many ancient problems are completely soluble."[1]

While neuroscience has made some rather remarkable inroads into neurobiology and behavior [2], it is in many other ways still in its infancy – yet artificial intelligence (A.I.) purports to mimic and surpass the human brain on its way to a “singularity” that Ray Kurzweil, Google’s director of engineering, predicts will occur around 2045. And Stephen Hawking cautions that creating artificial intelligence might be the biggest, but perhaps also the last, event in human history “unless we learn how to avoid the risks.”

Certainly, algorithms and machines are likely to make most knowledge-based work more data-driven and more rational, and it is also highly likely that the coming decades will see machines that perform elements of reasoning, and act on conclusions, better than we can. But if robots replace 80 or 90 percent of a profession – those engaged in largely repetitive and routine (however advanced) functions – that does not mean there will be no doctors or lawyers under such conditions. Not so long ago, the share of the workforce in agriculture dwindled from roughly 90 percent to the low single digits, so the effect of technological change on employment patterns is hardly new. It is important to remember that many types of professional services can be assumed entirely by machines – with significant benefits in the overwhelming number of cases, if we think of robotic hip replacements or bypass surgeries, where the number of failures caused by the limitations of a machine’s design and knowledge base is increasingly negligible. As soon as the incidence of such robotic failings falls below the documented incidence of human malpractice, the ethical aspect of the discussion will become moot.

Because we have not yet fully understood how human brains work, we cannot duplicate them, much less create artifacts that surpass them.

Playful companion robots that speak accent-free, faultless text with synchronized lip movements, and that remarkably approximate human appearance, are being produced in Japan today and are becoming affordable at the price of a high-end laptop. While they presently serve as guides in Tokyo’s National Museum of Emerging Science and Innovation (Miraikan) or as exotic TV announcers, such androids will imminently cross the threshold to behavior that appears to humans as flirting. But just as it would be a mistake to think that a translating robot “understands” language and is, in fact, “translating,” robotic flirting is devoid of any emotional quality and merely conditioned to respond to certain observations and stimuli. The design of android robots requires a profound exploration of what it means to be human.

While the U.S. Armed Forces are developing battlefield robotics, including killer robots, the Laboratory of Intelligent Systems at the Swiss École Polytechnique Fédérale in Lausanne produced, as early as 2009, the first robots that, of their own accord, used subterfuge and deceit to achieve an objective. In a series of experiments, the engineers Sara Mitri and Dario Floreano and the evolutionary biologist Laurent Keller divided 1,000 robots into ten groups and tasked them with locating certain resources. Each robot was equipped with a blue light that could alert the other robots in its group to the location of a find. High points were awarded for locating and sitting on a good resource, and points were deducted for doing the same on a bad one. Furthermore, good resources were limited, so that not every robot could score when one was found, and overcrowding could displace the original finder. Each robot had a sensor and its own 264-bit binary code controlling its behavior. After each round of experiments, the 200 highest-scoring software “genomes” were randomly “mated” – allowing for “mutations” of their software – thereby simulating machine evolution. “Robocopulation” as a means of modeling evolution has since become a very hot area of robotics with considerable potential. It took the robots all of nine generations to perfect their skills at locating the target resource and at communicating within their groups. But the truly interesting pattern took a little longer to emerge: 500 generations later, 60 percent of the robots had learned not to turn their blue light on when the target resource was found, enabling them to keep the benefits all to themselves. One-third of them had learned to identify liars by declining to react to the (often enough false-positive) blue light – in direct violation of their original “genetic” programming.
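The mechanism behind this result is a standard evolutionary algorithm. The following minimal Python sketch shows the selection-and-mutation loop under stated assumptions: the fitness function is a hypothetical stand-in for the arena scoring, and the mutation rate is assumed, since the published experiment involved real sensors, arena dynamics and a far richer genome-to-controller mapping than a toy can capture.

    import random

    GENOME_BITS = 264     # genome length reported for the EPFL robots
    POPULATION = 1000     # total number of robots in the experiment
    SURVIVORS = 200       # highest-scoring genomes "mated" each round
    MUTATION_RATE = 0.01  # assumed per-bit mutation rate (not reported)

    def random_genome():
        return [random.randint(0, 1) for _ in range(GENOME_BITS)]

    def score(genome):
        # Hypothetical stand-in for arena performance: the real experiment
        # awarded points for sitting on a good resource and deducted them
        # for sitting on a bad one, with crowding effects.
        return sum(genome[:32])

    def crossover(a, b):
        # Single-point crossover: one simple way to "mate" two bit strings.
        cut = random.randrange(1, GENOME_BITS)
        return a[:cut] + b[cut:]

    def mutate(genome):
        # Flip each bit independently with a small probability.
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit
                for bit in genome]

    def next_generation(population):
        # Keep the 200 best genomes and breed a full new population.
        parents = sorted(population, key=score, reverse=True)[:SURVIVORS]
        return [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POPULATION)]

    population = [random_genome() for _ in range(POPULATION)]
    for generation in range(50):
        population = next_generation(population)
    print("best score:", max(score(g) for g in population))

Nothing in this loop “wants” anything; concealing the blue light simply became one more genome pattern that happened to raise the score.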

Now, there is a whole lot to say about this experiment. First off, it would be absurd to speak of “lying” in the case of a machine. There is little question that human and robotic intelligence would evolve differently in an evolutionary computation system developing outside direct human involvement. Equally, motive and a moral basis, both characteristic elements of lying when defined as an intentionally false statement, are decidedly absent in a machine. Deception in order to achieve a goal is a common tactic in many games, and in the animal kingdom as well. While it promotes survival of the fittest, it cannot be considered lying sensu proprio. Because specific costs are assigned to a robot’s behavioral moves, the machine does what it does best – maximize the score. The same concept already formed the basis of chess programs, one of the early accomplishments of A.I. Intention and moral judgment are clearly absent here – but are conditioned human responses really that different from software assigning costs to certain behavioral responses?
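To make the cost-maximization point concrete, here is a generic minimax sketch in Python – not any particular chess engine’s code, merely an illustration of how “clever” or even “deceptive” play can fall out of nothing but score arithmetic over a game tree. The toy game and all function names are assumptions for the example.

    def minimax(state, depth, maximizing, evaluate, moves, apply_move):
        # Return the best score reachable from `state`, assuming the
        # opponent optimizes the same function in the opposite direction.
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state)
        results = (minimax(apply_move(state, m), depth - 1, not maximizing,
                           evaluate, moves, apply_move) for m in legal)
        return max(results) if maximizing else min(results)

    # Toy game: a pile of stones, each player removes 1 or 2, and whoever
    # takes the last stone wins. A state is (stones_left, max_to_move).
    def moves(state):
        stones, _ = state
        return [n for n in (1, 2) if n <= stones]

    def apply_move(state, n):
        stones, max_to_move = state
        return (stones - n, not max_to_move)

    def evaluate(state):
        stones, max_to_move = state
        if stones == 0:                      # the previous player took the
            return -1 if max_to_move else 1  # last stone and therefore won
        return 0                             # depth cut-off: neutral value

    print(minimax((4, True), 10, True, evaluate, moves, apply_move))  # -> 1

The program “plays to win” in exactly the sense in which the robots “lie”: a number is maximized, nothing more.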

In Wittgenstein’s example, “if a lion could speak, we could not understand him.” Functionally desired outcomes notwithstanding, the parallels between complex human behavior and machine responses only go so far. But do they – and are humans not a similar, if at the present state of the art more complex, computational cybernetic system translating inputs into outputs?

Hawking’s caution, while unquestionably meritorious, still implies the disappearance of humanity “as we know it” – either because superior machine intelligence will exceed not only individual human intelligence but the aggregate human intelligence of the world, and will dispose of us as homo sapiens disposed of homo neanderthalensis (we are, after all, unstable, war-prone, hyper-armed, and a source of computer malware); or because natural intelligence will be improved by artificial means, with added neural circuits and other extensions for speed. In either case, “human nature as we know it” will vanish – in the latter case peacefully and with some legacy, just like the Neanderthal genes that survive in us. Yes, as of today, digital A.I. would need to evolve by quantum leaps to play in the league of the analog intelligence of present-day people. But in only half a century, and largely without the accelerating computational resources of today, IT has come a long way that few even at IBM had imagined.

Without discounting critics who insist that present-day threats to human existence must be resolved before we worry about more distant eventualities, the advent of the singularity is an inexorable and inevitable consequence of the evolution of A.I., and it is ultimately specious to distort critical social dialog by mocking one side’s or the other’s time projections. This is why the topics of transhumanism cannot be sidestepped: they will affect, and change, all areas of life in the relatively near future, starting with a growing percentage of prostheses, implants and “spare parts” that no one will suspect of altering the human condition – while they most definitely do, just as gay marriage did for the notion of marriage commonly accepted only a generation ago, and as the digital revolution did for the notion of privacy.

By analogy, A.I. will be characterized by the law of unexpected and unintended consequences for the majority of those affected by it. One of the most significant ways technology revolutionizes our existence lies in the incompleteness of our predictive abilities. In 1931, the logician Kurt Gödel proved that any consistent, effectively axiomatized system capable of expressing elementary arithmetic is incomplete, and that the consistency of its axioms cannot be proved within the system itself. The Gödel sentence of such a system, which in effect asserts its own unprovability, would be false if it were provable – contradicting the principle that, in a sound system, provable statements are true. There will thus always be at least one true but unprovable or undecidable statement.[3] The very same holds for our attempts to foresee, pre-empt or control the consequences of cognitive and technological change – consequences that will arguably be most striking in the area of artificial intelligence.
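For the record, the two theorems can be stated compactly as follows, for a formal system F that is consistent, effectively axiomatized and capable of expressing elementary arithmetic:

    % Gödel's incompleteness theorems, stated compactly.
    % Requires amsmath and amssymb; F is consistent, effectively
    % axiomatized, and interprets elementary arithmetic.
    \begin{align*}
    \text{(I)}\;\;  & \exists\, G_F:\quad F \nvdash G_F
                      \;\text{ and }\; F \nvdash \neg G_F
                      && \text{($G_F$ is nevertheless true in the standard model)}\\
    \text{(II)}\;\; & F \nvdash \mathrm{Con}(F)
                      && \text{($F$ cannot prove its own consistency)}
    \end{align*}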




[1] Russell, Bertrand. A History of Western Philosophy. New York: Simon & Schuster (1945), 834.
[2] Kandel, Eric R., Schwartz, James H., Jessell, Thomas M. Principles of Neural Science (4th ed.). New York: McGraw-Hill (2000). ISBN 0-8385-7701-6.
[3] Hawking, Stephen (ed.). God Created the Integers: The Mathematical Breakthroughs That Changed History. Philadelphia: Running Press (2005). ISBN 0-7624-1922-9, 1097 et seq.
