IT's Coming Power Famine

Ivona Brandić, one of the youngest members of any national Academy of Sciences (yes, there is an old felony on the books, "brilliant while female"; it goes back to Hypatia and probably long before her), recently debunked a myth carefully nourished by media priorities: a fundamentally Luddite phobia of AI that distracts from far more pressing worries about the energy supply fueling IT as it enters the age of Big Data.

Brandić points out that nary a day goes by without new horror scenarios about breakthroughs in artificial intelligence (AI): risks of AI taking over and making us all redundant, of autonomous killer robots, or of medical systems making independent decisions capable of extinguishing life. It will probably not come to all that, because long before we develop killer robots, we will probably run out of global power supply. A few scientists and bitcoin miners aside, it is still poorly understood how the increasing power consumption of IT could become a real problem. Society at large became aware of the issue only when early crypto adopters realized that profit and loss did not depend solely on the price of cryptocurrencies but also on the price of electricity and the global computing power devoted to mining, all determinants of the 'reward', the number of crypto units earned. Bitcoin activities alone consume as much energy as Singapore or Iraq.

The history of information processing has had several inflection points, such as the invention of the desktop computer or the smartphone. They had one thing in common: an enormous increase in energy consumption. The biggest such event, the digital transformation, is yet to arrive.

Google alone consumes as much power as San Francisco. There are currently about 8.4 million data centers worldwide, some 400 of them hyper-scale, consuming about 416 terawatt-hours a year, as much electricity as nearly 30 nuclear power plants can generate, and the trend is rising exponentially. Several hundred wind turbines are needed to replace a single nuclear power plant (one nuclear power plant in Germany produces the equivalent of 3,000 wind turbines), and even a coal plant takes 600 wind turbines to replace; none of them are easy to build in many areas for political reasons. Green power generation is limited, inter alia, by geography and natural resources. As digitization advances, soon every light bulb, every shopping cart, every jacket, every item around us will be or will contain a computer in its own right, continually producing data that needs to be processed and stored. In the near future, the IT sector will thus become one of the largest consumers of electricity globally. There is a substantial risk that societies will reach for quick fixes such as new nuclear power plants when IT's energy consumption suddenly rises dramatically. The other risk is that they will resort to radical measures to reduce or limit access to IT, with devastating social consequences.
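The arithmetic behind the nuclear comparison can be checked on the back of an envelope. The reactor size and turbine capacity factor below are my assumptions for illustration (a large ~1.6 GW reactor running continuously; 2 MW turbines at a ~25% capacity factor), not figures from the text:

```python
# Back-of-envelope check: how many nuclear plants match global
# data-center electricity demand?
HOURS_PER_YEAR = 8760

datacenter_demand_twh = 416          # global data-center consumption per year
reactor_gw = 1.6                     # assumed output of one large reactor

reactor_twh = reactor_gw * HOURS_PER_YEAR / 1000   # TWh per reactor per year
plants_needed = datacenter_demand_twh / reactor_twh
print(f"~{plants_needed:.0f} nuclear plants")       # ~30

# Sanity check on the wind comparison: turbines per reactor
turbine_mw, capacity_factor = 2.0, 0.25
turbines = reactor_gw * 1000 / (turbine_mw * capacity_factor)
print(f"~{turbines:.0f} wind turbines per reactor") # ~3,200
```

With these assumptions the numbers land close to the text's "nearly 30 nuclear power plants" and the roughly 3,000-turbine equivalence.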

Data increasingly needs to be processed where it is generated: at the first possible processing point in the network. If four self-driving cars are about to cross an intersection without traffic lights in Birmingham, AL, the server processing their data cannot be in Atlanta, because the cars must make decisions in small fractions of a second; the latency, or response time, would simply be too great. This will not change much in the near future regardless of 5G, 6G or other technologies, simply because physics will not cooperate. For these four cars and their passengers to remain unharmed, data needs to be processed in their immediate vicinity. Now, it is true that data is increasingly not just transported but also processed by the internet infrastructure itself, for example in routers or switches. But because these are not powerful enough, new small data centers, so-called "edge data centers", are being built in the immediate vicinity of data producers. Building an edge infrastructure along the highway, in the city and in other urban areas, however, is very expensive, inefficient, and can rarely be powered with green energy.
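The physics constraint can be made concrete. The sketch below computes only the hard lower bound set by the speed of light in fiber; real networks add routing, queuing and processing delays on top of it. The Birmingham–Atlanta distance and fiber speed are my assumed round figures:

```python
# Physics-only lower bound on round-trip latency between a car and a
# remote server. Assumptions: ~240 km Birmingham-Atlanta as the crow
# flies; light travels through optical fiber at ~200,000 km/s (about
# two-thirds of c). Real paths are longer and add switching delays.
distance_km = 240
fiber_speed_km_s = 200_000

rtt_ms = 2 * distance_km / fiber_speed_km_s * 1000
print(f"lower-bound RTT: {rtt_ms:.1f} ms")   # 2.4 ms before any processing

# Distance a car covers during that round trip at highway speed
speed_m_s = 100 / 3.6                        # 100 km/h in m/s
travel_cm = speed_m_s * rtt_ms / 1000 * 100
print(f"car moves ~{travel_cm:.0f} cm per round trip")
```

Even this idealized bound is irreducible by any radio technology, and actual end-to-end latencies over wide-area networks are typically an order of magnitude larger, which is the case for processing safety-critical data nearby.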

The classic approach to this problem would be to outsource data processing to large data centers (for example, to clouds). Such data centers can be built wherever there is cheap and green power and, above all, plenty of space. Currently, many companies are building so-called "high-latitude" data centers beyond the sixtieth parallel north (Alaska, Canada, Scandinavia and Northern Russia) because servers can be cooled there easily and cheaply; cooling often amounts to nearly 40 percent of a data center's total power consumption. Still, these large, optimized and efficient green data centers help us very little with the four self-driving cars at the crossing in downtown Birmingham. That is why building data centers increasingly requires imagination. Microsoft builds data centers under water for several reasons: half the world's population lives in coastal regions, and most data cables are already submerged, which provides short latencies in addition to low cooling cost. Alas, Birmingham, like much of the world, is far from the sixtieth parallel, with neither a seashore nor a lake in sight where a data center could be submerged. The big question is thus whether further technological breakthroughs will be needed to regain control of IT's voracious power consumption. The obvious answer is in the affirmative.
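The 40-percent cooling figure maps onto the industry's standard efficiency metric, power usage effectiveness (PUE: total facility power divided by IT power). A minimal sketch, assuming cooling is the only overhead, which is a simplification on my part:

```python
# If cooling takes ~40% of total draw (the figure in the text) and we
# ignore other overhead, the IT equipment gets the remaining 60%, so:
cooling_share = 0.40
it_share = 1.0 - cooling_share       # simplifying assumption: no other overhead
pue = 1.0 / it_share
print(f"implied PUE ≈ {pue:.2f}")    # ≈ 1.67
```

By this measure, a site where free cold-climate cooling drives PUE toward 1.0 cuts total consumption by a large fraction without touching the IT load itself, which is precisely the appeal of high-latitude and underwater placement.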

The first approach to dealing with increasing power consumption will be to use completely new, super-efficient computational architectures. Quantum computers promise extreme computational efficiency at a fraction of classic computers' energy consumption. But quantum computers are suitable primarily for highly specific tasks, say, in the financial sector, for simulations, or in the life sciences. Predicting developments in quantum computing research remains a challenge, but it is unlikely that autonomous cars can be equipped with quantum computers in the medium term. It also seems unlikely that a quantum computer could be used for real-time evaluation of camera imaging right there on the highway.

Another approach is to store power across time and space, which is not possible with conventional technology. Hydrogen power plants can achieve precisely that. If, political risk aside, huge solar farms could be built in the Sahara and other deserts, we could generate enormous quantities of electricity, albeit in unpopulated areas devoid of power consumers. That power, however, could be used to produce hydrogen, which can be transported in containers or pipelines to areas of high power demand, where hydrogen power plants could be operated. Unfortunately, recent studies show that this intermediary step of producing and transporting hydrogen carries an enormous carbon footprint of its own.

The future will likely bring a hybrid that combines classic and quantum computers, conventional and hydrogen power plants. But throughout much of the digital transformation we will largely have to make do with classic computers and classic power generation. Two assets can be relied upon to achieve that: a well-developed telecommunications infrastructure that will be expanded further, and a well-developed internet infrastructure. Their symbiosis requires distributing applications across data centers so that as little power as possible is consumed while user satisfaction is maintained. This depends on the spatio-temporal conditions of each edge or cloud data center. Geo-mobility of users and the ability to create application profiles will also play an important role.

Unfortunately, the behavior of data-intensive applications can be predicted only very poorly, because data values may extend across infinite domains. Statistical procedures promise important remedies. But the increasing mobility of users, devices and ultimately of all infrastructure creates new challenges for economically optimized workload distribution, requiring a systems approach, a virtual version of operations research analytics, to yield substantially improved decisions.
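The core trade-off behind such workload distribution can be illustrated in a few lines. The sketch below is my own construction, not a named scheduler: a greedy rule that picks, among data centers meeting an application's latency bound, the one with the lowest energy cost per task. All names and numbers are hypothetical:

```python
# Minimal sketch of latency-aware, energy-aware workload placement:
# among the data centers that satisfy a task's latency bound, pick the
# one with the lowest marginal energy cost.
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    latency_ms: float       # measured round-trip time from the device
    watts_per_task: float   # marginal energy cost of one task

def place(latency_bound_ms, centers):
    """Return the greenest feasible data center, or None if none qualifies."""
    feasible = [c for c in centers if c.latency_ms <= latency_bound_ms]
    return min(feasible, key=lambda c: c.watts_per_task, default=None)

centers = [
    DataCenter("edge-birmingham", latency_ms=2,  watts_per_task=9.0),
    DataCenter("cloud-atlanta",   latency_ms=12, watts_per_task=4.0),
    DataCenter("hydro-north",     latency_ms=80, watts_per_task=1.5),
]

print(place(5, centers).name)    # edge-birmingham: only option within 5 ms
print(place(100, centers).name)  # hydro-north: latency slack buys green power
```

The point of the toy model is the trade-off itself: safety-critical tasks are forced onto expensive edge capacity, while latency-tolerant ones can migrate to cheap, green, high-latitude power. Real systems must additionally cope with the unpredictable demand and user mobility described above.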


Robot Psychology – An Emerging Legal and Medical Field?

While, for the time being, no robots sit on couches talking to people about their traumas, concern arises about the well-being of robots' human interaction partners, because people and robots experience artificial intelligence differently. This perception differential raises the question of how robot design should change to increase acceptance by humans amid fears of domination by technology. As these things go, it is always helpful to start with the argumentum e contrario[1] and figure out how not to use robots, so that they are not perceived as a threat but may safely be experienced as an enrichment. This presupposes research on optimal interfaces between man and machine to facilitate their coexistence, starting with classical usability research to make user interfaces intuitive, but quickly progressing to social fears of certain uses of robots, the role of a user's personality, and the subjective sense of security in communication that involves, say, autonomous vehicles and pedestrians.

In 1970, Japanese roboticist Masahiro Mori coined the term "uncanny valley" for a graph showing the nexus between a robot's human-like qualities and humans' emotional reaction to it. While a robot may have a head and eyes, it remains essential for acceptance that it be clearly distinguishable as a machine. Once silicone skin and artificial hair blur that distinction, robots are perceived almost universally as sinister. New studies indicate that the same might be true of disembodied AI, suggesting an "uncanny valley of the mind" whenever online chatbots adopt human-like behavior by simulating emotionality or by making independent decisions.

Over time, people do get used to humanoid robots through habituation effects. Exploring those effects would require long-term studies, which have not yet been carried out. But the fundamental question remains whether humans want to work and live together with humanoid robots at all. Thus far, polls show rejection growing the more the impression solidifies that robots and people are becoming less distinguishable. This apparent similarity tends to cause deep and lasting insecurity in humans because it confuses their expectations. While that perceived risk is still far from contemporary reality, the problem is far more immediate for software bots that already communicate with people online and simulate interaction. Research areas increasingly identified as android science and affective computing expect the technologies that surround us to become emotion-aware over the next five years, creating a vision of artificial emotional intelligence.

To date, autonomous machines have not assumed a human form. Consider driverless vehicles: fear of losing control to networked technology could also take hold of pedestrians, so it may help for an autonomous machine to communicate with its surroundings and to make highly transparent what steps it will take next.

Does it increase the safety of people crossing the trajectory of an autonomous vehicle to be told by the robot that it saw them? Traditionally, we make eye contact with a human driver in such situations. Exploring if and how this can be translated into technology may hold promise for increasing not only the safety but also the efficiency of human-robot interactions. In one example, light signals increased the predictability of a vehicle's actions: pedestrians became significantly faster and moved more confidently. Most effective were visual signals perceived only peripherally yet still increasing a human's sense of security, for example lights mounted on the car's grille that direct signals toward the pedestrian. The possibilities for conveying signals will evolve, but they also carry a risk of sensory or information overload. Perhaps in a decade or two, car bodies will serve as large displays, but this in turn increases the complexity of visual information processing required of pedestrians.

Predictability is equally important in industrial robotics. Here, too, machines need to signal to humans, who must be able to anticipate as early as possible, for example, the spot a robot is reaching for and the path it will take. Machine movement must not simply take the shortest route from A to B if curved movements can indicate intentions early. What may be inefficient from a technical viewpoint can considerably increase predictability and workplace safety, improving the legibility of robot movements and accelerating the performance of the overall man-machine team.

As for the cultural acceptance of robotics, the research data is not clear. Some studies suggest that people in Japan are more open to autonomously acting machines – in part also to ease the country's labor shortage – while others see no difference or even show the opposite result. Favorable findings may be rooted in the religious traditions of animism and Shinto, with their concept of soulful objects, while no such thought exists in the Judeo-Christian tradition. Media socialization is also frequently cited to explain acceptance of autonomous machines: in popular anime and manga, robots are cooperation partners or even act as saviors of humanity. This is in stark contrast to the Western vision of a "Terminator" that puts an end to humanity.

Most developers of robotics are men, and they impose their male perspective on technology. In fact, the entire tech industry is dominated by younger white males. It is difficult to investigate which distortions or biases this creates. As an example, a video circulated widely on social media showed a black man who could not manage to get soap out of a soap dispenser; only when he held a white sheet of paper under the dispenser did the soap come out. This illustrates what can go wrong if development teams are not diverse enough. As another example, service robots are typically designed with traditional female stereotypes in mind, including a cleaning robot with a female figure and an apron, at least in part realizing a gynoid fantasy. There is ample evidence that robots adopt prejudices and stereotypes from their creators since, at the present stage, AI is above all machine learning: programs process man-made content and draw their conclusions by pattern recognition. It is therefore not reasonable to expect value-neutral results, even if such results are intuitively expected or presumed. Studies show that machine learning systems adopt gender stereotypes, depicting women as close to their families while viewing men as more career-oriented.

Popular culture has further shaped technological spins and biases through sci-fi books and movies that cause people to overestimate the state of technology. Many promptly associate the term "robot" with the anthropomorphic androids they know from Star Trek or I, Robot. Developers currently working hard on keeping robots stable on two legs must be amused by such perceptions. It might not hurt the cause of realism to show actual contemporary robots in action more frequently – say, in autonomous transport or cleaning systems. Such a reality check promises to be one of the safest methods of banishing premature fear of man-machine coexistence.

[1] Polish scientist Maciej Koszowski produced a remarkable body of logico-legal algorithmic and interpretive research on aspects of analogy. See Maciej Koszowski, Multiple Function of Analogical Reasoning in Science and Everyday Life, 197 Polish Soc. Rev., no. 1, 2017, at 3; Maciej Koszowski, The Scope of Application of Analogical Reasoning in Statutory Law, 7 Am. Int’l J. Contemp. Res., no. 1, March 2017, at 16; Maciej Koszowski, Why Is Analogy in Empirical Science and Everyday Life Different from Analogy in Law? 25 Studia Iuridica Lublinensia, no. 2, 2016, at 127; Maciej Koszowski, Perelman and Olbrechts-Tyteca’s Account of Analogy Applied to Law: The Proportional Model of Analogical Legal Reasoning, 13 Archiwum Filozofii Prawa i Filozofii Społecznej, no. 2, 2016, at 5; Maciej Koszowski, The Scope of Application of Analogical Reasoning in Precedential Law, 37 Liverpool L. Rev., no. 1-2, 2016, at 19. See also Richard A. Posner, Legal Reason: The Use of Analogy in Legal Argument, 91 Cornell L. Rev. 761 (2006) and Chaim Perelman & Lucie Olbrechts-Tyteca, The New Rhetoric: A Treatise on Argumentation (1971).