2019-02-11

IT’s Coming Power Famine

Ivona Brandić, one of the youngest members of any national Academy of Sciences (yes, there is an old felony on the books, “brilliant while female”; it goes back to Hypatia and probably long before her), recently debunked a myth carefully nourished by media priorities, a fundamentally Luddite phobia of AI that overshadows far more pressing worries about the energy supply fueling IT as it enters the age of Big Data:

Brandić points out that nary a day goes by without new horror scenarios about breakthroughs in artificial intelligence (AI): AI taking over and making us all redundant, autonomous killer robots, or medical systems making independent decisions capable of extinguishing life. It will probably not come to any of that, because long before we develop killer robots we will probably run out of global power supply. A few scientists and bitcoin miners aside, it is still poorly understood how the increasing power consumption of IT could become a real problem. Society at large became aware of the issue only when early adopters realized that mining profit and loss did not depend solely on the price of cryptocurrencies but also on the price of electricity and on the global computing power devoted to mining, all of which determine the ‘reward’, the number of crypto units earned. Bitcoin activities consume as much energy as Singapore or Iraq.
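A minimal sketch of that mining calculus, with purely hypothetical numbers for hashrate share, block reward, prices and power draw, just to show how the determinants interact:

    # Hypothetical miner economics: revenue scales with the miner's share of global
    # hashrate, cost scales with electricity.  All figures below are illustrative.
    my_hashrate_ths = 50.0              # assumed rig capacity, TH/s
    global_hashrate_ths = 45_000_000.0  # assumed network total, TH/s
    block_reward_btc = 12.5             # reward per block around 2019
    blocks_per_day = 144                # one block roughly every ten minutes
    btc_price_usd = 3_500.0             # assumed exchange rate
    rig_power_kw = 1.6                  # assumed power draw
    electricity_usd_per_kwh = 0.10      # assumed tariff

    revenue_usd = (my_hashrate_ths / global_hashrate_ths) \
        * blocks_per_day * block_reward_btc * btc_price_usd
    cost_usd = rig_power_kw * 24 * electricity_usd_per_kwh
    print(f"Daily revenue ${revenue_usd:.2f}, electricity ${cost_usd:.2f}, "
          f"margin ${revenue_usd - cost_usd:.2f}")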

The history of information processing has had several inflection points, such as the invention of the desktop computer or of the smartphone. They all had one thing in common: an enormous increase in energy consumption. The biggest such inflection point, the digital transformation, is yet to arrive.

Google alone consumes as much power as San Francisco. There are currently about 8.4 million data centers worldwide, which together consume about 416 terawatt-hours a year, as much electricity as nearly 30 nuclear power plants can generate, and the trend is rising exponentially. About 400 of them are hyperscale facilities. Several hundred wind turbines are needed to replace a single nuclear power plant (one nuclear power plant in Germany produces the equivalent of 3,000 wind turbines), and neither is easy to build in many areas for political reasons. Even a coal plant takes some 600 wind turbines to replace. Green power generation is limited, among other things, by geography and natural resources. As a result of increasing digitization, soon every light bulb, every shopping cart, every jacket, every item around us will be or will contain a computer in its own right, continually producing data that needs to be processed and stored. In the near future, the IT sector will therefore become one of the largest consumers of electricity globally. There is a substantial risk that societies will reach for quick fixes such as new nuclear power plants when IT’s energy consumption suddenly rises dramatically. The other risk is that radical measures will be taken to reduce or limit access to IT, with devastating social consequences.
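A quick back-of-envelope check of those orders of magnitude, assuming, purely for illustration, that a large nuclear plant delivers about 1.6 gigawatts of continuous output:

    # Convert the quoted annual consumption into average power and plant equivalents.
    annual_twh = 416.0                  # global data-center consumption per year
    hours_per_year = 8760
    average_gw = annual_twh * 1000 / hours_per_year   # TWh -> GWh, divided by hours in a year
    nuclear_plant_gw = 1.6              # assumed continuous output of one large plant
    print(f"Average draw: {average_gw:.1f} GW")                              # roughly 47 GW
    print(f"Nuclear-plant equivalents: {average_gw / nuclear_plant_gw:.0f}") # roughly 30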

Data increasingly needs to be processed where it is generated: at the first possible data-processing point in the network. If four autonomous self-driving cars are about to cross an intersection without traffic lights in Birmingham, AL, then the server processing their data cannot be in Atlanta, because the cars must make decisions within fractions of a second. If the server is in Atlanta, the latency, or response time, is simply too great. This will not change much in the near future regardless of 5G, 6G or other technologies, simply because physics will not cooperate. For these four cars and their passengers to remain unharmed, data needs to be processed in their immediate vicinity. It is true that data is increasingly not just transported but also processed by the internet infrastructure itself, for example in routers or switches. But because these are not powerful enough, new small data centers, so-called “edge data centers”, are being built in the immediate vicinity of data producers. Building an edge infrastructure along highways, in cities and in other urban areas, however, is very expensive, inefficient, and can rarely be powered with green energy.
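For a rough sense of the numbers behind that claim, here is a small illustrative calculation; the fiber distance, the realistic round-trip time and the car’s speed are all assumptions:

    # Physical lower bound on a Birmingham-to-Atlanta round trip, and how far a car
    # travels while waiting for a realistic response.  All numbers are illustrative.
    fiber_km = 240.0                     # assumed fiber path length
    light_in_fiber_km_per_s = 200_000.0  # roughly two thirds the speed of light
    propagation_round_trip_ms = 2 * fiber_km / light_in_fiber_km_per_s * 1000
    realistic_round_trip_ms = 50.0       # assumed: routing, queuing and server time included
    car_speed_m_per_s = 15.0             # about 54 km/h through the intersection
    print(f"Propagation floor: {propagation_round_trip_ms:.1f} ms round trip")
    print(f"Distance covered in {realistic_round_trip_ms:.0f} ms: "
          f"{car_speed_m_per_s * realistic_round_trip_ms / 1000:.2f} m")

The propagation floor alone looks harmless, but under these assumptions the car has already moved the better part of a meter before an answer arrives from a distant server.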

The classic approach to this problem would be to outsource data processing to large data centers (for example, to clouds). Such data centers can be built wherever there is cheap and green power and, above all, wherever there is plenty of space. Currently, many companies are building so-called “high-latitude” data centers beyond the sixtieth parallel north (Alaska, Canada, Scandinavia and northern Russia) because servers can be cooled there easily and cheaply; cooling often accounts for nearly 40 percent of a data center’s total power consumption. Still, these large, optimized, efficient and green data centers help us very little with the four self-driving cars at the crossing in downtown Birmingham. That is why building data centers increasingly requires imagination. Microsoft, for example, builds data centers under water, and for several reasons: half the world’s population lives in coastal regions and most data cables are already submerged, which makes for short latency in addition to low cooling costs. Alas, Birmingham, like most of the world, is far from the sixtieth parallel, with neither a seashore nor a lake in sight where a data center could be sunk. The big question is thus whether further technological breakthroughs will be needed to regain control of IT’s voracious power consumption. The obvious answer is in the affirmative.
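To see what that 40 percent cooling share means in practice, a small illustrative calculation (facility size and savings factors are assumed, not measured):

    # If cooling is 40 percent of a facility's draw, cheaper cooling translates
    # directly into total savings.  All numbers are illustrative.
    total_mw = 10.0
    cooling_mw = 0.40 * total_mw
    other_mw = total_mw - cooling_mw
    for cut in (0.25, 0.50, 0.75):      # e.g. free cooling at high latitudes or under water
        new_total_mw = other_mw + cooling_mw * (1 - cut)
        print(f"Cut cooling energy by {cut:.0%}: facility drops to {new_total_mw:.1f} MW "
              f"({1 - new_total_mw / total_mw:.0%} overall savings)")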

The first approach to dealing with increasing power consumption will be to use completely new, super-efficient computational architectures. Quantum computers promise extreme computational efficiency at a fraction of the energy consumption of classic computers. But quantum computers are suitable primarily for highly specific tasks, say, in the financial sector, for simulations, or in the life sciences. Predicting the course of quantum computing research remains a challenge, but it is unlikely that autonomous cars can be equipped with quantum computers in the medium term. It also seems unlikely that a quantum computer could be used for real-time evaluation of camera imagery right there on the highway.

Another approach is to store and move power across time and space, which is currently not possible with conventional technology. Hydrogen power plants can achieve precisely that. If, political risk aside, huge solar farms could be built in the Sahara and other deserts, we could generate enormous quantities of electricity, albeit in unpopulated areas devoid of power consumers. That power could, however, be used to produce hydrogen, which can be transported in containers or pipelines to regions of high demand, where hydrogen power plants could then be operated. Unfortunately, recent studies show that this intermediate step of producing and transporting hydrogen carries an enormous carbon footprint of its own.
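One way to see why the detour through hydrogen is costly is to chain assumed conversion efficiencies for electrolysis, transport and storage, and reconversion in a hydrogen power plant; the percentages below are illustrative, not taken from the studies cited:

    # Round-trip sketch: how much desert solar electricity must be generated to
    # deliver one kilowatt-hour back to the grid via hydrogen.  Efficiencies are assumptions.
    electrolysis = 0.70        # electricity -> hydrogen
    transport_storage = 0.85   # compression, shipping, pipeline losses
    reconversion = 0.55        # hydrogen -> electricity in the power plant
    round_trip = electrolysis * transport_storage * reconversion
    print(f"Round-trip efficiency: {round_trip:.0%}")                  # roughly a third
    print(f"Solar kWh needed per delivered kWh: {1 / round_trip:.1f}")

Every kilowatt-hour lost along that chain has to be generated and handled somewhere, which helps explain why the intermediate step weighs so heavily.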

The future will likely bring a hybrid that combines classic and quantum computers, conventional and hydrogen power plants. But through much of the digital transformation we will largely have to make do with classic computers and classic power generation. Two assets can be relied upon to achieve that: a well-developed telecommunications infrastructure that will be expanded further, and a well-developed internet infrastructure. Their symbiosis requires distributing applications across data centers so that as little power as possible is consumed while user satisfaction is maintained. This depends on the spatio-temporal conditions of each edge or cloud data center. The geo-mobility of users and the ability to create application profiles will also play an important role.
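A minimal sketch of that placement idea, assuming a hypothetical set of sites with fixed latencies and per-request energy costs: for each application, choose the cheapest site that still meets the application’s latency bound.

    # A greedy placement sketch: choose, for each request, the feasible data center
    # with the lowest energy cost.  Site names, latencies and costs are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class DataCenter:
        name: str
        latency_ms: float          # round-trip latency from the user's location
        joules_per_request: float  # marginal energy cost of serving one request there

    def place(latency_bound_ms, centers):
        """Return the feasible data center with the lowest energy cost, or None."""
        feasible = [c for c in centers if c.latency_ms <= latency_bound_ms]
        return min(feasible, key=lambda c: c.joules_per_request, default=None)

    centers = [
        DataCenter("edge-birmingham", latency_ms=3, joules_per_request=5.0),
        DataCenter("cloud-atlanta", latency_ms=25, joules_per_request=1.5),
        DataCenter("hydro-north", latency_ms=120, joules_per_request=0.8),
    ]
    print(place(10, centers))    # the self-driving car: only the edge site qualifies
    print(place(200, centers))   # a batch analytics job: the green high-latitude site wins

A real scheduler would additionally have to track geo-mobility, application profiles and the time-varying green supply at each site, which turns this greedy choice into a genuine optimization problem.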

Unfortunately, the behavior of data-intensive applications can be predicted only very poorly, because data values may range over unbounded domains. Statistical procedures promise important remedies. But the increasing mobility of users, of devices and ultimately of the entire infrastructure creates new challenges for economically optimized workload distribution, requiring a systems approach, a virtual version of operations-research analytics, to arrive at substantially better decisions.
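As one concrete example of such a statistical procedure (chosen here for illustration, not named in the text), an exponentially weighted moving average can produce a crude one-step-ahead forecast of an application’s request rate:

    # Exponentially weighted moving average: one of the simplest workload predictors.
    def ewma_forecast(history, alpha=0.3):
        """One-step-ahead forecast via exponential smoothing of an observed series."""
        estimate = history[0]
        for value in history[1:]:
            estimate = alpha * value + (1 - alpha) * estimate
        return estimate

    requests_per_minute = [120, 135, 150, 145, 170, 210, 190]   # hypothetical trace
    print(f"Forecast for the next minute: {ewma_forecast(requests_per_minute):.0f} requests")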