
2016-12-01

Accounting for the value of innovation


Readers of my blog[1] and other writings[2] have reliably identified the shortfalls of accounting methods as one of my abiding preoccupations. And, sure enough, I am ready to take another stab at the subject.

Innovation is arguably one of the few nearly unlimited renewable resources. The human collective has proved capable of accumulating an incredible quantity of knowledge over the past 300 years, setting out to transcend new horizons like a mountaineer who reaches a summit only to discover ever taller peaks. Humans as a species continue to increase their dominance, but in a halting fashion reminiscent of ‘three steps ahead, two steps back’ – a perpetual pendulum swing between solutions and their unintended or, most often, unforeseen consequences. We burnt a lot of coal and later hydrocarbons to fuel the industrial revolution, only to discover that CO2 is now accumulating in the atmosphere and not going away any time soon. Europeans discovered in the 19th century that they did not have to import sugar from the Caribbean but could grow sugar beets domestically. Prices plunged, and suddenly everyone could afford sugar. Then came a surprise: sugar ruined people’s teeth. Rather than cut back on sweets, people discovered how to protect and harden teeth with fluoride; tap water and certain foods in many countries are now spiked with fluorides. Next, it turned out that sugar promotes obesity and diabetes. Nature hits back at every advance, forcing continuous innovation so as not to fall behind in a game where no solution is ever final.

For example, robotics is now at a stage similar to where computers were in the 1960s: by then it had become clear that there would be computers, but not quite obvious what would be done with them. Similarly, we are just beginning to discover the potential of robotics without having a tangible sense of the journey’s destination. Robots will come in all kinds of shapes and sizes and take on helpful tasks not conceivable today. There is research on nano-sized injectable robots programmed to attack cancer cells, but it has barely scratched the surface yet.

Another area is genetic engineering. In a crude form, it has been around since the dawn of humanity. Think of horses – the breeds we have are not “natural”; they were bred through protracted human interference and selection. God did not create poodles – people did. But breeding is a slow and crude tool for genetic engineering. Genetic modification can achieve results far more expeditiously and with greater precision. It will be essential because the climate is changing faster than “natural” breeding can keep pace, and since it does not appear possible to reverse climate change, it is living species that will need to be re-engineered to adapt to a changed environment. It is less likely that humans will be modified anytime soon, but plants, animals, grain, fish, mammals, penguins will be. European resistance notwithstanding, genetic engineering will be done – in the U.S., in Asia, or wherever. Somebody will do it because there is a benefit to innovation.

More recently, innovation primarily creates “growth” by creating “progress” – allowing the same output to be produced with less input (of time, knowledge, skill, or other factors). It adds new products that had not previously existed: ten years ago, smartphones did not exist. While all this is “growth,” conventional metrics fail to capture it. Products improve continually – vehicle safety in automobiles has improved dramatically compared to a few decades ago, although the task, speed limits, and traffic rules remained essentially unchanged. All these evolutionary changes form part of “growth,” but metrics for accounting purposes ignore qualitative aspects. Take anesthesia – a vast improvement in the quality of the patient experience that has been in common use since the 1860s. Yet the change it brought is not reflected anywhere in GDP statistics. The same is true of antibiotics and many other innovations: as their price comes down, and it typically does so rather quickly, they cease to register as even a blip in GDP metrics – although there have been studies putting the price tag for resistance to antibiotics by 2050 at “$100 trillion.” Our post-millennial observation of “shrinking growth” almost certainly reflects profound flaws in the methodology of measurement and valuation.

Another, probably even crasser failure of valuation and accounting metrics is their treatment of time. A great deal of innovation and progress creates efficiencies of time, freeing up considerable human resources in the process. Leisure time is valuable, but not to GDP accounting, and therefore not to “growth.” An economy can “grow” without producing more goods, by creating the same output with considerably less input. In the future, many people will work considerably less than they do today – a process that has already started. While it continues to be ignored by regulators and managers, there is no question that it will profoundly change the workplace. And it is happening as we speak: at the beginning of the twentieth century, the average worker put in some 3,200 working hours a year. Today, the average is half that number. This is a form of growth. Once robots produce our food and garments, we will work even less, trending toward a world where work is optional and almost exclusively creative, thoughtful and intellectual. “Growth” in terms of “progress” may mean that we produce more steel or pump more oil, but it may also mean that we have more time to enjoy and reflect on our existence.

Technological innovation took off in Europe at the dawn of the modern age, when people became less respectful of tradition and the knowledge handed down by previous generations. A certain respect for the wisdom of ancestors is natural and necessary. In ancient China, it was believed that truth had been revealed by mystical means to people who had lived in the distant past. Similar traditions existed in Judaism, in Islam and in medieval Christianity. Aristotle and the classical canon had answers for everything. But their answers did not hold up to verification. The Enlightenment led people to think for themselves and seek evidence for the teachings in ancient scriptures. Galileo, Torricelli and Tycho Brahe all discovered things wholly inconsistent with the ancient canon. Innovation began with the conclusion that nothing should be believed that had not been tested and verified, and with the realization that one’s ancestors had been wrong on many issues. Only this realization unlocked the prison of respect for established tradition and knowledge and liberated the human mind – but nothing to date has liberated it from the constraints of valuation methods that insist on ignoring supremely qualitative aspects.

2016-11-04

The Three Princes of Serendip or Valuing the Unintended Consequences of Our Actions


It is a charming story that Horace Walpole recounted in a letter to Horace Mann of January 28, 1754, coining “serendipity” as he made a passing observation on a sociopolitical event of the day: “this discovery, indeed, is almost of that kind which I call Serendipity, a very expressive word.” He derived it from Serendip, an old name for Sri Lanka, which figured in the title of a “silly fairy tale, called The Three Princes of Serendip; as their highnesses travelled, they were always making discoveries, by accidents and sagacity, of things which they were not in quest of … One of the most remarkable instances of this accidental sagacity (for you must observe that no discovery of a thing you are looking for comes under this description) was of my Lord Shaftesbury, who happening to dine at Lord Chancellor Clarendon’s, found out the marriage of the Duke of York and Mrs. Hyde, by the respect with which her mother treated her at table.”

Now, this is not the kind of serendipity that interests me much, but it evidently captured the imagination of the future 4th Earl of Orford. Yet the role of serendipity in all things human remains underrated to an almost shocking extent, and so I encountered with great interest the posthumous oeuvre of Robert King Merton & Elinor Barber, The Travels and Adventures of Serendipity: A Study in Sociological Semantics and the Sociology of Science (2004). It was penned by two great Columbians: while Barber gave early thought to scholarly diversity, Merton, a founding father of modern sociology, put the sociology of science on the map, for which he received the National Medal of Science. My fascination with the topic, prodded along by a Times Higher Education review recalling Helga Nowotny’s Sir Karl Popper Memorial Lecture at the LSE in 2013, brought me to her The Cunning of Uncertainty (2015).

Alexander Fleming was one of those who could not complain about disfavor from Lady Luck. In 1928, the Scottish bacteriologist noticed that his staphylococci cultures had been contaminated by spores of the mold Penicillium notatum, which happened to kill the germs he had cultured to study the causation of pneumonia. Fleming’s discovery was, of course, just one of several possible outcomes: the spores might never have entered his culture, or might not have taken hold in it. He could have found a different method to attack bacterial germs. Or another scientist could have stumbled across the mold’s effect in due time.

Sir Isaac Newton saw an apple fall from a tree and started thinking about gravity, perhaps the single most serendipitous use of this fruit since Eve.

And serendipity’s cornucopia is not without conditions and demands: a fortuitous accident serves us only if we recognize its significance – and here, Walpole made a major point: “for you must observe that no discovery of a thing you are looking for comes under this description.” There are many who are so obsessed with what they are looking for that they neglect solutions of equal or greater value outside their current focus. High-temperature superconductivity had been observed by a French team before Johannes Georg Bednorz and Karl Alexander Müller made the discovery at IBM’s labs that fetched the 1987 Nobel Prize in Physics – but the French group had failed to notice its significance.

In 1922, an unhealthy habit of inhaling strong cigar smoke enabled Otto Stern and Walther Gerlach to make quantum spin observable: as the German scientists channeled silver atoms through a magnetic field, the chain smokers continued to puff on their cigars. Sulphur in the cigar smoke reacted with the deposited silver atoms, making the directional spin of the particles visible for the first time. Bretislav Friedrich and Dudley Herschbach, then at Harvard, recounted the episode in a 2003 paper, “Stern and Gerlach: How a Bad Cigar Helped Reorient Atomic Physics.”

As foundational research comes under increasing pressure from governments and funding bodies to produce applied results, open innovation and distributed innovation are breaking new ground for the serendipity that remains indispensable to pure science.

But for almost any area of policy, even more essential than serendipity – though closely related to it conceptually (outcomes neither foreseen nor intended by purposeful action, as, again, Merton described them) – are unintended consequences, a seriously neglected aspect of any area of research and development. They include unexpected benefits, unexpected detriments (“blowbacks”), and what may be described as “perverse results” – virtuous as well as vicious circles of complex chains of events that reinforce themselves through feedback loops. I plan to write more here about these phenomena in the coming year, as part of a more generalized theory of rational choice, Black Swan events, diverse priorities, and the influence of probability and randomness on different branches of logic.

2016-10-12

Science as an art of miniature horizons

In the course of my restless perambulations across nanotechnology under dreaming spires and between punting on the rivers Cherwell and Isis, I discovered in the depths of a Microscopy Suite an artist with an eye for the truly small: Dr. Louise Hughes and her website, Miniature Horizons. An expert in scanning and transmission electron microscopy specializing in electron tomography, she is also an award-winning science communicator. Louise has created, visualized and preserved some of the most stunning images of nature in the realm beyond small and promptly turned them into wearable scientific artwork. As science photography becomes the object of increasing competition, one can easily see why.

2016-10-05

The Swiss nano-alphabet

A Swiss microsystems laboratory headed by Jürgen Brugger at the École Polytechnique Fédérale de Lausanne has developed a new method to place nanoparticles on a surface. No one will be surprised that the Swiss accomplished great precision at this task: 1 nm, where the best positioning accuracy achieved to date had been, with luck, 10-20 nm. This opens new perspectives for miniaturized optical and electro-optical nanodevices, including sensors, where predetermined, selective placement onto large-area substrates on the order of 1 cm² is needed to exploit the benefits of nanoparticle assembly.
 
Like most great solutions, this one is simple and elegant: gold nanoparticles suspended in a liquid are first heated so that they gather in one spot, and the suspension is then drawn across the surface. Not unlike a miniature golf course, the surface is lithographed with funneled traps and auxiliary sidewalls, i.e., patterned with barriers and holes. When the nanoparticles hit a barrier (an auxiliary sidewall), they disengage from the liquid and can be deterministically directed to sink into a hole, attaining simultaneous control of position, orientation and interparticle distance at the nanometer level. In this way, the position and orientation of the slightly oblong gold nanorods can be steered very precisely. The Swiss research group demonstrated this by writing the world’s smallest version of the alphabet and by forming complex patterns. This will open new doors for vastly improved assembly of nanodevices.
 
In light of groundbreaking advances in the field, it comes as no surprise that the 2016 Nobel Prize in Chemistry was just awarded to Jean-Pierre Sauvage (U. Strasbourg), J. Fraser Stoddart (Northwestern U.) and Bernard L. Feringa (U. Groningen) for work on nanometer-size “molecular machines” that feature characteristics of “smart materials” – an emerging area, not only of materials science, that opens extremely bright perspectives for nanotechnology overall, bringing nanomachines and microrobots within reach.

2016-10-01

Mathematical modeling of nanotechnology

One of the challenges – and advantages – of nanotechnology is that many aspects of its use can, and indeed ought to, be tested by simulation. If minuscule applications are to carry active components with extreme precision to their targets within the human body – be it to mark cells for diagnostic procedures, to support the buildup of tissues and structures, or to deliver therapeutic agents – their functionality needs to be simulated by mathematical models and methods.

One such application is the development of biosensors capable of identifying tumor markers in blood, which holds significant potential for cancer therapy. Nanowires composed of semiconductor material enable the recognition of proteins that indicate the presence of existing tumors. Physical simulation of such an electronic system with a biological application presents novel mathematical problems. Nanowires used as sensors are approximately 1 μm in length (one-hundredth of the diameter of a human hair) and about 50 nm in diameter, i.e., one-twentieth of their length. The DNA molecules examined by these nanowires are only about 2 nm in diameter and bind to receptors on the wire, which is permeated by a small electric current; this binding changes the conductivity and current flow of the sensor. A mathematical model suitable for simulation needs to reflect the relevant subsystems of this process: equations that describe the distribution of the charges, coupled with equations that describe their movement. This yields a system of partial differential equations for charge transport, which can be coupled with equations describing the motion of molecules outside the sensor. Clemens Heitzinger at Vienna University of Technology’s Institute of Analysis and Scientific Computing is currently developing such models.
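
To make the structure of such a model a little more concrete, here is a minimal sketch of a coupled drift-diffusion–Poisson system of the kind commonly used for semiconductor sensors; the notation is standard textbook notation, not taken from the work described above:

\[
\begin{aligned}
-\nabla\cdot(\varepsilon\,\nabla V) &= q\,(p - n + C_{\mathrm{dop}}) + \rho_{\mathrm{bio}},\\
\nabla\cdot J_n = qR, \qquad J_n &= q\,(D_n \nabla n - \mu_n n \nabla V),\\
\nabla\cdot J_p = -qR, \qquad J_p &= -q\,(D_p \nabla p + \mu_p p \nabla V),
\end{aligned}
\]

where V is the electrostatic potential, n and p are the electron and hole densities in the nanowire, C_dop the doping profile, R a recombination term, and ρ_bio the charge contributed by target molecules bound to the receptors at the surface. The last term is what couples the electronics to the biology: a binding event changes ρ_bio, which shifts V and hence the current through the wire.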

Such systems of equations also lend themselves to the multiscale analysis problems that are common in nanotechnology. Both the behavior of fine structures (for example, during the binding of a molecule) and the properties of the sensor as a whole are of interest. Describing it all through a single brute-force numerical simulation would require computing time beyond realizable limits. But partial differential equations combined with the solution of multiscale problems permit the simulation of both relatively large and very small structures in a single system. This can save extremely expensive custom-built components for lab experiments. It also permits conclusions about data and relationships that could not be deduced by physical measurement techniques. The technology can target any molecule identifiable through antibodies – not only biological molecules but also poisonous gases.
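
One standard way to make the “large and small structures in one system” idea precise – my illustration, not necessarily the specific scheme used in the work described here – is homogenization: the solution is expanded in a slow and a fast variable, and the rapidly oscillating fine-scale coefficient is replaced by an effective one,

\[
u_\varepsilon(x) \approx u_0\!\left(x, \tfrac{x}{\varepsilon}\right) + \varepsilon\, u_1\!\left(x, \tfrac{x}{\varepsilon}\right) + \dots,
\qquad
-\nabla\cdot\!\left(A\!\left(\tfrac{x}{\varepsilon}\right)\nabla u_\varepsilon\right) = f
\;\longrightarrow\;
-\nabla\cdot\!\left(A^{*}\,\nabla u_0\right) = f,
\]

where ε is the ratio between the fine scale (a bound molecule, say) and the coarse scale (the sensor), and the effective coefficient A* is computed once from so-called cell problems on the fine scale. The coarse simulation then no longer needs to resolve every molecular detail.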

However, the smaller the system under examination, the greater the importance of random movements and fluctuations. To account for such effects, probability theory needs to be integrated into the systems of differential equations, transforming partial differential equations into stochastic partial differential equations, which have random forcing terms and coefficients, can be exceedingly difficult to solve, and have strong connections with quantum field theory and statistical mechanics. This also has numerous applications outside of medicine, for example in information technology. One example is microchips that contain billions of transistors, each approximately 20 nm in size, which cannot all be perfectly identical but need to function reliably despite this variability. The same mathematical modeling approach permits optimal numerical simulation of such systems.
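
Schematically – again in generic notation rather than that of the cited work – randomness enters the model above as a random coefficient or forcing term, and quantities of interest are then averaged over many realizations:

\[
-\nabla\cdot\bigl(A(x,\omega)\,\nabla u(x,\omega)\bigr) = f(x,\omega),
\qquad
\mathbb{E}[u](x) \approx \frac{1}{M}\sum_{m=1}^{M} u(x,\omega_m),
\]

where ω labels one random realization (for instance, one possible placement of dopant atoms or bound molecules) and the expected sensor response is estimated, for example, by Monte Carlo sampling over M realizations.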

2016-09-01

Limitations of accounting and a requiem for productivity? The high cost of failing to innovate


Robert J. Gordon is a most interesting thinker in contemporary economics. In The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), but already in his 2013 TED Talk, The Death of Innovation, the End of Growth, the globally renowned macroeconomist argues that, despite all the chatter about “disruptive” innovation, new ideas with a powerful effect on productivity are simply not forthcoming – and have not been for a considerable time. Gordon sees growth held back by four principal factors: demographics, education, debt, and inequality.

A major poll conducted by Inc. Magazine among VCs in late 2015 asked which startups were most likely to take off in 2016. Given that Silicon Valley is likely the most innovative spot on this planet, the results are shockingly disappointing: the presumptive champion most likely to succeed through innovation turned out to be Vulcun, an online betting platform allowing users to gamble real or fictitious money on the outcome of online video games. Another top-ten contestant was Juicero, which sells an internet-connected juicer capable of delivering “disruptively fresh juice.”

If one considers the great discoveries of the industrial revolution – say, the steam engine, the combustion engine, electricity, or assembly-line production – it is fairly clear how and why they affected productivity. Alas, while the digital age has changed our daily lives, it is less clear if – and how – it has affected productivity and created greater prosperity. Following the 2008 financial crisis, stagnation set in globally and has endured to date: the Euro zone and Japan grow at rates barely reaching, much less exceeding, one percent; even the U.S. situation appears dire if one discounts the effect of cheap oil and low costs of capital. But already before the Great Recession, productivity growth barely hovered above zero.

There are many suspected causes: taxation, bureaucracy, public debt, public austerity, the cost of regulation; the list goes on. But regardless of these factors, all industrialized countries find themselves in pretty much the same situation. They face a lack of the kind of innovation that results in growth: IT companies barely create any jobs. The world’s three most highly valued corporations (Google, Apple, Microsoft) collectively employ some 250,000 people around the world, while an industrial engineering conglomerate such as Siemens employs 350,000 and Walmart is home to 2,100,000 jobs. Nothing indicates that 3D printing, drones, robotics or self-driving cars will create growth comparable to the automobile, electricity, or aviation.

Arguably, GDP is a deficient measure of the benefits of innovation. IT changes lives and processes qualitatively, but it does not measurably contribute to productivity. I have written before and elsewhere about the inadequacies of contemporary accounting standards, and this phenomenon fits straight into those arguments. Since redistribution of prosperity through growth seems to have reached its limits, at least in the OECD, European socialists argue that personal time should be the new resource to be redistributed: if people cannot be afforded wage increases, they could be rewarded by having to work fewer hours. But it may take a generation or more to change a work ethic and mentality currently fixated on measuring performance by one’s ability to put in long, hard hours. It is a safe assumption that the pursuit of material wealth has released an immense quantity of human energy. It will not be possible to abandon this priority without offering compelling replacements. It is not clear whether the most appropriate substitute would be the reduction of global poverty, the strengthening of social cohesion or the ending of global threats to the environment, and society will need extended discourse to redefine itself and the individual’s place in it. While this vision will not materialize without a great deal of innovation, the character of that innovation will likely pivot away from awarding top rankings to betting platforms and online juicers.

Personally, I sharply dispute the assumption that productivity is flat-lining or in decline. Innovation as we have known it in recent decades has fundamentally changed almost all processes. A networked, Big Data-based society may very well require a fundamental overhaul of its valuation and accounting standards, but there is no denying that productivity, in the sense of greater efficiency as measured by cost, time, and quality, is on the rise. The creation of intellectual property through R&D is difficult to measure, but it has never been at a higher level than today. Another undeniable fact is continued population growth alongside statistically rising average living standards. While it is true that R&D expenditures are rising, their yield in terms of GDP is not. But this may well require a qualitatively different valuation of innovative products and their utility for every task encountered in life. Productivity-based increases in prosperity over the last centuries largely fell into the comparatively brief period from 1870 to 1940, the time of the great inventions. But if a 19th-century woman spent two days of her week doing laundry, the invention of the washer, the dryer and the electric iron changed GDP-measured productivity not because of the quality or significance of those innovations but because women entered the workforce in large numbers, suddenly placing a monetary value on their time. Much the same was true of improved hygiene, with no directly attributable contribution to GDP. Gordon is likely right that productivity increases between 1920 and 1970 raised prosperity more than in the 1,000 years prior. However, and to a not insignificant degree, this also had a lot to do with recognizing, valuing and compensating services that previous generations took for granted or considered a mere add-on. While entertainment and communication have limited influence on industrial production processes, and while it is true that digital devices account for only a limited percentage of household spending, the picture is a lot more differentiated than that: many tasks that play a significant role in directly GDP-relevant production would be impracticable without digital technology – from recycling to fintech, from robotics to mass transportation and logistics. Concededly, some innovation may not be much of a job creator, or may indeed be the opposite, a low-end job killer. But value creation is an altogether different matter, one more closely related to valuation and accounting standards than to the reference points we inherited from the industrial age.

2016-08-14

Single-atom-sized data storage


When Richard Feynman gave his famous 1959 speech suggesting that there is “[p]lenty of room at the bottom,” he not only created a vision for the development of nanotechnology, he also created a school of thought at its extreme limit: he speculated about the possibility of arranging single atoms as sufficiently stable building blocks. His idea moved within realistic reach following the 1981 development of the scanning tunneling microscope, which not only enabled imaging surfaces at the atomic level but also made it possible to arrange and rearrange individual atoms – a breakthrough that earned Gerd Binnig and Heinrich Rohrer at IBM’s Zurich labs the 1986 Nobel Prize in Physics. Feynman himself, along with others, had already received the Nobel Prize in Physics in 1965 for other contributions, primarily to the development of quantum electrodynamics.

Now, a team of physicists at the Delft University of Technology has succeeded in manipulating gaps in a chlorine atomic grid on a copper surface in a manner that enables a never-before-accomplished density of data storage – some 100 times denser than the smallest known storage media and about 500 times denser than contemporary hard disk drives. Given the massive increase in the size and energy consumption of data centers in the age of cloud computing, getting away from the contemporary standard of writable memories, in which a single bit still encompasses many thousands of atoms, and miniaturizing storage media down to the sub-molecular nanolevel is key.

The Delft team’s discovery may eventually permit storing the contents of all books ever written by mankind on a single storage medium the size of a postage stamp. This is possible because chlorine atoms automatically form a two-dimensional grid on a flat copper surface. By providing fewer chlorine atoms than would be required to cover the copper surface in its entirety, vacancies are created in the grid. One bit consists of a chlorine atom and a vacancy. To store data, atoms are moved individually by a scanning tunneling microscope (STM), whose ultrafine measuring tip creates an electric interaction just as it does when analyzing the atomic structure of surfaces. If a current of about 1 µA flows through the tip, it becomes possible to move a chlorine atom into a vacancy. This process can, of course, be automated: chlorine atoms are moved into vacancies until the desired field of bits emerges. To keep the chlorine grid stable, each bit needs to be bounded by chlorine atoms; bits are therefore never positioned directly adjacent to each other.
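
For readers who think more easily in code, here is a toy sketch of the encoding idea: a bit stored as an atom/vacancy pair on a lattice, with full rows and columns of chlorine atoms as spacers so that no two vacancies ever touch. The cell layout, spacing and 0/1 convention below are my own illustrative assumptions, not the actual Delft scheme.

# Toy model of the atom/vacancy encoding described above (Python).
# Cell layout, spacing and bit convention are illustrative assumptions only.
import numpy as np

CL, HOLE = 1, 0  # lattice site states: chlorine atom present / vacancy

def write_bits(bits, cols=8):
    """Place bits on a simulated chlorine lattice.

    Each bit occupies a vertical two-site cell (one atom, one vacancy):
    bit 0 -> vacancy in the cell's top site, bit 1 -> vacancy in the bottom site.
    Full spacer rows/columns of chlorine atoms separate the cells, so no two
    vacancies are ever adjacent, mirroring the stability rule in the text.
    """
    rows = -(-len(bits) // cols)                 # ceiling division
    lattice = np.full((3 * rows, 2 * cols), CL)  # start fully covered with Cl
    for i, b in enumerate(bits):
        r, c = divmod(i, cols)
        lattice[3 * r + (1 if b else 0), 2 * c] = HOLE  # carve this bit's vacancy
    return lattice

def read_bits(lattice, n_bits, cols=8):
    """Recover the bit string by checking where each cell's vacancy sits."""
    return [1 if lattice[3 * (i // cols) + 1, 2 * (i % cols)] == HOLE else 0
            for i in range(n_bits)]

message = [1, 0, 1, 1, 0, 0, 1, 0]  # one example byte
grid = write_bits(message)
assert read_bits(grid, len(message)) == message
print(grid)

Running this prints the lattice as a small matrix of ones (atoms) and zeros (vacancies); the real device, of course, writes and reads such patterns atom by atom with the STM tip.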

It is fair to note that this technology is still very far removed from commercialization: reading a 64-bit block by STM takes about one minute, while writing the same 64 bits takes two minutes. To date, a kilobyte of rewritable atomic memory has been created – and while 8,000 atomic bits are by far the largest atomic-precision structure ever created by humans (at roughly two minutes per 64-bit block, rewriting even that single kilobyte takes on the order of four hours), it is not at all impressive in an era that considers a 1 TB laptop a middle-of-the-road standard. This becomes even clearer if one realizes that the entire procedure only works in an ultra-clean laboratory environment and at temperatures of -196 °C, lest the chlorine atoms start to clump. But regardless, the experiment demonstrated proof of concept and a first viable path for further development on the road to reducing the space required for data storage down to the atomic level, currently a central issue in the advancement of storage technology – and it would hardly be the first concept to evolve explosively following fundamental experimental proof.