Pages

2016-10-05

The Swiss nano-alphabet

A Swiss microsystems laboratory headed by Jürgen Brugger at the École Polytechnique Fédérale de Lausanne has developed a new method for placing nanoparticles on a surface. It will surprise no one that the Swiss achieved remarkable precision: 1 nm, where the best positioning accuracy achieved to date had been, with luck, 10-20 nm. This opens new perspectives for miniaturized optical and electro-optical nanodevices, including measuring sensors, wherever predetermined, selective placement onto large-area substrates (on the order of 1 cm²) is needed to exploit the benefits of nanoparticle assembly.
 
Like most great solutions, this one is simple and elegant: gold nanoparticles suspended in a liquid are first heated so they gather in one spot, and the liquid is then drawn across the surface. Not unlike a miniature golf course, the surface is lithographed with funneled traps and auxiliary sidewalls – a pattern of barriers and holes. When the nanoparticles hit a barrier (an auxiliary sidewall), they disengage from the liquid and can be deterministically directed to sink into a hole, attaining simultaneous control of position, orientation and interparticle distance at the nanometer level. In this way, the position and orientation of the slightly oblong gold nanorods can be steered very precisely. The Swiss research group demonstrated this by writing the world’s smallest version of the alphabet and by shaping complex patterns. This will open new doors for vastly improved assembly of nanodevices.
 
In light of groundbreaking advances in the field, it comes as no surprise that the 2016 Nobel Prize in Chemistry was just awarded to Jean-Pierre Sauvage (U. Strasbourg), J. Fraser Stoddart (Northwestern U.) and Bernard L. Feringa (U. Groningen) for work on nanometer-size “molecular machines” that feature characteristics of “smart materials.” This emerging area, by no means confined to materials science, opens extremely bright perspectives for nanotechnology overall, bringing nanomachines and microrobots within reach.

2016-10-01

Mathematical modeling of nanotechnology

One of the challenges – and advantages – of nanotechnology is that many aspects of its use not only can, but ought to, be tested by simulation. If minuscule applications are to carry active components with extreme precision to their targets within the human body – be it to mark cells for diagnostic procedures, to support the buildup of tissues and structures, or to deliver therapeutic agents – their functionality needs to be simulated by mathematical models and methods.

One such application is the development of biosensors capable of identifying tumor markers in blood, which holds significant potential for cancer therapy. Nanowires composed of semiconductor material enable recognition of proteins that indicate the presence of existing tumors. Physical simulation of such an electronic system with a biological application presents novel mathematical problems. Nanowires used as sensors are approximately 1 μm in length (one-hundredth of the diameter of a human hair) and 50 nm in diameter, i.e., one-twentieth of their length. The DNA molecules examined by these nanowires are only 2 nm in diameter and bind to receptors on the wire, through which a certain electric current flows. This binding changes the conductivity and current flow of the sensor. A mathematical model suitable for simulation needs to reflect the relevant subsystems of this process: equations that describe the distribution of the charges, coupled with equations reflecting their movement. This creates a system of partial differential equations describing the transport of charges that can be connected with equations describing the motion of molecules on the outside of the sensor. Clemens Heitzinger at Vienna University of Technology’s Institute of Analysis and Scientific Computing is currently developing such models.
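The deterministic core of such a model is a drift-diffusion-Poisson system. A minimal stationary form, written here purely for illustration (the symbols and simplifications are mine, not taken from Heitzinger's publications), couples the electrostatic potential to the motion of the charge carriers:

```latex
% Poisson equation: potential V generated by holes p, electrons n, doping C
-\nabla \cdot \left( \varepsilon \nabla V \right) = q \, (p - n + C)
% Drift-diffusion (continuity) equations with recombination rate R
\nabla \cdot \left( D_n \nabla n - \mu_n \, n \nabla V \right) = R
\nabla \cdot \left( D_p \nabla p + \mu_p \, p \nabla V \right) = R
```

Here ε is the permittivity, q the elementary charge, and D and μ the diffusion coefficients and mobilities of electrons and holes; the boundary conditions at the functionalized sensor surface are where this system is coupled to the equations describing molecular motion outside the sensor.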

Such systems of equations also lend themselves to the multiscale analysis problems that are common in nanotechnology. Both the behavior of fine structures (for example, during the bonding of a molecule) and the properties of the sensor itself are of interest. Describing it all through direct numerical simulation would require computing time beyond realizable limits. But partial differential equations combined with multiscale solution techniques permit simulation of both relatively large and very small structures in a single system. This can save the expense of extremely costly custom-built devices for lab experiments. It also permits conclusions about data and connections that could not be deduced by physical measurement techniques. The technology can target any molecule identifiable through antibodies – not only biological molecules but also poisonous gases.

However, the smaller the examined systems, the greater the importance of random movements and fluctuations. To account for such events, probability theory needs to be integrated into the systems of differential equations, transforming partial differential equations into stochastic partial differential equations. These have random forcing terms and coefficients, can be exceedingly difficult to solve, and have strong connections with quantum field theory and statistical mechanics. This also has numerous applications outside of medicine, for example in information technology. One example is microchips that contain billions of transistors approximately 20 nm in size that cannot all be perfectly identical but need to function despite their fluctuation range. The same mathematical modeling approach permits optimal numerical simulation of such systems.
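As a toy illustration of the idea (every name and parameter below is mine, not taken from the research described): the simplest equation with a random coefficient can be handled by Monte Carlo sampling, solving the deterministic problem for many random draws of the coefficient and averaging the results.

```python
import numpy as np

# Minimal sketch: estimate the mean solution of a 1-D elliptic problem
#     -(a(x) u'(x))' = 1  on (0, 1),  u(0) = u(1) = 0,
# where the coefficient a(x) is piecewise-constant and random -- a toy
# stand-in for a stochastic PDE with random coefficients.

def solve_once(a, h):
    """Finite-difference solve with coefficient values a at the cell faces."""
    n = len(a) - 1                              # number of interior unknowns
    # Tridiagonal stiffness matrix for -(a u')' discretized on a uniform grid.
    main = (a[:-1] + a[1:]) / h**2
    off = -a[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, np.ones(n))

def mc_mean_solution(n_cells=50, n_samples=200, seed=0):
    """Average the solution over random draws of the coefficient field."""
    rng = np.random.default_rng(seed)
    h = 1.0 / n_cells
    acc = np.zeros(n_cells - 1)
    for _ in range(n_samples):
        a = rng.uniform(0.5, 1.5, size=n_cells)  # one random coefficient draw
        acc += solve_once(a, h)
    return acc / n_samples

u_mean = mc_mean_solution()
```

Real solvers for these problems are far more sophisticated (stochastic Galerkin, multilevel Monte Carlo), but the structure is the same: randomness enters through the coefficients, and statistics of the solution are what one computes.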

2016-09-01

Limitations of accounting and a requiem for productivity? The high cost of failing to innovate


Robert J. Gordon is one of the most interesting thinkers in contemporary economics. In The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War (2016), but already in his 2013 TED Talk, The Death of Innovation, the End of Growth, the globally renowned macroeconomist argues that, despite all the chatter about “disruptive” innovation, new ideas with a powerful effect on productivity are simply not forthcoming – and have not been for a considerable time. Gordon sees innovation limited by four principal factors: demographics, education, debt, and inequality.

A major poll conducted by Inc. Magazine among VCs in late 2015 asked which startups were most likely to take off in 2016. Given that Silicon Valley is likely the most innovative spot on this planet, the results are shockingly disappointing: the presumptive champion most likely to succeed through innovation turned out to be Vulcun, an online betting platform allowing users to gamble real or fictitious money on the outcome of online video games. Another top-ten contestant was Juicero, which sells an internet-connected juicer capable of delivering “disruptively fresh juice.”

If one considers the great discoveries of the industrial revolution – say, the steam engine, the combustion engine, electricity, or assembly-line production – it is fairly clear how and why they affected productivity. Alas, while the digital age has changed our daily lives, it is less clear if – and how – it has affected productivity and created greater prosperity. Following the 2008 financial crisis, stagnation set in globally and has endured to date: the euro zone and Japan grow at a rate barely reaching, much less exceeding, one percent of GDP; even the U.S. situation appears dire if one discounts the effect of cheap oil and low costs of capital. But even before the Great Recession, productivity growth barely hovered above zero.

There are many suspected causes: taxation, bureaucracy, public debt, public austerity, the cost of regulation; the list goes on. But regardless of these factors, all industrialized countries find themselves in pretty much the same situation. They face a lack of the kind of innovation that results in growth: IT companies barely create any jobs. The world’s three most highly valued corporations (Google, Apple, Microsoft) collectively employ 250,000 people around the world, while an industrial engineering conglomerate such as Siemens employs 350,000 and Walmart is home to 2,100,000 jobs. Nothing indicates that 3D printing, drones, robotics or self-driving cars will create growth similar to the automobile, electricity, or aviation.

Arguably, GDP is a deficient measurement of the benefits of innovation. IT changes lives and processes qualitatively, but it does not measurably contribute to productivity. I have written before and elsewhere about the inadequacies of contemporary accounting standards, and this phenomenon fits straight into those arguments. Since redistribution of prosperity through growth seems to have reached its limits, at least in the OECD, European socialists argue that personal time should be the new resource to be redistributed: if people cannot be afforded wage increases, they could be rewarded by working fewer hours. But it may take a generation or more to change a work ethic and mentality currently fixated on measuring performance by one’s ability to put in long and hard hours. It is a safe assumption that the pursuit of material wealth has released an immense quantity of human energy. It will not be possible to abandon this priority without offering compelling replacements. It is not clear whether the most appropriate substitute may be the reduction of global poverty, the strengthening of social coherence, or ending global threats to the environment, and society will need extended discourse to redefine itself and the individual’s place in it. While this vision will not materialize without lots of innovation, its character will likely pivot away from offering top rankings to betting platforms and online juicers.

Personally, I sharply dispute the assumption that productivity is flat-lining or in decline. Innovation as we have known it in recent decades has fundamentally changed almost all processes. A networked, Big Data-based society may very well require a fundamental overhaul of its valuation and accounting standards, but there is no denying that productivity, in the sense of greater efficiency as measured by cost, time, and quality, is on the rise. The creation of intellectual property through R&D is difficult to measure, but it has never been at a higher level than today. Another undeniable fact is continued population growth despite statistically rising average living standards. While it is true that R&D expenditures are rising, their yield in terms of GDP is not. But this may well require a qualitatively different valuation of innovative products and their utility for every task encountered in life.

Productivity-based increases in prosperity in recent centuries largely fell into the comparably brief period from 1870 to 1940, the time of the great inventions. But if a 19th-century woman spent two days of her week doing laundry, the invention of the washing machine, dryer, and electric iron changed GDP-measured productivity not because of the quality or significance of those innovations but because women entered the workforce in large numbers, suddenly placing a monetary value on their time. Much the same was true of improved hygiene, with no directly attributable contribution to GDP. Gordon is likely right that productivity increases between 1920 and 1970 raised prosperity more than in the 1,000 years prior. However, to a not insignificant degree, this also had a lot to do with recognizing, valuing and compensating services that previous generations took for granted or considered a mere add-on.
While entertainment and communication have limited influence on industrial production processes, and while it is true that digital devices amount to only a limited percentage of household spending, the picture is far more differentiated than that: many tasks that play a significant role in directly GDP-relevant production are inconceivable without digital technology – from recycling to fintech, from robotics to mass transportation and logistics. Concededly, some innovation may not be much of a job creator, or may indeed be the opposite, a low-end job killer. But value creation is an altogether different matter, more closely related to valuation and accounting standards than to the references we inherited from the industrial age.

2016-08-14

Single-atom-sized data storage


When Richard Feynman gave his famous 1959 speech suggesting that there is “[p]lenty of room at the bottom,” he not only created a vision for the development of nanotechnology, he also created a school of thought at its extreme limit: he speculated about the possibility of arranging single atoms as sufficiently stable building blocks. His idea moved within realistic reach following the 1981 development of the scanning tunneling microscope, which enabled not only imaging surfaces at the atomic level but also arranging and rearranging individual atoms – a breakthrough that earned Gerd Binnig and Heinrich Rohrer of IBM’s Zurich labs the 1986 Nobel Prize in Physics. Feynman himself, along with others, had already received the Nobel Prize in Physics in 1965 for other contributions, primarily to the development of quantum electrodynamics.

Now, a team of physicists at the Delft University of Technology has succeeded in manipulating gaps in a chlorine atomic grid on a copper surface in a manner that enables a never before accomplished density of data storage – some 100 times denser than the smallest known storage media and about 500 times denser than contemporary hard disk drives. Given the massive increase in the size and energy consumption of data centers in the age of cloud computing, miniaturization of storage media to the submolecular nanolevel is key: the contemporary standard of writable memories still requires many thousands of atoms per bit.

The Delft team’s discovery may eventually permit storing the data of every book mankind has ever written on a single storage medium the size of a postage stamp. This is possible because chlorine atoms automatically form a two-dimensional grid on a flat copper surface. By providing fewer chlorine atoms than would be required to cover the copper surface in its entirety, vacancies are created in the grid. One bit consists of a chlorine atom and a vacancy. To store data, atoms are moved individually by a scanning tunneling microscope (STM). The STM’s ultrafine measuring tip creates an electric interaction, as it does when analyzing the atomic structure of surfaces. If a current of about 1 µA flows through the tip, it becomes possible to move a chlorine atom into a vacancy. This process can, of course, be automated, and chlorine atoms are moved into vacancies until the desired field of bits emerges. To keep the chlorine atom grid stable, each bit needs to be bounded by chlorine atoms; therefore, bits are never positioned directly adjacent to each other.
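The encoding can be pictured with a small toy sketch (entirely my invention, not the Delft team's software): each bit is a pair of lattice sites holding one atom ("X") and one vacancy ("."), with the atom's position within the pair encoding 0 or 1, and rows of bits separated by full atom rows so the grid stays stable.

```python
# Toy rendering of the atom/vacancy bit scheme. Conventions are assumed:
# atom in the top site of a pair encodes 0, atom in the bottom site encodes 1.

def encode_bits(bits, row_len=8):
    """Render a list of bits as rows of atom/vacancy pairs plus spacer rows."""
    rows = []
    for start in range(0, len(bits), row_len):
        chunk = bits[start:start + row_len]
        top = "".join("X" if b == 0 else "." for b in chunk)
        bottom = "".join("." if b == 0 else "X" for b in chunk)
        rows += [top, bottom, "X" * len(chunk)]  # spacer row keeps the grid stable
    return "\n".join(rows)

print(encode_bits([0, 1, 1, 0]))
# ->
# X..X
# .XX.
# XXXX
```

Each column of a bit pair contains exactly one atom and one vacancy, mirroring the article's rule that a bit is always an atom plus a hole, never two adjacent vacancies.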

It is fair to note that this technology is still very far removed from commercialization: reading a 64-bit block by STM takes about one minute, while writing the same 64 bits takes two minutes. To date, a kilobyte of rewritable atomic memory has been created – and while 8,000 atomic bits constitute by far the largest atomic-precision structure ever created by humans, that is not at all impressive in an era that considers a 1 TB laptop a middle-of-the-road standard. This becomes even clearer when one realizes that the entire procedure only works in an ultra-clean laboratory environment and at temperatures of -196 °C, lest the chlorine atoms start to clump. Regardless, the experiment demonstrated proof of concept and a first viable path toward space reduction in atomic-level data storage, currently a central issue in the advancement of storage technology – and it would hardly be the first concept to evolve explosively following fundamental experimental proof.
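The distance to practicality is easy to quantify from the figures quoted above (64 bits read per minute, 64 bits written per two minutes, an 8,000-bit memory):

```python
# Back-of-the-envelope arithmetic on the reported STM read/write speeds.
TOTAL_BITS = 8000          # the demonstrated kilobyte-scale atomic memory
BLOCK_BITS = 64            # bits handled per STM pass

blocks = TOTAL_BITS / BLOCK_BITS       # 125 blocks
read_minutes = blocks * 1              # ~2 hours to read the whole memory
write_minutes = blocks * 2             # ~4 hours to write it
print(read_minutes, write_minutes)     # -> 125.0 250.0
```

Roughly two hours to read, and over four hours to write, a single kilobyte – which is precisely why this remains a proof of concept rather than a product.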

2016-07-03

When cars sit on death panels

Many have warmed up to Tesla, Google, Apple and others in the U.S., EU and Japan promoting the idea – and feasibility – of self-driving, even fully autonomous cars. Meanwhile, little attention has been paid to the fact that the decisions drivers leave to their cars still need to be made by someone, somehow. Life in the digital age implies an algorithm applying artificial intelligence and ethics in lieu of the human kind.

One of the greatest anticipated benefits of fully autonomous cars is a dramatic reduction of accidents – by up to 90%, assuming that such cars represent a majority of vehicles in circulation. That should be achievable reasonably soon, since 10 million of them are expected to be on the road by 2020. The elimination of “pilot error” alone – for example, the influence of alcohol and drugs, distraction by smartphones or other devices, and speeding – rationally accounts for such a forecast. But even fully autonomous cars will encounter unforeseeable events, and some of those will result in accidents, including fatal ones. NHTSA has started to examine the first fatality involving a Tesla Model S, which suggests that full autonomy is the only acceptable solution for self-driving cars under most circumstances, not least because reaction speed and decisional quality and predictability can be developed to surpass human capabilities by a considerable margin.

But making and carrying out decisions does not absolve manufacturers, drivers, and society at large from the burden of analysis and – sometimes tragic – choices. How is an “autonomous” car to react when a child suddenly runs into the traffic lane but the evasive action would hit another pedestrian? Should the car be programmed to accept death for its passenger(s) if, say, ten other lives will (or merely can) likely be saved? AI is a matter of coding, but also of machine learning.

Researchers including Jean-François Bonnefon of the Toulouse School of Economics, Azim Shariff of U. Oregon’s Culture and Morality Lab, and Iyad Rahwan of MIT’s Scalable Cooperation Group polled some 2,000 individuals on their views and values regarding ethical choices in autonomous vehicles. The participants considered the questions in different roles and situations, as either uninvolved bystanders or as passengers of the car. They were then asked how they would like to see their own autonomous car programmed.

A clear majority of participants preferred to minimize the total number of casualties: 76% chose to sacrifice the driver to save ten other lives. The scenario pitting one driver against one pedestrian, however, produced a virtually exact complementary outcome: only 23% approved of sacrificing the driver. And passengers who were not drivers took fundamentally different positions.

While utilitarian ethics may leave questions to be probed, there are also commercial considerations for the automotive industry: if autonomous cars are programmed to reflect generally accepted moral principles, or legally ordained ones, fewer individuals may be prepared to buy them. Another option may be to allow for “scalable morality” by letting the buyer or driver choose, to some extent, how selfish or altruistic to set the car’s “ethics thermostat.” Welcome to machine ethics. Tort law reaches a new frontier by naming software (or its programmer, i.e., the manufacturer) as a defendant – with prospects for mass recalls at the stroke of a judicial pen.
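What an "ethics thermostat" might look like in code can be sketched as follows. This is purely hypothetical – no manufacturer's algorithm, just a weighted cost function of my own invention in which a selfishness parameter scales how heavily occupants count relative to bystanders when every available maneuver costs lives:

```python
# Hypothetical "ethics thermostat" sketch. Weights and option format are
# illustrative assumptions, not any real vehicle's decision logic.

def choose_maneuver(options, selfishness=0.0):
    """Pick the maneuver with the lowest weighted expected casualties.

    options: list of (name, expected_occupant_deaths, expected_bystander_deaths)
    selfishness: 0.0 = purely utilitarian (all lives weigh equally);
                 toward 1.0, occupant lives count more, bystander lives less.
    """
    def cost(opt):
        _, occupants, bystanders = opt
        return (1 + selfishness) * occupants + (1 - selfishness) * bystanders
    return min(options, key=cost)[0]

dilemma = [("swerve", 1.0, 0.0),   # sacrifice the driver
           ("brake", 0.0, 10.0)]   # protect the driver, hit ten pedestrians

print(choose_maneuver(dilemma, selfishness=0.0))   # -> swerve
print(choose_maneuver(dilemma, selfishness=0.95))  # -> brake
```

The two outputs mirror the poll: a utilitarian setting sacrifices the driver to save ten, while a sufficiently selfish setting does not – which is exactly the tension between what respondents endorse in the abstract and what they would buy.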

2016-06-01

Megaprojects, meet reality

In a relatively recent assessment by Oxford Saïd Business School’s Bent Flyvbjerg, megaprojects are characterized, roughly speaking, by budgets in the billions of dollars, while major projects run in the hundreds of millions and garden-variety projects amount to tens of millions and less.
 
In the fifteen years to come, a deal volume of $35-90 trillion in megaprojects is expected to materialize worldwide. At the high estimate, that exceeds the current annual GDP of the planet; at the low estimate, it is still twice the GDP of the United States or the European Union. Annually, it amounts to at least the GDP of France and up to twice the GDP of Germany. So megaprojects are not a footnote in the global economy. Quite the contrary: they are often the centerpieces of substantial economic stimulus plans.
 
Yet 90 percent of all megaprojects – however one defines the category did not seem to matter terribly much in a study of about 1,000 of them – result in grotesque cost and time overruns relative to budget. The primary diagnosis is that megaprojects, be they major infrastructure projects or capital investments, are typically one of a kind: without comparable precedent, complex, and first of their kind at least in their setting. It is also fairly obvious that, under such circumstances, challenges emerge only midstream. Regardless of political or economic system, cost overruns average 55 percent. Munich-based consultancy and study author Roland Berger concludes that savings of just 10 percent through the avoidance of management errors and omissions would amount to $3.5-9.0 trillion – not a particularly ambitious assessment given average cost overruns of 55 percent. There is considerable potential for intelligent systems engineering as well as for improved risk management. The general problem with high-quality cost control and risk minimization is that damnum cessans (damage avoided) is a very tricky base for performance reward and incentive compensation, as opposed to lucrum emergens – and thus, frequently enough, the damage is not avoided.
 
Criticism of megaprojects has addressed a wide range of issues: their top-down planning and adverse effects on certain communities as much as their extreme complexity in technical as well as human terms, a long record of poor delivery, their remodeling of urbanism in the spirit of neoliberalism in recent years, introverted governance lacking democratic participation and accountability, global economic positioning at the expense of local issues, physical and social disconnect from the context of the host city or region, and, in too many cases, a lack of public benefit and social outcomes: they “foreclose upon a wide variety of social practices, reproducing rather than resolving urban inequality and disenfranchisement” and “inhibit the growth of oppositional and contestational practices.”
 
Megaprojects appeal to policymakers consistently for a quadriga of reasons set forth by Flyvbjerg, all rooted in ambition and a yearning for a legacy: their technological, political, economic and aesthetic dimensions. A seldom-mentioned reason should be added: megaprojects mean mega-budgets and present unique opportunities for the credible justification of elevated costs due to unforeseen and quasi-unique challenges. Where challenges of coordination and integration run high, it is almost impossible to lay blame for relatively minor deviations from planning. But if Roland Berger arrives at an average of 55 percent in cost overruns, it shows an effect very similar to the one I hold ultimately responsible for the 2007-2008 financial crisis: too many – virtually everyone – tried to pass off 98-99 percent compliance as 100 percent. This has a way of adding up to a meltdown, or to extreme leverage in the compilation of adverse effects. I am also inclined to believe that the average of 55 percent deviation is very generously calculated, or arrived at by including many projects in the sample that present fewer opportunities for overruns. There are simply too many megaprojects in the 200 percent overrun range to support anything less than an absolutely shocking figure.
 
Interestingly, less than a decade after an overrun disaster, the mere fact of each project’s completion and the pride and identification it entails furnish proof of Sir Frederick Henry Royce’s adage: “The quality will remain long after the price is forgotten.” In a way, megaprojects are the Rolls-Royce of legacy and pride – for all categories of stakeholders. Secure knowledge of the fait accompli of the investment and its consequences is the reason why so many get away safely with so much. It is not a sign of personal failure or wrongdoing so much as it is indicative of systemic flaws and of the absence of genuine system auditing and adjudication.