Pages

2015-03-31

The Graying of Swan Lake


Seldom have I been more impressed with the subtle accuracy of a generalization than in the case of Nassim Nicholas Taleb’s dictum, “you never win an argument until they attack your person.” As an essayist and statistician, this Lebanese-American currently teaching at NYU Polytechnic has developed some of the most original critical thinking on risk analysis and risk engineering. It reaches far beyond mathematical finance and has game-changing consequences for decision-making overall. Not many quants have had their writings named among the dozen most influential books since WWII. No stranger to controversy, Taleb first asserted in 2007 that statistics is inherently incomplete as a tool set because it cannot predict the risk of rare events (which he calls Black Swans). Yet despite their rarity, and despite being impossible to predict by extrapolating from existing data, these events have disproportionately vast effects.
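A tiny numerical illustration, not from Taleb and with purely illustrative parameters, makes the point: fit a thin-tailed (Gaussian) model to everyday fluctuations and a “ten-sigma” event looks impossible; assume a fat-tailed (Pareto) world and the same event becomes routine. The choice of distributions and the tail exponent below are assumptions chosen only to show the contrast.

```python
from scipy import stats

# Thin-tailed versus fat-tailed view of the same "rare" event.
# Parameters are illustrative assumptions, not fitted to any data.
gauss  = stats.norm(loc=0, scale=1)
pareto = stats.pareto(b=2.5)      # tail exponent 2.5 (assumption)

threshold = 10                    # an extreme, "ten standard deviations out" event
print("Gaussian tail probability:", gauss.sf(threshold))   # ~7.6e-24: effectively "never"
print("Pareto tail probability  :", pareto.sf(threshold))  # ~3e-3: happens routinely
```

The point of the sketch is not the particular numbers but the gap of some twenty orders of magnitude between the two models’ assessments of the identical event.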

Since I can remember, I have heard people acknowledge, with a snicker, the ‘theoretical’ possibility of systemic risk, of a meltdown of basic operating infrastructure and assumptions. Like the presumption of innocence, it had become one of those exercises in lip service everyone ritually mentioned while appealing to a near-universal consensus (end-time theorists of all stripes excepted) that systemic risk was just a theoretical hypothesis. Prior to 2008, who except a few eyewitnesses of 1929 et seq. would have given serious consideration to “major banks not lending to their peers,” bringing the money market to a virtual standstill? Who would have expected that banks would, in essence, depart the lending business altogether – at least to the extent it could not be pushed off their balance sheets? Or that there would not be significant demand for major-ticket securities blessed by all three major rating agencies with their highest medals of honor? Or who would have perceived the Swiss National Bank as a source of global instability?

If we are to grasp intellectually, and deal meaningfully with, the effects of entirely random and unpredictable events in life, we require different tools than conventional wisdom traditionally supplies – and that includes transcending conventional statistics.

2015-02-28

Ptychography: A different approach to sub-nanometer resolution imaging


It has been a notable phenomenon for a considerable time that almost every major university invents its own nano-imaging techniques. Usually this results in a particular piece of technology with limited applications that does not necessarily become an industry standard. Although it may produce interesting results and demonstrate alternative options, that does not mean it will end up relevant. While it is quite worthwhile to take a close look at the strengths and benefits of individual approaches, extolling their virtues may have to be postponed for years, in certain cases decades, until the consensus of market forces has articulated a clear preference along with the reasons for it.

That said, one needs to take into consideration that all technology, even what has become the industry standard, is provisional, and its continued development is cross-fertilized by alternative approaches. In the longer term, there is no room in technology development for ‘not invented here’ – ignoring third-party solutions because of their external origins. With few exceptions, looking sideways to leverage other people’s work behooves all further R&D, since it avoids reinventing the wheel while highlighting potential for improvement.

While electron microscopy (Ernst Ruska and Max Knoll, 1931, ~50 nm resolution) opened the door to imaging the molecular dimension, the scanning tunneling microscope (Gerd Binnig and Heinrich Rohrer,[1] 1981, ~0.1–0.01 nm resolution) enabled imaging and manipulating individual atoms in a sample. Electron microscopy has since been refined into deterministic electron ptychography at atomic resolution. Ruska as well as Binnig and Rohrer received the 1986 Nobel Prize in Physics for their contributions to electron microscopy. The next leap came in scanning transmission x-ray microscopy (STXM), of which a special case is ptychography, a form of diffractive imaging using inverse computation of scattered intensity data. The name derives from the Greek ptyx for fold or layer, as in diptychon, triptychon, polyptychon. Developed by Walter Hoppe in the late 1960s,[2] it was subsequently extended to applications in both the x-ray and the visible spectrum, yielding a resolution improvement of more than a factor of 3, so that it can, in principle, reach wavelength-scale resolution. Even at typical resolutions of 0.24 nm, its image quality is improved over standard scanning tunneling microscopy and therefore useful at the nanoscale. Its principal limitation was, until recently, the need to avoid vibrations in the x-ray microscope. A sample is scanned through a minimal aperture with a narrow, coherent x-ray beam generated by a synchrotron. Smart algorithms based on Fourier transformations replace optical or magnetic lenses. Or, as John Rodenburg put it,

“We measure diffraction patterns rather than images. What we record is equivalent to the strength of the electron, X-ray or light waves which have been scattered by the object – this is called their intensity. However, to make an image, we need to know when the peaks and troughs of the waves arrive at the detector – this is called their phase.
     The key breakthrough has been to develop a way to calculate the phase of the waves from their intensity alone. Once we have this, we can work out backwards what the waves were scattered from: that is, we can form an aberration-free image of the object, which is much better than can be achieved with a normal lens.”
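To make this concrete, here is a minimal numerical sketch, in Python with NumPy, of the kind of iterative phase retrieval ptychography relies on. Everything in it is an illustrative assumption – a synthetic complex-valued object, a known circular probe, noiseless far-field intensities, an ePIE-style update – not a description of any actual instrument or of the PSI/TU Munich code:

```python
import numpy as np

# --- Toy complex object (assumption): smooth amplitude and phase on a small grid ---
N = 64
yy, xx = np.mgrid[0:N, 0:N]
amplitude = 0.8 + 0.2 * np.exp(-((xx - 32)**2 + (yy - 32)**2) / 200.0)
phase = 0.5 * np.sin(2 * np.pi * xx / N) * np.cos(2 * np.pi * yy / N)
obj_true = amplitude * np.exp(1j * phase)

# --- Known illumination ("probe"): a small circular aperture (assumption) ---
probe = (((xx - 32)**2 + (yy - 32)**2) < 8**2).astype(complex)

def shift(a, dy, dx):
    return np.roll(np.roll(a, dy, axis=0), dx, axis=1)

# --- Overlapping scan positions, as in a ptychographic raster scan ---
positions = [(dy, dx) for dy in range(-16, 17, 8) for dx in range(-16, 17, 8)]

# --- "Measured" data: only far-field intensities, all phase information discarded ---
intensities = [np.abs(np.fft.fft2(shift(probe, dy, dx) * obj_true))**2
               for dy, dx in positions]

# --- ePIE-style reconstruction: recover the lost phase from intensities alone ---
obj = np.ones((N, N), complex)                      # initial guess
for _ in range(100):
    for (dy, dx), I in zip(positions, intensities):
        p = shift(probe, dy, dx)
        exit_wave = p * obj
        F = np.fft.fft2(exit_wave)
        F = np.sqrt(I) * np.exp(1j * np.angle(F))   # enforce measured modulus, keep phase estimate
        new_exit = np.fft.ifft2(F)
        obj += np.conj(p) / (np.abs(p).max()**2 + 1e-8) * (new_exit - exit_wave)

# Error over the illuminated area; it should shrink as the iterations converge.
coverage = sum(shift(np.abs(probe), dy, dx) for dy, dx in positions) > 0
err = np.mean(np.abs(np.abs(obj[coverage]) - np.abs(obj_true[coverage])))
print(f"mean amplitude error over the scanned area: {err:.3f}")
```

The essential ingredient is the overlap between neighboring scan positions: it is the redundancy in the overlapping diffraction patterns that makes the phase recoverable at all.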

As a 2013 study conducted jointly by Switzerland’s Paul Scherrer Institute (PSI) and the Technical University of Munich showed, progress in imaging and metrology increasingly correlates with sophisticated control and comprehensive characterization of wave fields. This technology makes it possible to image an entire class of specimens that previously could not be observed particularly well. Not only can residual vibrations of the x-ray microscope be compensated for by purely mathematical and statistical methods, yielding much higher image quality, but ptychography also makes it possible to characterize fluctuations within the specimen itself, even if they occur faster than individual frames can be acquired. It may become possible to determine changes in the magnetization of individual bits in high-density magnetic storage media.

Qualitative image improvements accomplished by this technology are notable:


- Computer simulation enables testing of the diffraction images composed by the system’s algorithms, which allows simulating both instrumentation effects and effects of and within the specimen. This matters because it confirms that the specimen and its dynamics are accurately reflected in the algorithmic images.
- 3D images may be generated by repeated scans of de facto 2D samples at different tilt angles.
- The PSI/TU Munich method renders high-resolution images of mixed states within the sample. These may include quantum mixtures or fast stationary stochastic processes such as vibrations, switching or steady flows, which can generally be described as low-rank mixed states. The dynamics of a sample are often the very objective of an experiment.
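What a “low-rank mixed state” means in practice can be sketched in a few lines of Python/NumPy. The toy specimen, the vibration amplitudes and the weights below are assumptions for illustration only: a detector that integrates over fast fluctuations records the incoherent sum of a few modes’ diffraction intensities, which no single fully coherent wave field can reproduce.

```python
import numpy as np

N = 128
x = np.arange(N)

# A simple 1-D complex "specimen" (assumption: toy object, not real data)
base = (0.5 + 0.5 * np.exp(-((x - 64) / 20.0)**2)) * np.exp(1j * 0.3 * np.sin(2 * np.pi * x / N))

# Fast stationary fluctuation: the specimen vibrates between a few shifted states
shifts = [-2, 0, 2]           # sub-frame vibration, in pixels (assumption)
weights = [0.25, 0.5, 0.25]   # occupation probabilities of each state (assumption)
modes = [np.roll(base, s) for s in shifts]

# Detector integrates over many fluctuation periods: the measured intensity is the
# *incoherent* (weighted) sum of the modes' diffraction intensities - a low-rank mixed state.
I_mixed = sum(w * np.abs(np.fft.fft(m))**2 for w, m in zip(weights, modes))

# A fully coherent (rank-1) model built from the averaged wave field cannot match it.
I_coherent = np.abs(np.fft.fft(sum(w * m for w, m in zip(weights, modes))))**2

print("relative mismatch of the rank-1 model:",
      np.linalg.norm(I_mixed - I_coherent) / np.linalg.norm(I_mixed))
```

The nonzero mismatch is exactly the signature that mixed-state ptychography exploits: the fluctuation leaves a fingerprint in the data that a handful of modes can describe.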



[1] Nanotechnology – as I do personally – owes Heinrich Rohrer an immense debt of gratitude. The passing in May 2013 of this disciple of Wolfgang Pauli and Paul Scherrer, and longtime IBM Fellow, was an immense loss. IBM’s Binnig and Rohrer Nanotechnology Center in Rüschlikon, Zurich was named after both physicists.
[2] Walter Hoppe, “Towards three-dimensional ‘electron microscopy’ at atomic resolution,” Die Naturwissenschaften 61(6) (1974), 239–249.

2015-01-06

Saved From the Singularity: Quantum Effects Rule Out Black Holes

Laura Mersini-Houghton is perhaps one of the most interesting and unlikely transformative forces of nature in cosmology today. Born the daughter of a professor of mathematics in Tiranë, Albania, during Enver Hoxha’s quasi-Maoist dictatorship – not known as fertile ground for astrophysicists – she teaches today at the University of North Carolina at Chapel Hill, not at Princeton, Caltech, or Harvard. And yet she seized Archimedes’ proverbial lever and took a stand where the evidence led her. She may have moved the world of theoretical physics by blowing up most of her colleagues’ assumptions about black holes with a quantum effect that may reverberate for a long time across her discipline’s standard narrative of how our universe began and of some of its most intractable phenomena, black holes. As we had all long heard, the universe came into being with a Big Bang – allegedly. But then, Mersini-Houghton does not believe in “the universe.” Her signature line of argument, within the landscape of string theory,[1] has long been for the existence of a multitude of universes as wave functions – a “multiverse.” In that aspect of string theory, for which at least some hard evidence appears to exist,[2] our universe is merely one of 10^500 possible ones,[3] as Hugh Everett, III had first proposed in his 1957 many-worlds interpretation of quantum physics.[4] She claims that, as a logical result, standard Big Bang cosmology has been plain wrong. And, no, Mersini-Houghton is assuredly not a scientific undercover agent of creationism, either. In fact, she professes: “I am still not over the shock.”[5]

2014-12-04

In silico: When biotech research goes digital

Since the dawn of the life sciences, observations and experiments have been conducted on live objects as well as on dead ones. The crudity of existing analytic methods made most meaningful in vivo experiments on humans increasingly unacceptable except in rare cases in extremis, yet working with dead matter was evidently inadequate. Science took the first step toward modeling by resorting to animal experiments. The concept was based on the assumption that all relevant animal structures and processes are similar to human ones, ceteris paribus. That assumption has been increasingly recognized as flawed and problematic, not least because of growing public awareness and disapproval of the quantitative as well as qualitative suffering inflicted on laboratory animals in the process, and because of the emergence of the concept that at least certain animate beings have recognizable rights.

But ethical issues aside, contemporary research increasingly recognized that existing models had two severe limitations. First and foremost, they differ significantly, and in critical aspects, from the human structures and processes they are intended to approximate. Second, replication and variation of experiments are frequently and quite substantially limited by two critical factors: time and cost. As a result, live (or formerly live) models could no longer be considered valid approximations in a growing number of areas, calling for alternatives capable of bypassing these restrictions while also handling the dramatic increase in complexity on which any really useful approximation depends.

As a result, experiments in silico were conceived, interfacing computational and experimental work, especially in biotechnology and pharmacology. There, computer simulation replaces biological structures and wet experiments. It is done entirely outside living or dead organisms and requires a quantifiable, digitized mathematical model of the organism with appropriate similarities, analogies, and Kolmogorov complexity (a central concept of algorithmic information theory),[1] relying in part on category theory[2] to formalize known concepts at high levels of abstraction.[3] So, always presuming the availability of a high-quality computational mathematical model of the biological structure it is required to represent adequately, “executable biology” has become a rapidly emerging and exciting field expected to permit complex computer simulation of entire organisms, not merely partial structures. A pretty good digital molecular model of a rather simple cell has already been created at Stanford. Much evidence suggests that this could be a significant part of the future of synthetic biology and of neuroscience – the cybernetics of all living organisms – where a new field, connectomics, has emerged to shed light on the connections between neurons. In silico research is expected to increase and accelerate the rate of discovery while simultaneously reducing the need for costly lab work and clinical trials. But languages must be defined that are sufficiently powerful to express all relevant features of biochemical systems and pathways. Efficient algorithms need to be developed to analyze models and to interpret results. Finally, as a matter of pragmatic realism, modeling platforms need to become accessible to and manageable by non-programmers.
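As a flavor of what “executable biology” looks like at its most elementary, the following sketch simulates a single-gene expression model with Gillespie’s stochastic simulation algorithm. The reactions and rate constants are illustrative assumptions, not a model of any real organism or of the Stanford whole-cell work:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy executable-biology model: stochastic expression of a single gene.
# Reactions:  gene -> gene + mRNA   (propensity k_tx)
#             mRNA -> 0             (propensity k_deg * mRNA)
k_tx, k_deg = 2.0, 0.1       # illustrative rate constants (assumptions)
t, t_end = 0.0, 200.0
mrna = 0
events = 0

# Gillespie's stochastic simulation algorithm (direct method)
while t < t_end:
    rates = np.array([k_tx, k_deg * mrna])
    total = rates.sum()
    if total == 0:
        break
    t += rng.exponential(1.0 / total)      # waiting time to the next reaction
    if rng.random() < rates[0] / total:    # which reaction fires?
        mrna += 1
    else:
        mrna -= 1
    events += 1

print(f"{events} reaction events; mRNA copies at t≈{t:.0f}: {mrna} "
      f"(analytic stationary mean: {k_tx / k_deg:.0f})")
```

Even this two-reaction toy illustrates the appeal of the approach: the “experiment” can be rerun thousands of times, with varied parameters, at essentially zero cost – exactly what live models cannot offer.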

2014-11-14

Self-organizing Robots: What we saw was nature – and the promise of Open Technology

Sooner or later, much of the research on systems theory and complexity arrives at the topic of self-organization, the spontaneous local interaction between elements of an initially disordered system, as analyzed by Ilya Prigogine. This is so for a variety of reasons. First off, self-organization, if statistically significant and meaningfully predictable, may be superior to organization by command because, aside from factors influencing its design, it does not require instruction, supervision or enforcement – the organizer may spare himself critical components of due diligence (and the liability potentially resulting from it), which, in and of itself, can amount to a rather significant difference in cost-efficiency for any purpose-oriented organization.

Self-organization has been receiving much attention since the dawn of intelligent observation of swarms of fish and birds, of anthills, beehives and – with increasingly obvious similarities – human behavior in cities. Later, a thermodynamic view of the phenomenon prevailed over the initial cybernetics approach. Building on Norbert Wiener’s early observations on the mathematics of self-organizing systems,[1] self-organizing robots follow algorithms that rely on sensor data, interact with neighbors, and look for patterns. Such pattern-oriented behavior makes the swarm as a whole far more resilient, self-assembling into structures that, beyond a certain numerical threshold and degree of complexity, can scarcely be destroyed by any outside influence.
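A minimal sketch of such an algorithm – each agent reacting only to neighbors it can sense, with no central command – is the classic flocking (“boids”) scheme. The agent count, sensing radius and weights below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Minimal flocking sketch: each agent uses only local "sensor data"
# (neighbors within RADIUS) and three rules: alignment, cohesion, separation.
N, STEPS, RADIUS = 50, 200, 5.0
pos = rng.uniform(0, 50, size=(N, 2))
vel = rng.normal(0, 1, size=(N, 2))

for _ in range(STEPS):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < RADIUS) & (d > 0)
        if not nbrs.any():
            continue
        align    = vel[nbrs].mean(axis=0) - vel[i]     # match neighbors' heading
        cohere   = pos[nbrs].mean(axis=0) - pos[i]     # move toward the local center
        separate = (pos[i] - pos[nbrs]).sum(axis=0)    # avoid crowding
        new_vel[i] += 0.05 * align + 0.01 * cohere + 0.02 * separate
    vel = new_vel / np.maximum(np.linalg.norm(new_vel, axis=1, keepdims=True), 1e-9)
    pos += vel

# Polar order parameter: 1.0 would mean all agents perfectly aligned -
# order emerging without any leader or global instruction.
print("polar order:", np.linalg.norm(vel.mean(axis=0)))
```

The design point is that no agent knows the global state; coherent collective motion nonetheless emerges from purely local rules, which is the property that makes such swarms resilient.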

This is also where robotics approximates nature and the observation of aspects of “swarm intelligence” in cells, social insects and more highly developed species of social animals such as birds and fish. Swarm intelligence is the collective behavior of decentralized, self-organized systems.

2014-10-18

The madness of automated response: warfare on autopilot

As we commemorate the centenary of the outbreak of World War I – arguably the first engagement that could with some justification be characterized as “automated response” – it behooves us to take a look at the development of this phenomenon in the years since, but, even more importantly, at its anticipated future.

The automated response that led to WWI was purely legal in nature: the successive reactions following the general mobilization of the Austro-Hungarian Empire were rooted in a network of treaties of alliance. The system contained a fair number of “circuit breakers” at almost every turn, even if using them would have amounted, in the contemporary view, to breaches of a treaty obligation. This situation found a direct successor in Article 5 of the North Atlantic Treaty of April 4, 1949 (the Washington Treaty establishing NATO), which to date has been invoked only once, following 9/11.

But it is not so much the automatism based on honor, credibility and other social compulsion on an international scale that will likely determine automated responses in the future. It is much more a technical and systemic automated response that will increasingly, for a variety of reasons, take reactions out of human hands.

For one, the response time of modern weapon systems is shrinking at an increasing pace. Comparable to the situation in computer trading, the percentage of situations – regardless of their significance – in which a human response will, under almost any imaginable circumstance, be too slow and hence come too late, will grow.

From a warfighter’s perspective, therefore, automated response is a good thing: if a threat is identified and an incoming munition destroyed before it becomes manifest, it matters little whether that happens by human intervention or fully under the control of technology. Of course, a number of concerns are evident:

2014-09-02

Patents on Mathematical Algorithms

It has long been a distinguishing mark of mathematics that ideas and concepts generated by the queen of the quantitative and formal sciences are incapable of patent protection. If we are charitable, one might say this is because the queen does not touch money. But, as so often in matters legal, this is a case of “not so fast,” because there are, of course, exceptions. And questionable logic.

First, let’s take a look at the topology of mathematics in the USPTO’s value system: while the Office requires mathematics coursework as a prerequisite for its employees working as patent examiners in the computer arts, it does not recognize mathematics courses as qualifying for patent practitioners. Quite the contrary: while bachelor’s degrees in 32 subjects constitute adequate proof of requisite scientific and technical training, not to mention a full two-and-one-half pages of acceptable alternatives, the General Requirements Bulletin for Admission to the Examination for Registration to Practice in Patent Cases Before the United States Patent and Trademark Office lists “Typical Non-Acceptable Course Work” that is not accepted to demonstrate scientific and technical training, notably “… machine operation (wiring, soldering, etc.), courses taken on a pass/fail basis, correspondence courses, home or personal independent study courses, high school level courses, mathematics courses, one day conferences, …” Consequently, it cannot come as a surprise that there are precious few patent attorneys with the significant mathematical training required to understand the mathematics underlying contemporary, much less future, science and technology.

2014-08-05

Gecko Nanoadhesion Research – A classical paradigm of biomimetics

Biomimetics is perhaps the oldest form of scientific plagiarism – science plagiarizing nature. It is “the science of systems which have some function copied from nature, or which represent characteristics of natural systems or their analogues.” It is also one of the most fascinating and most fertile areas of engineering and applied science (from airplane wings to Velcro or echolocation). Living organisms have evolved particularly well-adapted structures and materials over geological time by means of natural selection. Especially in micro- and nanotechnology, it is difficult to imagine operating without the remarkable opportunities of biomimetic methodology. Harvard established the Wyss Institute for Biologically Inspired Engineering with precisely that objective in mind. Biomimetic synthesis is an entire field of organic chemistry.

Geckos, an ectothermic infraorder of lizards, are one of nature’s most inspiring evolutionary mysteries: they range in length from 1.6 cm to 60 cm. Some species are parthenogenetic, perhaps one reason why they occur throughout the world in warm climates, even on remote islands.

Geckos are capable of running on almost any surface: smooth or rough, wet or dry, clean or dirty. They do this at any orientation, up a wall as well as inverted along a ceiling – though not very well on Teflon, and not very well under water. Miraculously, their toes are covered by millions of micron-scale bristle-like structures (setae) that constitute a self-cleaning dry adhesive. On their foot pads, the micrometer-scale setae branch out into nanometer-scale projections (spatulae). It is generally assumed that this exceptional adhesive power – which does not rely on any “sticky” substance – exploits molecular attraction (van der Waals forces) between the gecko’s toe pads and the surface it is walking on. But very recent research suggests that electrostatic forces may be primarily responsible.

At many research institutions, including Oxford, UC Berkeley, Stanford, Northwestern, Carnegie Mellon, UMass, UAlberta, UManchester and numerous other places, gecko research has become a very exciting topic for biomimeticists, especially in nanotechnology and adhesives research. No wonder DARPA became interested early in its potential military applications for scaling vertical surfaces, but so are NASA, the NIH and BAe. The millions of micron-scale setae on each toe combine to form a dry adhesive that is self-cleaning and does not involve a glue-like substance. The microfibers, or setae, are activated by dragging or sliding the toe parallel to the surface. The tip of a seta ends in 100 to 1,000 spatulae measuring just 100 nanometers in diameter and 0.2 μm in length. In the gecko, evolution has formed intricate nanostructures that work together in a hierarchy of spatulae, spatula stalks, setal stalks, setal arrays, and toe mechanics. Each square millimeter of toe pad contains about 14,000 setae with a diameter of 5 μm. An average gecko’s setae can, in principle, support a weight of 133 kg, while its body weight is 70 g. The adhesion force of a spatula varies with the surface energy of the substrate to which it adheres. The surface energy that originates from long-range forces, e.g., van der Waals forces, follows from the material’s structure below the outermost atomic layers, up to 100 nm beneath the surface. The setae are lubricated by phospholipids generated by the gecko’s body, which also enable it to detach its feet prior to each subsequent step without slowing down.
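A back-of-envelope check of these figures is straightforward. The per-seta force (~200 micronewtons) and the total seta count (~6.5 million) used below are commonly cited laboratory values rather than numbers from this text, so treat the sketch as an order-of-magnitude illustration only:

```python
# Back-of-envelope check of the adhesion figures quoted above.
setae_per_mm2 = 14_000      # from the text
seta_force_N  = 200e-6      # ~200 micronewtons per seta (assumption, commonly cited value)
total_setae   = 6.5e6       # ~6.5 million setae per gecko (assumption)

total_force_N = total_setae * seta_force_N        # ~1300 N
supported_kg  = total_force_N / 9.81              # ~133 kg, matching the figure above
pad_area_mm2  = total_setae / setae_per_mm2       # implied total adhesive pad area

print(f"theoretical supported mass: {supported_kg:.0f} kg")
print(f"implied total pad area: {pad_area_mm2:.0f} mm^2")
print(f"safety factor over a 70 g gecko: {supported_kg / 0.070:.0f}x")
```

The arithmetic confirms the internal consistency of the quoted numbers: a roughly 2,000-fold safety margin over the animal’s own body weight, which is why only a fraction of the setae ever needs to engage.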

Principal commercial applications of Gecko research have focused to date on biomimetic adhesives, a kind of superglue that attaches equally well to wet surfaces and to dry ones. “Geckel,” as a first product is called, has combined a coating of fibrous silicone with a polymer mimicking the “glue” employed by mussels that allows them to stick to rocks while they and the surfaces they adhere to are being pounded by giant ocean waves in perfect storms.

“Gecko tape” was developed as early as 2003 at the University of Manchester but has only been produced in small quantities; scaling up production has proved commercially difficult. In the life sciences, sheets of elastic, sticky polymers could soon replace sutures and staples, including in laparoscopic surgeries, provide long-term drug-delivery patches for expanding and contracting areas such as cardiac tissue, and supply stem-cell-attracting factors for tissue regeneration.

The gecko’s ability to “run up and down a tree in any way”, as Aristotle observed in his History of Animals (Περὶ τὰ Ζῷα Ἱστορίαι), continues to inspire even in the robotic age: Stanford’s “Stickybot” likely has applications primarily in outer space and in security. One common characteristic of visionary biomimetic technology is, however, that its potential becomes plausible, and sometimes obvious, far sooner than its commercial viability. That gap is – and will likely remain for a considerable time – one of the central challenges of technology assessment.


2014-07-27

Micro and nano photography

The FEI Company, a manufacturer of scanning and transmission electron microscopes capable of imaging objects at the nanoscale, organizes, in co-operation with National Geographic, an annual international competition for pictures obtained using FEI imaging equipment.

This unusual and eerily beautiful photography, at both the micro and the nano scale, may be viewed on FEI’s Flickr page:


The finalists of the most recent competition are featured by NBC News under the title “Wonders of unseen worlds”:



2014-07-12

Artificial Intelligence or What does it mean to be human?



We live in an age in which science has ceased to reside in a quaint ivory tower. Even analytic philosophy took a quantitative turn and became inextricably entwined with computer science, as Bertrand Russell diagnosed and predicted as early as 1945:

"Modern analytical empiricism [...] differs from that of Locke, Berkeley, and Hume by its incorporation of mathematics and its development of a powerful logical technique. It is thus able, in regard to certain problems, to achieve definite answers, which have the quality of science rather than of philosophy. It has the advantage, in comparison with the philosophies of the system-builders, of being able to tackle its problems one at a time, instead of having to invent at one stroke a block theory of the whole universe. Its methods, in this respect, resemble those of science. I have no doubt that, in so far as philosophical knowledge is possible, it is by such methods that it must be sought; I have also no doubt that, by these methods, many ancient problems are completely soluble."[1]

While neuroscience has made some rather remarkable inroads into neurobiology and behavior,[2] it is in many other ways still in its infancy – yet artificial intelligence (A.I.) purports to mimic and surpass the human brain on its way to a “singularity” that Ray Kurzweil, Google’s director of engineering, predicts will happen circa 2045. And Stephen Hawking cautions that creating artificial intelligence might be the biggest, and perhaps the last, event in human history “unless we learn how to avoid the risks.”

2014-06-01

Thinking about a legal singularity


Law follows life, and our world is changing at an unprecedented pace. Small wonder, then, that it seems naive to think that the framework of rules balancing interests in such a changing environment will not have to change at equal speed. Think of the time it takes deliberative democracy to hold a meaningful public debate among the citizens of an open society – most of whom are not at all familiar with the realities of innovation. So, perhaps for the first time in history, law will need to be revamped to anticipate and pre-empt the subject matters likely to require future regulation. Visionary legislation – a contradiction in terms?

Precedents exist, albeit hardly encouraging ones.

America’s legislative response to 9/11, known as the USA Patriot Act, was signed into law by President Bush on October 26, 2001 – all of 45 days after the event. This massive piece of legislation fundamentally affected not just constitutional rights as we knew them but also amended a dozen federal statutes, all in the name of clear and present danger. Headlines make for notoriously bad law, but it is difficult to believe that much of this statute was not drafted considerably before the actual emergency that enabled its passage through Congress with barely any debate or opposition. I use this example not as part of one of the conspiracy theories that mushroomed by the dozens around this tragic but, in theory and concept, entirely foreseeable event. It is part of something even bigger: an entire legislative and regulatory approach to complexity under conditions of rapid change for which our systems of governance are alarmingly ill prepared.

Terrorism has been a serious problem for political leaders at least since the days of the French Revolution and throughout the 19th century, but it existed even in the age of Sulla’s proscriptions and probably long before. It is a weapon of asymmetric, unconventional warfare, a form of leverage by blackmail, targeting irrational and disproportionate fears in prosperous and stable societies. But lessons from those ample antecedents unsurprisingly did not find meaningful reflection in the Patriot Act beyond attempts to band-aid certain symptoms.

Substantively adequate anticipation of issues and responses seems perhaps the paramount challenge to the rule of law. Legislatures are easy to blame for failing to grasp unprecedented events or developments, be they traumata like 9/11 or the much farther-reaching revolutions in gathering and processing data, in biomedical research, in robotics and artificial intelligence, and in novel instruments of speech or self-expression. Dispassionate analysis of 9/11 invariably leads to the conclusion that the U.S. government did have sufficient data to predict the attack but did not have the analytic capabilities to manage, connect and interpret its aggregation of Big Data. The resulting expansion of NSA data gathering has yielded only quantitative gains without a quantum leap in qualitative analytic and synthetic capabilities, and that deficiency persists to date. As a result, conflicts with constitutionally protected interests and with strategic international relationships have abounded, but no significant conceptual solution has appeared.