Pages

2015-05-31

Reflections on Eurotechnopanic and the fiscal biosphere

I came across some choice interviews and essays by Jeff Jarvis.

A professor at CUNY’s Graduate School of Journalism and a long-standing practicing journalist, Jarvis is a noted and loyal fan of Google as well as of other major Silicon Valley success stories and an irreconcilable adversary of their critics.

Among the issues he raises with persistence, gleefully rankling European sensibilities in the process, is what he has diagnosed as “Eurotechnopanic” – a “German disease” whose symptoms include excessive consideration for, and subservience to, stakeholders in old technologies (well, old money) – including major publishers and their media arms who are, as we all know, largely not on the winning side of digital developments – as well as deference to the transparently self-serving, anticompetitive measures of European (in this case German) players who simply cannot come up with a globally comparable product. Instead, they lean heavily on the German government: to impose copyright fees in favor of publishers whose content search engines link to and excerpt in pertinent part; to enforce pixelation of Google Street View, leading to the abandonment of recent photo updates and the moniker “Blurmany”; to initiate antitrust proceedings (not that events like that were unknown in the U.S. to the likes of Microsoft, IBM and others); to recognize a “right to be forgotten”; and to take other measures Jarvis views as anticompetitive, protectionist and anti-American. An inveterate sceptic of government, Jarvis attributes the success of American internet entrepreneurs and products in part to a changing consumer culture of sharing (information) and publicness.

Still, Jarvis professes to be an enduring admirer of another German, Johannes Gensfleisch, called Gutenberg, whom he is not alone in calling history’s first and most influential technology entrepreneur (albeit financially unsuccessful in his lifetime) and arguably the principal catalyst of the scientific revolution – and whose invention in the 1440s predictably encountered at least as much concern and opposition from early Eurotechnophobes as internet services confront today: there would be too many books, too much knowledge in the hands of the people, too many hazards for lawful government, a vast need for regulation, and so on. And that is indeed what set Gutenberg apart: typographic printing, invented centuries earlier in East Asia, had never before been used for speed, high output and mass efficiency – further evidence of the superiority of the low-margin, high-volume concept.

But one conclusion one can derive from his argument speaks for itself: it is an utter impossibility that a serial entrepreneur like Elon Musk would accomplish even a fraction of his achievements in the Europe we know and fondly appreciate. No two ways about that. The number and mobility of European engineers, inventors and entrepreneurs who came through the literal and figurative equivalent of Ellis Island to succeed spectacularly speaks louder and clearer than any amount of online chatter – and would appear to be a primary incentive for meaningful and overdue immigration reform on both sides of the Atlantic.

Another point of note is that, despite Jarvis’ observations, Berlin has emerged as a budding Silicon Valley in terms of the number and growth of startups – although not yet in terms of size or market capitalization (the latter is unlikely to change in the near future, given that Germany has no tradition of widespread equity investment and hence no deep and broad stock markets). But it is worth noting that California and Germany – as well as Israel, the world’s second-largest agglomeration of tech companies, along with London, Moscow and Paris – are all notoriously highly taxed jurisdictions. Innovation is driven by interaction, incubation, opportunity and financing infinitely more than it is by marginal corporate tax rates.

2015-04-30

It’s a small world: images of molecular engineering


The step from high-resolution optical imaging to electron microscopy with nanometer resolution is almost unbelievable. Even further beyond intuitive grasp are the methods (and, well, the cost) required to create engineered particles and devices at the molecular level.


Below are examples of relatively recent technologies based on variants of electron and atomic force microscopy that enable visualization of individual molecules and even atoms:


While visualization is approaching practical and functional limits, the possibilities for cost-effective manipulation, and thus for nano-engineering, are still in their infancy, especially with regard to biological and biomedical applications.

2015-03-31

The Graying of Swan Lake


Seldom have I been more impressed with the subtle accuracy of a generalization than in the case of Nassim Nicholas Taleb’s dictum, “you never win an argument until they attack your person.” An essayist and statistician, this Lebanese American, currently teaching at NYU Polytechnic, has developed some of the most original critical thinking on risk analysis and risk engineering. It reaches far beyond mathematical finance and has game-changing consequences for decision-making overall. Not many quants have had their writings named among the dozen most influential books since WWII. No stranger to controversy, Taleb first asserted in 2007 that statistics is inherently incomplete as a tool set because it cannot predict the risk of rare events (which he calls Black Swans). Yet despite their rarity, and despite being impossible to predict by extrapolation from existing data, such events have disproportionately vast effects.

For as long as I can remember, I have heard people acknowledge, with a snicker, the ‘theoretical’ possibility of systemic risk – of a meltdown of basic operating infrastructure and assumptions. Like the presumption of innocence, it had become one of those exercises in lip service everyone made a ritual of mentioning, while appealing to a near-universal consensus (end-time theorists of all stripes excepted) that systemic risk was just a theoretical hypothesis. Prior to 2008, who except a few eyewitnesses of 1929 et seq. would have given serious consideration to “major banks not lending to their peers,” bringing the money market to a virtual standstill? Who would have expected that banks would, in essence, depart the lending business altogether – at least to the extent it could not be pushed off their balance sheets? Or that there would not be significant demand for major-ticket securities blessed by all three major rating agencies with their highest medals of honor? Or who would have perceived the Swiss National Bank as a source of global instability?

If we are to grasp intellectually and deal meaningfully with the effects of entirely random and unpredictable events in life, we require different tools than conventional wisdom traditionally uses – and that includes transcending conventional statistics.
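To make the point concrete, here is a toy comparison of my own (not Taleb’s example): the probability of exceeding the same extreme threshold under a thin-tailed Gaussian model and under a fat-tailed Student-t model differs by roughly twenty orders of magnitude – the choice of model, not the observed data, decides whether the event looks “impossible.”

```python
# A toy illustration (not Taleb's own example) of how much a thin-tailed model
# underestimates rare events: compare P(X > 10) under a standard Gaussian and
# under a fat-tailed Student-t distribution with 3 degrees of freedom.
from scipy import stats

threshold = 10.0
p_gauss = stats.norm.sf(threshold)       # survival function of the Gaussian
p_fat = stats.t.sf(threshold, df=3)      # survival function of Student-t(3)

print(f"Gaussian:     P(X > 10) ≈ {p_gauss:.1e}")   # ~7.6e-24, i.e. "never happens"
print(f"Student-t(3): P(X > 10) ≈ {p_fat:.1e}")     # ~1.1e-03, i.e. happens routinely
print(f"ratio ≈ {p_fat / p_gauss:.1e}")             # roughly 20 orders of magnitude
```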

2015-02-28

Ptychography: A different approach to sub-nanometer resolution imaging


It has been a notable phenomenon for a considerable time that almost every major university invents its own nano-imaging techniques. Usually, the result is a particular piece of technology with limited applications that does not necessarily become an industry standard. It may produce interesting results and demonstrate alternative options, but that does not mean it will end up relevant. While it is quite worthwhile to take a close look at the strengths and benefits of individual approaches, extolling their virtues may have to be postponed for years, in certain cases decades, until the consensus of market forces has articulated a clear preference along with the reasons for it.

That said, one needs to take into consideration that all technology, even what has become the industry standard, is provisional, and its continued development is cross-fertilized by alternative approaches. In the longer term, there is no room in technology development for ‘not invented here’ – for ignoring third-party solutions because of their external origins. With few exceptions, looking sideways to leverage other people’s work behooves all further R&D, since it avoids reinventing the wheel while highlighting potential for improvements.

While electron microscopy (Ernst Ruska and Max Knoll, 1931, ~50 nm resolution) opened the door to imaging the molecular dimension, the scanning tunneling microscope (Gerd Binnig and Heinrich Rohrer,[1] 1981, ~0.1–0.01 nm resolution) enabled imaging and manipulating individual atoms in a sample. Electron microscopy has since been refined into deterministic electron ptychography at atomic resolution. Ruska as well as Binnig and Rohrer shared the 1986 Nobel Prize in Physics for these contributions to microscopy. The next leap came with scanning transmission x-ray microscopy (STXM), of which ptychography – a form of diffractive imaging using inverse computation of scattered intensity data – is a special case. The name derives from the Greek ptyx for fold or layer, as in diptychon, triptychon, polyptychon. Conceived by Walter Hoppe in the late 1960s,[2] the method was subsequently extended to applications in both the x-ray and visible spectrum, improving resolution by more than a factor of three so that it can, in principle, reach wavelength-scale resolution. Even at typical resolutions of just 0.24 nm, its image quality is improved over standard scanning tunneling microscopy and therefore useful at the nanoscale. Its principal limitation was, until recently, the need to avoid vibrations in the x-ray microscope. A sample is scanned through a minimal aperture with a narrow, coherent x-ray beam generated by a synchrotron. Smart algorithms based on Fourier transformations replace optical or magnetic lenses. Or, as John Rodenburg put it,

“We measure diffraction patterns rather than images. What we record is equivalent to the strength of the electron, X-ray or light waves which have been scattered by the object – this is called their intensity. However, to make an image, we need to know when the peaks and troughs of the waves arrive at the detector – this is called their phase.
     The key breakthrough has been to develop a way to calculate the phase of the waves from their intensity alone. Once we have this, we can work out backwards what the waves were scattered from: that is, we can form an aberration-free image of the object, which is much better than can be achieved with a normal lens.”
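To make that inversion concrete, here is a minimal, self-contained sketch of a PIE-style ptychographic reconstruction with a known probe – a toy illustration of the principle, not the PSI/TU Munich or Rodenburg software. The object, probe, scan grid and update parameter are all invented for demonstration.

```python
# Illustrative sketch only (toy data): ptychographic phase retrieval with a
# PIE-style object update and a known illumination ("probe").
import numpy as np

# --- Toy complex-valued object: an amplitude disc plus a smooth phase pattern ---
N = 64
y, x = np.mgrid[0:N, 0:N]
amplitude = 1.0 - 0.5 * (((x - N / 2) ** 2 + (y - N / 2) ** 2) < (N / 4) ** 2)
phase = 0.8 * np.sin(2 * np.pi * x / N) * np.cos(2 * np.pi * y / N)
obj_true = amplitude * np.exp(1j * phase)

# --- Localized, coherent probe: a Gaussian spot of size M x M ---
M = 32
py, px = np.mgrid[0:M, 0:M]
probe = np.exp(-((px - M / 2) ** 2 + (py - M / 2) ** 2) / (2 * (M / 6) ** 2)).astype(complex)

# --- Overlapping scan positions (top-left corner of the probe window) ---
step = 8                                          # step << M gives large overlap
positions = [(r, c) for r in range(0, N - M + 1, step)
                     for c in range(0, N - M + 1, step)]

# --- "Measurement": only diffraction intensities are recorded, never the phase ---
intensities = [np.abs(np.fft.fft2(probe * obj_true[r:r + M, c:c + M])) ** 2
               for r, c in positions]

# --- Reconstruction from the intensities alone ---
obj = np.ones((N, N), dtype=complex)              # flat initial guess
alpha = 1.0                                       # update step size
for _ in range(200):
    for (r, c), I in zip(positions, intensities):
        exit_wave = probe * obj[r:r + M, c:c + M]
        F = np.fft.fft2(exit_wave)
        # keep the modeled phase, replace the amplitude by the measured one
        exit_new = np.fft.ifft2(np.sqrt(I) * np.exp(1j * np.angle(F)))
        # update the object estimate where the probe illuminated it
        obj[r:r + M, c:c + M] += (alpha * np.conj(probe) / np.max(np.abs(probe)) ** 2
                                  * (exit_new - exit_wave))

# np.angle(obj) now approximates the object's phase map (up to a constant offset):
# an image recovered although the detector never measured phase.
```

Because neighboring illumination spots overlap heavily, the recorded intensities are redundant, and it is this redundancy that makes the phase – and hence an aberration-free image – recoverable at all.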

As a 2013 study conducted jointly by Switzerland’s Paul Scherrer Institute (PSI) and the Technical University of Munich showed, progress in imaging and metrology increasingly correlates with sophisticated control and comprehensive characterization of wave fields. This technology makes it possible to image an entire class of specimens that previously could not be observed particularly well. Not only can remaining vibrations of the x-ray microscope be compensated for by purely mathematical and statistical methods, yielding much higher image quality, but ptychography also makes it possible to characterize fluctuations within the specimen itself, even if they occur faster than the acquisition of individual frames. It may become possible, for instance, to determine changes in the magnetization of individual bits in high-density magnetic storage media.

Qualitative image improvements accomplished by this technology are notable:


Computer simulation enables testing of the diffraction imaging reconstructed by the system’s algorithms, allowing both instrumentation effects and effects of and within the specimen to be simulated. This matters because it helps verify that the specimen and its dynamics are accurately reflected in the algorithmic images. 3D images may be generated by repeated scans of de facto 2D samples at different tilt angles. The PSI/TU Munich method renders high-resolution images of mixed states within the sample. These may include quantum mixtures or fast stationary stochastic processes such as vibrations, switching or steady flows, which can generally be described as low-rank mixed states – and the dynamics of a sample are often the very objective of an experiment.
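As a sketch of what a “low-rank mixed state” looks like in such a simulation (toy values throughout, assumed purely for illustration), the detector intensity can be modeled as an incoherent sum over a few coherent probe modes standing in for a vibrating illumination:

```python
# Illustrative sketch only: a simulated diffraction measurement of a low-rank
# mixed state, modeled as an incoherent sum over a few coherent probe modes
# (e.g. vibration states). All values are invented for demonstration.
import numpy as np

N = 64
y, x = np.mgrid[0:N, 0:N]
obj = np.exp(1j * 0.5 * np.sin(2 * np.pi * x / N))        # toy phase object

def gaussian_probe(cx, cy, width=N / 8):
    return np.exp(-(((x - cx) ** 2 + (y - cy) ** 2) / (2 * width ** 2))).astype(complex)

# Three slightly shifted probe modes stand in for a vibrating illumination,
# with occupancy weights that sum to one (the "low-rank" mixture).
modes = [gaussian_probe(N / 2 - 1, N / 2),
         gaussian_probe(N / 2, N / 2),
         gaussian_probe(N / 2 + 1, N / 2)]
weights = [0.25, 0.5, 0.25]

# The detector records the weighted incoherent sum of the modes' diffraction
# patterns; a purely coherent model could not fit this pattern without artifacts.
intensity = sum(w * np.abs(np.fft.fft2(m * obj)) ** 2 for w, m in zip(weights, modes))
```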



[1] Nanotechnology – as do I personally – owes Heinrich Rohrer an immense debt of gratitude. The passing in May 2013 of this disciple of Wolfgang Pauli and Paul Scherrer, and longtime IBM Fellow, was an immense loss. IBM’s Binnig and Rohrer Nanotechnology Center in Rüschlikon, Zurich was named after both physicists.
[2] Walter Hoppe, “Towards three-dimensional ‘electron microscopy’ at atomic resolution,” Die Naturwissenschaften 61 (6) (1974), 239–249.

2015-01-06

Saved From the Singularity: Quantum Effects Rule Out Black Holes

Laura Mersini-Houghton is perhaps one of the most interesting and unlikely transformative forces of nature in cosmology today. Born the daughter of a professor of mathematics in Tiranë, Albania during Enver Hoxha’s quasi-Maoist dictatorship – not known as fertile ground for astrophysicists – she teaches today at the University of North Carolina at Chapel Hill, not at Princeton, Caltech, or Harvard. And yet she seized Archimedes’ proverbial lever and took a stand where the evidence led her. She may have moved the world of theoretical physics by blowing up most of her colleagues’ assumptions about black holes with a quantum effect whose consequences may reverberate for a long time through her discipline’s standard narrative of how our universe began and through some of its most intractable phenomena, black holes. As we had all long heard, the universe came into being with a Big Bang – allegedly. But then, Mersini-Houghton does not believe in “the universe.” Her signature line of argument, within the landscape of string theory,[1] has long been for the existence of a multitude of universes as wave functions – a “multiverse.” In that aspect of string theory, for which at least some hard evidence appears to exist,[2] our universe is merely one of 10^500 possible ones,[3] as Hugh Everett III had first proposed in his 1957 many-worlds interpretation of quantum physics.[4] She claims that, as a logical result, standard Big Bang cosmology has been plain wrong. And, no, Mersini-Houghton is assuredly not a scientific undercover agent of creationism, either. In fact, she professes: “I am still not over the shock.”[5]

2014-12-04

In silico: When biotech research goes digital

Since the dawn of the life sciences, observations and experiments have been conducted on living subjects as well as dead ones. The crudity of available analytic methods made most meaningful in vivo experiments on humans increasingly unacceptable except in rare cases in extremis, yet working with dead matter was evidently inadequate. Science took the first step towards modeling by resorting to animal experiments. The concept rested on the assumption that all relevant animal structures and processes were similar to human ones, ceteris paribus. That assumption came to be recognized as flawed and problematic, not least because of growing public awareness and disapproval of the quantitative as well as qualitative suffering inflicted on laboratory animals in the process, and because of the emerging notion that at least certain animate beings had recognizable rights.

But ethical issues aside, contemporary research increasingly recognized that existing models had two severe limitations. First and foremost, they differed significantly and in critical respects from the human structures and processes they were intended to approximate. Second, replication and variation of experiments were frequently and quite substantially limited by two critical factors: time and cost. As a result, live (or formerly live) models could no longer be considered valid approximations in a growing number of areas, calling for alternatives capable of bypassing these restrictions while handling the dramatic increase in complexity on which any really useful approximation depends.

Thus experiments in silico were conceived, interfacing computational and experimental work, especially in biotechnology and pharmacology. There, computer simulation replaces biological structures and wet experiments. It is done completely outside living or dead organisms and requires a quantifiable, digitized mathematical model of such an organism with appropriate similarities, analogies, and Kolmogorov complexity (a central concept of algorithmic information theory),[1] relying in part on category theory[2] to formalize known concepts as high-level abstractions.[3] Presuming the availability of a high-quality computational mathematical model of the biological structure it is required to adequately represent, “executable biology” has become a rapidly emerging and exciting field expected to permit complex computer simulation of entire organisms, not merely partial structures. A pretty good digital molecular model of a rather simple cell has already been created at Stanford. Much evidence suggests that this could be a significant part of the future of synthetic biology and of neuroscience – the cybernetics of all living organisms – where a new field, connectomics, has emerged to shed light on the connections between neurons. In silico research is expected to increase and accelerate the rate of discovery while simultaneously reducing the need for costly lab work and clinical trials. But languages must be defined that are sufficiently powerful to express all relevant features of biochemical systems and pathways. Efficient algorithms need to be developed to analyze models and to interpret results. Finally, as a matter of pragmatic realism, modeling platforms need to become accessible to and manageable by non-programmers.
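For a minimal flavor of what “executable biology” means in practice, the following sketch simulates a deliberately simple, hypothetical model – two coupled differential equations for constitutive gene expression – entirely in silico. The rate constants are invented and describe no real organism.

```python
# A minimal, hypothetical sketch of an in silico experiment: a two-equation ODE
# model of constitutive gene expression (mRNA -> protein). All rate constants
# are invented for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters: transcription, translation and degradation rates
k_tx, k_tl = 2.0, 5.0          # mRNA per hour, protein per mRNA per hour
d_m, d_p = 1.0, 0.1            # degradation rates per hour

def gene_expression(t, y):
    m, p = y                   # mRNA and protein levels (deterministic limit)
    dm = k_tx - d_m * m        # transcription minus mRNA decay
    dp = k_tl * m - d_p * p    # translation minus protein decay
    return [dm, dp]

# "Run the experiment" on the computer instead of in a wet lab
sol = solve_ivp(gene_expression, t_span=(0, 48), y0=[0.0, 0.0],
                t_eval=np.linspace(0, 48, 200))

m_ss, p_ss = k_tx / d_m, k_tx * k_tl / (d_m * d_p)   # analytic steady state
print(f"simulated endpoint ≈ ({sol.y[0, -1]:.2f}, {sol.y[1, -1]:.2f}), "
      f"analytic steady state = ({m_ss:.2f}, {p_ss:.2f})")
```

Real executable-biology platforms add stochasticity, spatial structure and thousands of coupled processes, but the workflow – encode, simulate, compare against analytic or experimental expectations – is the same.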

2014-11-14

Self-organizing Robots: What we saw was nature – and the promise of Open Technology

Sooner or later, much of the research in systems theory and complexity arrives at the topic of self-organization – the spontaneous local interaction between elements of an initially disordered system, as analyzed by Ilya Prigogine. This is so for a variety of reasons. First off, self-organization, if statistically significant and meaningfully predictable, may be superior to organization by command because, aside from the factors influencing its design, it does not require instruction, supervision or enforcement. The organizer may spare himself critical components of due diligence (and the liability that may result from it), which, in and of itself, can amount to a rather significant difference in cost-efficiency for any purpose-oriented organization.

Self-organization has been receiving much attention since the dawn of intelligent observation of swarms of fish and birds, of anthills, beehives and – with increasingly obvious similarities – human behavior in cities. Later, a thermodynamic view of the phenomenon prevailed over the initial cybernetics approach. Building on Norbert Wiener’s early observations on the mathematics of self-organizing systems,[1] such systems follow algorithms that rely on sensor data, interact with neighbors, and look for patterns. This pattern-oriented behavior makes the swarm as a whole far more resilient, self-assembling into structures that, beyond a certain numerical threshold and degree of complexity, can scarcely be destroyed by any single influence.

This is also where robotics approximates nature and the observations of aspects of “swarm intelligence” in cells, social insects and higher-developed species of social animals like birds and fish. Swarm intelligence is the collective behavior of decentralized, self-organized systems.
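A minimal sketch of such local rules in action – my own toy “boids”-style flock, not any particular research group’s model – shows how global order emerges from sensor-range interactions alone; all parameters are invented for illustration.

```python
# Illustrative sketch only: self-organization from purely local rules. A minimal
# 2-D flock with separation, alignment and cohesion; parameters are invented.
import numpy as np

rng = np.random.default_rng(1)
N, STEPS = 50, 200
NEIGHBOR_RADIUS, SEPARATION_RADIUS = 10.0, 2.0
MAX_SPEED = 1.0

pos = rng.uniform(0, 100, size=(N, 2))           # random initial positions
vel = rng.normal(0, 0.5, size=(N, 2))            # random initial headings

for _ in range(STEPS):
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        neighbors = (dist < NEIGHBOR_RADIUS) & (dist > 0)
        if not neighbors.any():
            continue
        # cohesion: steer toward the local center of mass
        cohesion = offsets[neighbors].mean(axis=0)
        # alignment: match the average heading of neighbors
        alignment = vel[neighbors].mean(axis=0) - vel[i]
        # separation: move away from neighbors that are too close
        close = neighbors & (dist < SEPARATION_RADIUS)
        separation = -offsets[close].sum(axis=0) if close.any() else np.zeros(2)
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.1 * separation
        # cap the speed, as a real agent's actuators would
        speed = np.linalg.norm(new_vel[i])
        if speed > MAX_SPEED:
            new_vel[i] *= MAX_SPEED / speed
    vel = new_vel
    pos = (pos + vel) % 100.0                    # wrap around a 100 x 100 world

# After a few hundred steps the headings of nearby agents become correlated --
# order emerging without any central command.
print("mean speed:", np.linalg.norm(vel, axis=1).mean())
```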

2014-10-18

The madness of automated response: warfare on autopilot

As we commemorate the centenary of the outbreak of World War I – arguably the first engagement that could with some justification be characterized as “automated response” – it behooves us to take a look at the development of this phenomenon in the years since and, even more importantly, at its anticipated future.

The automated response that led to WWI was purely legal in nature: the successive reactions following the general mobilization of the Austro-Hungarian Empire were rooted in a network of treaties of alliance. The system contained a fair number of “circuit breakers” at almost every turn, even if using them would have amounted, in the view of contemporaries, to breaches of treaty obligations. This situation found a direct successor in Article 5 of the North Atlantic Treaty of April 4, 1949 (the Washington Treaty establishing NATO), which to date has been invoked only once, following 9/11.

But it is not so much the automatism based on honor, credibility and other social compulsion on an international scale that will likely determine automated responses in the future. It is much more a technical and systemic automated response that will increasingly, for a variety of reasons, take reactions out of human hands.

For one, the response time of modern weapon systems is shrinking at an increasing pace. Comparable to the situation in computer trading, the share of situations – regardless of their significance – in which human response will, under almost any imaginable circumstance, be too slow and hence come too late will only grow.

From a warfighter’s perspective, therefore, automated response is a good thing: if a threat is identified and incoming munitions are destroyed before they become a manifest threat, it matters little whether that happens by human intervention or fully under the control of technology. Of course, a number of concerns are evident:

2014-09-02

Patents on Mathematical Algorithms

It has long been a distinguishing mark of mathematics that ideas and concepts generated by the queen of the quantitative and formal sciences are incapable of patent protection. If one is charitable, one might say this is because the queen does not touch money. But, as so often in matters legal, this is a case of “not so fast,” because there are, of course, exceptions. And questionable logic.

First, let’s take a look at the topology of mathematics in the USPTO’s value system: while the Office requires mathematics coursework of its employees working as patent examiners in the computer arts, it does not recognize mathematics courses as qualifying for patent practitioners. Quite the contrary: while bachelor’s degrees in 32 subjects constitute adequate proof of the requisite scientific and technical training, not to mention a full two-and-one-half pages of acceptable alternatives, the General Requirements Bulletin for Admission to the Examination for Registration to Practice in Patent Cases Before the United States Patent and Trademark Office lists “Typical Non-Acceptable Course Work” that will not be accepted to demonstrate scientific and technical training, notably “… machine operation (wiring, soldering, etc.), courses taken on a pass/fail basis, correspondence courses, home or personal independent study courses, high school level courses, mathematics courses, one day conferences, …” Consequently, it cannot come as a surprise that there are precious few patent attorneys with the significant mathematical training required to understand the mathematics underlying contemporary, much less future, science and technology.

2014-08-05

Gecko Nanoadhesion Research – A classical paradigm of biomimetics

Biomimetics is perhaps the oldest form of scientific plagiarism – science plagiarizing nature. It is "the science of systems which have some function copied from nature, or which represent characteristics of natural systems or their analogues." It is also one of the most fascinating, most fertile areas of engineering and applied science (from airplane wings to Velcro to echolocation). Living organisms have evolved particularly well-adapted structures and materials over geological time by means of natural selection. Especially in the area of micro- and nanotechnology, it is difficult to imagine operating without the remarkable opportunities of biomimetic methodology. Harvard established the Wyss Institute for Biologically Inspired Engineering precisely with that objective in mind. Biomimetic synthesis is an entire field of organic chemistry.

Geckos, an ectothermic infraorder of lizards, are one of nature’s most inspiring evolutionary mysteries: they range in length from 1.6 cm to 60 cm. Some species are parthenogenetic, perhaps one reason why they occur in warm climates throughout the world, even on remote islands.

Geckos are capable of running on almost any surface, smooth or rough, wet or dry, clean or dirty. They do this at any orientation: up a wall as well as inverted along a ceiling. But not very well on Teflon, and not very well under water. Miraculously, their toes are covered by millions of micron-scale bristle-like structures (setae) that constitute a self-cleaning dry adhesive. On their foot pads, the micrometer-scale setae branch out into nanometer-scale projections (spatulae). It is generally assumed that this exceptional adhesive power – which does not rely on any “sticky” substance – exploits molecular attraction (van der Waals forces) between the gecko’s toe pads and the surface it is walking on. But very recent research suggests that electrostatic forces may be primarily responsible.

At many research institutions, including Oxford, UC Berkeley, Stanford, Northwestern, Carnegie Mellon, UMass, UAlberta, UManchester and numerous other places, gecko research has become a very exciting topic for biomimeticists, especially in nanotechnology and adhesives research. No wonder DARPA became interested early in its potential military applications for scaling vertical surfaces – but so are NASA, the NIH and BAe. Millions of micron-scale setae on each toe combine to form a dry adhesive that is self-cleaning and does not involve a glue-like substance. The microfibers, or setae, are activated by dragging or sliding the toe parallel to the surface. The tip of a seta ends in 100 to 1,000 spatulae measuring just 100 nanometers in diameter and 0.2 μm in length. In the gecko, evolution has formed intricate nanostructures that work together in a hierarchy of spatulae, spatula stalks, setal stalks, setal arrays, and toe mechanics. Each square millimeter contains about 14,000 setae with a diameter of 5 μm. An average gecko’s setae can support a weight of 133 kg, while its body weight is 70 g. The adhesion force of a spatula varies with the surface energy of the substrate to which it adheres. Surface energy originating from long-range forces, e.g. van der Waals forces, is determined by the material’s structure below the outermost atomic layers, down to about 100 nm beneath the surface. The setae are lubricated by phospholipids generated by the gecko’s body, which also enable it to detach its feet prior to each subsequent step without slowing down.
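A quick back-of-the-envelope check of these figures (the total of roughly 6.5 million setae per animal is a commonly cited value for the Tokay gecko and is assumed here; everything else follows from the numbers quoted above):

```python
# Back-of-the-envelope check of the figures above. The ~6.5 million setae per
# animal is a commonly cited value for the Tokay gecko and is assumed here;
# the other numbers are taken from the text.
SETAE_PER_MM2 = 14_000          # setal density quoted above
TOTAL_SETAE = 6.5e6             # assumed total for one gecko
SUPPORTED_MASS_KG = 133.0       # maximum load the setae could support (text)
BODY_MASS_KG = 0.070            # 70 g body weight (text)
G = 9.81                        # m/s^2

pad_area_mm2 = TOTAL_SETAE / SETAE_PER_MM2                        # ~460 mm^2 of pad
force_per_seta_uN = SUPPORTED_MASS_KG * G / TOTAL_SETAE * 1e6     # ~200 µN per seta
safety_factor = SUPPORTED_MASS_KG / BODY_MASS_KG                  # ~1,900x body weight

print(f"adhesive pad area ≈ {pad_area_mm2:.0f} mm², "
      f"force per seta ≈ {force_per_seta_uN:.0f} µN, "
      f"safety factor ≈ {safety_factor:.0f}x")
```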

Principal commercial applications of Gecko research have focused to date on biomimetic adhesives, a kind of superglue that attaches equally well to wet surfaces and to dry ones. “Geckel,” as a first product is called, has combined a coating of fibrous silicone with a polymer mimicking the “glue” employed by mussels that allows them to stick to rocks while they and the surfaces they adhere to are being pounded by giant ocean waves in perfect storms.

“Gecko tape” was developed as early as 2003 at the University of Manchester but has only been produced in small quantities, as scaling up production has proved commercially difficult. In the life sciences, sheets of elastic, sticky polymers could soon replace sutures and staples, including in laparoscopic surgeries, provide long-term drug-delivery patches for expanding and contracting areas such as cardiac tissue, and provide stem-cell-attracting factors for tissue regeneration.

The gecko’s ability to “run up and down a tree in any way,” as Aristotle observed in his History of Animals (Περὶ τὰ Ζῷα Ἱστορίαι), continues to inspire even in the robotic age: Stanford’s “Stickybot” has likely applications primarily in outer space and security. One common characteristic of visionary biomimetic technology, however, is that its potential becomes plausible, sometimes obvious, far sooner than its commercial viability. That gap is – and will likely remain for considerable time – one of the central challenges of technology assessment.