
2014-12-04

In silico: When biotech research goes digital

Since the dawn of the life sciences, observations and experiments have been conducted on living subjects as well as on dead ones. The crudity of available analytic methods made most meaningful in vivo experiments on humans increasingly unacceptable except in rare cases in extremis, yet working with dead matter was evidently inadequate. Science took its first step towards modeling by resorting to animal experiments. The concept rested on the assumption that all relevant animal structures and processes are, ceteris paribus, similar to human ones. That assumption has increasingly been recognized as flawed and problematic, not least because of growing public awareness and disapproval of the suffering, in both extent and kind, inflicted on laboratory animals in the process, and because of the emerging view that at least certain animate beings have recognizable rights.

But ethical issues aside, contemporary research increasingly recognized that existing models have two severe limitations. First and foremost, they differ significantly, and in critical respects, from the human structures and processes they are intended to approximate. Second, replication and variation of experiments are frequently and quite substantially constrained by two factors: time and cost. As a result, live (or formerly live) models could no longer be considered valid approximations in a growing number of areas, calling for alternatives capable of bypassing these restrictions while also handling the dramatic increase in complexity on which any really useful approximation depends.

As a result, experiments in silico were conceived, interfacing computational and experimental work, especially in biotechnology and pharmacology. There, computer simulation replaces biological structures and wet experiments. It is carried out entirely outside living or dead organisms and requires a quantifiable, digitized mathematical model of such an organism with appropriate similarities, analogies, and Kolmogorov complexity (a central concept of algorithmic information theory)[1], relying in part on category theory[2] to formalize known concepts at a high level of abstraction.[3] So, always presuming the availability of a high-quality computational model of the biological structure it is required to represent adequately, “executable biology” has become a rapidly emerging field expected to permit complex computer simulation of entire organisms, not merely of partial structures. A credible digital molecular model of a comparatively simple cell has already been created at Stanford. Much evidence suggests that this could be a significant part of the future of synthetic biology and of neuroscience – the cybernetics of all living organisms – where a new field, connectomics, has emerged to shed light on the connections between neurons. In silico research is expected to increase and accelerate the rate of discovery while simultaneously reducing the need for costly lab work and clinical trials. But languages must be defined that are sufficiently powerful to express all relevant features of biochemical systems and pathways. Efficient algorithms need to be developed to analyze models and to interpret results. Finally, as a matter of pragmatic realism, modeling platforms need to become accessible to and manageable by non-programmers.
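To make “executable biology” a little more concrete, here is a minimal sketch of what a digitized, executable model of a biological process can look like: a simple enzyme-substrate pathway expressed as mass-action differential equations and simulated in silico. The rate constants, concentrations, and the use of Python with scipy are illustrative assumptions, not a description of any particular platform or of the Stanford whole-cell model.

```python
# Minimal "executable biology" sketch: an enzyme-substrate pathway
# (E + S <-> ES -> E + P) written as mass-action ODEs and simulated
# in silico. Rate constants and initial concentrations are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

k_on, k_off, k_cat = 1.0, 0.5, 0.3   # assumed rate constants

def pathway(t, y):
    E, S, ES, P = y
    v_bind   = k_on * E * S           # E + S -> ES
    v_unbind = k_off * ES             # ES -> E + S
    v_cat    = k_cat * ES             # ES -> E + P
    return [-v_bind + v_unbind + v_cat,   # dE/dt
            -v_bind + v_unbind,           # dS/dt
             v_bind - v_unbind - v_cat,   # dES/dt
             v_cat]                       # dP/dt

y0 = [1.0, 10.0, 0.0, 0.0]                # initial concentrations (arbitrary units)
sol = solve_ivp(pathway, (0, 100), y0, dense_output=True)

for ti, (E, S, ES, P) in zip(np.linspace(0, 100, 6),
                             sol.sol(np.linspace(0, 100, 6)).T):
    print(f"t={ti:5.1f}  substrate={S:6.3f}  product={P:6.3f}")
```

Real modeling languages for executable biology (rule-based, stochastic, spatial) layer far richer semantics on top of exactly this kind of numerical kernel, which is why the questions of language design and efficient algorithms raised above matter so much.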

2014-11-14

Self-organizing Robots: What we saw was nature – and the promise of Open Technology

Sooner or later, much of the research in systems theory and complexity arrives at the topic of self-organization, the spontaneous local interaction between elements of an initially disordered system, as analyzed by Ilya Prigogine. This is so for a variety of reasons. First off, self-organization, if statistically significant and meaningfully predictable, may be superior to organization by command: aside from the factors influencing its design, it requires no instruction, supervision or enforcement. The organizer may thus spare himself critical components of due diligence (and the liability potentially resulting from it), which, in and of itself, can amount to a rather significant difference in cost-efficiency for any purpose-oriented organization.

Self-organization has been receiving much attention since the dawn of intelligent observation of swarms of fish and birds, of anthills and beehives, and – with increasingly obvious similarities – of human behavior in cities. Later, a thermodynamic view of the phenomenon prevailed over the initial cybernetics approach. Building on Norbert Wiener's early observations on the mathematics of self-organizing systems,[1] the members of such swarms follow algorithms that rely on sensor data, interact with their neighbors, and look for patterns. Such pattern-oriented behavior makes the swarm as a whole far more resilient, self-assembling into structures that, beyond a certain numerical threshold and degree of complexity, can hardly be destroyed by any single influence.
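As a hedged illustration of what such local rules look like in code, the sketch below implements the classic separation/alignment/cohesion scheme, in which every agent reads only its neighbors' positions and headings (its "sensor data") and no central controller exists. The weights, neighborhood radius, and agent count are arbitrary illustrative choices, not a reconstruction of any particular robotic swarm.

```python
# Minimal self-organization sketch: agents follow purely local rules
# (separation, alignment, cohesion); global order emerges without command.
# Weights, radius and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 50
pos = rng.uniform(0, 100, (N, 2))        # positions
vel = rng.uniform(-1, 1, (N, 2))         # velocities ("headings")

def step(pos, vel, radius=10.0, w_sep=0.05, w_ali=0.05, w_coh=0.01, dt=1.0):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d > 0) & (d < radius)                 # local "sensor" view only
        if nbrs.any():
            sep = (pos[i] - pos[nbrs]).mean(axis=0)   # move away from crowding
            ali = vel[nbrs].mean(axis=0) - vel[i]     # match neighbors' heading
            coh = pos[nbrs].mean(axis=0) - pos[i]     # drift toward local center
            new_vel[i] += w_sep * sep + w_ali * ali + w_coh * coh
    return pos + new_vel * dt, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)

# After enough steps the headings align: a global pattern from local rules.
print("heading spread after 200 steps:", np.std(vel, axis=0))
```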

This is also where robotics approximates nature and the observation of aspects of “swarm intelligence” in cells, in social insects, and in more highly developed social animals such as birds and fish. Swarm intelligence is the collective behavior of decentralized, self-organized systems.

2014-10-18

The madness of automated response: warfare on autopilot

As we commemorate the centenary of the outbreak of World War I – arguably the first engagement that could with some justification be characterized as an “automated response” – it behooves us to take a look at the development of this phenomenon in the years since but, even more importantly, at its anticipated future.

The automated response that led to WWI was purely legal in nature: the successive reactions following the general mobilization of the Austro-Hungarian Empire were rooted in a network of treaties of alliance. The system contained a fair number of “circuit breakers” at almost every turn, even if using them would have amounted, in the contemporary view, to breaches of a treaty obligation. This situation found a direct successor in Article 5 of the North Atlantic Treaty of April 4, 1949 (the Washington Treaty establishing NATO), which to date has been invoked only once, following 9/11.

But it is not so much the automatism based on honor, credibility and other social compulsion on an international scale that will likely determine automated responses in the future. It is much more a technical and systemic automated response that will increasingly, for a variety of reasons, take reactions out of human hands.

For one, the response time of modern weapon systems is shrinking at an increasing pace. Comparable to the situation in computer trading, the share of situations – regardless of their significance – in which human response will, under almost any imaginable circumstance, be too slow and hence come too late is bound to grow.

From a warfighter’s perspective, therefore, automated response is a good thing: if a threat is identified and incoming munitions are destroyed before they become a manifest threat, it matters little whether that happens through human intervention or fully under the control of technology. Of course, a number of concerns are evident:

2014-09-02

Patents on Mathematical Algorithms

It has long been a distinguishing mark of mathematics that the ideas and concepts generated by the queen of the quantitative and formal sciences are incapable of patent protection. If we are charitable, one might say this is because the queen does not touch money. But, as so often in matters legal, this is a case of “not so fast,” because there are, of course, exceptions. And questionable logic.

First, let’s take a look at the topology of mathematics in the USPTO’s value system: while the Office requires mathematics coursework as a prerequisite for its employees working as patent examiners in the computer arts, it does not recognize mathematics courses as qualifying for patent practitioners. Quite the contrary: while bachelor’s degrees in 32 subjects constitute adequate proof of the requisite scientific and technical training, not to mention a full two-and-one-half pages of acceptable alternatives, the General Requirements Bulletin for Admission to the Examination for Registration to Practice in Patent Cases Before the United States Patent and Trademark Office lists, under “Typical Non-Acceptable Course Work,” coursework that will not be accepted to demonstrate scientific and technical training, notably “… machine operation (wiring, soldering, etc.), courses taken on a pass/fail basis, correspondence courses, home or personal independent study courses, high school level courses, mathematics courses, one day conferences, …” Consequently, it cannot come as a surprise that there are precious few patent attorneys with the significant mathematical training required to understand the mathematics underlying contemporary, much less future, science and technology.

2014-08-05

Gecko Nanoadhesion Research – A classical paradigm of biomimetics

Biomimetics is perhaps the oldest form of scientific plagiarism – science plagiarizing nature. It is "the science of systems which have some function copied from nature, or which represent characteristics of natural systems or their analogues." It is also one of the most fascinating and most fertile areas of engineering and applied science (from airplane wings to Velcro to echolocation). Living organisms have evolved particularly well-adapted structures and materials over geological time by means of natural selection. Especially in micro- and nanotechnology, it is difficult to imagine operating without the remarkable opportunities of biomimetic methodology. Harvard established the Wyss Institute for Biologically Inspired Engineering precisely with that objective in mind. Biomimetic synthesis is an entire field of organic chemistry.

Geckos, an ectothermic infraorder of lizards, are one of nature’s most inspiring evolutionary mysteries: they range in length from 1.6 cm to 60 cm. Some species are parthenogenetic, perhaps one reason why they occur in warm climates throughout the world, even on remote islands.

Geckos are capable of running on almost any surface, smooth or rough, wet or dry, clean or dirty. They do this at any orientation: up a wall as well as inverted along a ceiling. But not very well on Teflon, and not very well under water. Miraculously, their toes are covered by millions of micron-scale bristle-like structures (setae) that constitute a self-cleaning dry adhesive. On their foot pads, the micrometer-scale setae branch out into nanometer-scale projections (spatulae). It is generally assumed that this exceptional adhesive power – which does not rely on any “sticky” substance – exploits molecular attraction (van der Waals forces) between the gecko’s toe pads and the surface it is walking on. But very recent research suggests that electrostatic forces may be primarily responsible.

At many research institutions including Oxford, UC Berkeley, Stanford, Northwestern, Carnegie Mellon, UMass, UAlberta, UManchester and numerous other places, gecko research has become a very exciting topic for biomimeticists, especially in nanotechnology and adhesives research. No wonder DARPA became interested early in its potential military applications for scaling vertical surfaces, as did NASA, the NIH and BAe. Millions of micron-scale setae on each toe combine to form a dry adhesive that is self-cleaning and does not involve a glue-like substance. The microfibers, or setae, are activated by dragging or sliding the toe parallel to the surface. The tip of a seta ends in 100 to 1000 spatulae measuring just 100 nanometers in diameter and 0.2 μm in length. In the gecko, evolution has formed intricate nanostructures that work together in a hierarchy of spatulae, spatula stalks, setal stalks, setal arrays, and toe mechanics. Each square millimeter contains about 14,000 setae with a diameter of 5 μm. An average gecko’s setae can support a weight of 133 kg, while its body weight is 70 g. The adhesion force of a spatula varies with the surface energy of the substrate to which it adheres. The surface energy that originates from long-range forces, e.g., van der Waals forces, is determined by the material's structure below the outermost atomic layers, up to 100 nm beneath the surface. The setae are lubricated by phospholipids generated by the gecko’s body, which also enable it to detach its feet before each subsequent step without slowing down.
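A back-of-envelope calculation makes the safety margin behind those figures concrete. The sketch below uses only the numbers quoted above plus one assumed value: a total toe-pad area of roughly 200 square millimeters, which is an illustrative round figure rather than a measured one.

```python
# Back-of-envelope estimate from the figures quoted above.
# The total toe-pad area (~200 mm^2) is an assumed, illustrative value.
setae_per_mm2  = 14_000        # setae density (per square mm), as quoted
pad_area_mm2   = 200           # assumed total pad area across all toes
supported_mass = 133           # kg the setae can collectively support, as quoted
body_mass      = 0.070         # kg (70 g), as quoted

total_setae       = setae_per_mm2 * pad_area_mm2
force_total_N     = supported_mass * 9.81
force_per_seta_uN = force_total_N / total_setae * 1e6
safety_factor     = supported_mass / body_mass

print(f"total setae            ~ {total_setae:,.0f}")
print(f"force per seta         ~ {force_per_seta_uN:.0f} microN")
print(f"adhesion safety factor ~ {safety_factor:.0f}x body weight")
```

Under these assumptions the animal carries on the order of a few million setae and an adhesive reserve of roughly three orders of magnitude beyond its own body weight, which is why only a small fraction of setae ever needs to make good contact.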

Principal commercial applications of gecko research have focused to date on biomimetic adhesives, a kind of superglue that attaches equally well to wet surfaces and to dry ones. “Geckel,” as a first product is called, combines a coating of fibrous silicone with a polymer mimicking the “glue” employed by mussels, which allows them to stick to rocks while they and the surfaces they adhere to are being pounded by giant ocean waves in perfect storms.

“Gecko tape” was developed as early as 2003 at the University of Manchester but has only been produced in small quantities, as scaling up production has proved commercially difficult. In the life sciences, sheets of elastic, sticky polymers could soon replace sutures and staples, including in laparoscopic surgery, provide long-term drug-delivery patches for expanding and contracting tissue such as cardiac muscle, and carry stem-cell-attracting factors for tissue regeneration.

The gecko’s ability to “run up and down a tree in any way,” as Aristotle observed in his History of Animals (Περὶ τὰ Ζῷα Ἱστορίαι), continues to inspire even in the robotic age: Stanford’s “Stickybot” has likely applications primarily in outer space and security. One common characteristic of visionary biomimetic technology is, however, that its potential becomes plausible, and sometimes obvious, far sooner than its commercial viability. That gap is – and will likely remain for a considerable time – one of the central challenges of technology assessment.


2014-07-27

Micro and nano photography

The FEI Company, a manufacturer of scanning and transmission electron microscopes capable of imaging objects at the nanoscale, organizes, in co-operation with National Geographic, an annual international competition for pictures obtained with FEI imaging equipment.

This unusual and also eerily beautiful photography, both on a micro and a nano scale, may be viewed on FEI’s Flickr page:


The finalists of the most recent competition are featured by NBC News under the title “Wonders of unseen worlds”:



2014-07-12

Artificial Intelligence or What does it mean to be human?



We live in an age where science has ceased to reside in a quaint ivory tower. Even analytic philosophy took a quantitative turn and became inextricably entwined with computer science, as Bertrand Russell diagnosed and predicted as early as 1945:

"Modern analytical empiricism [...] differs from that of Locke, Berkeley, and Hume by its incorporation of mathematics and its development of a powerful logical technique. It is thus able, in regard to certain problems, to achieve definite answers, which have the quality of science rather than of philosophy. It has the advantage, in comparison with the philosophies of the system-builders, of being able to tackle its problems one at a time, instead of having to invent at one stroke a block theory of the whole universe. Its methods, in this respect, resemble those of science. I have no doubt that, in so far as philosophical knowledge is possible, it is by such methods that it must be sought; I have also no doubt that, by these methods, many ancient problems are completely soluble."[1]

While neuroscience has made some rather remarkable inroads into neurobiology and behavior,[2] it is in many other ways still in its infancy – yet artificial intelligence (A.I.) purports to mimic and surpass the human brain on its way to a “singularity” that Ray Kurzweil, Google’s director of engineering, predicts will occur circa 2045. And Stephen Hawking cautions that creating artificial intelligence might be the biggest, and yet also the last, event in human history “unless we learn how to avoid the risks.”

2014-06-01

Thinking about a legal singularity


Law follows life, and our world is changing at an unprecedented pace. Small wonder, then, that it seems naïve to think that the framework of rules balancing interests in such a changing environment will not have to change at equal speed. Think of the time it takes deliberative democracy to hold a meaningful public debate among the citizens of an open society – most of whom are not at all familiar with the realities of innovation. So, perhaps for the first time in history, law will need to be revamped to anticipate and pre-empt the subject matters likely to require future regulation. Visionary legislation – a contradiction in terms?

Precedents exist, albeit hardly encouraging ones.

America’s legislative response to 9/11, known as the USA Patriot Act, was signed into law by President Bush on October 26, 2001 – all of 45 days after the event. This massive piece of legislation fundamentally affected not just constitutional rights as we knew them but also amended a dozen federal statutes, all in the name of clear and present danger. Headlines make for notoriously bad law, but it is difficult to believe that much of this statute was not drafted considerably before the actual emergency that enabled its passage through Congress with barely any debate or opposition. I use this example not to add to the conspiracy theories that mushroomed by the dozens around this tragic but, in theory and concept, entirely foreseeable event. It is part of something even bigger: an entire legislative and regulatory approach to complexity under conditions of rapid change for which our systems of governance are alarmingly ill prepared.

Terrorism has been a serious problem for political leaders at least since the days of the French Revolution and throughout the 19th century, but it existed even in the age of Sulla’s proscriptions and probably long before. It is a weapon of asymmetric, unconventional warfare, a form of leverage by blackmail, targeting irrational and disproportionate fears in prosperous and stable societies. But lessons from those ample antecedents unsurprisingly did not find meaningful reflection in the Patriot Act beyond attempts to band-aid certain symptoms.

Substantively adequate anticipation of issues and responses is perhaps the paramount challenge to the rule of law. Legislatures are easy to blame for failure to grasp unprecedented events or developments, be they traumata like 9/11 or the much farther-reaching revolutions in gathering and processing data, in biomedical research, in robotics and artificial intelligence, and in novel instruments of speech and self-expression. Dispassionate analysis of 9/11 invariably leads to the conclusion that the U.S. government did have sufficient data to predict the attack but lacked the analytic capabilities to manage, connect and interpret its aggregation of Big Data. The resulting expansion of NSA data gathering has yielded only quantitative gains without a quantum leap in qualitative analytic and synthetic capabilities, and that deficiency persists to date. As a result, conflicts with constitutionally protected interests and with strategic international relationships have abounded, but no significant conceptual solution has appeared.

2014-05-23

Innovation finance and foundational nanotechnology research


Rethinking the fundamentals of innovation finance leads us inevitably into a conundrum of balancing motives with accounting realities. Even if research may in very particular cases be driven by noble reasons independent of monetary incentives, somebody has to pay for it. Either developers cover the necessary costs themselves – which is simply not feasible for foundational research as capital-intensive as nanotechnology – or funding needs to come from institutions and/or investors or, horribile dictu for many, from the taxpayer.

In the case of academic research institutions, their primary incentive may originate less from a potential for commercialization than from the likelihood of industrial and scholarly recognition. Honors such as a Nobel Prize or other major awards and distinctions are just one category of non-monetary rewards. Published reports of breakthroughs in research define another.

However, the more capital-intensive research turns out to be, the more commercial funding becomes indispensable, and financial interests of significant investors will inevitably gain influence over the type of research undertaken. One way to sidestep prioritization of research with good short-term return-on-investment potential in favor of more basic research or of research done ‘for the right reasons,’ i.e., for the benefit of society, is to increase funding by publicly financed institutions such as the National Institutes of Health or the National Science Foundation. However, such institutions are not really independent either: since funding of research grants originates from the treasury by way of budgetary legislation, allocation of resources to innovation and scientific discovery is subject to, and often enough victim of, a wide array of political pressures. In the end, areas that will receive the largest share of financing will be those marketed by scientists or lobbyists as having the greatest potential to improve either the economic or the political standing of their nation, and above all increase its military capabilities.

Between the economic and political objectives of various stakeholders, finding a middle ground that accommodates research done in the public interest may be easier if scientists keep their eyes on the bigger picture: that is, on the reality and priorities of whoever holds the power of the purse in funding science. Since scientists do not normally hire public relations agents, they need to perform this function themselves, and they do so with notoriously bad results. But mediating interests, and above all conflicting interests, always begins by stepping into the mind and awareness of one’s target constituency. That may be investors who, up to a point, are not always and entirely motivated by numbers alone. It could also be politicians caught up in perpetual reelection campaigns and torn between pressures to “create jobs” for their constituents on the one hand and to “protect the public” from various risks and threats on the other, all the while trying to stretch budgetary limits on spending and to avoid political pitfalls. Or it could be administrators of governmental science organizations whose behavioral patterns follow those of decision makers in any bureaucratic organization.

It is all too often overlooked that the seemingly obvious national interest, much less the frequently invoked ‘interest of mankind’, does not have an impartial, knowledgeable lobby anywhere. This has to do with the complexity of the issues under consideration as well as with the uncertainty about which of several competing technologies or innovative approaches will ultimately prevail. In its comparatively short history, information technology has given us some choice examples of the fact that it is not the technologically superior and most functional solution that is adopted as the general standard but rather the one that is marketed best – be it as a result of users’ prior investments or, for example, of their esthetic preferences and social appeal.

Because the science and technology community has done such a poor job of educating the very public on which it depends for sustainable funding, it comes as no surprise that intermediaries like investment bankers, politicians or industry groups have stepped up to the plate and presented their version of the public interest in highly biased accounts, often enough blatantly reflective of conflicting interests. It is very difficult to identify, not to mention enforce, ‘right reasons’ when our accounting rules already distort generally accepted tools of valuation so heavily: they allow, if not encourage, the privatization of profits while socializing most pertinent expenditures – as the ongoing environmental discourse reflects – and they fail to account adequately for creative potential and for as yet unmarketed but largely market-ready products. The result is profound uncertainty about the funding of creative processes, including research – an uncertainty that, until the last moment prior to commercialization, cannot be dispelled rationally. This is a characteristic that processes and intangibles in general share with forward-looking issues. Yet, at a time when value creation depends more and more vitally on intangibles, a body of accounting rules that only values tangibles and faits accomplis is wholly insufficient. And it is to this key aspect of valuation and accounting, the basis for almost all decisions in finance, that the rules and standards of the pre-internet age still apply.

2014-04-02

Will population explosion drive colonization of space?

With environmentalists and economic-development activists ringing alarm bells about population explosion and commensurate resource depletion, provident sights are being set on alternative solutions to our growth predicament – including space colonization, whether through human settlements, biodomes, or even terraforming, a concept seriously if only preliminarily explored by NASA.

But will we really need such extreme solutions? The idea of overpopulation is not new: as early as the 19th century, predictions of a Malthusian catastrophe claimed that Earth would soon run out of resources and that its human population would be decimated by famines. These predictions rested on two assumptions: one, that population growth would continue at the same rate; and two, that productivity would not increase. By the 21st century, neither assumption had proved true.
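How sensitive such projections are to the first assumption is easy to make concrete. In the sketch below, the starting population, the growth rate, and the assumed gradual decline in that rate are purely illustrative numbers, chosen only to show how quickly a "constant rate" forecast diverges from one in which the rate slowly falls.

```python
# Illustrative only: how strongly a long-range projection depends on the
# assumption of a *constant* growth rate. All numbers are made up for
# the sake of the comparison.
pop0, years = 1.0, 100            # start at 1.0 (arbitrary units), project 100 years
r_const = 0.02                    # Malthusian assumption: constant 2% annual growth
r, decline = 0.02, 0.0003         # alternative: the rate falls slightly each year

malthus, declining = pop0, pop0
for _ in range(years):
    malthus   *= (1 + r_const)
    declining *= (1 + r)
    r = max(r - decline, 0.0)     # growth rate erodes, never below zero

print(f"constant-rate projection : {malthus:.2f}x the starting population")
print(f"declining-rate projection: {declining:.2f}x the starting population")
```

With these made-up figures the constant-rate forecast ends up several times larger than the declining-rate one, which is essentially the error the Malthusian predictions made.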

2014-03-28

Plenty of room at the bottom

Richard Feynman must have been one of those rare visionaries who predict the future based not on mere creativity of the mind, but on creativity firmly grounded in subject-matter expertise. To him, miniaturization of processes and tools was the logical next step to resolve the shortcomings of existing technology – such as the slowness and bulkiness of computers in the 1950s. While most avant-garde scientific thinkers would have stopped at imagining miniaturization at the micro level, Feynman went a step further – why not explore matter at the molecular scale? Decades before the term “nanotechnology” was coined and popularized by Eric Drexler, Feynman’s seminal 1959 lecture “There’s Plenty of Room at the Bottom” presented nano-scale miniaturization as a logically inevitable development in computing, chemistry, engineering, and even medicine.

Feynman’s timescale predictions were at once overly optimistic and overly pessimistic – on the one hand, he expected nanoscale miniaturization of, say, data storage to be a reality by the year 2000, and already feasible during the early 1970s. On the other hand, some of the applications which he believed would call for a sub-microscopic scale – such as computers capable of face recognition – exist already, and are widely implemented by law enforcement and social networks alike.

As for Feynman’s idea of writing the entirety of human knowledge onto media as small as pinheads, compressing it into the print size of a pamphlet (readable through an electron microscope improved by a factor of 100), it is not the size of the media that currently poses the problem: the issue humanity is facing is, first, the exploding amount of its aggregate knowledge and, second, the challenge of its digitization. In 1959, according to Feynman, the Library of Congress housed 9 million volumes. Today, its holdings include 151 million items in 470 languages, with millions added each year. The vast majority of all this information has not yet been digitized – and here we come to the second problem. It takes thousands of human users to transfer old books into digital form, and the pace of the volunteer-run Project Gutenberg shows how time-consuming the current practice of transcribing books really is, even when aided by continually advancing OCR software. Since 1971, Project Gutenberg has digitized a mere 42,000 books in the public domain. Crowdsourcing promises a possible quantum leap in the acceleration of knowledge digitization, with reCAPTCHA (an ingenious reverse application of CAPTCHA, the technology used to authenticate human users online by requiring them to identify distorted signs or words as a condition of access to a certain software function or service) involuntarily harnessing the resources of 750 million computer users worldwide to digitize annually 2.5 million books that cannot be machine-read. This solution, by the way, is just another development Feynman imagined way ahead of his time: parallel computing.
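To put that "quantum leap" into perspective, the short calculation below compares the two rates using only the figures cited in this paragraph; treating 1971 to 2014 as the relevant elapsed time for Project Gutenberg is, of course, a simplification.

```python
# Rough comparison of digitization rates, using only the figures cited above.
gutenberg_books          = 42_000
gutenberg_years          = 2014 - 1971          # ~43 years of volunteer work
recaptcha_books_per_year = 2_500_000            # books digitized annually via reCAPTCHA

gutenberg_rate = gutenberg_books / gutenberg_years
print(f"Project Gutenberg : ~{gutenberg_rate:,.0f} books per year")
print(f"reCAPTCHA         : ~{recaptcha_books_per_year:,} books per year")
print(f"speed-up          : ~{recaptcha_books_per_year / gutenberg_rate:,.0f}x")
```

On those numbers, the crowdsourced approach runs several thousand times faster than dedicated volunteer transcription, which is exactly the kind of massively parallel division of labor Feynman had in mind.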

As is quantum computing. Even though Richard Feynman proposed the novel idea of quantum computers only in the 1980s, the first (and very rudimentary) models of simple quantum machines already exist today.

Almost all of Feynman’s groundbreaking ideas are slowly becoming everyday reality. We may not have usable nanocomputers just yet, but nanotechnology and nanomaterials are slowly but steadily entering the world as we know it. And as for writing entire encyclopedias on pinheads, “just for fun”? The smallest book in the Library of Congress today is “Old King Cole,” measuring 1/25” x 1/25”, or about 1 square millimeter. That is the size of the period at the end of this sentence.

2014-03-18

To the end of the world and back


Remember the time when every company had a number to call, and a live person on the other end of the line? A person who would simply answer your call and deal with whatever issue you had, or connect you to the person who could help you best? Remember how later the system was improved and you had to listen to and wade through touch-tone options before being connected to a live person? Remember how that person would then seamlessly morph into a talking algorithm, taking you through the same well-ordered steps of instructions whether it was the first time you called or the fifteenth, just so that you could finally talk to a person reading from a scripted dialog? Remember how then the live-yet-automated human being was seamlessly replaced by a machine, one listening to what you repeated after it and answering with pre-recorded messages? And remember how that machine also disappeared, replaced by email and chat, with no number to call at all? And then email and chat disappeared as well, and with them any remote interaction with something resembling a human being – the problems you experienced were now to be dealt with by thick manuals and automated online or software troubleshooters. But what happens if the machine fails, and the algorithm takes you around the block in circles? An infinite loop with no way out, the doom and damnation of an impersonal cyberspace.

It is DIY taken to the extreme: We, the company, are not responsible for any issues you may be experiencing. We just aren’t. You cannot contact us to complain either. Maybe you can find some good-hearted human beings who went through the same predicament, figured their way out, and are now willing to share that knowledge. Spend a few days – weeks – months – in cyberspace, maybe you will find them. Let the search for the Golden Fleece begin.

And then a miracle happens – when you are lost enough, and desperate enough, and have wailed long enough, a human being sent by the machine-god suddenly appears, like a specter out of cyberspace. He kindly takes you by the hand and leads you out of the darkness. Glorious humanity!

And now, after a long hiatus, we are back to our regular broadcast. Welcome back.