
2014-03-28

Plenty of room at the bottom

Richard Feynman must have been one of those rare visionaries who predict the future based not on mere creativity of the mind, but on creativity firmly entrenched in subject matter expertise. To him, the miniaturization of processes and tools was the logical next step to resolve the shortcomings of existing technology – such as the slowness and bulkiness of computers in the 1950s. While most avant-garde scientific thinkers would have stopped at imagining miniaturization at the micro-level, Feynman went a step further – why not explore the molecular scale of matter? Decades before the term “nanotechnology” was popularized by Eric Drexler, Feynman’s seminal 1959 lecture “There’s Plenty of Room at the Bottom” presented nano-scale miniaturization as a logically inevitable development in computing, chemistry, engineering, and even medicine.

Feynman’s timescale predictions were at once overly optimistic and overly pessimistic – on the one hand, he expected nanoscale miniaturization of, say, data storage to be a reality by the year 2000, and already feasible during the early 1970s. On the other hand, some of the applications which he believed would call for a sub-microscopic scale – such as computers capable of face recognition – exist already, and are widely implemented by law enforcement and social networks alike.

As for Feynman’s idea of writing the entirety of human knowledge onto media as small as pinheads – compressed to the printed size of a pamphlet, readable through an electron microscope improved by a factor of 100 – it is not the size of the media that currently poses the problem. The issues humanity faces are, first, the exploding amount of its aggregate knowledge and, second, the challenge of digitizing it. In 1959, according to Feynman, the Library of Congress housed 9 million volumes. Today, its holdings include 151 million items in 470 languages, with millions added each year. The vast majority of this information has not yet been digitized – and here we come to the second problem. It takes thousands of human users to transfer old books into digital form, and the pace of the volunteer-run Project Gutenberg shows how time-consuming the current practice of transcribing books really is, even when aided by continually advancing OCR software: since 1971, Project Gutenberg has digitized a mere 42,000 public-domain books. Crowdsourcing promises a quantum leap in the acceleration of knowledge digitization. reCAPTCHA – an ingenious reverse application of CAPTCHA, the technology that authenticates human users online by requiring them to identify distorted signs or words before granting access to a software function or service – involuntarily harnesses the resources of 750 million computer users worldwide to digitize 2.5 million books a year that cannot be machine-read. This solution, by the way, is just another development Feynman imagined way ahead of his time: parallel computing.
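To make the scale of that digitization gap concrete, here is a rough back-of-the-envelope calculation in Python, using only the figures quoted above and treating every Library of Congress item as a book purely for illustration:

```python
# Back-of-the-envelope arithmetic based on the figures cited above.
# All inputs are the approximations quoted in the text, not precise statistics.

loc_items_today = 151_000_000          # Library of Congress holdings (items)
gutenberg_books = 42_000               # Project Gutenberg output since 1971
gutenberg_years = 2014 - 1971          # roughly 43 years of volunteer effort
recaptcha_books_per_year = 2_500_000   # books digitized annually via reCAPTCHA

gutenberg_rate = gutenberg_books / gutenberg_years    # ~1,000 books per year
speedup = recaptcha_books_per_year / gutenberg_rate   # ~2,500x faster

years_needed = loc_items_today / recaptcha_books_per_year  # ~60 years

print(f"Volunteer pace: ~{gutenberg_rate:,.0f} books/year")
print(f"reCAPTCHA speedup: ~{speedup:,.0f}x")
print(f"Years to digitize today's Library of Congress at that pace: ~{years_needed:.0f}")
```

Even at the crowdsourced pace, in other words, the backlog is measured in decades, not years.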

Quantum computing is another of Feynman’s developments imagined ahead of its time: even though he proposed the idea of quantum computers only in the 1980s, the first (and very rudimentary) models of simple quantum machines already exist today.

Almost all of Feynman’s groundbreaking ideas are slowly becoming everyday reality. We may not have usable nanocomputers just yet, but nanotechnology and nanomaterials are slowly but steadily entering the world as we know it. And as for writing entire encyclopedias on pinheads, “just for fun”? The smallest book in the Library of Congress today is “Old King Cole,” measuring 1/25” x 1/25”, or about 1 square millimeter – roughly the size of the period at the end of this sentence.

2014-03-18

To the end of the world and back


Remember the time when every company had a number to call, and a live person on the other end of the line? A person who would simply answer your call and deal with whatever issue you had, or connect you to the person who could help you best? Remember how the system was later “improved,” and you had to listen to and wade through touch-tone options before being connected to a live person? Remember how that person would then seamlessly morph into a talking algorithm, taking you through the same well-ordered steps of instructions whether it was the first time you called or the fifteenth, just so that you could finally talk to a person reading off a scripted dialog? Remember how the live-yet-automated human being was then seamlessly replaced by a machine, one listening to what you repeated after it and answering with pre-recorded messages? And remember how that machine also disappeared, replaced with email and chat, with no number to call at all? And then email and chat also disappeared, and with them any remote interaction with something resembling a human being – the problems you experienced were to be dealt with by thick manuals and automated online or software troubleshooters. But what happens when the machine fails, and the algorithm takes you around the block in circles? An infinite loop with no way out – the doom and damnation of an impersonal cyberspace.

It is DIY taken to the extreme: We, the company, are not responsible for any issues you may be experiencing. We just aren’t. You cannot contact us to complain either. Maybe you can find some good-hearted human beings who went through the same predicament, figured their way out, and are now willing to share that knowledge. Spend a few days – weeks – months – in cyberspace, maybe you will find them. Let the search for the Golden Fleece begin.

And then a miracle happens – when you are lost enough, and desperate enough, and have wailed long enough, a human being sent by the machine-god suddenly appears, like a specter out of cyberspace. He kindly takes you by the hand and leads you out of the darkness. Glorious humanity!

And now, after a long hiatus, we are back to our regular broadcast. Welcome back.

2013-11-26

Cyber Warfare as a Superficially Tempting Low-Level Engagement Strategy

At first sight, and against certain counterparties, cyber warfare has appeared – and proved – to be a phenomenally low-cost and low-risk tool of adversarial foreign policy. But while questions increasingly arise about the reciprocal exposure of the most heavily digitized knowledge- and data-based economies and societies, it has become clear that the genie will never again return to its bottle. Strategic, legal, and political questions will not be dodged much longer. The very advantages of cyber warfare may easily and all too quickly be turned against a first mover, especially one as vulnerable as a highly digitized industrial state. Its suitability for asymmetric warfare increases its attractiveness to non-state actors. And one of its arguably greatest potentials – the disruption of enemy economic functionality by disrupting payment systems – has regularly been vetoed in the interest of the integrity of the global system. The philosophy underlying the Nuclear Non-Proliferation Treaty, as well as the treaties banning the use of chemical and biological weapons, may provide an even stronger rationale for an understanding to ensure mutual non-aggression by digital electronic means between major and even mid-size powers.

2013-10-20

The Bionic Man – Paradigm of the Great Taboo?

I read a most interesting narrative of the world’s first “bionic man,” assembled from prosthetic body parts and artificial organs donated by laboratories around the world. While it is interesting to note that the world’s first “bionic man” weighs in at a cost of almost $1 million to build, anyone who has looked at the cost of building a car out of spare parts purchased at retail knows that this amount is actually an incredible bargain – explicable only by the fact that the “bionic man” is by no means complete, representing only about two-thirds of the anatomy of a living human. Still, it does contain an artificial heart, blood, lungs, windpipe, pancreas, spleen, kidney and circulatory system, some brain functions, and prosthetic implants standing in for sensory organs. There is little doubt that the number of human “spare parts” will in time grow to approach the original asymptotically – likely at geometrically rising cost. And let’s not forget that the component artificial body parts were, well, donated. It was the cost of their assembly that amounted to a cool million.

After some breathless accounts of technological details, the anecdote reaches the intangible points of the experiment:

“The bionic man brings up some ethical and philosophical questions: Does creating something so humanlike threaten notions of what it means to be human? What amount of body enhancement is acceptable? And is it wrong that only some people have access to these life-extending technologies?

The access issue is especially troublesome, [Roboticist Rich Walker of Shadow Robot Co. in England] said. ‘The preservation of life and quality of life has become basically a technical question and an economic question.’”

True. But is it possible that Walker overlooked the fact that life extension has always been a technical and economic question?

2013-09-16

Crowdfunding: Removing the power of organized finance

It is a fairly fascinating proposition to have projects funded not by a commercial lender but by prospective customers or by the general public. Crowdfunding collects capital from stakeholders of all sorts in the global village. It brings manufacturers and consumers closer together. Almost any purpose can be served – and financed: innovative e-products, free software, movies, records, video games, scholarly research, social projects, journalistic projects, political campaigns, start-up funding, disaster relief and other charities, to name just a few.

Crowdfunding platforms are mushrooming, differentiated by project categories, target groups, or geographical regions. One platform does not fit all: some finance creative projects, others fundamental research, and yet another category supports social causes. Kickstarter.com, for example, one of over 500 known crowdfunding platforms, has worked well for projects launched from the U.S., the U.K. and Canada, such as the digital watch Pebble, which interacts with iPhones and Android phones: it sought to raise $100,000 and got over $10,200,000.

Other crowdfunding successes include Amanda Palmer (sought $100,000 in funding for an album, raised $1,200,000), Spike Lee (raised $1,250,000 for a movie), and Chris Roberts (raised $2,000,000 to revive space simulation in video games). Chris Roberts and Cloud Imperium Games also hold the overall record, now pegged at over $17.6 million.

As a method, crowdfunding is still in its nascent stage and virtually unregulated in many if not most jurisdictions, with the early exception of the U.S. Jumpstart Our Business Startups Act (JOBS Act), which requires platforms for investment crowdfunding to register as broker-dealers with the SEC. Beyond that, contributors are protected only by conventional securities law, such as the Howey test (Securities and Exchange Commission v. W. J. Howey Co., 328 U.S. 293 (1946)), and by general concepts of fraud. Intellectual property protection for ideas disclosed to the public in the course of a crowdfunding effort is lacking as well, unless patent, copyright and trademark applications are filed early. The World Intellectual Property Organisation (WIPO) promotes a novel standard of protection embedded in creative works under the concept of a “creative bar code.”

To avoid the staggering and often suffocating cost of compliance with securities regulation, crowdfunding that does not serve conventional investment purposes needs to devote particular attention to devising a reward structure that does not fall under the Howey test’s “expectation of profits” that “depends solely on the efforts” of a promoter or third party. This is relatively easy in the case of charitable funding, or of rewards delivered through the enjoyment of completed creative artwork such as motion pictures, video games or records.

Crowdfunding typically involves intense media campaigns, mostly through social media channels. Communication with backers (investors), including responses to their feedback and a constant stream of updates, is essential. The generally small amount of funds provided per backer results in a significant increase in communication intensity per dollar raised. Crowdfunding is service-intensive because it is not based on ratings, and purely passive major investors content with quarterly or annual reports are rare. The learning curve with successful projects is often described as steep.

However, the operating model is just the latest high-tech application of a classic: the “low-margin, high-volume” idea. Although the high-volume aspect is still debatable in comparison with institutionally raised finance, the volume is not low relative to the needs of the projects for which crowdfunding has been tested thus far. This is largely because it has primarily been used for purposes that had previously not been thought of as “bankable” in the first place, or that could not be expected to provide an adequate monetary reward.

One of the important advantages of crowdfunding may be its lack of need for investor protection, since contributions can often be structured in ways different from traditional equity investments whenever the rewards can be shown to avoid the Howey test. While most backers would disagree that their contributions, often borne of enthusiasm or emotional interest, are a fool’s tax paid with no expectation of reward, those contributions are often, and indeed typically, made in amounts whose loss the backers may safely be expected to absorb, much as in the case of mass charitable donations.


But precisely because typical projects are often best described as innovative or cutting-edge, their traditional profitability almost always lies in a gray zone. Relative freedom from regulatory burdens may be key to getting such projects funded in the first place, and it will be important to avoid the stifling burden of “protective safeguards” that have been designed to stop fraud and abuse but that also hobble the viability of many projects. Especially outside crowdfunded equity investment, this classical dilemma may be sidestepped by relying on the speed at which news of mishaps travels in the internet community, and on a combination of user sophistication and the improved diligence that digestible losses teach. It is the price to be paid by people who wish to “fund what matters to you” without conventional profit expectations as their only variable to maximize. Crowdfunding expands democracy in the financial sector by removing the power of large financial institutions, industry associations and government sponsors to decide what does or does not get funded. It will be up to consumers, in their capacity as voters, to determine whether they will be deprived of this tool under the pretext of “investor protection” for the sole benefit of the monopoly traditionally held by institutional finance – turning business into a domain reserved for entities “too large to fail,” a code word for being a taxpayer-protected part of the establishment.

2013-08-28

Crowdsourcing, Scientific Method and Intellectual Property


The consequences of digital networking for our ways and means of processing complex information are only beginning to emerge. Yet one can already see with great clarity that digital networking will change not only the type of problems that may be addressed, but also the methods by which they are addressed and the way credit for solving them is assigned. Concepts of intellectual property will never be the same. By acknowledging the substantial and often critical contribution of others to the evolution of thoughts, ideas, questions and solutions, we are led to depart from a “star system” that glorifies individual genius and contribution toward a more realistic acknowledgment of multiple credits for a potentially vast number of contributors, without whom certain problems could not be answered except by engaging vastly greater resources of time and funding.

In science, crowdsourcing means outsourcing research and development tasks to a mass of voluntary but sometimes unaware users, in some instances through “games” that superficially serve an entertainment purpose. Crowdsourcing works particularly well if scientific knowledge can be transferred into an application in so elegant a manner that users need not understand it.

With crowdsourcing, individual leadership and ingenuity take on a different dimension and purpose: leadership becomes more of a managerial task, and the emphasis shifts to harnessing the intellectual resources of the masses and finding a quid pro quo that permits access to them. In a digitally networked world, this reflects “open innovation,” a changed view of the scientific process – one that anticipates the participation of as many individuals as possible in research and development as an increasingly natural form of an efficient division of labor. This is especially true with regard to superficially tedious routine work. Zooniverse is a good example: it enables laymen to analyze cell tissue for cancer research, categorize galaxies, or sort through weather records in 19th-century marine logs for purposes of climate research. Sometimes the tasks outsourced to the masses of users are rewarded monetarily, for example by Amazon Mechanical Turk or CrowdFlower.

Relying on the contributions of many is hardly new in human endeavors: the pyramids, the Panama Canal and Neil Armstrong’s moon walk each engaged a collaborative effort of approximately 100,000 individuals. Crowdsourcing – that is, relying on the intellectual resources of internet users – may enable the involvement of 100 million or more. Duolingo, a platform offering language learning resources in English, French, German, Italian, Spanish and Portuguese, is free but has an ulterior motive: by practicing, students “translate the Internet,” especially Wikipedia, into the languages they are learning.

In 2008, David Baker at the University of Washington created a three-dimensional “puzzle” named Foldit, a “game with a purpose”: folding proteins in three dimensions, a task that requires immense computing resources but somehow comes a lot easier to humans. At least to some of them: among the 100,000 Foldit aficionados worldwide who play regularly, some particular talents turned out to be 13 years old, intuitively performing tasks that push supercomputers to their limits. Fifty years of molecular biology are packed into Foldit – but users only need to turn their models in a variety of directions on their computer screens.

Protein structures may be conceived as networks, and it is conceivable that users could be tasked with changing protein networks in a way that strips them of their characteristics in cancerous cells, thereby inaugurating a breakthrough in cancer therapy. If this concept proves as successful as Foldit, it could result in a hundredfold greater output.

There is no shortage of “citizen science” projects: MIT seeks to enable users to “map” the brain through Eyewire. The University of Munich has created Artigo, which sets up a competition between users to provide keywords for cataloging archived works of art. With Geo-wiki, the International Institute for Applied Systems Analysis addressed a notorious deficiency in the automated analysis of aerial photographs for classifying land with a view to its potential use for ethanol production; this in turn inspired the creation of computer games designed to draw a broader user base. reCAPTCHA, since acquired by Google, is a method based on the reverse application of CAPTCHA, the technology used to authenticate human users online by having them identify distorted signs or words. Designed by Luis von Ahn at Carnegie Mellon University, it involuntarily harnesses the resources of 750 million computer users worldwide to digitize annually 2.5 million books that cannot be machine-read.
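As a thought experiment, the core of that reverse application can be sketched in a few lines of Python. The data layout and function name below are illustrative assumptions, not Google’s actual implementation: each challenge pairs a word whose text is already known (the control) with a word OCR could not read, and a user’s reading of the unknown word is provisionally trusted only if the control word was typed correctly.

```python
def record_recaptcha_answer(answer, control_words, pending_transcriptions):
    """Illustrative sketch of the reCAPTCHA principle (not the real system).

    answer -- dict with the user's typed text for one challenge:
              {'control_id': ..., 'control_text': ...,
               'unknown_id': ..., 'unknown_text': ...}
    control_words -- word_id -> known correct text (from already-digitized pages)
    pending_transcriptions -- word_id -> list of trusted guesses collected so far
    """
    known = control_words[answer["control_id"]].strip().lower()
    typed = answer["control_text"].strip().lower()
    if typed == known:  # the user proved human and careful; trust the other word
        guesses = pending_transcriptions.setdefault(answer["unknown_id"], [])
        guesses.append(answer["unknown_text"].strip().lower())
    return pending_transcriptions

# Example: one user solves a challenge pairing a known word with a scanned one
controls = {"w1": "morning"}
pending = {}
record_recaptcha_answer(
    {"control_id": "w1", "control_text": "Morning",
     "unknown_id": "w2", "unknown_text": "upon"},
    controls, pending)
print(pending)  # {'w2': ['upon']}
```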

Game design needs to be based on reward and recognition of performance. To date, this is typically achieved when different users arrive at identical solutions. Needless to say, this creates a risk of rewarding congruent nonsense, an outcome for which non-trivial solutions have yet to be designed. In spite of such shortcomings, game results can still improve data quality.
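A minimal sketch of that agreement rule, under assumed thresholds (the function and parameter names here are invented for illustration): an answer collected from the crowd is accepted, and its contributors credited, only when enough independent users converge on the same solution. As noted above, this filters noise but cannot exclude several users agreeing on the same mistake.

```python
from collections import Counter

def accept_by_agreement(answers, min_votes=3, min_share=0.6):
    """Accept a crowdsourced answer only if enough independent users agree.

    answers   -- answers submitted by different users for the same task
    min_votes -- minimum number of identical answers required
    min_share -- minimum fraction of all answers the leading answer must hold
    Returns (accepted_answer, supporting_votes) or (None, 0).
    """
    if not answers:
        return None, 0
    top_answer, votes = Counter(answers).most_common(1)[0]
    if votes >= min_votes and votes / len(answers) >= min_share:
        return top_answer, votes
    return None, 0

# Example: five users transcribe the same hard-to-read word
print(accept_by_agreement(["galaxy", "galaxy", "galaxy", "gallery", "galaxy"]))
# ('galaxy', 4)
```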

It is easily imaginable that open innovation will eventually require a revolutionary change in the protection and reward of intellectual property thus created. Some of the difficulties this presents are the relative anonymity of the web, the small size of individual contributions, and the random, haphazard, or playful nature of at least some, if not most, of the contributions. But similar challenges have already been resolved in the design of the class action system: there, benefits to individual plaintiffs are also typically too small to justify pursuit by traditional methods, and the reward largely accrues to the organizers of the effort. Yet its social purpose, namely the disgorgement of the profits of a mass tortfeasor, may well be compared to the creation of another social good in the form of R&D resulting from, or at least significantly augmented by, a large number of only marginally interested contributors.

Collective reasoning and collaborative creativity may yet ring in an era of division of labor and profit by a mass collective that is organized not along political ideology, but around the opportunities and incentives created by networked technology and pooled human talent.

2013-07-06

Why would you (still) believe Wikipedia?

And so it would seem that the time has come to reverse myself on my seal of approval for Wikipedia as a source of authoritative knowledge: Wikipedia’s article on “The Bicholim Conflict” of 1640–41, also referred to as “The Goan War,” has been shown to be a hoax. After five and a half years of misleading its readers, the article was taken down by editors who had been unable to verify its sources. Because, surprisingly enough, Wikipedia is, in fact, source-cited. And it may take as long as five years – or indeed forever – for its (unpaid and anonymous) editors to get around to conducting proper factual checks on articles posted by volunteers. This process would also explain the nearly four-year survival of a fictitious Indonesian island, Bunaka, and the eight-year-and-one-month digital lifespan of Gaius Flavius Antoninus, a supposed conspirator in the assassination of Julius Caesar.

But do incidents like these really discredit the vast depository of free knowledge that is Wikipedia? (Yes, it is free, dear Britannica.) Is peer-reviewed and professionally edited information always reliable? The most prominent counterexample is the prestigious journal Science. On at least two occasions, Science had to retract already published, thoroughly vetted and peer-reviewed articles. These retractions came after scrutiny by the ultimate peer reviewer – the scientific community – called into question the groundbreaking research on which the articles reported. One of the articles linked Chronic Fatigue Syndrome to a xenotropic murine leukemia virus, promising a path to treatment for millions of patients. Science withdrew the article after the research results cited therein could not be replicated, and the authors partially retracted some of their findings. Another case dealt with a report of the first human embryonic stem cells created using a novel cloning technique called somatic cell nuclear transfer (SCNT). Following allegations of fraud, a committee of scientists was called upon to verify the substantive findings; the article was retracted, and the article’s author and his lead researchers were fired from Seoul National University. This is not the end of the story, though – three years later, another article, in Cell Stem Cell, partly exonerated the condemned author by showing that he was not a fraud, but had simply been wrong. While the author of the retracted article did not accomplish what he claimed – creating stem cells via SCNT – he did achieve another significant breakthrough without even realizing it: he had created human stem cells via parthenogenesis. Even if his real discovery holds greater promise for finding a cure for diseases such as Parkinson’s, the human cost of this comedy of errors cannot be overlooked – the names of the scientists involved in the “scandal” are now surrounded by ignominy, their scientific appointments have been terminated, and even those who later unearthed their actual discovery find themselves hesitant to side with the condemned authors.

It is true that Wikipedia lacks a rigorous peer review system. In fact, many of the articles that fall through the cracks of its public verification process do so because they are written on topics not important enough to attract sufficient attention. How many people are likely to look up on Wikipedia the name of an island that does not even exist? But the “proper” academic peer review process is not infallible either. The mere fact that something is printed – be it on paper or on a computer screen – does not mean that it necessarily contains the absolute truth. Still, if a source is consistently reliable, we may safely assume it will be so in the instant case as well. Hence, the judicial standard of “general acceptance” – the one extensively discussed in Frye v. United States, 293 F. 1013 (D.C. Cir. 1923), and in Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993) – still stands with regard to Wikipedia.

2013-06-02

Nano Hazards



Synthetic nanomaterials form part of a gigantic emerging market worldwide, with expected growth rates of about 23 percent. Only a few years ago, nanomaterials were viewed as barely out of science fiction, with highly promising applications but also novel risks. To date, no labeling requirements exist that would alert consumers to potential near- or long-term hazards to the environment, even though the European Union has adopted rules on cosmetic labeling that enter into force in July 2013. Some nanomaterials cannot be degraded naturally or filtered and recycled by waste processing plants; some involve risks similar to asbestos; and others may foster bacterial resistance to the very antibacterial properties for which they are currently used in hospitals.

Absent mandatory labeling and registration, consumers cannot determine today whether a product contains nanomaterials. In most applications, nanoparticles are bound within other compounds; in that form they are hardly problematic and almost never pose health hazards. But the same cannot be said of production processes and waste disposal. Nanoparticles can be suspended in air, breathed in, and carried into the bloodstream. They can also penetrate various sensitive areas of the environment. Little is known about the dispersion, behavior, and chemical qualities of aging and disintegrating nanoparticles. The benefits of nanotechnology are seldom in dispute – but the question is how to assess and balance benefits and risks appropriately, so that the hygienic, protective, energy, weight, or physical advantages are not offset by unacceptable long-term environmental hazards.

2013-05-31

Nanobiotechnology: Pandora’s genie is pushing the cork



As a market, bionanotechnology is projected to grow worldwide at a rate of 28%. No further explanation is needed for why the field is increasingly considered “the future of everything,” even though its potential for raising concerns is seldom overlooked and no FDA regulations exist to date.

As one can always safely assume with interdisciplinary areas, the father of all things is a dispute over terminology – in this case, the distinction, if any, between nanobiotechnology and bionanotechnology. Although plenty of ink is being spilled on it, it hardly matters: where nanobiotechnology, as a Lilliputian version of biotechnology, takes concepts and fundamentals directly from nanotechnology into biotechnological use, bionanotechnology derives its concepts from mechanics, electricity, electronics, optics, and biology, and translates the structural and mechanistic analysis of molecular-level biological processes into specific synthetic applications of nanotechnology. This is not a distinction without a difference, but it matters little because of one principal characteristic of all nanotechnology: at the molecular and submolecular scale, biochemistry, biophysics, biology, and all other forms of human inquiry converge. Multidisciplinarity is thus inherent.

How to prevent math from sinking in



U.S. industry groups and politicians keep ringing alarm bells dating back to the Sputnik days: we need more scientists! In fact, we need 10,000 more engineers each year, and 100,000 new STEM teachers! That, along with higher scores on standardized tests in math and science, is supposed to ensure the country’s technological and economic supremacy.

Well, research comes to the rescue to explain why we have so few science, technology, engineering, and mathematics (STEM) graduates.
  
First of all, math is really, really bad for some people. Merely showing them a math textbook has been shown to provoke fear. The brain regions activated in those instances are the same ones responsible for the perception of physical pain. We surely cannot expose innocent children to such unpleasant experiences. And does it really matter that the brain regions associated with such pain are closer to those responding to fear of social rejection than to those relating to fear of physical threat? Even if the response is simply conditioning acquired in school, instilled by the fear of appearing dumb to one’s peers and teachers, are we to abandon much-studied math anxiety as a proximate cause of innumeracy in our society?

2013-05-17

The case of synthetic biology

Few people are even aware of what the concept of synthetic biology represents, and yet it has already become a cutting-edge focus of major research and teaching efforts. Synthetic biology purports, in essence, to create useful creatures through engineering methods. It uses multi-purpose components taken from nature’s building blocks. Organisms not heretofore seen in nature may be capable of producing fuels, complex chemicals, or novel pharmaceuticals, but also computer circuits based on biological structures. Can we out-nature nature and surpass evolution? Impressive steps in that direction have already been made. The optimization and fine-tuning of naturally occurring enzymes no longer makes front-page news.

In fact, synthetic biology’s visions open a Pandora’s box of unlimited possibilities, pointing to the big bang of a multi-trillion-dollar industry: tailor-made bacteria that identify and destroy toxins, produce fuels formerly known as fossil, render planets like Mars or Venus suitable for human habitation, or let tables and chairs grow out of the soil without needing humans to manufacture them from timber. The starting point of synthetic biology is the identification of “bio bricks” and a three-dimensional “DNA printer” that transforms a customized genome sequence into a new, reprogrammed bacterium or complex organism. Harvard researcher George Church has already presented a prototype he calls MAGE (Multiplex Automated Genome Engineering), a device that indeed translates relatively short DNA sequences, via several intermediary steps, into molecules that it inserts directly into single-celled organisms.

Our competition with nature may be won by sidestepping considerations that would play a central role in evolution, which is, after all, a contest for survival of the fittest. But what if the effort required to produce a substance bears no relation to the survival of the individual or species producing it? Natural organisms need to observe a balance of priorities and interests; synthetically engineered life does not.