
2014-03-28

Plenty of room at the bottom

Richard Feynman must have been one of those rare visionaries who predict the future based not on mere creativity of the mind, but on creativity firmly grounded in subject-matter expertise. To him, the miniaturization of processes and tools was the logical next step for resolving the shortcomings of existing technology, such as the slowness and bulkiness of computers in the 1950s. While most avant-garde scientific thinkers would have stopped at imagining miniaturization at the micro-level, Feynman went a step further: why not explore matter at the molecular scale? Decades before the term “nanotechnology” was coined and popularized by Eric Drexler, Feynman’s seminal 1959 lecture “There’s Plenty of Room at the Bottom” presented nanoscale miniaturization as a logically inevitable development in computing, chemistry, engineering, and even medicine.

Feynman’s timescale predictions were at once overly optimistic and overly pessimistic. On the one hand, he expected nanoscale miniaturization of, say, data storage to be feasible during the early 1970s and a reality by the year 2000. On the other hand, some of the applications he believed would require a sub-microscopic scale, such as computers capable of face recognition, already exist and are widely deployed by law enforcement and social networks alike.

As for Feynman’s idea of writing the entirety of human knowledge onto media the size of pinheads, compressed to the print area of a pamphlet (readable through an electron microscope improved by a factor of 100), it is not the size of the media that currently poses the problem. The issue humanity faces is, firstly, the exploding amount of its aggregate knowledge and, secondly, the challenge of digitizing it. In 1959, according to Feynman, the Library of Congress housed 9 million volumes. Today, its holdings include 151 million items in 470 languages, with millions added each year. The vast majority of this information has not yet been digitized, and here we come to the second problem. It takes thousands of human volunteers to transfer old books into digital form, and the pace of the volunteer-run Project Gutenberg shows how time-consuming the current practice of transcribing books really is, even when aided by continually improving OCR software: since 1971, Project Gutenberg has digitized a mere 42,000 public-domain books.

Crowdsourcing promises a possible quantum leap in the acceleration of knowledge digitization. reCAPTCHA, an ingenious reverse application of CAPTCHA (the technology used to authenticate human users online by requiring them to identify distorted signs or words before granting access to a software function or service), involuntarily harnesses the efforts of 750 million computer users worldwide to digitize 2.5 million books a year that cannot be machine-read. This solution, by the way, is just another development Feynman imagined way ahead of his time: parallel computing.
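To make the mechanism concrete, here is a minimal sketch in Python of the consensus idea behind reCAPTCHA as described above: each challenge pairs a word the system already knows with a scanned word that OCR could not read, and the unknown word is accepted once enough independent users agree on it. The class, the agreement threshold, and the example words are illustrative assumptions, not the actual reCAPTCHA implementation.

# Illustrative sketch only: a simplified model of the reCAPTCHA idea,
# not Google's implementation. A known control word verifies the user
# is human; agreement among users transcribes the unknown scanned word.
from collections import Counter

AGREEMENT_THRESHOLD = 3  # hypothetical: matching answers required to accept

class Challenge:
    def __init__(self, known_word, unknown_image_id):
        self.known_word = known_word              # already-digitized control word
        self.unknown_image_id = unknown_image_id  # scanned word OCR failed on
        self.votes = Counter()                    # user guesses for the unknown word

    def submit(self, known_answer, unknown_answer):
        """Return True if the user passes the human check; record their guess."""
        if known_answer.strip().lower() != self.known_word.lower():
            return False                          # failed the control word: guess not counted
        self.votes[unknown_answer.strip().lower()] += 1
        return True

    def consensus(self):
        """Return the agreed transcription once enough users concur, else None."""
        if not self.votes:
            return None
        word, count = self.votes.most_common(1)[0]
        return word if count >= AGREEMENT_THRESHOLD else None

# Example: three users independently read the unreadable scan as "minimum".
c = Challenge(known_word="upon", unknown_image_id="scan_0042")
for guess in ("minimum", "minimum", "minimum"):
    c.submit("upon", guess)
print(c.consensus())  # -> "minimum"

The point of the sketch is simply that each human answer doubles as a unit of useful work, which is what lets hundreds of millions of users act, in effect, as a massively parallel transcription machine.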

As is quantum computing. Even though Richard Feynman proposed the novel idea of quantum computers only in the 1980s, the first, very rudimentary, quantum machines already exist today.

Almost all of Feynman’s groundbreaking ideas are slowly becoming everyday reality. We may not have usable nanocomputers just yet, but nanotechnology and nanomaterials are slowly but steadily entering the world as we know it. And as for writing entire encyclopedias on pinheads, “just for fun”? The smallest book in the Library of Congress today is “Old King Cole,” measuring 1/25” x 1/25”, or roughly 1 square millimeter. That is about the size of the period at the end of this sentence.
