
2014-03-28

Plenty of room at the bottom

Richard Feynman must have been one of those rare visionaries who predict the future based not on mere creativity of the mind, but on creativity firmly grounded in subject-matter expertise. To him, miniaturization of processes and tools was the logical next step for resolving the shortcomings of existing technology – such as the slowness and bulkiness of computers in the 1950s. While most avant-garde scientific thinkers would have stopped at imagining miniaturization at the micro-level, Feynman went a step further: why not explore the molecular scale of matter? Decades before Eric Drexler popularized the term “nanotechnology”, Feynman’s seminal 1959 lecture “There’s Plenty of Room at the Bottom” presented nano-scale miniaturization as a logically inevitable development in computing, chemistry, engineering, and even medicine.

Feynman’s timescale predictions were at once overly optimistic and overly pessimistic. On the one hand, he expected nanoscale miniaturization of data storage, say, to be feasible as early as the 1970s and a commonplace reality by the year 2000. On the other hand, some of the applications he believed would require a sub-microscopic scale – such as computers capable of face recognition – already exist, and are widely deployed by law enforcement and social networks alike.

As for Feynman’s idea of writing the whole of human knowledge onto media as small as pinheads (by his estimate, everything mankind had ever recorded in books would fit into a pamphlet-sized object, readable through an electron microscope improved by a factor of 100), it is not the size of the media that currently poses the problem. The issue humanity is facing is, firstly, the exploding amount of its aggregate knowledge and, secondly, the challenge of digitizing it. In 1959, according to Feynman, the Library of Congress housed 9 million volumes. Today, its holdings include 151 million items in 470 languages, with millions added each year.

The vast majority of all this information has not yet been digitized – and here we come to the second problem. It takes thousands of human volunteers to transfer old books into digital form, and the pace of the volunteer-run Project Gutenberg shows how time-consuming the current practice of transcribing books really is, even when aided by continually advancing OCR software: since 1971, Project Gutenberg has digitized a mere 42,000 public-domain books. Crowdsourcing promises a possible quantum leap in the acceleration of knowledge digitization. reCAPTCHA, an ingenious reverse application of CAPTCHA (the technology used to authenticate human users online by requiring them to identify distorted signs or words as a condition of access to a software function or service), harnesses the unwitting efforts of 750 million computer users worldwide to digitize, every year, 2.5 million books that cannot be machine-read. This solution, by the way, is just another development Feynman imagined way ahead of his time: parallel computing.
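To make the reverse-CAPTCHA trick concrete, here is a minimal sketch of the pairing logic in Python. The names, the in-memory data stores, and the three-vote threshold are all invented for illustration; this is not the actual reCAPTCHA implementation. Each challenge couples a control word whose text is already known with a word that OCR failed to read: typing the control word correctly is what proves the user is human, and their reading of the unknown word is banked as a vote toward its digitization.

```python
import random
from collections import Counter

# Hypothetical stores; the real service runs on distributed infrastructure.
KNOWN_WORDS = {"img_001": "plenty", "img_002": "bottom"}  # control words with verified text
UNKNOWN_SCANS = {"scan_913": [], "scan_914": []}          # unreadable scans -> votes so far

def make_challenge():
    """Pair one control word (answer known) with one unknown scan, in random order."""
    control = random.choice(list(KNOWN_WORDS))
    unknown = random.choice(list(UNKNOWN_SCANS))
    pair = [control, unknown]
    random.shuffle(pair)
    return pair

def grade(challenge, answers, votes_needed=3):
    """Authenticate on the control word; treat the other answer as a digitization vote."""
    answered = dict(zip(challenge, answers))
    for image_id, answer in answered.items():
        if image_id in KNOWN_WORDS and answer.strip().lower() != KNOWN_WORDS[image_id]:
            return False  # failed the control word: reject, discard everything
    # The user passed the human test; record their reading of the unknown scan.
    for image_id, answer in answered.items():
        if image_id in UNKNOWN_SCANS:
            votes = UNKNOWN_SCANS[image_id]
            votes.append(answer.strip().lower())
            best, count = Counter(votes).most_common(1)[0]
            if count >= votes_needed:          # enough independent users agree:
                KNOWN_WORDS[image_id] = best   # promote to verified text
                del UNKNOWN_SCANS[image_id]    # and stop asking about it
    return True
```

Note the asymmetry in the design: a user who types the control word correctly gets through even if their guess at the scanned word is wrong; it is agreement among several independent users, not any single answer, that turns guesses into accepted text – a few words at a time, millions of times a day.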

The same goes for quantum computing. Even though Feynman proposed the idea of a quantum computer only in the early 1980s, the first (and very rudimentary) quantum machines are already in existence today.

Almost all of Feynman’s groundbreaking ideas are slowly becoming everyday reality. We may not have usable nanocomputers just yet, but nanotechnology and nanomaterials are slowly but steadily entering the world as we know it. And as for writing entire encyclopedias on pinheads, “just for fun”? The smallest book in the Library of Congress today is “Old King Cole”, measuring 1/25” x 1/25” (1/25 of an inch is just over a millimeter, so roughly 1 square millimeter). That is the size of the period at the end of this sentence.

2014-03-18

To the end of the world and back


Remember the time when every company had a number to call, and a live person on the other side of the line? A person who would simply answer your call and deal with whatever issue you had, or connect you to the person who could help you best? Remember how the system was later “improved”, so that you had to listen to and wade through touch-tone menus before being connected to a live person? Remember how that person would then seamlessly morph into a talking algorithm, walking you through the same well-ordered steps of instructions whether you were calling for the first time or the fifteenth, just so that you could finally get the person off the scripted dialog? Remember how the live-yet-automated human being was then seamlessly replaced by a machine, one that listened to what you repeated after it and answered with pre-recorded messages? And remember how that machine also disappeared, replaced by email and chat, with no number to call at all? And then email and chat disappeared too, and with them any remote interaction with something resembling a human being: the problems you experienced were now to be dealt with by thick manuals and automated online or software troubleshooters. But what happens if the machine fails, and the algorithm takes you around the block in circles? An infinite loop with no way out, the doom and damnation of an impersonal cyberspace.

It is DIY taken to the extreme: We, the company, are not responsible for any issues you may be experiencing. We just aren’t. You cannot contact us to complain either. Maybe you can find some good-hearted human beings who went through the same predicament, figured their way out, and are now willing to share that knowledge. Spend a few days – weeks – months – in cyberspace, maybe you will find them. Let the search for the Golden Fleece begin.

And then a miracle happens – when you are lost enough, and desperate enough, and have wailed long enough, a human being sent by the machine-god suddenly appears, like a specter out of cyberspace. He kindly takes you by the hand and leads you out of darkness. Glorious humanity!

And now, after a long hiatus, we are back to our regular broadcast. Welcome back.