Pages

2017-12-01

Winning Formula M

Whoever thought that Bernie Ecclestone’s departure from Formula One would be the end of an era had absolutely zero perspective on nanotechnology: there is plenty of room at the bottom, as nanotech’s earliest and, to this day, greatest visionary Richard Feynman declared at Caltech in 1959 – at a time when not only many members of today’s six nano-racing teams but even their academic mentors had not yet been born.

For there is an emergent dimension of molecular racing, and its first event took place in Toulouse on April 28-29, 2017, at a Grand Prix race track created by CEMES, the Centre d’Élaboration de Matériaux et d’Etudes Structurales of the Centre National de la Recherche Scientifique. It is not an experimental playing field for silly geeks who never outgrew their dad’s garage: just as video games produced considerable collateral benefits for the “real world” (think drone technologies, or remote surgery), the purpose of molecular machines is to eventually perform useful work on an atomic scale. Nanorobots will be able to manipulate individual atoms or molecules, manufacture amazing new materials or transport substances in a heretofore unknown targeted fashion.

The six race cars starting in Toulouse were 1-2 nm in size and had to be propelled under their own motion, by minimal electric impulses, along a defined track of 150 nm that included two chicanes with 120-degree angles. The winning team completed the course in 90 minutes, at times reaching speeds of up to 300 nm per hour. That was an amazing pace in light of the fact that the organizers had estimated 18 hours rather than 90 minutes as the performance limit.
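
The reported figures are easy to put into perspective with a back-of-the-envelope calculation that uses only the numbers quoted above:

```python
# Back-of-the-envelope check of the nanocar figures quoted above.
track_length_nm = 150        # length of the defined track
winning_time_h = 1.5         # 90 minutes
organizer_limit_h = 18       # the organizers' estimated performance limit
peak_speed_nm_per_h = 300    # reported top speed

average_speed = track_length_nm / winning_time_h            # about 100 nm/h
implied_limit_speed = track_length_nm / organizer_limit_h   # about 8.3 nm/h

print(f"Winner's average speed: {average_speed:.0f} nm/h")
print(f"Speed implied by the 18-hour limit: {implied_limit_speed:.1f} nm/h")
print(f"Peak speed vs. average: {peak_speed_nm_per_h / average_speed:.1f}x")
```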

The winner was neither Mercedes nor Ferrari but a joint venture between Rice University, Houston, TX, and the University of Graz, Austria. Over the two racing days, they moved their cars by scanning tunneling microscope over a distance of 1,000 nm, or 1 micrometer; the runner-up mustered only about one-sixth of that speed. Division of labor proved useful: while the team at Rice produced the molecules, the team at Graz focused on “training,” i.e., examining them by scanning tunneling microscope, observing their motion, and “driving” them in the race. While the Rice chemists benefited from decades of experience in producing an exceptionally well-suited molecule, Graz advanced the mechanics of manipulation by optimizing the interaction between molecule and “road” surface. The sponsors of competing teams also included three major automotive manufacturers: Peugeot, Toyota and Volkswagen. The rules permitted molecular race cars to be propelled by (i) tunneling electrons passing through them, (ii) light, or (iii) nano-magnetic effects. Molecule cars were allowed to be manipulated mechanically to reach the two gold adatoms of the starting line. While it may take a while for nano-race cars to command a significant TV following and commercial ratings, part of their fascination is that the action is, indeed, observable, and it will pave the way for infinitely greater sophistication, as the time needed for those developments may shrink at a speed comparable to miniaturization itself.

2017-11-01

Edible Nanofibers: A Substantially More Cost-Effective Delivery System for Iron

Iron deficiency anemia is a growing public health problem, above all in developing countries. About 2 billion individuals, or about 30 percent of the world’s population, are anemic, many due to iron deficiency, which can lead to developmental disorders in children. The pharmaceutical industry has developed a range of products for iron fortification of foods to provide the body with the element central to oxygen transport through the bloodstream. But cost, bioavailability and other factors impose limitations. Ferrous sulfate, currently a standard treatment for humans, changes the color and taste of foodstuffs in undesirable ways and shows harmful side effects.

Recent nanotechnology research at ETH Zurich pursued a different avenue: edible nanofibers based on whey protein are loaded with iron nanoparticles and delivered as a food supplement in liquid or powder form. Experiments with rats showed that stomach enzymes were capable of dissolving the whey protein nanofibers completely, suggesting that the acidic conditions of the human stomach would likewise release the load of iron nanoparticles with greatly improved bioavailability. The study also tested for the risk of harmful accumulation of nanofibers or nanoparticles within the organism, since whey protein fibers had never before been used in foodstuffs; not a single indication of accumulation or changes in organs was found. That notwithstanding, additional studies will be needed to establish safety for human consumption. Cost-benefit analysis measuring bioavailability and resorption of the iron nanoparticles shows a substantial improvement in nutritional supplementation at a particularly low cost of ingredients and manufacturing.

2017-10-01

Nanovibrations: Controlling the Material Properties of Crystals

Salts and comparable structures consist of ions. In the solid state, these form ion lattices. Vibrations of ion crystals account, inter alia, for heat transfer and the propagation of sound. Controlling those vibrations may enable control of the material properties of crystals. Conceivable innovative applications include thermoelectric nanoelements that could turn human body heat into a power supply for wearable electronic devices, or wafer-thin structures to create sound-proof chambers.

Now, a collaboration between Rutgers University, a center of nanobiology, and the University of Graz, Austria, relying on latest-generation electron microscopy, has made ion lattice vibrations visible for the first time at ultra-high atomic and energetic resolution, in both the spatial and the spectral dimension, surpassing earlier experiments. The ion lattice of a crystal nanocube was made to vibrate by targeting it with an electron beam. Depending on the spot on the cube hit by the electrons, different vibrational modes can be triggered. The stronger the excited vibration, the more energy is drawn from the electron beam, so the individual modes of vibration can be determined from the measured loss of energy.

By scanning the electron beam across the entire sample, the vibrations of the ion lattice could be mapped with spatial resolution on the nanometer scale and at very high frequencies in the terahertz range. The phenomena observed under the electron microscope were then computer-simulated, showing for the first time how the ions vibrate at different locations on the cube. This may open the door to revolutionary developments in steering sound and heat with heretofore unknown precision.
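
Energy loss and vibration frequency are linked by the Planck relation E = hf, which is why phonon losses measured in millielectronvolts correspond to frequencies in the terahertz range. A minimal conversion sketch (the 20 meV figure is an assumed, illustrative value, not one reported by the study):

```python
# Convert an electron energy loss into a lattice-vibration frequency via E = h*f.
PLANCK_H = 6.62607015e-34   # Planck constant, J*s
EV_TO_J = 1.602176634e-19   # joules per electronvolt

energy_loss_meV = 20.0      # assumed, illustrative EELS loss; not a value from the study
energy_loss_J = energy_loss_meV * 1e-3 * EV_TO_J

frequency_THz = energy_loss_J / PLANCK_H / 1e12
print(f"An energy loss of {energy_loss_meV} meV corresponds to ~{frequency_THz:.1f} THz")
# Roughly 0.24 THz per meV, which is why losses of a few tens of meV
# land squarely in the terahertz range mentioned above.
```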

2017-09-03

Printing Life

It has been some time since 3D printers entered our collective consciousness as useful tools for more than toys and demonstration objects. Even firearms, and certainly synthetic prosthetic limbs, can now seemingly be 3D-printed. As ageing societies face increasing shortages of donor organs for transplantation, the use of 3D printing in medicine, perhaps its most obvious and compelling application, is still new but likely to disrupt synthetic biology as few tools have done before. Small wonder that such a method won first prize in 2016 at the international Genetically Engineered Machine (iGEM) competition, an MIT creation. The method enables printing intact tissues, and potentially even entire organs, using 3D bioINK tissue printing technology.

While printing biological material such as cartilage is already established state of the art, printing complex cell tissue still presented notable challenges. These were resolved by printing layers of living cells with a 3D printer into a biocompatible matrix in a petri dish. In the past, hydrogels were used to supply a gelatin-like structure that is only later populated by cells; this “scaffolding” complicates printing and creates unnatural cohesion between cells.

Instead, the students at two Munich universities, Ludwig-Maximilian University and the Technical University of Munich, developed a proprietary “biological ink,” similar to a two-component adhesive, to print living cells directly in 3D. Its main component is biotin, also known as vitamin H or B7, which is loaded onto the cellular surface. The second component, streptavidin, a protein that binds biotin, provides the biochemical adhesive proper. In addition, large proteins were equipped with biotin groups in order to create cross-linking structures. When a suspension of these cells is “printed” into a concentrated solution of the protein components, they form the requisite 3D structure, and the bioINK tissue printer builds up layers of scalable, formable tissue of living cells, ready for transplantation.

2017-08-01

Shepherding Innovation: Two Very Different Models

Innovation as a socio-intellectual phenomenon also reflects the multitude of ways to skin a cat.

In absolute terms, Switzerland has long been recognized as the world’s most innovative country. Of course, the criteria one picks can markedly change the results: counted by patents per capita, Eindhoven, the domicile of Philips, is the innovation capital of the world, followed by San Diego. Israel does not even figure on that list.

The secret to Swiss success has been reliance on cheap and abundant capital including foreign investment, highly selected skilled immigrants, and two world-renowned Federal Institutes of Technology (in Zurich and Lausanne). Over 60 percent of R&D expenditures come from the private sector. The country tops the World Economic Forum’s Global Competitiveness Report, the EU’s Innovation Union Scoreboard, the Global Innovation Index, and patent applications in Europe. But despite comparatively very low taxes for an industrialized country, Switzerland is neither an entrepreneurship hub – its innovation is driven by large and well-established companies, not startups – nor is it known for easy access to venture capital or IPOs. To an extent, it is fair to call the Swiss model of innovation establishment-driven. As such, it is extremely successful and sustainable by any standard.

On the other hand, Israel, a.k.a. Start-Up Nation, a.k.a. Silicon Wadi, a country roughly the size of New Jersey, is world champion at churning out technology at a feverish pace with far more limited resources and infrastructure. It is also world champion in R&D expenditures, clocking 4.3 percent of GDP, almost half of it from foreign investors. It has been called the best country to found a startup and the worst to keep it alive. But it is also the world’s leading model for public-private partnership in innovation.

Take Yissum, the technology transfer vehicle of the Hebrew University. Established in 1964 as a wholly-owned subsidiary of the university, it accounts for almost 10,000 patents and 120 spin-offs. When a patent is registered, the inventing scientists take 40 percent of the patent revenue, 20 percent goes to their lab, and 40 percent to the university. This covers one-tenth of the Hebrew University’s research budget. Long-term research cooperations exist with several multinational enterprises that have established over 320 R&D centers. The downside of market orientation is equally obvious: applied research is prioritized over foundational research, further exacerbating the latter’s lag.

More than 8,000 startups were created in Israel in the last decade. They employ 500,000 individuals. In 2016 alone, some 1,400 new companies were founded. Even though, as everywhere else, the vast majority of startups do not survive, there are at any given time some 6,000 operational startups. The country has no choice but to try new things: its domestic market is too small, and a foothold in foreign markets requires products the consumer has not yet realized a need for.

Venture capital finance also follows its own model (even though only $4.8 bn was raised in 2016). Terra Venture Partners, a private business development fund, operates in an environment Silicon Valley can only dream of: every shekel invested by the fund is matched with six shekels by the state.

The government’s Israel Innovation Authority, by now an agency independent of the Ministry of Economics, funds 2,000 projects from all walks of life, with the exception of foundational and military research. Selected projects are funded with up to 85 percent of their budget requirements, university research with up to 90 percent. The Israel Innovation Authority recovers 35 percent of its investments on average; if it recovered more, the agency would conclude that it had taken insufficient risk. But if a funded company is sold abroad, it must repay three times the amount received.

Not surprisingly, and very much based on the factors described above, the international advantage of Switzerland lies in the health and material sciences, while Israel has become a focal point in data science, alternative energy, and natural resource substitution.

2017-07-01

The Short-lived Fallacy of Biometrics

When MasterCard last year introduced a feature in twelve countries that identifies the payor via fingerprint scanner or selfie, it took one further step toward abandoning the immensely flawed concept of chosen passwords and PINs. Considering the deplorable state of imaginative solutions – the globally dominant password being 123456 and the most-used PIN being 1234 – the move seemed long overdue. Another contributor to security breakdowns is the mushrooming number of “different” password requirements no one can seriously be expected to remember, particularly in combination with a multitude of user names and ever-simplified “forgot password” functions.

HSBC has additionally enabled identification via voice recognition software that verifies some 100 unique speech characteristics, such as speed, vocal tract shape, nasal tone and enough others that the system is said to work even when the user suffers from a cold. Wells Fargo and a range of other banks have enabled log-in via retina scan. Canadian start-up Nymi authenticates individuals through their pulse, taken by a wearable prototype interacting with near-field communication terminals.

While banks and fintechs may be right in concluding that this increases safety beyond passwords, there is no question that biometrics will inaugurate just another round in the perpetual arms race between security and illegitimate access.

Its limitations are increasingly obvious.

At the 2014 Chaos Communication Congress, the hacker Starbug, a.k.a. Jan Krissler, showed that a picture of German minister of defense Ursula von der Leyen, taken with a single-lens reflex camera from a distance of three meters, sufficed to reproduce her thumb print with VeriFinger, a commercially available fingerprint software tool.

Research at Michigan State University developed a simple method of printing pictures of fingerprints on an off-the-shelf printer at a resolution sufficient to fool fingerprint readers, unlocking smartphones and completing transactions via Apple Pay.

The ACLU has shown that selfie scans depend heavily on lighting conditions and may be thrown off by changes in hairdo, ageing or weight.

Background noise and recording issues may interfere with identification by voice recognition.

When hackers accessed 40,000 accounts at Britain’s Tesco Bank and withdrew funds from 9,000 of them, the monetary loss of £2.5 million was the least of it: the incident highlighted what compromised biometric databases would mean. While one may change passwords with a minimum of fuss, not quite the same can be said for getting a new fingerprint or face; here, the method of identification itself is compromised, potentially permanently. It turns out that biometrics may be worse than passwords, and hackers remain notoriously at least one step ahead of the game.

2017-06-17

Bow-tying DNA

Building on research by Caltech’s Paul W.K. Rothemund, it has been known for considerable time that DNA folds up to form nanoscale shapes and patterns. The restiform genetic material of mammals is folded into loops, which enables even distant regions to come into contact. It is read by being dragged through cohesin, a ring of proteins, until a stopper is reached. It has also been known for some time that enhancers amplify and activate genes positioned far away on the DNA thread. This is most probably explained by a precise process of folding back the DNA so that enhancers come into contact with the “right” genes. According to a commonly accepted hypothesis, this folding back happens as a ring of cohesin molecules surrounds the DNA thread at a random location. The thread is pulled through the ring until it reaches a “thick” spot that acts as a stopper. The thickening is caused by a protein named CTCF that attaches itself to the DNA, targeting distant DNA sections for direct contact.
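
The loop-extrusion hypothesis described above is, at heart, an algorithm: a cohesin ring loads at a random position and reels DNA through until it hits a CTCF stopper. A toy simulation, with all positions and stopper sites invented for illustration, makes the mechanism explicit:

```python
import random

# Toy model of loop extrusion: a cohesin ring loads at a random site on a linear
# "DNA" of discrete positions and reels in the thread on both sides until each
# side reaches a CTCF stopper (or the end of the thread).
DNA_LENGTH = 1000                 # arbitrary number of sites
CTCF_SITES = {120, 450, 800}      # invented stopper positions

def extrude(load_position):
    left = right = load_position
    while left > 0 and left not in CTCF_SITES:
        left -= 1                 # reel in DNA from the left
    while right < DNA_LENGTH - 1 and right not in CTCF_SITES:
        right += 1                # reel in DNA from the right
    return left, right            # the extruded loop now spans [left, right]

random.seed(0)
start = random.randrange(DNA_LENGTH)
left, right = extrude(start)
print(f"Cohesin loaded at {start}; loop anchored between {left} and {right}")
```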

Experiments with murine cells showed that cohesin does indeed move along the DNA thread over long distances, with transcription (the “reading” of DNA information) acting as an engine. The process is likely powered by RNA polymerase, the enzyme that carries out transcription and must in any case travel along the DNA to “read” the genetic information in the first place.

2017-05-01

Molecular-sized ball bearings

It turns out that functionality we know from everyday life also exists at the molecular nano-level. Measurements by mass spectrometers showed some time ago that precisely thirteen boron atoms can assemble into a particularly stable form called a magic cluster. It has a flat structure and consists of two concentric rings: an inner ring of three boron atoms and an outer ring of ten.

Some years ago, the theoretical chemist Thomas Heine, now at the University of Leipzig, predicted that these two rings could rotate against each other almost without friction and without affecting the overall stability of the molecule in any way. The result is a molecular-sized ball bearing that permits a virtually frictionless counter-rotating movement of the atomic rings.
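
The geometry behind this prediction is easy to sketch: three atoms evenly spaced on an inner ring, ten on an outer ring, and a rotation of one ring relative to the other. The radii below are assumed placeholders rather than measured bond lengths; the point is only the rotational symmetry that leaves the flat, two-ring arrangement unchanged as the rings turn:

```python
import math

def ring(n_atoms, radius, offset_deg=0.0):
    """Planar (x, y) coordinates of n_atoms spaced evenly on a circle."""
    return [(radius * math.cos(math.radians(offset_deg + i * 360.0 / n_atoms)),
             radius * math.sin(math.radians(offset_deg + i * 360.0 / n_atoms)))
            for i in range(n_atoms)]

INNER_RADIUS = 0.08   # nm, assumed placeholder value
OUTER_RADIUS = 0.16   # nm, assumed placeholder value

inner = ring(3, INNER_RADIUS)                        # three-atom inner ring
outer = ring(10, OUTER_RADIUS)                       # ten-atom outer ring
inner_turned = ring(3, INNER_RADIUS, offset_deg=40)  # inner ring rotated by 40 degrees

# Turning one ring against the other changes only their relative angle, not the
# flat, two-ring arrangement itself: the geometric intuition behind a nearly
# barrier-free internal rotation.
print(len(inner) + len(outer), "atoms in total")     # 13, the size of the magic cluster
```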

Proving this prediction was not free of challenges and could be done only through spectroscopic measurements with a free-electron laser at the Fritz Haber Institute in Berlin. Commercially available lasers are unsuitable for this proof because it requires extremely intense laser radiation in a narrowly defined wavelength range. By measuring the infrared spectrum and performing accompanying quantum-mechanical calculations, it was possible to confirm the functionality of the molecular ball bearing.

This is one of the first practical indications that quantum effects may be put to targeted use as part of the functionality of molecular systems. Even though applications are still in the distant future, their promise and potential are immense. It comes as no surprise, then, that the 2016 Nobel Prize in Chemistry was awarded for discoveries in the area of molecular machines.


2017-04-01

The Scissor Ladies

I cannot claim that I ever was a fan of Edward Scissorhands. Not enough dark imagination, I guess. But, alas, it brought back the concept of genetic scissors, which now has bookmakers giving amazing odds on a Nobel Prize in Chemistry for Emmanuelle Charpentier and UC Berkeley’s Jennifer Doudna for their development of CRISPR/Cas9 technology. It does not create Frankenscissors such as Edward’s but is a precision tool for the manipulation of gene sequences by slicing DNA molecules at a chosen spot. Cas9 (or CRISPR-associated protein 9) is an RNA-guided DNA endonuclease enzyme associated with the CRISPR adaptive immune system, which consists of segments of prokaryotic DNA containing Clustered Regularly Interspaced Short Palindromic Repeats. Each repeat is followed by short segments of ‘spacer DNA’ from previous exposure to a bacteriophage virus or plasmid.

Now that I have reliably lost 99.99% of my gentle readers, let me mention the rare consensus that the peculiar acronym describes “genome editing,” which, according to MIT Technology Review, is the most important discovery since the dawn of biotechnology in the 1970s. For Science, it was the Breakthrough of the Year 2015. Neither comes as a surprise if one takes a look at the perplexing list of awards and honors of the all-European scientist Charpentier. Her idea was to combine Cas9, which was already known, with an RNA molecule that, together with tracrRNA, guides it to its target. It is how bacteria cut foreign viral genes out of their own DNA – the equivalent of a surgeon operating on herself.

Genome editing is considerably more precise than classical genetic engineering, where to this day nobody can predict where exactly newly installed genes will end up – a major point of contention for critics of genetic engineering. CRISPR/Cas9, on the other hand, permits precise changes to the genome by directing the DNA “scissor” Cas9 to the desired location by means of a so-called guide RNA. The resulting break in the DNA can subsequently be repaired in different ways. To produce a guide RNA, one needs to know the sequence of the targeted gene or DNA segment; a set can be produced within a day for about $20, making the technology accessible to any lab. Most molecular biologists would never have dared to dream of a tool like CRISPR/Cas9: it is simple, fast, precise and cheap, and it can cut and modify the genome of any organism – bacteria, plants, animals, humans. It was inspired by an antiviral defense mechanism bacteria use to eliminate viral segments of DNA from their own double helix. If conventional gene technology is open-heart surgery, genome editing is the equivalent of a minimally invasive procedure. It permits scanning DNA and cutting out parts of it selectively. It has been used to render malaria mosquitoes harmless and to edit plant genomes, and it may be used to eliminate and replace parts of human DNA, leading to new therapeutic methods for genetic diseases.
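
Because guide design starts from nothing more than the target sequence, the first step can be sketched in a few lines: scan the forward strand for an NGG motif (the PAM that the commonly used SpCas9 variant requires next to its target site) and take the 20 bases immediately upstream as a candidate protospacer. The sequence below is invented for illustration; a real workflow would also check the reverse strand and score off-target risks.

```python
import re

def candidate_guides(dna: str):
    """Return 20-nt candidate protospacers sitting immediately 5' of an NGG PAM."""
    dna = dna.upper()
    guides = []
    for match in re.finditer(r"(?=([ACGT]GG))", dna):     # every NGG motif, overlaps included
        pam_start = match.start(1)
        if pam_start >= 20:
            guides.append(dna[pam_start - 20:pam_start])  # the 20 bases upstream of the PAM
    return guides

# Invented example sequence, not a real gene.
sequence = "ATGCGTACCGTTAGCTAGGCTTACGATCGGATCCGTAGCTAGCTAAGGTTACG"
for guide in candidate_guides(sequence):
    print(guide)
```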

Regulatory issues abound. Because CRISPR/Cas9 can be used in different ways, biologists distinguish three types of edit: Type I performs a single-point edit of one base within the DNA sequence, exchanging one letter for another. In Type II, just a few letters are “edited.” Type III, finally, introduces a larger piece of foreign DNA into the cut. Type I and Type II edits do not result in a genetically modified organism, because they produce point mutations of the kind that also occur naturally through crossing and recombination; such mutations happen all the time and are the engine of evolution. Conventional breeding methods also change the genome: mutagenesis, for example, exposes plants to chemicals or radiation, causing untold mutations. Nobody knows where they occur, and most are harmful. Yet such plants are considered “natural” and are marketed without additional safety checks anywhere. Why should plants with a surgically rather than randomly altered genome be at a legal or regulatory disadvantage?

In contrast to the U.S. and Canada, the EU and Switzerland place greater importance on the procedure generating a plant or animal than on the final product. If elements prepared outside an organism are introduced into its genome, the organism is considered genetically modified under EU law. This would include all plants or animals edited with CRISPR/Cas9 or another genome editing technology. But there is a catch: the material introduced by this technology, namely the guide RNA and the Cas9 enzyme, is no longer present in the final product. Such organisms do not differ from their conspecifics – how is the use of a technology to be controlled when its application cannot be proven? Even the Swiss Commission of Experts on Biosafety expressed reservations about strict regulation of such organisms, finding that, since the products cannot generally be distinguished from others, they should be treated as equivalent in terms of consumer safety. While the first plants with edited genomes are being brought to market in the U.S., the EU has yet to decide how to classify such organisms.

CRISPR/Cas9 can help answer questions about the origin of congenital diseases and develop new drugs. It permits the reproduction of disease-causing genetic mutations, be it in an animal model, in plants, or in organoids. But it could also be used for eugenics, by eliminating disfavored traits from the genome – arguably the most politically, morally, and ethically sensitive issue in genetics since the 1930s. Recent discoveries of certain genetic roots of crime make scientists nervous. Harvard’s George McDonald Church, who optimized CRISPR/Cas9 for human genome engineering, wants to influence the development of ova and sperm, and Chinese scientists, including Junjiu Huang at Sun Yat-sen University in Guangzhou, have manipulated embryos affected by genetic disorders; in 2015, beta-thalassemia, a blood disorder, was first edited in human zygotes. While Charpentier is adamantly opposed to editing the human embryo, there is no compelling reason other than existing EU regulation to abstain from exploring further uses of this technology. Her colleague Doudna, also opposed to such experiments, appears more resigned and realistic about the likelihood of such developments. If opposition to stem cell research is any guide, the futility of attempting to estop science and research becomes self-evident. Public debate is a useful necessity given the lightning speed of developments in genome engineering. Now that genetic interventions have become possible, one should remember the adage that if something can be done, it likely will be. Regulating rather than prohibiting in principle appears to hold the more realistic promise.

2017-03-10

The evolutionary game theory of conditional cooperation

In my mathematician’s incarnation, and in my loitering around the Institute Vienna Circle, I came across the Austrian Karl Sigmund, the 2003 Gauss Lecturer. Along with John Maynard Smith (the “Etonian communist”) and the American George Robert Price, he is at least one parent of evolutionary game theory, a fascinating branch of mathematics that applies game theory to biology or, rather, to evolving populations of life forms. Its tools are valuable to my interest in crowd phenomena. It defines a mathematical framework of contests, strategies and analytics for Darwinian competition. There are indeed mathematical criteria to predict the resulting prevalence of competing strategies, and evolutionary game theory establishes a rational basis for altruistic behaviors within the Darwinian process. Unlike classical game theory, it centers on the dynamics of strategy change, whose determinants are not just the competing strategies themselves but, more importantly, the frequency with which those strategies occur within a given population.
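
That frequency dependence lends itself to a compact illustration: a minimal sketch of replicator dynamics for a generic two-strategy cooperate/defect game, with invented payoff numbers, in which a strategy grows only insofar as its payoff beats the population average:

```python
# Replicator dynamics for a two-strategy game: each strategy's share grows in
# proportion to how much its payoff exceeds the population average, so the
# outcome depends on strategy frequencies, not on the strategies alone.
# Payoff numbers are invented for illustration (a standard cooperate/defect dilemma).
PAYOFF = {
    ("C", "C"): 3.0, ("C", "D"): 0.0,   # row player's payoff against the column player
    ("D", "C"): 5.0, ("D", "D"): 1.0,
}

def step(x_c, dt=0.01):
    """Advance the cooperator frequency x_c by one Euler step of the replicator equation."""
    x_d = 1.0 - x_c
    fitness_c = PAYOFF[("C", "C")] * x_c + PAYOFF[("C", "D")] * x_d
    fitness_d = PAYOFF[("D", "C")] * x_c + PAYOFF[("D", "D")] * x_d
    mean_fitness = x_c * fitness_c + x_d * fitness_d
    return x_c + dt * x_c * (fitness_c - mean_fitness)

x = 0.9                      # start with 90 percent cooperators
for _ in range(2000):
    x = step(x)
print(f"Cooperator share after many generations: {x:.3f}")  # decays toward 0 with these payoffs
```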

Humans have superior adaptive abilities. They are far better than apes at imitation. And they are receptive to praise and reprimand. Man is the perfect pet – domesticated like no other species, and by ourselves. It is not unusual for a species to practice selective breeding on its own kind. Sexual selection has been well known since Darwin. An oft-cited example is the peacock’s tail: it does not facilitate survival but serves only to impress the female of the species – although recent research puts even that in question. A male characteristic and the female preference for it spread across the population together.

Of course, domestication requires more than selective breeding for just any characteristic; the characteristic must also have an economic benefit. What is the economic utility of humans? They do not contribute commodities such as wool or eggs, but they do contribute services. There are other service animals as well: horses serve as means of transportation, dogs as hunting tools or alarm devices. What purpose do humans serve? They serve as partners of other humans. A partner is someone amenable to assistance, but only on condition of reciprocity.

Indeed, the human readiness to cooperate with partners, our “conditional cooperation,” is a salient characteristic. And it is unique. While bees or ants also cooperate on a large scale, they do so only within their beehive or anthill, that is to say, with their own siblings. Humans are unusual in being able to cooperate with individuals to whom they are not related.

Such cooperation is grounds for the success of our species. There appear to be no natural limits to the degree of our communal enterprise. That seems odd: should evolution not favor creatures that primarily maximize their own interests?

2017-02-01

Carbon nanotube transistors of the future


Moore’s Law has become an endangered species. It is no longer a safe assumption that the number of transistors in processors doubles every two years; Intel abandoned this expectation at the ISC 2016 conference in Frankfurt, Germany. Processor performance barely increases even though chips become more energy efficient, and the growth of transistor density has slowed remarkably. For a considerable time, the industry has also been looking for alternatives to silicon, as the physical limits to its useful further miniaturization approach.
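
For orientation, the expectation being abandoned is simple to state numerically; a minimal sketch of the doubling rule, with an invented starting transistor count purely for illustration:

```python
# Moore's Law as a rule of thumb: transistor counts double roughly every two years.
# The starting count below is an invented, illustrative figure.
start_transistors = 2_000_000_000   # assumed count for a present-day processor
doubling_period_years = 2

for years in (2, 4, 6, 10):
    projected = start_transistors * 2 ** (years / doubling_period_years)
    print(f"After {years:2d} years: {projected:,.0f} transistors (if the law still held)")
```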

Enter carbon nanotubes. The University of Wisconsin-Madison has reached a major milestone in manufacturing transistors out of carbon nanotubes that leave their silicon cousins far behind in terms of conductivity. An increase of 90 percent in current was measured by pitting a 140 nm carbon nanotube transistor against a 90 nm silicon p-channel MOSFET. The carbon nanotube FET did well even with the handicap of a larger node, but the jury is still out on a comparison with current 14 nm FinFET or Tri-Gate transistors.

Carbon nanotubes consist of rolled-up layers of carbon a single atom thick. They are among the most highly conductive materials known. Carbon-based transistors might support a five-fold increase in performance compared to silicon while cutting energy consumption to one-fifth of present levels. But manufacturing had always run into problems with minuscule metal contaminations that massively affected the conduction of electricity, and thus performance. New technology developed at UW-Madison resolved this by filtering out impurities with the assistance of a polymer, leaving 99.99 percent pure carbon layers.

This permitted a new production process that places carbon nanotube transistors on a 1x1 inch wafer. But to render the technology interesting and affordable for commercial use, this process needs to become scalable to produce larger wafers and higher transistor densities. First experiments appear to have been successful. Still, several years will likely pass before carbon nanotubes will appear in commercially available processors, since chip manufacturers will also need to invest heavily in adjusting their manufacturing processes and factories.

RAM might take this leap a lot sooner. Fujitsu is already working on commercialization of nano-RAM (NRAM) based on carbon nanotubes and plans to initiate mass production based on licensed technology of Nantero, the world leader in carbon nanotube electronics, by 2018.

Following the concept, if not the formula, of Moore’s Law, somewhere between 20 and 50 years out, further miniaturization of IT elements and devices (but also their transition into, and fusion with, biotechnology and transhumanism) will be dominated by a transition from software into hardware, a blending of the two so complete that it will literally become impossible to know where the boundary between them ought to be drawn. And that is precisely the point: perhaps there should not be a boundary, especially if the next step or, rather, quantum leap should be, as some conjecture, DNA computing.

2017-01-02

Wearable air conditioning? Passively cooling the human body with nanoPE


The era of global warming creates days when virtually any textile cover may seem too much. That is because all garments retain heat, even if the effect depends on the type and thickness of the filaments used. The research group of Yi Cui at Stanford has now developed a new, low-cost material that turns passive cooling into a practical method of thermal management. It might even result in large energy savings by reducing expenditure on air conditioning. Their fabric, a nanoPE (nanoporous polyethylene) textile, may be thought of as a variation on the theme of saran wrap. It consists of two layers of opaque, nanoporous polyethylene film interspersed with a layer of cotton mesh for strength and thickness. It lets the infrared radiation emitted by the human body pass through virtually unhindered, producing a cooling effect unavailable with conventional fabric. It is also permeable to water vapor, which makes it an effective and scalable textile for personal thermal management. What is not entirely clear yet is how nanoPE behaves under UV radiation, in heavy use, and after multiple laundry cycles.

The polyethylene material, already commonly used in battery manufacturing, has a particular nanostructure with pores 50 to 1,000 nm in diameter – a spacing that allows the body’s infrared radiation to pass yet causes sunlight to scatter and reflect upon contact with the surface. The nanoPE fabric reflected 99% of visible light, in contrast to commercially available polyethylene, which lets 80% of visible light pass through. The resulting skin surface temperature was 2.7°C (4.9°F) lower than under ordinary cotton fabric.
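
The physics behind the pore-size argument can be sketched with Wien’s displacement law: thermal radiation from skin peaks near 9-10 μm, far larger than the 50-1,000 nm pores, while visible light at roughly 400-700 nm is comparable to the pores and therefore scatters. A minimal sketch of that comparison (the skin temperature is an assumed round value):

```python
# Wien's displacement law: the wavelength at which thermal radiation peaks.
WIEN_B = 2.897771955e-3     # Wien's displacement constant, m*K

skin_temp_K = 307.0         # roughly 34 degrees Celsius, an assumed skin temperature
peak_wavelength_nm = WIEN_B / skin_temp_K * 1e9

pore_range_nm = (50, 1000)          # pore diameters quoted above
visible_range_nm = (400, 700)       # approximate visible-light wavelengths

print(f"Body radiation peaks near {peak_wavelength_nm:.0f} nm (about 9.4 micrometers),")
print(f"far larger than the {pore_range_nm[0]}-{pore_range_nm[1]} nm pores, while visible light")
print(f"at {visible_range_nm[0]}-{visible_range_nm[1]} nm is comparable to the pores and is scattered.")
```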

Because few studies have looked into engineering the radiation characteristics of textiles, this research opens new avenues to passive temperature management that requires no outside energy source, merely the tuning of materials to dissipate or trap infrared radiation.