Pages

2016-04-01

The inutility of mass surveillance, backdoors, and limits to encryption


Terrorist attacks such as the ones in Paris on November 13, 2015 raise the specter of prevention through increased surveillance of digital communication. But as with most knee-jerk reactions to dramatic unforeseen events, the call for increased mass surveillance of the internet has very little practical utility. On the contrary, if the events in Paris taught one lesson, it is that mass surveillance failed resoundingly. It could not be otherwise: for decades, analysis in hindsight has consistently demonstrated one thing, and one thing only: regardless of how much computing power agencies like the NSA, GCHQ, DGSE, BND, ISNU and many others dedicate to the task, the current state of algorithms does not enable sorting massive data for relevance. Therefore, the entire plethora of intercepted and stored data can be made sense of only after the fact and is more often than not of limited use in the prevention of attacks. Neither 9/11 nor the 2005 London Underground bombings nor the Madrid train bombings nor the Paris Bataclan attack nor any other could be prevented through digital mass surveillance. Admittedly, there are claims – for the most part not verifiable – of a great number of thwarted attempts. But proof of success in mass surveillance has not been adduced in a single case of an individual prosecuted for attempted crimes. While this fits the adage that law enforcement needs to succeed 100% of the time while perpetrators only need to succeed once, it does not change one fundamental insight: despite the fact that many attackers in prominent terrorist incidents had previously shown up on agency radar screens, they were still able to carry out their nefarious plans.


While government-sponsored digital mass surveillance has yet to show any tangible benefit in securing its purported goal – our safety – revelations of the scale of data collection have prompted calls for individual and organized resistance, as exemplified by Bernard Harcourt’s Exposed: Desire and Disobedience in the Digital Age, Frank Pasquale’s The Black Box Society: The Secret Algorithms That Control Money and Information, and others, and by the discussions they spurred.


The NSA is, of course, entirely aware of the inutility of mass surveillance. Documents revealed by Edward Snowden show the agency’s struggle with its skyrocketing data overload, forcing it to apply artificial limitations in order to be able to conduct any meaningful analysis at all. The reason why it – and its peers abroad – keeps striving to collect even more Big Data lies in every bureaucracy’s ambition to increase power and status through size and budget as an end in itself. Originally, the NSA was established to collect Soviet signals intelligence to prevent another Pearl Harbor – a comparatively simple task with few points of communication and data flows to monitor, but one that is not infinitely scalable.


Much discussed lately, backdoors for encryption software are nonetheless technological nonsense: a backdoor for the NSA would invite claims by any number of governments, starting with China, yet nothing would prevent “evildoers” from using proprietary, backdoor-free encryption software. While the ostensible targets of surveillance would continue to avoid detection, commercial enterprises with business and industrial secrets to protect that seek secure communication would be exposed – to competitors as well as to their governments. Industrial and fiscal espionage has ultimately blurred the line between public and commercial interest, and, honi soit qui mal y pense, government surveillance is of considerably greater value in commercial exploitation of the findings than most justifiable security purposes could rationalize.


Quite the contrary: in a world with secure end-to-end encryption freely available, law enforcement and intelligence agencies would be compelled to perform their actual job and focus on targeted surveillance and traditional police work – which is superior to mass surveillance not only in terms of effectiveness but also as a matter of cost-benefit analysis: eroding the privacy rights of the population at large and of business needs to be factored in as a cost and balanced against the purported benefits of information gleaned through surveillance. This is also the only way to confront low-tech, asymmetric responses to computational and cryptologic superiority.


It is ultimately the same calculus that applies to the cost-benefit analysis of military responses to terrorism: eliminating threats by drone may be one thing, but “avenging” (though certainly not forestalling) the deaths of a few thousand people over time by military expeditions costing trillions and causing hundreds of thousands of wartime casualties, be they servicemen, civilians, or sympathizers, shows a crass imbalance. After all, given that an American citizen is roughly 10,000 times more likely to die in a motor vehicle accident than in a terrorist attack – and still disproportionately more likely to die in a plane crash – nobody has yet proposed to shut down private or public transportation, the only “reliable” way to avoid those casualties.


There is also the aspect of constitutional rights: although in many countries governments and their agencies are, for one legal reason or another, precluded from surveilling their own citizens without meeting an at least somewhat demanding standard of probable cause, no country currently applies a variant of the “fruit of the poisonous tree” doctrine to information obtained through intergovernmental cooperation – information that, had the receiving government conducted the warrantless search itself, would have been ruled inadmissible at best and, at worst, a violation of the constitutional rights of the surveilled. The timing is difficult to predict, but it is foreseeable that the absurdity of obtaining from a foreign government information that your own constitution and laws prohibit you from collecting will meet with increasing judicial rejection.


As I have argued elsewhere, the matter boils down to our standards of accounting for intangible values. For the sake of avoiding hypocrisy, this has to include putting a realistic valuation on human life (as is done by every wrongful death award and certainly in terms of medical insurance) but also on the quality thereof. Maximum protection lacks economic viability – and is unlikely to be total.


Aside from such considerations, futility is also evident in the way mass surveillance perpetuates the failed logic of the arms race: the Manhattan Project delivered an impressive proof-of-concept demonstration in Hiroshima and Nagasaki – some historians argue, in fact, that Little Boy and Fat Man were used primarily to deter the Soviet Union, as a military purpose against Imperial Japan was difficult to substantiate given that the nation was already very near collapse and surrender. Yet it took only until 1949 for the first Soviet device to see the light of day, while the largest hydrogen bomb ever detonated, Tsar Bomba, with a yield of 50 megatons – many times the combined amount of all explosives used in World War II – was set off in 1961. The role of foreign intelligence in closing this knowledge gap is irrelevant, as it has remained a constant across time, and digital technologies tend to seep into commercial availability faster than most intelligence operations would take to set up and harvest. In the cat-and-mouse game with the “dark forces” of hackers and foreign governments, any U.S. lead will be equally temporary, but the effects of this technological arms race on the remaining quality of human life and on individual rights and liberties will not be.


Confronted with the need to price, for balancing purposes, the goods we lose and gain through temporary governmental monopolization of technology, we may find this challenge has one desirable consequence: it may force us to fundamentally rethink our accounting and valuation treatment of intangibles – not only constitutional rights but also assets such as the environment. While no one will dispute that any rational, non-random and non-arbitrary valuation will be difficult and depend on complex consideration and balancing of the purposes at hand, part of the reason may be that these goods are, at least with the benefit of hindsight, priceless.

2016-03-01

Metamaterials: the case of glassy carbon microlattices


It is often difficult to improve on a definition provided by everyone’s prima-facie source on science and technology. The Free Encyclopedia describes metamaterials thus:

Metamaterials (from the Greek μετά-, meaning "beyond") are smart materials engineered to have properties that have not yet been found in nature. They are made from assemblies of multiple elements fashioned from composite materials such as metals or plastics. The materials are usually arranged in repeating patterns, at scales that are smaller than the wavelengths of the phenomena they influence. Metamaterials derive their properties not from the properties of the base materials, but from their newly designed structures. Their precise shape, geometry, size, orientation and arrangement give them their smart properties, capable of manipulating electromagnetic waves – by blocking, absorbing, enhancing, or bending waves – to achieve benefits that go beyond what is possible with conventional materials. … Metamaterial research is interdisciplinary and involves such fields as electrical engineering, electromagnetics, classical optics, solid state physics, microwave and antenna engineering, optoelectronics, material sciences, nanoscience and semiconductor engineering.

I commented on microlattice structures very recently, but the topic takes no rest and rather continues to heat up: German scientists at the Karlsruhe Institute of Technology have now built the smallest human-made truss to date, with single strut lengths below 1 μm and strut diameters of 200 nm, made not of a composite but of mere glassy carbon – five times smaller than previously known, comparable metamaterials. These dimensions achieve hitherto unprecedented strength-to-density ratios that, at the most preliminary consideration, suggest applications in electrodes, filters, or optical elements. Now, it is fairly common knowledge that light, partially hollow materials such as bone and wood may be found just about everywhere in nature, so proof of concept is old news. These materials typically combine high load capacity with low weight and serve as a biomimetic model for man-made mechanical metamaterials with a structure planned and produced to have mechanical or optical properties that unstructured solids cannot match as a matter of principle. Think stealth features that direct light, sound or heat around objects, auxetic materials that react counter-intuitively to pressure and shear, or nanomaterials featuring high specific strength (force per unit area, per unit density). The metamaterial now created in Karlsruhe by 3D laser lithography is extremely stable, and its strength in relation to its specific density – up to 3 GPa – is surpassed only by diamond.
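The strength-to-density comparison can be made concrete with a back-of-envelope sketch. The ~3 GPa figure is taken from the text above; the densities and the steel and diamond comparison values are assumed, order-of-magnitude handbook numbers, not from the article:

```python
# Back-of-envelope strength-to-density comparison.
# The ~3 GPa strength is the figure quoted in the text; densities and
# the steel/diamond comparison values are assumed, illustrative numbers.

def specific_strength(strength_gpa: float, density_g_cc: float) -> float:
    """Strength-to-density ratio in MPa per (g/cm^3)."""
    return strength_gpa * 1000.0 / density_g_cc

materials = {
    # name: (compressive strength in GPa, density in g/cm^3)
    "glassy-carbon microlattice": (3.0, 1.0),   # density assumed ~1 g/cm^3
    "structural steel":           (0.5, 7.9),
    "diamond":                    (60.0, 3.5),
}

for name, (s, d) in sorted(materials.items(),
                           key=lambda kv: -specific_strength(*kv[1])):
    print(f"{name:>26}: {specific_strength(s, d):8.0f} MPa/(g/cm^3)")
```

Even with such rough inputs, the ordering is clear: only diamond sits above the lattice, which is the point of the Karlsruhe result.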

2016-02-09

Ultralight Micromaterials: The Example of Microlattice

Materials science conjures up the building blocks of every vision of a Brave New World. We have become used to miraculous materials such as aerogels, carbon aerogels, or black and white graphene – graphene being the world’s first 2-D material – that are easily predicted to become foundations of substantial game-changing applications. One of them is ultralight metallic microlattice, a structure developed by HRL Laboratories and commissioned by DARPA.
 
One hundred times lighter than Styrofoam, yet capable of withstanding the loads of aerospace technology, microlattice consists of 99.99% air. The rest is nickel, 100 nm thick, forming tubes with walls a thousand times thinner than a human hair. The material can be balanced on a dandelion blossom without damaging its delicate structure. It is an ultralight (<10 mg/cc) material in a 3D open cellular structure, somewhat – vaguely – similar to bone structure. So it is at the same time, relatively speaking, ultrastrong.
 
Proof of concept for this material has been around since 2011. It is, of course, notoriously difficult to manufacture in large quantities. It is made by coating a template, created by self-propagating polymer waveguide prototyping, with electroless nickel plating (a nickel-phosphorus alloy) before the template is etched away, leaving a microlattice of interconnected hollow rods. The reason why the dandelion blossom holds up so well is that the material’s density is ≤ 0.9 mg/cc (by comparison, silica aerogels have a density of 1.0 mg/cc, while aerographite is claimed to be only 0.2 mg/cc and also has remarkable mechanical, electrical and optical properties as a nanowall structure built out of carbon nanotube material that is extremely robust under strong deformations, with applications especially in electrodes). Microlattice also has strong elastomeric properties: it recovers almost completely (98%) from compression exceeding 50% strain and absorbs energy similarly to elastomers. Classical Humpty-Dumpty experiments have shown an egg packed in microlattice to survive a 25-story drop undamaged – without having been wrapped in a substantial quantity of material. Now the process needs to be brought out of the lab and into commercial applications that hold immense promise:
 
The value of structural components is determined by weight and energy absorption. Fuel efficiency of any vehicle, especially in aerospace, is determined by the same, which explains Boeing’s interest as well as that of GM and Raytheon. It may also serve applications from thermal insulation to battery electrodes to catalyst support, to acoustic, vibration, and shock energy damping. Generally speaking, its main purpose may be in structural reinforcement and heat transfer, and there is speculation that the scope of its potential uses may render microlattice technology “one of the most significant inventions in history” comparable to lasers and LCD screens.
 
Manufacture is similar to photolithography in that a two-dimensional mask defines the structure of the initial template, where a self-forming waveguide process permits the formation of templates for large, free-standing and scalable 3D lattice structures in 10-100 seconds rather than hours, as in the traditional stereolithography used in 3D printing. The template is coated by electroless nickel plating, but the process is not restricted to nickel. Micro-truss nanocrystalline nickel hybrids were first explored in 2008 by testing optimal strut geometry in uniaxial compression.
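The “99.99% air” figure above follows directly from the quoted densities. A minimal sketch, assuming bulk nickel at 8.9 g/cm³ (a standard handbook value, not stated in the text) against the microlattice density of 0.9 mg/cm³ quoted above:

```python
# Deriving the "99.99% air" figure from the quoted densities.
# Bulk nickel at 8.9 g/cm^3 is an assumed handbook value; the
# microlattice density of 0.9 mg/cm^3 is the figure quoted above.

BULK_NICKEL_MG_CC = 8.9e3  # 8.9 g/cm^3 expressed in mg/cm^3

def air_fraction(lattice_density_mg_cc: float,
                 solid_density_mg_cc: float = BULK_NICKEL_MG_CC) -> float:
    """Volume fraction of the lattice that is empty space."""
    return 1.0 - lattice_density_mg_cc / solid_density_mg_cc

print(f"air fraction at 0.9 mg/cc: {air_fraction(0.9):.2%}")
```

At 0.9 mg/cc the solid occupies about one ten-thousandth of the volume, which is exactly where the 99.99% claim comes from.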

2016-01-02

Big Data, No Data, and Metadata

Near-universal consensus has it that, sometime around 9/11, the world passed from the Age of Aquarius, through some vernal equinox noticed by few, straight into the Age of Big Data. That passage brought about a seismic epistemological shift. To be sure, any links to the events surrounding 9/11 are coincidental: the real reason for this transition was the coming of age of enabling technology. To that extent, whatever one may think of 9/11 conspiracy theorists conjecturing that the tragic events were brought on, or at least aided and abetted, by someone or something other than al-Qaeda: the acts and omissions after 9/11 point to its utility for the advancement of surveillance, for which political and civic tolerance could not otherwise have been expected. Very much the same goes for the speed with which authorizing legislation was whipped through the formalities of democratic rule-making processes, purportedly under the influence of those events. But such a pounce on an opportunity of this magnitude must no doubt have been incubated for quite some time, in lockstep with deep insights into the progress of technology and entirely independent of whatever statistically unpredictable Black Swan event would one day trigger its sudden political viability. It did not matter which event or who or what would cause it. That, in all likelihood, was indeed not known, and it did not need to be known. It was, in Donald Rumsfeld’s immortal dictum, one of the “known unknowns.”


The extent of the surveillance capabilities that became available as a result to the U.S. government and to the other “Five Eyes” – Canada, the UK, Australia and New Zealand – which do not spy on each other (at least in theory, and at least for now) and otherwise cooperate to secure the endurance of occidental civilization, would have been every totalitarian regime’s wet dream. Perhaps one day cloning technology may enable resurrection from Feliks Dzierżyński’s or Lavrentiy Beria’s DNA, or Heinrich Himmler’s, Erich Mielke’s or Klemens Metternich’s, not to mention Joseph Fouché’s or Philip II’s or Kang Sheng’s or Pol Pot’s – and I predict the greatest possible unanimity among all these distinguished oppressors of the unrestrained human mind: no government can ever be secure in power without surveillance. So, does it really matter whether the chicken or the egg came first – whether surveillance technology eviscerates pre-existing democratic structures and aspirations (those uncontrollable by the powers that be) or whether it is created by a totalitarian ambition already thus entrenched? The bottom line remains crisp and clear: information is power.


2015-12-01

The Cancer of Cost of Cancer


Understanding the explosion of health care costs isn’t rocket science. It is actually a trivial warm-up exercise for the mind: as we have eliminated, at least statistically speaking, a growing number of once significant causes of death, more people reach an age where they become more cancer-prone because their genetic coding turns less stable, their immune system deteriorates, and mutations from many causes become less manageable. Malignant neoplasms, also known as cancers, are the result. Although they account for only about 13% of overall deaths, and the likelihood of dying of cancer does not increase with longevity – on the contrary, it declines significantly past a tipping point of roughly 55 years of age – those afflicted today not only stand better chances of successful therapeutic intervention but are also less likely to die of other causes, clearing the way for longer survival at significantly higher cost.

As the duration of our ability to control malignant tumor cells increases with the advent of various pharmaceutical and other therapeutic approaches, there is still no cure in sight and cancer has remained a highly complex systemic disease requiring multiple complex balancing acts.

These balancing acts quite often are to be taken more literally than society would like. A small group of specialty pharmaceuticals accounts for roughly 25 percent of all drug-related spending. A cancer patient in need of a few targeted therapies can quickly run up a monthly tab in the five digits. In 2007, the overall cost of cancer – including treatment and indirect mortality expenses (such as lost productivity in the workplace) – was estimated at $226.8 billion. Pressure is likely to increase and will demand answers to questions society has long shirked as unethical – or, more to the point, politically inconvenient to almost everyone: What price human life? Who is to decide? And how to allocate the expense?

2015-11-21

Our Small World

At various times I have shown nano-level imaging results on this blog (July 2014, February 2015, April 2015), not least because nanotechnology is part of my life and research interests. But in many ways, micro-level visuals are at least as fascinating and much closer to our everyday comprehension and experience. Nikon's annual Small World Photomicrography Competition highlights some of the best examples – not discernible to the naked eye, yet not quite in the realm of the abstract.
See more photographs here and here.

2015-11-01

Commercialization strategies for the nanotechnology sector in the United Kingdom

Background  
In recent years, interest in nanotechnology has exploded across research communities and industries as varied as pharmacology, materials science, life sciences, ICT, transportation, even defense and space exploration. The global nanomaterials market of 11 million tons is currently valued at €20bn and employs 300,000-400,000 people in Europe alone. Nano-enabled products reached €200bn worldwide in 2009 and were expected to reach €2 trillion by 2015.
The emergence and exponential growth of a disruptive technology call for a systematized approach by government, assessing the national situation, product potential, and growth prospects through a national nanotechnology strategy. The Royal Society report of 2004, the 2002 and 2010 Department for Business, Innovation & Skills (BIS) reports, the Engineering and Physical Sciences Research Council (EPSRC) report of 2006 and the 2010 Nanotechnology Mini-Innovation and Growth Team (Mini-IGT) report, as well as the Technology Strategy Board’s Strategy 2009-2012, tried to address these issues, as did EU bodies. But already by 2015, their recommendations and assessments seemed outdated and in need of reformulation. This overview presents current views on nanotechnology in the UK along with issues to be considered in formulating a national nanotechnology strategy.

2015-10-04

Archie Cochrane and the challenge of evidence-based medicine: intelligent system design and back

Already in his lifetime, Scottish physician Archibald Leman Cochrane earned a reputation as the ‘father of evidence-based medicine.’ Hailed as the gold standard in the life sciences, the notion of ‘evidence-based medicine’ leads us to the legitimate question – typical of ideas whose time has come – whether and how there could ever actually be such a thing as non-evidence-based medicine. And if we look around, it becomes quickly and depressingly clear that there is. And plenty of it, too. Whether one calls it conjectural, speculative or intuitive medicine, it may have merits where the evidence is just not in, or studies have yet to be completed (say, typically: to be financed…), and it may explain why medicine retains the aura of an art as well as of a science, but it falls short of scientific method and of the reliability, verifiability and reproducibility of results.

Evidence-based medicine needs to rely on data and on diligent and cautious interpretation thereof. Oxford has dedicated an interdisciplinary Centre to the concept. Of course, there are plenty of areas where that is far from reality. Stanford’s John Ioannidis, a leading scholar of meta-research innovation, has claimed that every second medical study is either wrong or seriously flawed. Part of the problem is that studies yielding negative results are frequently not published at all. That which is published represents the tip of an iceberg – cherries picked carefully by greatly interested parties. But negative results contain a wealth of information that may well point to different lines of reasoning or research, and they can be almost as valuable as positive results. Even for published studies, quite often nothing is disclosed, or at least published, about potential side effects. That leads to significant distortions of the picture in certain instances.

So long as publication is the primary incentive fueling careers in science research (sometimes resulting in a downright bizarre number of co-authors, far exceeding motion picture credits), passing that hurdle is all that matters in reality. Conflicts of interest abound. Pharmaceutical companies, having invested billions of dollars in clinical studies, have no interest in missing out on a return on their investment. They have decidedly no interest in publishing results that would appear to cast doubt on a product. As it stands, the conflicts of interest are crass and would not be tolerated in any other setting: the pharmaceutical companies that pay for clinical studies also assess them and decide on the publication of their results. The egregious bias inherent in this situation becomes clear if we substitute car manufacturers and let them control tests of quality and product safety. Think of Volkswagen assessing the emissions of its own engines, or Ford assessing the rear-end collision safety of the Ford Pinto’s fuel tank.

It would have to be a long-term objective to have clinical studies evaluated by independent institutions far beyond institutional review boards – however unlikely this is to happen in the foreseeable future. But just as in a lot of other areas where disruptive ideas press to the fore because their time has come – think capital markets transparency, public integrity, freedom of information, executive liability and private equity – corporations had better get in front of, or at least on, the bandwagon of a momentum that will not be stopped. Change is seldom desirable for established interests, but where it has gathered the gale force of nature, it is not sensible to resist it, especially where it is actually also good business. In fact, the long term is the only sound way of doing business. Of course, here we go again, presupposing that the actors prioritize the long-term interests of the institution over short-term interest in bonuses and performance reviews. That, too, is principally a matter of flawed systems design.

Evidence-based analyses have lately been used in multifunctional nanotechnologies and risk analysis, not least at Columbia University and the University of Michigan.

Evidence-based medicine optimizes choices by applying qualitative criteria of epistemological strength to the empirical evidence relied upon – that is, meta-analyses, systematic reviews and randomized controlled trials – and by making all data available for future reference and additional analysis. It is itself an offshoot of a greater conceptual category, evidence-based design, one of the key elements of intelligent systems design. Systems design and regulation provide a powerful methodological momentum for the advancement of quality protocols and the improvement of decisional quality.

2015-09-19

A technology-based theory of Social Enterprise

A specter has been haunting the globe since the sixties – the specter of Social Enterprise. All the powers of old have entered into a holy alliance to exorcise this specter. Like all ‘holy alliances’ of the past, they will fail, not least because social enterprise offers genuine alternatives for communities that are served neither by commercial providers nor by government programs producing results meaningful to their needs. Social enterprises are created on the fault lines between market forces and charity, between the necessity of averting unrest and discontent and the imperatives of continuing to create the opportunity on which a knowledge-driven economy depends. Venture philanthropy, CSR and charitable foundations, classic and non-traditional cooperatives have experimented for some time with a broad range of initiatives, ranging from microcredit to social direct investment to many versions of small-scale, targeted help with self-help. Sometimes their underlying objectives include social engineering and sometimes they do not.

But overall, social enterprise is here to stay. It is no threat to for-profit operations, nor does it erode their realistically perceived market potential. Only at the fringes of predatory capitalism (think sub-subprime mortgages, pre-paid credit cards accruing fees greater than their ‘credit line,’ bad-faith insurance policies or violations of implied consumer trust) need it be expected that semi-charitable service to less than privileged target audiences, sometimes to the ultra-poor, would somehow interfere with bona fide generation of profit and other aspects of commercial shareholder value.  

While the traditional political left finds itself at a loss for effective solutions – its reflexive reach for Big Government and entitlement spending having failed resoundingly over considerable parts of the twentieth century – isolationist concepts such as opposition to free trade in North America, or quasi-isolationist proposals fashionable in Europe such as taxing machines or hard drives as “job killers,” do not resolve anything. Social enterprise attempts to put technology and education to differently prioritized uses that ultimately aim to put existing tools and concepts to smarter use, with greater holistic value creation for the public interest. It is not synonymous with volunteerism.

Political and philosophical views about the proper role of government or the optimal size and funding of the welfare state may well differ across many cultures and political flavors, and they will continue to make for die-hard election issues. What cannot differ, though, because it operates on the very principles of a free market and in the organizational, structural and legal forms and tools of private enterprise, is the use of imaginative entrepreneurial means to effect impactful social change. Whether one wants to proclaim socialism dead or simply hibernating, the conclusion “if you can’t beat them, join them – they must be doing something right” has been demonstrated more than impressively across all the success stories of Eastern Europe, as well as by the legacy of Deng Xiaoping, the man who transitioned China from cultural-revolutionary chaos to capitalist juggernaut (yet was also responsible for the 1989 massacre at Tiananmen Square), with all its remaining environmental and civil-libertarian issues. But one of Deng’s aphorisms is particularly memorable in this context: “Poverty is not socialism. To be rich is glorious.”

2015-08-02

Urban Mining in Smart Cities

As cities continue to grow rapidly and multiply across developed and emerging markets, managing their use of materials becomes a task of vital importance for civilization. Cities are no longer built ‘for eternity’ like Rome, not even for centuries, and even landmark protection in many places centers on just a few prewar legacies. Most other urban structures existing today can be expected to have a useful life cycle of 50 to 100 years, trending downward. Rising price levels for land in highly developed central locations and the stunning opportunity costs of suboptimal use dictate a timely replacement of unproductive real estate in the wake of new technologies, but also to accommodate synergies created by nearby developments or improved transportation infrastructure. Yet, at the same time, only a rapidly dwindling percentage of the debris left by demolition finds its way to out-of-town landfills, and that signals hope: in some environmentally conscious and well-administered cities around the globe, at least half – and potentially much more – of demolition waste can be recycled and finds recurring use in new construction and industrial production. This represents a material value of at least six, but often seven or even eight, figures per demolished object. With more than half the planet’s population already dwelling in urban habitats, these dimensions are motivation enough to refer to the recovery of such percentages of the ‘gross demolition product’ as ‘urban mining’ and to redefine ‘waste’ as an increasingly precious asset.

So it is appropriate to look at urban real estate as a kind of warehouse of raw materials, containing sand, gravel, concrete, metals, wood and synthetics – all commodities in limited supply. Yes, even wood can be recycled; it happens right here in the U.S., and even for upscale uses such as custom furniture. Carrying a ton of concrete to a waste dump may run a tab of between $15 and $30, while the same ton may be sold as recycled material for up to $8. Also because of increased public awareness and vocal opposition to environmental toxicity, there is increasing scarcity of landfill acreage in the proximity of major urban settlements. An even more valuable sector of urban debris is e-waste, usually shipped to places like China or India but also to much of West Africa, particularly Liberia, Ghana, Côte d’Ivoire, Benin and Nigeria. Metal deposits in e-waste are 40 to 50 times richer in target elements than ore extracted from mines. The science of hydrometallurgy now provides advanced technologies for refining pure metal fractions out of mixed raw-material resources in relatively simple processes, safely and at comparatively very low cost.

Take urban copper, for example. It is an amazing resource to tap: an Austrian study showed that electrical household appliances contain 6.4 percent of the copper used in the country and cars account for 8.7 percent, but real estate holds 84 percent of “urban copper.” On average, today’s buildings house seven times more metal than buildings contained a century ago. About half of naturally occurring copper has already been used up in our existing urban structures. Given the speed of urban development, it does not take a mathematician to predict a time when the market price of copper might qualify it as a ‘precious metal.’ Despite recent findings on the Pacific seabed limiting global dependency on deposits in China, recycling rare earth elements will remain a vital necessity at least until they are obviated by technological innovation. The U.N. Environment Programme estimates that 50 million tons of e-waste are generated annually around the world – a rapidly rising tide. Its StEP initiative (Solving the E-waste Problem) found that global production of electronic items used 320 tons of gold and more than 7,500 tons of silver annually, amounting to an aggregate value of $21 billion – of which just 15 percent is currently being recycled. Urban mining is arguably the highest-yielding alternative source available to us to reduce reliance on metal imports.
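The $21 billion aggregate can be sanity-checked against the tonnage figures. A sketch, with metal prices as assumed circa-2013 spot values (not taken from the StEP report):

```python
# Rough sanity check of the ~$21bn StEP figure for gold and silver used in
# annual electronics production. Prices are assumed circa-2013 spot
# values, not taken from the report; the tonnages are the quoted figures.

TROY_OZ_PER_TONNE = 32_150.7  # troy ounces per metric ton

def metal_value_usd(tonnes: float, usd_per_troy_oz: float) -> float:
    """Market value of a given tonnage at a given spot price."""
    return tonnes * TROY_OZ_PER_TONNE * usd_per_troy_oz

gold   = metal_value_usd(320,   1_400)  # ~320 t of gold
silver = metal_value_usd(7_500, 24)     # ~7,500 t of silver

print(f"gold:   ${gold / 1e9:5.1f}bn")
print(f"silver: ${silver / 1e9:5.1f}bn")
print(f"total:  ${(gold + silver) / 1e9:5.1f}bn")  # in the ballpark of $21bn
```

Under those price assumptions the two metals come to roughly $20bn, so the quoted aggregate is plausible on its face.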

Which raises the question of its practicalities and of forward-looking facilitation.


No different in a sense from the treasure maps of old, only a lot more accurate and reliable, Building Information Modeling (BIM) has an importance for urban mining that is difficult to overestimate: it can considerably reduce the cost of efficient waste separation. When networked, it is also a potential source of the data feeds needed to create a register of “mineable” materials integrated in any existing structure built with BIM. Today, every demolition object has to be evaluated individually – often based on almost unsubstantiated estimates. This relates to the potential of BIM technology as an abacus does to a modern computational device: BIM can provide almost every level of detail required (or justifiable in terms of cost). It is the contracting industry’s functional equivalent of grocers’ “farm-to-fork” databases and the increasingly detailed mandated accounting for every step a product takes to its final consumer.

Material recycling also significantly reduces a demolition’s CO2 footprint. For example, recycled concrete consumes substantially less gray energy than the production of primary concrete. Additionally, about two thirds of trucking runs to landfill sites may be saved. An almost trivial cliché says that one man’s trash may be another man’s treasure, but the first part of that equation is no longer sustainable in urban environments expected to house twice the current human population by the end of the 21st century. Turning every increasingly networked and recorded ‘smart city’ into a proverbial perpetuum mobile through tightly integrated, institutionalized urban mining is not only possible given available data and technology but a vital necessity.