The Cancer of the Cost of Cancer

Understanding the explosion of health care cost isn’t rocket science. It is actually a trivial warm-up exercise for the mind: as we eliminate, at least statistically speaking, a growing number of once-significant causes of death, more people reach an age at which they become more cancer-prone, because their genetic coding turns less stable, their immune system deteriorates, and mutations arising from many causes become less manageable. Malignant neoplasms, also known as cancers, are the result. They account for only about 13 percent of all causes of death, and the likelihood of dying of cancer does not increase with longevity – on the contrary, it declines significantly past a tipping point of roughly 55 years of age. Yet those afflicted today not only stand better chances of successful therapeutic intervention but are also less likely to die of other causes, clearing the way for longer survival at significantly higher cost.

Although the duration of our ability to control malignant tumor cells keeps increasing with the advent of various pharmaceutical and other therapeutic approaches, no cure is in sight, and cancer remains a highly complex systemic disease requiring multiple complex balancing acts.

These balancing acts quite often are to be taken more literally than society would like. A small group of specialty pharmaceuticals accounts for roughly 25 percent of all drug-related spending. A cancer patient in need of a few targeted therapies can quickly run up a monthly tab in the five digits. In 2007, the overall costs of cancer — including treatment and indirect mortality expenses (such as lost productivity in the workplace) — were estimated at $226.8 billion. Pressure is likely to increase and will demand answers to questions society has long shirked as unethical – or, more to the point, as politically inconvenient to almost everyone: What price human life? Who is to decide? And how to allocate the expense?


Our Small World

At various times, I have shown nano-level imaging results on this blog (July 2014, February 2015, April 2015), not least because nanotechnology is part of my life and research interests. But in many ways, micro-level visuals are at least as fascinating and much closer to our everyday comprehension and experience. Nikon's annual Small World Photomicrography Competition highlights some of the best examples of this realm – not discernible to the naked eye, yet not quite in the realm of the abstract either.
See more photographs here and here.


Commercialization strategies for the nanotechnology sector in the United Kingdom

In recent years, interest in nanotechnology has exploded across research communities and industries as varied as pharmacology, materials science, life sciences, ICT, transportation, even defense and space exploration. The global nanomaterials market of 11 million tons is currently valued at €20bn and employs 300,000-400,000 people in Europe alone. Nano-enabled products reached €200bn worldwide in 2009 and were expected to reach €2 trillion by 2015.
The emergence and exponential growth of a disruptive technology call for a systematized approach by government, assessing the national situation, product potential, and growth prospects through a national nanotechnology strategy. The Royal Society report of 2004, the 2002 and 2010 Department for Business, Innovation & Skills (BIS) reports, the 2006 Engineering and Physical Sciences Research Council (EPSRC) report and the 2010 Nanotechnology Mini-Innovation and Growth Team (Mini-IGT) report, as well as the Technology Strategy Board’s Strategy 2009-2012, tried to address these issues, as did EU bodies. But already by 2015, their recommendations and assessments seem outdated and in need of reformulation. This overview presents current views on nanotechnology in the UK along with issues to be considered in formulating a national nanotechnology strategy.


Archie Cochrane and the challenge of evidence-based medicine: intelligent system design and back

Even in his lifetime, Scottish physician Archibald Leman Cochrane earned a reputation as the ‘father of evidence-based medicine.’ Hailed as the gold standard in the life sciences, the notion of ‘evidence-based medicine’ leads us to the question – typical of ideas whose time has come – whether and how there could ever actually be such a thing as non-evidence-based medicine. And if we look around, it becomes quickly and depressingly clear that there is. And plenty of it, too. Whether one calls it conjectural or speculative or intuitive medicine, it may have merits in matters where the evidence is just not in, or where studies have yet to be completed (say, typically: to be financed…), and it may explain why medicine retains the aura of an art as well as of a science. But it falls short of scientific method, reliability, verifiability and reproducibility of results.

Evidence-based medicine needs to rely on data and on diligent and cautious interpretation thereof. Oxford has dedicated an interdisciplinary center to the concept. Of course, there are plenty of areas where that is far from reality. Stanford’s John Ioannidis, a leading scholar of meta-research innovation, has claimed that every second medical study is either wrong or seriously flawed. Part of the problem is that studies yielding negative results are frequently not published at all. What is published represents the tip of an iceberg – cherries carefully picked by highly interested parties. But negative results contain a wealth of information that may well point to different lines of reasoning or research and can be almost as valuable as positive results. Even for positive results, quite often nothing is disclosed, or at least published, about potential side effects. In certain instances, that leads to significant distortions of the picture.

So long as publication is the primary incentive fueling careers in scientific research (sometimes resulting in a downright bizarre number of co-authors, far exceeding motion picture credits), passing that hurdle is all that matters in reality. Conflicts of interest abound. Pharmaceutical companies, having invested billions of dollars in clinical studies, have no interest in missing out on a return on their investment – and decidedly no interest in publishing results that would appear to cast doubt on a product. As it stands, the conflicts of interest are crass and would not be tolerated in any other setting: the pharmaceutical companies that pay for clinical studies also assess them and decide on the publication of their results. The egregious bias inherent in this situation would become clear if we substituted car manufacturers and had them control tests of quality and product safety. Think of Volkswagen assessing the emissions of its engines, or Ford assessing the rear-end collision safety of the Ford Pinto’s fuel tanks.

It would have to be a long-term objective to have clinical studies evaluated by independent institutions far beyond institutional review boards – however unlikely this is to happen in the foreseeable future. But just as in many other areas where disruptive ideas press to the fore because their time has come – think capital markets transparency, public integrity, freedom of information, executive liability and private equity – corporations had better get in front of, or at least on, a bandwagon that will not be stopped. Change is seldom desirable for established interests, but where it has gathered the gale force of nature, it is not sensible to resist it, especially where it is actually also good business. In fact, the long term is the only way of doing business. Of course, here we go again, presupposing that the actors prioritize the long-term interests of the institution over short-term interests in bonuses and performance reviews. That, too, is principally a matter of flawed systems design.

Evidence-based analyses have lately been applied to multifunctional nanotechnologies and their risk assessment, not least at Columbia University and the University of Michigan.

Evidence-based medicine optimizes choices by applying qualitative criteria of epistemological strength to the empirical evidence to be relied upon – that is, meta-analyses, systematic reviews and randomized controlled trials – and by making all data available for future reference and additional analysis. It is itself an offshoot of a greater conceptual category, evidence-based design, one of the key elements in intelligent systems design. Systems design and regulation provide a powerful methodological momentum for the advancement of quality protocols and the improvement of decisional quality.
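The core mechanics of the meta-analyses mentioned above can be sketched in a few lines. The following is a minimal, illustrative fixed-effect meta-analysis using inverse-variance weighting – the standard textbook approach, not any specific study's method – with entirely hypothetical trial data:

```python
# Fixed-effect meta-analysis via inverse-variance weighting: each study
# contributes an effect estimate and its standard error, and the pooled
# estimate weights each study by 1/SE^2, so more precise studies count more.

def pool_fixed_effect(studies):
    """studies: list of (effect_estimate, standard_error) tuples."""
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5  # SE of the pooled estimate
    return pooled, pooled_se

# Three hypothetical trials of one intervention (effects as log odds ratios):
trials = [(-0.30, 0.12), (-0.15, 0.20), (-0.25, 0.10)]
effect, se = pool_fixed_effect(trials)
```

The pooled effect lands closest to the most precise trial – which is exactly why unpublished negative studies distort the picture: weighting can only be as honest as the set of studies that reaches the table.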


A technology-based theory of Social Enterprise

A specter has been haunting the globe since the sixties – the specter of Social Enterprise. All the powers of old have entered into a holy alliance to exorcise this specter. Like all ‘holy alliances’ of the past, they will fail, not least because social enterprise offers genuine alternatives for communities that are really served neither by commercial providers nor by government programs producing results meaningful to their needs. Social enterprises are created on the fault lines between market forces and charity, between the necessity of averting unrest and discontent and the imperative of continuing to create the opportunity on which a knowledge-driven economy depends. Venture philanthropy, CSR and charitable foundations, classic and also non-traditional cooperatives have experimented for some time with a broad range of initiatives, ranging from microcredit to social direct investment to many versions of small-scale, targeted help with self-help. Sometimes their underlying objectives include social engineering and sometimes they do not.

But overall, social enterprise is here to stay. It is no threat to for-profit operations, nor does it erode their realistically perceived market potential. Only at the fringes of predatory capitalism (think sub-subprime mortgages, pre-paid credit cards accruing fees greater than their ‘credit line,’ bad-faith insurance policies or violations of implied consumer trust) need it be expected that semi-charitable service to less than privileged target audiences, sometimes to the ultra-poor, would somehow interfere with bona fide generation of profit and other aspects of commercial shareholder value.  

While the traditional political left finds itself at a loss for effective solutions and its reflexive reach for Big Government and entitlement spending has failed resoundingly over considerable parts of the twentieth century, isolationist concepts such as opposition to free trade in North America or quasi-isolationist proposals fashionable in Europe such as taxing machines or hard drives as “job killers” do not resolve anything. Social enterprise attempts to put technology and education to differently prioritized uses that ultimately all aim to put existing tools and concepts to a smarter use with greater holistic value creation for the public interest. It is not synonymous with volunteerism.

Political and philosophical views about the proper role of government or the optimum size and funding of the welfare state may well differ across many cultures and political flavors, and they will continue to make for die-hard election issues. What cannot differ, though, because it operates on the very principles of a free market and in the organizational, structural and legal forms and tools of private enterprise, is the use of imaginative entrepreneurial means to effect impactful social change. Whether one wants to proclaim socialism dead or simply hibernating, the conclusion “if you can’t beat them, join them – they must be doing something right” has been demonstrated impressively across all the success stories of Eastern Europe as well as by the legacy of Deng Xiaoping, the man who transitioned China from cultural-revolutionary chaos to capitalist juggernaut (yet was also responsible for the 1989 massacre at Tiananmen Square), with all its remaining environmental and civil-libertarian issues. One of Deng’s aphorisms is particularly memorable in this context: “Poverty is not socialism. To be rich is glorious.”


Urban Mining in Smart Cities

As cities continue to grow rapidly and multiply across developed and emerging markets, managing their use of materials becomes a task of vital importance for civilization. Cities are no longer built ‘for eternity’ like Rome, not even for centuries, and even landmark protection in many places centers on just a few prewar legacies. Most other urban structures existing today may be expected to have a useful life cycle of 50 to 100 years, trending downward. Rising price levels for land in highly developed central locations and the stunning opportunity costs of suboptimal use dictate the timely replacement of unproductive real estate in the wake of new technologies, but also to accommodate synergies created by nearby developments or improved transportation infrastructure. Yet, at the same time, only a rapidly dwindling percentage of the debris left by demolition finds its way to out-of-town landfills, and that signals hope: in some environmentally conscious and well-administered cities around the globe, at least half – but potentially a lot more – of demolition waste can be recycled and finds recurring use in new construction and industrial production. That recoverable fraction represents a material value of at least six, but often seven or even eight, figures – per demolished object. With more than half the planet’s population already dwelling in urban habitats, these dimensions are motivation enough to refer to the recovery of such percentages of the ‘gross demolition product’ as ‘urban mining’ and to redefine ‘waste’ as an increasingly precious asset.

So it is appropriate to look at urban real estate as a kind of warehouse of raw materials, containing sand, gravel, concrete, metals, woods and synthetics – all commodities in limited supply. Yes, even wood can be recycled; it happens right here in the U.S., even for upscale uses such as custom furniture. Carrying a ton of concrete to a waste dump may run a tab of $15 to $30, while the same ton may be sold as recycled material for up to $8. Also because of increased public awareness and vocal opposition to environmental toxicity, landfill acreage in the proximity of major urban settlements is increasingly scarce. An even more valuable sector of urban debris is e-waste, usually shipped to places like China or India but also to all of West Africa – particularly Liberia, Ghana, Côte d’Ivoire, Benin and Nigeria. Metal deposits in e-waste can be 40 to 50 times richer in target elements than ore extracted from mines. The science of hydrometallurgy now provides advanced technologies for refining pure metal fractions out of mixed raw materials in relatively simple processes, safely and at comparatively very low cost.

Take urban copper, for example. It is an amazing resource to tap into: an Austrian study showed that electrical household appliances contain 6.4 percent of the copper used in the country, cars account for 8.7 percent, but real estate takes up 84 percent of “urban copper.” On average, today’s buildings house seven times more metal than buildings contained a century ago. About half of naturally occurring copper has already been used up in our existing urban structures. Given the speed of urban development, it does not take a mathematician to predict a time when the market price of copper might qualify it as a ‘precious metal.’ Despite recent findings on the Pacific seabed limiting global dependency on deposits in China, recycling rare earth elements will remain a vital necessity at least until they are obviated by technological innovation. The U.N. Environment Programme estimates that 50 million tons of e-waste are generated annually around the world – a rapidly rising tide. Its StEP initiative (Solving the E-waste Problem) found that the global production of electronic items used 320 tons of gold and more than 7,500 tons of silver annually, amounting to an aggregate value of $21 billion – of which just 15 percent is currently being recycled. Urban mining is arguably the highest-yielding alternative source available to us for reducing reliance on metal imports.
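The StEP figures above survive a back-of-the-envelope sanity check. The sketch below uses assumed, roughly 2015-era spot prices (hypothetical inputs, not from the report); the point is the order of magnitude, and the gap to the cited $21 billion is plausibly covered by copper, palladium and other metals not counted here:

```python
# Rough valuation of annual gold and silver flowing into electronics,
# per the StEP tonnage figures; prices per troy ounce are assumptions.

TROY_OZ_PER_TON = 32_150                 # troy ounces per metric ton (approx.)
gold_tons, silver_tons = 320, 7_500      # StEP annual tonnage figures
gold_price, silver_price = 1_200, 16     # USD/troy oz (assumed 2015-era prices)

gold_value = gold_tons * TROY_OZ_PER_TON * gold_price
silver_value = silver_tons * TROY_OZ_PER_TON * silver_price

total_billions = (gold_value + silver_value) / 1e9      # ~ $16bn from Au + Ag
recycled_billions = total_billions * 0.15               # ~15% currently recycled
```

Even under these conservative assumptions, the unrecycled remainder runs well into eleven figures annually – the ‘mine’ that urban mining proposes to work.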

Which raises the question of its practicalities and of forward-looking facilitation.

No different in a sense from treasure maps of old, only a lot more accurate and reliable, Building Information Modeling (BIM) has an importance for urban mining that is difficult to overestimate: it can considerably reduce the cost of efficient waste separation. When networked, it is also a potential source of the data feeds needed to create a register of “mineable” materials integrated into any existing structure built with BIM. Today, every demolition object has to be evaluated individually – often based on barely substantiated estimates. That relates to the potential of BIM technology as an abacus does to a modern computational device: BIM can provide almost every level of detail required (or justifiable in terms of cost). It is the contracting industry’s functional equivalent of grocers’ “farm-to-fork” databases and the increasingly detailed mandated accounting for every step a product takes to its final consumer.
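The register of “mineable” materials described above can be pictured as a very simple data structure. The sketch below is a hypothetical illustration – the class and field names are invented, and a real implementation would ingest actual BIM exports (e.g. IFC files) rather than hand-written dictionaries:

```python
# Minimal sketch of a city-wide register of mineable materials: each
# BIM-modeled structure reports per-material quantities, which the
# registry aggregates for demolition and recycling planning.

from dataclasses import dataclass, field

@dataclass
class Structure:
    building_id: str
    materials_kg: dict  # e.g. {"copper": 12_000, "concrete": 4_500_000}

@dataclass
class UrbanMiningRegistry:
    structures: list = field(default_factory=list)

    def register(self, s: Structure) -> None:
        self.structures.append(s)

    def total(self, material: str) -> float:
        """City-wide recoverable stock of one material, in kilograms."""
        return sum(s.materials_kg.get(material, 0) for s in self.structures)

registry = UrbanMiningRegistry()
registry.register(Structure("block-a", {"copper": 12_000, "concrete": 4_500_000}))
registry.register(Structure("block-b", {"copper": 8_500, "steel": 950_000}))
```

Once such a registry exists and is networked, the individual per-object evaluation that today relies on estimates becomes a query.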

Material recycling also significantly reduces a demolition’s CO2 footprint. Recycled concrete, for example, consumes substantially less gray energy than the production of primary concrete, and about two thirds of trucking runs to landfill sites may be saved. An almost trivial cliché says that one man’s trash may be another man’s treasure, but the first part of that equation is no longer sustainable in urban environments expected to house twice the current human population by the end of the 21st century. Turning every increasingly networked and recorded ‘smart city’ into a proverbial perpetuum mobile through tightly integrated, institutionalized urban mining is not only possible given available data and technology but a vital necessity.


Nanoengineering molecular-size layers and individual cells – cancer included

This week, Newsweek reported another ‘breakthrough in the search for a cure for cancer.’

Getting patients’ hopes up only to see them overtaken by the speed of the disease, or disavowed outright weeks or months or years later, has become the central disheartening reality of oncology. And while proof of concept is now in hand for an ingenious new approach, its greatest potential for medical treatment may or may not lie in a cure for cancer. It holds immense promise in many more ways than this.

Cylindrical nanotubes made of graphene, delivered in a pill or through intravenous access, enter the bloodstream, where they are “programmed” to bind to cancer cells, and cancer cells ONLY – even microscopic ones not otherwise detectable, let alone treatable. Heated by radio waves, they literally “burn” cancer cells away, leaving healthy tissue intact. Some 2,000 such nanotubes can be made to fit into a single erythrocyte, or red blood cell. The invention is now going into human trials, with breathtaking promise.

What matters most is not any single application but the concept of engineering tissues, structures and layers at the nanolevel. Another leading cause of death in humans is coronary heart disease – which, in reality, is neither limited to coronary arteries nor to the heart but affects the entire vascular system in all organs to a greater or lesser extent. Being able to strip away layer by layer the plaque that lines our vessels could not only add considerable length to life but also to its quality.

And the range of applications goes on. Medicine is replete with known causalities for which we lack the infinitesimally fine and subtle tools to manipulate individual cells and organs in the body. Nanotechnology provides those tools in principle, through a variety of manipulation and control techniques that, for the patient, do not necessarily feel all that much like space-age science. But they are.

At the same time, at least given the cost of present-day manufacture, manipulation, and imaging, nanotechnology is sure to fuel another quantum leap in the explosion of health care cost. Still, that was also the thinking at the cradle of the computer revolution, when IBM president Thomas J. Watson reputedly said in the early 1940s, “I think there is a world market for maybe five computers.” Right. Elementary, Watson.


Storage media: When digital research goes biotech

Justifications based on national security interests notwithstanding, a heretofore unimagined extent of data storage is here to stay. This has a variety of reasons, perhaps best characterized generally as archival and entirely independent of eventual use. Moral, ethical and legal reflections on the subject may be important, but they will not stop disruptive new technologies for data collection, data storage, or data processing any more than religious beliefs have ever stalled science for significant periods.

Capacity revolutions in storage are coming in more ways than one. Besides data storage – and likely with considerable practical cross-fertilization and synergies – geometric progress is being made in energy storage and battery technology, which is likely, within a reasonably short horizon, to remove the last comparative practical and cost advantages of the combustion engine based on fossil fuels.

In the December 2014 issue of this blog I discussed computational simulation experiments in silico and the example of test cases in “executable biology” under the title “When biotech research goes digital.” But there is also the reverse phenomenon: where digital technology reaches the limits of inorganic storage media, some promising lines of research have “gone biotech” and unearthed surprising potential in the use of DNA. If expectations are sustainable, DNA would also represent a major quantum leap in ensuring the continuation of Moore’s Law in the evolution of storage media. While the initial media hoopla about this concept dates back to 2013 and has not been followed by significant updates since, some more modest but more immediately practical solutions were advertised with proof of concept even somewhat earlier – but are also still pending. I write about it today with some benefit of hindsight and reflection, but also in anticipation of practical solutions essential to commercialization that remain rather far ahead. In truly visionary ideas it is the concept, not its realization, that represents the fundamental breakthrough. Warp drives, as we remember, emerged in science fiction but did inspire real physics as well as R&D.

DNA presents an extremely durable form of information storage. It can be extracted after tens of thousands of years from the bones of mammals found in permafrost, not to mention from human mummies. It requires no electricity and only the barest minimum of physical storage space.


No more ‘cashing out’

The war against one of the last traditional bastions of ‘privacy as we once used to know it’ – cash money – has been raging ever since credit and debit cards gained mass traction as instruments of payment in the 1970s. Since 9/11, attacks on the relative, limited and conditional anonymity offered by bank notes have been stepped up in a shrill crescendo under the pretextual justification of “preventing money laundering and terrorist financing.” This noble and unquestionably laudable purpose can under no imaginable circumstances be achieved by ending the use of cash; what ending cash really does is increase the transaction cost of enjoying a modicum of big-brother-free privacy. Bank accounts have come to be ‘interest-bearing’ in a laughably nominal sense at best, there is increasing chatter about limiting or abolishing deposit insurance for bank accounts, and the Cyprus bailout has opened Pandora’s box on another taboo – the participation of bank customers in the losses of a government-regulated and presumably highly supervised financial institution. Since those days, lock boxes for cash have become a fairly rational choice. They have ceased to be the stuff of conspiracy theorists, drug lords or made men. Cash has become one of the last effective tools for limiting the public transparency of the individual.


Reflections on Eurotechnopanic and the fiscal biosphere

I came across some choice interviews and essays by Jeff Jarvis.

A professor at CUNY’s Graduate School of Journalism and a long-standing practicing journalist, Jarvis is a noted and loyal fan of Google as well as of other major Silicon Valley success stories and an irreconcilable adversary of their critics.

Among the issues he raises with persistence, gleefully rankling European sensibilities in the process, is what he has diagnosed as “Eurotechnopanic” – a “German disease” that manifests itself in symptoms of excessive consideration for, and subservience to, stakeholders in old technologies (well, old money) – including major publishers and their media arms, which are, as we all know, largely not on the winning side of digital developments – and in consideration given to transparently self-serving anticompetitive measures of European (in this case German) players who simply cannot come up with a globally comparable product. Instead, they lean heavily on the German government: to impose copyright fees in favor of publishers whose products are linked in pertinent part by search engines; to enforce pixelation of Google Street View, leading to the abandonment of recent photo updates and the moniker “Blurmany”; to initiate antitrust proceedings (not that events like that were unknown in the U.S. to the likes of Microsoft and IBM and others); to recognize a “right to be forgotten”; and to take other measures Jarvis views as anticompetitive, protectionist and anti-American. An inveterate skeptic of government, Jarvis attributes the success of American internet entrepreneurs and products in part to a changing culture of sharing (information) and publicness espoused by consumers.

Still, Jarvis professes to be an enduring admirer of another German, Johannes Gensfleisch, called Gutenberg, whom he – and not only he – calls history’s first and most influential technology entrepreneur (albeit financially unsuccessful in his lifetime) and arguably the principal catalyst of the scientific revolution, and whose invention in the 1440s predictably encountered at least as much concern and opposition from early Eurotechnophobes as internet services confront today: there would be too many books, too much knowledge in the hands of the people, too many hazards for lawful government, vast need for regulation, etc. And that is indeed what set Gutenberg apart: typographic printing, invented centuries earlier in East Asia, had never been used for speed, high output and mass efficiency – further evidence of the superiority of the low-margin, high-volume concept.

But one conclusion that can be derived from his argument speaks for itself: it is an utter impossibility that a serial entrepreneurial phenomenon like Elon Musk would accomplish a fraction of his achievements in the Europe we know and fondly appreciate. No two ways about that. The number and mobility of European engineers, inventors and entrepreneurs who came through the literal and figurative equivalents of Ellis Island to succeed spectacularly speak louder and clearer than any amount of online chatter – and would appear to be a primary incentive for meaningful and overdue immigration reform on both sides of the Atlantic.

Another point of note is that, despite Jarvis’s observations, Berlin has become an emerging Silicon Valley in terms of the number and growth of startups – although not yet in terms of size or market capitalization (the latter is unlikely to change at any time in the near future, given that Germany has no tradition of widespread equity investment and hence no deep and broad stock markets). But it is worth noting that California and Germany – as well as, by the way, Israel, the world’s second-largest agglomeration of tech companies, along with London, Moscow and Paris – are all notoriously highly taxed jurisdictions. Innovation is driven by interaction, incubation, opportunity and financing infinitely more than by marginal corporate tax rates.


It’s a small world: images of molecular engineering

The step from high-resolution optical imaging to electron microscopy with nanometer resolution is almost unbelievable. Even further beyond intuitive grasp are the methods (and, well, the cost) required to create engineered particles and devices at the molecular level.

Below are examples of relatively recent technologies, based on variants of electron and atomic force microscopy, that enable visualization of individual molecules and even atoms.

While visualization is approaching practical and functional limits, the possibilities for cost-effective manipulation – and thus for nano-engineering – are still in their infancy, especially with regard to biological and biomedical applications.