
2014-12-04

In silico: When biotech research goes digital

Since the dawn of the life sciences, observations and experiments have been conducted on living subjects as well as dead ones. The crudity of available analytic methods made most meaningful in vivo experiments on humans unacceptable except in rare cases in extremis, yet working with dead matter was evidently inadequate. Science took its first step toward modeling by resorting to animal experiments. The concept rested on the assumption that all relevant animal structures and processes were similar to human ones, ceteris paribus. That assumption came to be recognized as flawed and problematic, not least because of growing public awareness and disapproval of the scale and severity of suffering inflicted on laboratory animals in the process, and the emergence of the view that at least certain animate beings have recognizable rights.

But ethical issues aside, contemporary research increasingly recognized that existing models had two severe limitations. First and foremost, they differ significantly, and in critical respects, from the human structures and processes they are intended to approximate. Second, replication and variation of experiments are frequently and quite substantially constrained by two factors: time and cost. As a result, live (or formerly live) models could no longer be considered valid approximations in a growing number of areas. This called for alternatives capable of bypassing these restrictions while also handling the dramatic increase in complexity that any really useful approximation demands.

As a result, experiments in silico were conceived, interfacing computational and experimental work, especially in biotechnology and pharmacology. There, computer simulation replaces biological structures and wet experiments. It is performed entirely outside living or dead organisms and requires a quantifiable, digitized mathematical model of such an organism with appropriate similarities, analogies, and Kolmogorov complexity (a central concept of algorithmic information theory)[1], relying in part on category theory[2] to formalize known concepts at high levels of abstraction.[3] Presuming the availability of a high-quality computational mathematical model of the biological structure it is required to represent adequately, “executable biology” has become a rapidly emerging and exciting field expected to permit complex computer simulation of entire organisms, not merely partial structures. A credible digital molecular model of a comparatively simple cell – the bacterium Mycoplasma genitalium – has already been created at Stanford. Much evidence suggests that this could be a significant part of the future of synthetic biology and of neuroscience – the cybernetics of all living organisms – where a new field, connectomics, has emerged to shed light on the connections between neurons. In silico research is expected to increase and accelerate the rate of discovery while simultaneously reducing the need for costly lab work and clinical trials. But languages must be defined that are sufficiently powerful to express all relevant features of biochemical systems and pathways. Efficient algorithms need to be developed to analyze models and to interpret results. Finally, as a matter of pragmatic realism, modeling platforms need to become accessible to and manageable by non-programmers.
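To make the idea concrete, here is a minimal sketch of an “executable” biochemical model – a single Michaelis–Menten reaction written as ordinary differential equations and integrated in Python with NumPy and SciPy. The rate constants are illustrative assumptions, not measured values; real pathway-description languages operate at a much higher level of abstraction.

```python
# A minimal "executable biology" sketch: one Michaelis-Menten reaction
# (substrate -> product) written as ordinary differential equations and
# run entirely in silico. Rate constants are illustrative placeholders.
import numpy as np
from scipy.integrate import solve_ivp

V_MAX = 1.0   # maximal reaction rate (assumed, arbitrary units)
K_M = 0.5     # Michaelis constant (assumed, arbitrary units)

def pathway(t, y):
    """Return d[substrate]/dt and d[product]/dt at time t."""
    substrate, _product = y
    rate = V_MAX * substrate / (K_M + substrate)  # Michaelis-Menten rate law
    return [-rate, rate]

# Integrating the model is the digitized analogue of running the wet experiment.
solution = solve_ivp(pathway, t_span=(0.0, 10.0), y0=[1.0, 0.0],
                     t_eval=np.linspace(0.0, 10.0, 6))

for t, s, p in zip(solution.t, solution.y[0], solution.y[1]):
    print(f"t={t:4.1f}  substrate={s:.3f}  product={p:.3f}")
```

Scaling this from two equations to the thousands of coupled processes in even a simple cell is precisely where expressive modeling languages and efficient analysis algorithms become indispensable.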

In 2007, the University of Surrey developed the first genome-scale in silico model of the tubercle bacillus. Later, the first project conceived to simulate an entire multicellular organism with an actual neuroanatomy, the roundworm (nematode) Caenorhabditis elegans, led to OpenWorm, a groundbreaking open-source project supported by scientists from California to the UK, Ireland, and Russia.

Far ahead of the model’s actual realization lay, of course, the question of what degree of biological realism would provide what degree of behavioral realism – and thus utility. C. elegans is comparatively simple: it has just 959 cells in total, 302 of which are neurons forming about 7,000 synaptic connections.[4] That no longer amounts to unmanageable complexity.
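A back-of-the-envelope sketch makes the point (Python; the random wiring below is merely a stand-in for the real published connectome data): a directed graph of 302 neurons and roughly 7,000 connections fits trivially in memory and can be scanned exhaustively in microseconds.

```python
# Back-of-the-envelope check that the C. elegans wiring diagram is tiny
# by computational standards: 302 neurons and ~7,000 directed connections
# fit in a plain adjacency list. The random wiring below is only a
# stand-in for the real published connectome data.
import random

NUM_NEURONS = 302
NUM_CONNECTIONS = 7000

random.seed(42)
adjacency = {n: [] for n in range(NUM_NEURONS)}
for _ in range(NUM_CONNECTIONS):
    pre = random.randrange(NUM_NEURONS)   # presynaptic neuron
    post = random.randrange(NUM_NEURONS)  # postsynaptic neuron
    adjacency[pre].append(post)

# Even an exhaustive scan over every connection is effectively instantaneous.
out_degrees = [len(targets) for targets in adjacency.values()]
print(f"neurons: {NUM_NEURONS}, connections: {sum(out_degrees)}")
print(f"average out-degree: {sum(out_degrees) / NUM_NEURONS:.1f}")
```

The manageability ends there, though: the difficulty lies in knowing the right dynamics for each connection, not in storing them.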

But while the European Union has earmarked €1 billion over ten years for its Human Brain Project, after IBM and the Swiss Federal Institute of Technology in Lausanne had already invested eight years in Blue Brain and the U.S. government’s BRAIN Initiative had dedicated $100 million to essentially the same purpose, more than a few questions remain about how we could model the brain of any mammal at all.

The idea of emulating biological systems from a 3D web browser, with a quantitative, modular simulation engine based on genetic algorithms that brings all the organism’s aspects – indeed the whole creature, its connectome and its neuronal dynamics – to life on-screen, in real time, with the ability to test physical, chemical, and neurological effects, points in a very promising direction for a great variety of questions in research and testing.
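What such an engine must step through can be hinted at with a minimal sketch: a leaky integrate-and-fire network driven over a sparse random connectome (Python with NumPy; every parameter – time constant, threshold, weights, input – is an illustrative assumption, and OpenWorm’s actual simulation engine and biophysical models are considerably richer).

```python
# A sketch of stepped "neuronal dynamics": a leaky integrate-and-fire
# network driven over a sparse random connectome. Every parameter below
# (time constant, threshold, weights, input) is an illustrative
# assumption; OpenWorm's actual engine and biophysics are far richer.
import numpy as np

rng = np.random.default_rng(0)
N = 302  # neuron count borrowed from C. elegans

# Sparse random synaptic weight matrix (~8% connectivity) as a stand-in
# for the real connectome.
weights = rng.random((N, N)) * (rng.random((N, N)) < 0.08) * 0.05

TAU, THRESHOLD, DT = 10.0, 1.0, 0.1  # leak constant (ms), threshold, step (ms)
v = np.zeros(N)        # membrane potentials
spikes = np.zeros(N)   # which neurons fired on the previous step

for step in range(1000):                  # 100 ms of simulated time
    drive = rng.random(N) * 0.3           # stand-in for sensory input
    v += DT * (-v / TAU + drive + weights @ spikes)
    spikes = (v >= THRESHOLD).astype(float)
    v[spikes > 0] = 0.0                   # reset membrane after a spike
    if step % 200 == 0:
        print(f"t={step * DT:5.1f} ms  active neurons: {int(spikes.sum())}")
```

Even this toy network updates its 302 neurons thousands of times per second on commodity hardware; the hard part is not the arithmetic but obtaining biologically faithful parameters.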

Sydney Brenner at Cambridge, who would share the 2002 Nobel Prize in Physiology or Medicine for his early C. elegans research, remarked in his seminal 1974 paper: "Behavior is the result of a complex ill-understood set of computations performed by nervous systems and it seems essential to decompose the question into two, one concerned with the question of the genetic specification of nervous systems and the other with the way nervous systems work to produce behaviour." If in silico simulation did enable us to see how genes shape brains and how brains control bodies, it would likely be the crowning achievement of this quantum leap in research methodology. Its obvious shortcoming to date, however, is that so much of the underlying data on which modeling must be based originated from dead worms, because very little data derived from live specimens has ever been published. An experimental methodology for harvesting functional data – not the mere fact of the existence of connections between neurons, for example, but the way they actually work in vivo – has only been developed very recently.

While the ingenuity of software engineers is crucial for modeling a worm, designing models of higher-level organisms, with complexity and connectomes increased by orders of magnitude, will be beyond individual or even collective human creativity and points to a need for heavy lifting by advanced AI and robotics – and those do not appear likely in the coming decade. It took twelve years of research just to map the complete connectome of C. elegans – a nematode that barely has any connectome to speak of compared with higher life forms. The more a model is determined by the connectome rather than by the genome, the more immense its complexity becomes.



[1] Li, Ming; Vitányi, Paul. An Introduction to Kolmogorov Complexity and Its Applications, 2nd ed. New York: Springer, 1997, 90.
[2] Phillips, Steven; Wilson, William H. "Categorial Compositionality: A Category Theory Explanation for the Systematicity of Human Cognition". PLoS Computational Biology 6 (7), July 2010, and id., "Categorial Compositionality II: Universal Constructions and a General Theory of (Quasi-)Systematicity in Human Cognition". PLoS Computational Biology 7 (8), August 2011.
[3] Awodey, Steve. Category Theory. Oxford Logic Guides 49. Oxford: Oxford University Press, 2006.
[4] White, John G.; Southgate, Eileen; Thomson, J. Nicol; Brenner, Sydney. "The Structure of the Nervous System of the Nematode Caenorhabditis elegans". Philosophical Transactions of the Royal Society B: Biological Sciences 314 (1165), 1986, 1–340.
