Since the dawn of the life sciences, observations and experiments have been conducted on live objects as well as on dead ones. The crudity of existing analytic methods made most meaningful in vivo experiments on humans increasingly unacceptable except in rare cases in extremis, yet working with dead matter was evidently inadequate. Science took the first step towards modeling by resorting to animal experiments. The concept rested on the assumption that all relevant animal structures and processes are similar to human ones, ceteris paribus. That assumption came to be recognized as flawed and problematic, not least because of growing public awareness and disapproval of the quantity and severity of suffering inflicted on laboratory animals in the process, and because of the emerging view that at least certain animate beings have recognizable rights.
But ethical issues aside, contemporary research increasingly recognized that existing models had two severe limitations. First and foremost, they differ significantly, and in critical respects, from the human structures and processes they are intended to approximate. Second, replication and variation of experiments is frequently and quite substantially limited by two critical factors: time and cost. As a result, live (or formerly live) models could no longer be considered valid approximations in a growing number of areas, calling for alternatives capable of bypassing these restrictions while also handling the dramatic increase in complexity on which any truly useful approximation depends.
Hence experiments in silico were conceived, interfacing computational and experimental work, especially in biotechnology and pharmacology. There, computer simulation replaces biological structures and wet experiments. It takes place entirely outside living or dead organisms and requires a quantifiable, digitized mathematical model of the organism with appropriate similarities, analogies, and Kolmogorov complexity (a central concept of algorithmic information theory)[1], relying in part on category theory[2] to formalize known concepts at a high level of abstraction.[3]
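To make the notion of a quantifiable, digitized model slightly more concrete, the sketch below is a minimal illustration, not anything described in the text above: it encodes a toy gene-expression process as two coupled differential equations and integrates them numerically. The species names, rate constants, and the simple Euler scheme are all assumptions chosen for brevity; an in silico experiment manipulates a representation of this kind instead of a wet preparation.

```python
# Illustrative sketch: a toy "digitized" model of gene expression,
# expressed as two coupled ODEs and integrated with a simple Euler scheme.
# All species names and rate constants are hypothetical placeholders.

def simulate(t_end=100.0, dt=0.01,
             k_transcription=2.0,   # mRNA produced per unit time
             k_translation=5.0,     # protein produced per mRNA per unit time
             d_mrna=0.3,            # mRNA degradation rate
             d_protein=0.05):       # protein degradation rate
    mrna, protein = 0.0, 0.0
    trajectory = []
    for step in range(int(t_end / dt)):
        # dm/dt = k_tx - d_m * m ;  dp/dt = k_tl * m - d_p * p
        d_m = k_transcription - d_mrna * mrna
        d_p = k_translation * mrna - d_protein * protein
        mrna += d_m * dt
        protein += d_p * dt
        trajectory.append((step * dt, mrna, protein))
    return trajectory

if __name__ == "__main__":
    final_time, final_mrna, final_protein = simulate()[-1]
    print(f"t={final_time:.1f}  mRNA={final_mrna:.2f}  protein={final_protein:.2f}")
```

Changing a rate constant and re-running is the in silico analogue of repeating a wet experiment under varied conditions, at negligible cost in time and money.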
So, always presuming the availability of a high-quality computational mathematical model of the biological structure it is required to represent adequately, “executable biology” has become a rapidly emerging and exciting field expected to permit complex computer simulation of entire organisms, not merely partial structures. A credible digital molecular model of a relatively simple cell has already been created at Stanford. Much evidence suggests that this could be a significant part of the future of synthetic biology and of neuroscience (the cybernetics of all living organisms), where a new field, connectomics, has emerged to shed light on the connections between neurons. In silico research is expected to accelerate the rate of discovery while simultaneously reducing the need for costly lab work and clinical trials. But languages
must be defined that are sufficiently powerful to express all relevant features
of biochemical systems and pathways. Efficient algorithms need to be developed
to analyze models and to interpret results. Finally, as a matter of pragmatic
realism, modeling platforms need to become accessible to and manageable
by non-programmers.
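As one hedged illustration of what such a language might look like, the sketch below represents a tiny biochemical pathway as declarative rules and runs a basic Gillespie-style stochastic simulation over them. The reaction set, rate constants, and the minimal rule format are assumptions made for this example, not an existing standard or tool; in practice, established model-description languages and far more efficient analysis algorithms would be used.

```python
import random

# Hypothetical minimal "pathway language": each rule is
# (reactants, products, rate constant). Species and rates are illustrative.
RULES = [
    ({"A": 1}, {"B": 1}, 0.5),           # A -> B
    ({"B": 1}, {"C": 1}, 0.2),           # B -> C
    ({"B": 1, "C": 1}, {"A": 2}, 0.01),  # B + C -> 2A (toy feedback)
]

def propensity(rule, state):
    """Mass-action propensity of one rule given current molecule counts."""
    reactants, _, rate = rule
    a = rate
    for species, count in reactants.items():
        a *= state.get(species, 0) ** count
    return a

def gillespie(state, t_end=50.0, seed=1):
    """Basic Gillespie SSA: sample which reaction fires next, and when."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        props = [propensity(r, state) for r in RULES]
        total = sum(props)
        if total == 0:
            break  # no reaction can fire any more
        t += rng.expovariate(total)   # time until the next event
        pick = rng.uniform(0, total)  # which rule fires
        for rule, a in zip(RULES, props):
            if pick < a:
                reactants, products, _ = rule
                for s, n in reactants.items():
                    state[s] -= n
                for s, n in products.items():
                    state[s] = state.get(s, 0) + n
                break
            pick -= a
    return state

if __name__ == "__main__":
    print(gillespie({"A": 100, "B": 0, "C": 0}))
```

The point of the declarative rule list is precisely the property the paragraph calls for: the model is data that different analysis algorithms (stochastic, deterministic, or symbolic) can consume, and that a non-programmer could in principle edit without touching the simulator itself.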