During routine perusal of the literature for mathematically oriented materials on biological problems, I found one of the more interesting examples I’ve seen in my daily mail. It was titled “A Strategy for Integrative Computational Physiology,” and contained a game plan for developing a quantitative modeling framework for dealing with what the authors called multiscale issues.1 By this they meant understanding gross physiologic processes, such as the heartbeat, in terms of cellular and molecular processes constrained by structure-function relations.
The reductionist approach is certainly not new, but the authors (working under the auspices of the International Union of Physiological Sciences (IUPS) Physiome and Bioengineering Committee) deserve credit for taking on the onerous task of uniquely identifying and linking the components that, at many levels, will feed into the models. Bioinformaticians have long struggled with the harmonization issues that result from non-unique labeling of their subject matter.
Particularly fascinating in their introduction was the perceived linkage between transitions in nineteenth- to twentieth-century physics and transitions in biology from the first to the second half of the twentieth century. They believe that, just as classical field theory was overshadowed by quantum theory and particle physics, integrative systems-level physiology gave way to cellular and molecular biology. Their most interesting contention is that these studies may lead to unforeseen consequences of real value in uncovering even more knowledge. They state that the past guiding principles of simplicity and validation, necessary in all mathematical models, will now be joined by a third: that applying these models to biological problems will demonstrate that the whole actually exceeds the sum of its parts!
The nitty-gritty of their efforts includes a cell markup language to standardize the mathematical description and metadata of the cell models, application programming interfaces to define the way information is propagated, and the development of authoring tools and simulation codes. The important idea here is that we are taking further steps not only to bring our surplus of data under some form of control, but also to quantify the results, develop theories, and then validate these models in some sort of unifying and standardized framework. This has been attempted in the past, with limited success, on smaller scales by individuals and institutions. The next logical step is the interdisciplinary, multi-institutional attack whereby the data monster can first be subdued and then analyzed. Related approaches have not been very successful because we are usually so concerned with the mass of small isolated facts generated, so overawed by the computational power of large servers to reduce these data to manageable bits, and so preoccupied with polluting the literature with papers detailing the minutiae of small steps, that we often overlook the logistical steps and teamwork necessary to integrate the pieces into a coherent whole.
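To make the markup-language idea a little more concrete, here is a minimal sketch, in Python, of how a CellML-style XML description of a cell model might be read into components and variables. The tag names and the toy model below are my own illustrative assumptions, not the committee’s actual schema; the point is simply that a standardized description lets any tool recover the same structure.

```python
# Minimal sketch: reading a CellML-style model description.
# The tags and attributes below are illustrative assumptions,
# not the official CellML schema.
import xml.etree.ElementTree as ET

EXAMPLE_MODEL = """
<model name="hodgkin_huxley_like">
  <component name="membrane">
    <variable name="V" units="millivolt" initial_value="-75"/>
    <variable name="Cm" units="microF_per_cm2" initial_value="1"/>
  </component>
  <component name="potassium_channel">
    <variable name="n" units="dimensionless" initial_value="0.325"/>
  </component>
</model>
"""

def load_model(xml_text):
    """Return {component: {variable: initial_value}} from the markup."""
    root = ET.fromstring(xml_text)
    model = {}
    for comp in root.findall("component"):
        variables = {}
        for var in comp.findall("variable"):
            variables[var.get("name")] = float(var.get("initial_value", "0"))
        model[comp.get("name")] = variables
    return model

if __name__ == "__main__":
    for component, variables in load_model(EXAMPLE_MODEL).items():
        print(component, variables)
```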
In my little corner of the world (here we go again!), molecular biologists, statisticians and bioinformaticians are attacking the problem of making sense out of the data generated by microarrays, or “gene chips,” which produce a snapshot in time of the effects of a particular treatment upon the genetic complement of the organism. So far, we have published numerous papers arguing what constitutes a proper spot on the chip, how to reduce the variability in our biological preparations, how to design a reasonable experiment, how to standardize the variability in spots and chips, how to mathematically model the errors in the data, and finally, very brief statements as to what the results might mean. These papers range from the less challenging to the complex, at a level that only a theoretician could love (or understand!). After the initial progress with standardization and analysis, we appear to be at an impasse, as very little light is being generated from all of this heat.
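As one small example of what the standardization step involves, here is a minimal sketch of a common per-chip normalization: log-transforming spot intensities and median-centering each chip so that arrays can be compared. The numbers are invented for illustration, and real pipelines are considerably more elaborate.

```python
# Minimal sketch of a common microarray preprocessing step:
# log2-transform spot intensities and median-center each chip
# so that different arrays are comparable.  The intensities
# below are invented for illustration only.
import numpy as np

# Rows are genes (spots); columns are chips; values are raw intensities.
raw_intensities = np.array([
    [1200.0,  950.0, 1800.0],
    [ 300.0,  280.0,  610.0],
    [5400.0, 4900.0, 9800.0],
    [  90.0,  110.0,  150.0],
])

def median_center_log2(intensities):
    """Log2-transform, then subtract each chip's (column's) median."""
    logged = np.log2(intensities)
    chip_medians = np.median(logged, axis=0)
    return logged - chip_medians

print(median_center_log2(raw_intensities))
```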
Barring a few breathtaking breakthroughs, we will still have major problems integrating the genetic with the proteomic data (the genetic material exerts its effects through proteins), to say nothing of relating the outcome to integrative physiology. And even if that is done, we are still a long way from reliably generating effective treatments for what ails us, let alone cures. Problems, problems….
Should the point of this month’s diatribe escape the reader, allow me to clarify. We have generated a large database of facts and attempted to analyze the least complex of them and relate them in some fashion to the gross result. To truly take the next important step, however, will require the kind of realization and effort propounded by the IUPS committee above. It will still be a long, hard road, but we are accustomed to this.
1. Peter Hunter and Poul Nielsen. “A Strategy for Integrative Computational Physiology.” Physiology 20 (Oct. 2005): 316-325.
John Wass is a statistician with GPRD Pharmacogenetics, Abbott Laboratories. He may be reached at [email protected]