Once upon a time, before smartphones, Web searches, and even the Internet itself, Barbara was a laboratory technician in the research department of a large clinical laboratory. She was assigned the task of developing a chromatographic test to detect the deficiency of a desirable chemical in blood serum. The test was based on replicating a study performed by researchers in France and published in a prestigious journal.
The success that wasn’t
She could purposely spike serum with the substance and obtain a logical standard curve. The problem? Although healthy people would consume what should be a detectable amount of this necessary chemical, she could not detect it in serum, even after literally demanding blood from dozens of co-workers. She tried multiple extraction techniques and tested multiple elution programs. She tried post-column derivatization to increase sensitivity, including one method that resulted in a brilliantly exothermic reaction. The end result? No success. Finally, after a year of experimentation, gnashing of teeth, and disparaging comments from supervisors about the lack of intelligence of lab techs, in desperation she dusted off her high school French and contacted the researchers. They responded that they, too, could not detect the chemical in serum. What they had actually detected was an artifact caused by contamination of the injector in the chromatographic system. The chemical in question must change or metabolize before entering the blood.
There are three morals to this story. The first moral is: you can save time and money by listening to your laboratory technicians; they may very well know what they are talking about. The second moral is: beware of success stories; mistakes happen in publishing, even in peer-reviewed journals. The third moral is: there is no guarantee that negative results or corrections will be published promptly.
Irreproducible results
The noted physicist Richard Feynman repeatedly called for truth in publication, including publication of both positive and negative data.1 Have matters improved since the 1970s?
Perhaps not. In a recent business report, Michael Hiltzik2, 3 describes a study by a large biotech company which found that, of over 50 landmark, peer-reviewed papers in its field, only six could be proven valid. Among other recommendations, the researchers behind that study suggested that “critical experiments should be repeated, preferably by different investigators in the same lab, and the entire data set must be represented in the final publication.”4
Peer review
Wouldn’t peer review have caught the problems in these papers? Not necessarily. Hiltzik points out that researchers may wax eloquent about their results to enhance the likelihood of acceptance by prestigious journals; he characterizes journals as wanting papers with the sexiest claims. Certainly, results that add value to the body of knowledge, or that have practical applications for resolving societal problems, are more compelling. Further, reviewers may be eminently qualified to read and comment on a paper, but they may not have the time, inclination, or funding to examine the study in the depth needed to uncover subtle flaws. And researchers have little incentive to repeat published experiments: funding for replication can be harder to obtain, and some journals may not be enthusiastic about accepting “me too” submissions.
Continuous peer review?
In this day of social media and virtually instantaneous news reporting, it makes sense, and good science, to provide a pathway for ongoing evaluation and improvement of published works. PubMed Commons (National Institutes of Health) offers an online, continuous peer review process for published papers. At this point, peers are limited to those who have published there, and reviewers must be nominated. It may be an idea whose time has come. However, criticizing published papers can raise problems, because overly enthusiastic competition is not unheard of, even in the rarefied world of academia. Just as with online restaurant reviews, if this idea were expanded to manufacturing applications, checks would have to be set up to prevent competitors from contributing unwarranted, counterproductive comments.
Trust and verify
Given time and budget constraints, manufacturers have to depend on other people’s studies. We live in an information-rich society; how can we evaluate the truth of that information? We have learned that even peer-reviewed papers may be flawed. In manufacturing, virtually all of our information comes from white papers, reports in trade publications (print and online), and conference presentations. Many, if not most, of these reports are not peer-reviewed; some are sponsored by chemical or equipment vendors. There are so many manufacturing variables that the truth depends on the situation. Even in the absence of out-and-out false or misleading claims, what works for someone else’s cleaning process may not be right for you.
The devil is in the details; so is the truth. One approach is to get more information directly from the people who published or presented the study. When we learn about a promising technique or case study, we often contact the authors. You can text or send an email, but we favor a retro technique: the telephone. If we hear a great program at a conference, we have even been known to (gasp!) talk face-to-face with the presenter. The interaction allows us to better understand where a process is most likely to be successful. In the final analysis, you have to “trust and verify.” Test before investing in new equipment or modifying a process line.
References
1. R. Feynman, “Cargo Cult Science,” excerpts from the commencement address, Caltech, 1974. http://neurotheory.columbia.edu/%7Eken/cargo_cult.html
2. M. Hiltzik, “Science has lost its way, at a big cost to humanity,” Los Angeles Times, Oct. 27, 2013.
3. M. Hiltzik, “More on the crisis in research: Feynman on ‘cargo cult science’,” Los Angeles Times, Oct. 28, 2013.
4. C. Glenn Begley and Lee M. Ellis, “Drug development: Raise standards for preclinical cancer research,” Nature 483, 531–533 (29 March 2012).
Barbara Kanegsberg and Ed Kanegsberg (the Cleaning Lady and the Rocket Scientist) are experienced consultants and educators in critical and precision cleaning, surface preparation, and contamination control. Their diverse projects include medical device manufacturing, microelectronics, optics, and aerospace. Contact: info@bfksolutions.com
This article appeared in the March 2015 issue of Controlled Environments.