The answer is “yes” if a couple of long-held myths are busted.
In an editorial cartoon that appeared in a recent issue of The Journal of Clinical Investigation, a surgeon wields a scalpel over his patient. The caption reads: “Just a little nip here and there. We don’t want it to look like it’s had any work done.” The catch? The patient is a western blot, and the doctor is presumably making his patient look presentable for publication in a peer-reviewed journal.
The drawing’s appearance was “inspired” by four articles that the journal pulled due to alterations of western blot figures. Sometimes, these doctored figures make it through the peer-review process, which can lead to retraction of the published manuscript if detected. In response to a rash of retractions that seriously damaged the reputation of one researcher and forced another to resign, publishers of the blog “Retraction Watch” wrote an op-ed titled “Can We Trust Western Blots?”. In the comments following the op-ed, one reader wrote “I’d like to see western blotting follow its northern cousin into oblivion.” What can scientists do to restore trust in this once venerable—and now vulnerable—technique?
In an editorial article that accompanied the cartoon, the journal editor reminded her readers of “some experimental basics” for western blotting. It’s a good idea to keep these things in mind, as it’s not uncommon for researchers to receive protocols from more senior laboratory members without critical examination. It shouldn’t be difficult for scientists—who are, after all, trained to ask the right questions—to apply this skill to their own methods.
Know the limitations
Despite recent concerns about the reliability of the method and its results, western blotting remains an indispensable scientific tool for quantifying relative protein levels, with applications spanning clinical diagnostics to fundamental questions in the life sciences. However, as with any powerful analytical method, western blotting has inherent limitations.
Journal editors and scientific peers now acknowledge this, recommending guidelines and control experiments to promote clearer and more reproducible results. Researchers must have a firm understanding of these guidelines, both in theory and in practice, to ensure the integrity of their results. For such a time- and resource-intensive technique, awareness of the western blotting process and its limitations is critical.
Two important steps for producing reliable western blot data are normalization and signal detection. Long-held misperceptions—call them “myths”—surround these steps and hinder researchers’ ability to use western blots as a meaningful tool for the quantification of relative protein expression levels. Addressing these so-called myths will go a long way to restoring trust in western blots.
Myth #1: Using housekeeping proteins is the best normalization method
When comparing relative protein expression levels in western blots, normalization methods are used to control for technical errors and inconsistencies that arise during sample preparation, loading and transfer. For instance, inherent variations in transfer efficiency can produce two- to four-fold increases or decreases in signal between gel lanes. The most popular normalization approach is the use of housekeeping proteins (HKPs) such as GAPDH, beta-actin and tubulin. Whatever the method, researchers must understand the normalization technique and assay parameters well enough to ensure its validity.
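For readers who work up their blots in software, the sketch below shows what HKP normalization amounts to numerically. The lane names and band intensities are hypothetical, standing in for values exported from an image-analysis package such as ImageJ/Fiji.

```python
# Minimal sketch of housekeeping-protein (HKP) normalization of band densitometry.
# All intensity values are hypothetical, for illustration only.

target = {"control": 12500, "treated": 21800}   # target-protein band intensities
hkp    = {"control": 30200, "treated": 28900}   # e.g. beta-actin band intensities

# Normalize each lane's target signal to its HKP signal,
# then express everything relative to the control lane.
ratios = {lane: target[lane] / hkp[lane] for lane in target}
fold_change = {lane: ratios[lane] / ratios["control"] for lane in ratios}

for lane, fc in fold_change.items():
    print(f"{lane}: {fc:.2f}-fold relative to control")
```

The whole correction rests on the HKP band being a faithful stand-in for the amount of protein loaded, which is exactly the assumption examined below.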
Drawbacks of housekeeping proteins
HKPs are thought to accumulate at constant levels under all conditions and in all cell types, because they’re constitutively expressed and maintain cell viability. This may not be true, though. Studies indicate that housekeeping protein expression can vary according to conditions such as tissue type, disease state, sample preparation and the environment. HKP characterization is often bypassed because validating the use and stability of HKPs per application adds significant cost and time to each experiment.
A second, often overlooked, drawback of using HKPs is signal saturation. Target proteins are usually present only in low quantities, so scientists must load large amounts of total protein to detect them. Overloading makes the target more visible, but it also drives the already abundant HKP toward saturation, and that can spell trouble.
When saturated, the HKP signal is no longer within the linear dynamic range for immunodetection, preventing accurate quantification. To avoid this, researchers should first characterize candidate HKPs and select one whose signal remains linear and within the system's dynamic range at the loads being used. That means individually optimizing each HKP with respect to antibody dilutions, incubation times and imaging settings.
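One common way to do that characterization is to run a dilution series of the sample and check where the HKP signal stops tracking the load linearly. The sketch below illustrates the idea; the loads, intensities and the use of R² as the linearity criterion are assumptions for illustration, not a prescribed protocol.

```python
# Hypothetical dilution series used to check whether an HKP signal is still in
# the linear range. Loads (in µg total protein) and intensities are made up.
import numpy as np

loads     = np.array([2.5, 5, 10, 20, 40, 80])                        # µg loaded per lane
intensity = np.array([1.1e3, 2.2e3, 4.3e3, 8.1e3, 12.0e3, 13.5e3])    # HKP band signal

# Fit a line to progressively longer prefixes of the series; linearity is judged
# by the coefficient of determination (R^2) of each fit.
for n in range(3, len(loads) + 1):
    x, y = loads[:n], intensity[:n]
    slope, intercept = np.polyfit(x, y, 1)
    r2 = 1 - np.sum((y - (slope * x + intercept))**2) / np.sum((y - y.mean())**2)
    print(f"up to {x[-1]:>4.1f} µg: R^2 = {r2:.3f}")
# A drop in R^2 at the higher loads suggests the HKP signal is saturating there.
```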
Even if HKPs are properly validated for use under the relevant experimental conditions and optimized so they are present at levels within the linear dynamic range of quantitation, the process of using them—either by stripping and re-probing or via multiplex fluorescent blot detection—can be daunting. The strip and re-probe method is time consuming, and the stripping process inevitably removes some level of antigen, thereby compromising downstream results. Although a more elegant solution, multiplex fluorescent western blotting requires optimization of blocking reagents, antibody concentrations and incubation times, while users need to be mindful of challenges like antibody cross-reactivity.
A better alternative: Total protein normalization (TPN)
Total protein normalization (TPN), another loading-control strategy, is less affected by the limitations described above for HKP normalization. Studies on HKPs have shown that total protein staining of gels or blots corrects for loading differences better than HKPs do. TPN also offers a wider linear dynamic range, with better linearity at the lower, more relevant protein loads (10 to 50 µg) used in western blotting. With TPN, the target protein is normalized directly against the total protein in each lane, making the approach more universal and accurate than HKP-normalized experiments.
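Numerically, TPN is the same ratio calculation as before, except the denominator is the whole-lane total protein signal rather than a single HKP band. A minimal sketch, again with hypothetical intensities:

```python
# Minimal sketch of total protein normalization (TPN). The lane totals would come
# from a total protein stain or a stain-free gel image; values are hypothetical.

target_signal = {"control": 12500, "treated": 21800}    # target band intensities
lane_total    = {"control": 9.6e5, "treated": 1.02e6}   # whole-lane total protein signal

normalized  = {lane: target_signal[lane] / lane_total[lane] for lane in target_signal}
fold_change = {lane: normalized[lane] / normalized["control"] for lane in normalized}

for lane, fc in fold_change.items():
    print(f"{lane}: {fc:.2f}-fold relative to control (TPN)")
```

Dividing by the whole-lane signal means the correction no longer hinges on the expression stability of any single protein.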
Aldrin Gomes, an assistant professor at the University of California, Davis, and a leading advocate for improving the reliability of western blots, hopes that a decade from now most researchers will be using TPN as their normalization method.
The major limitation of conventional TPN is that blots must be stained with total protein stains, such as Sypro Ruby or Ponceau S. Although these stains provide good sensitivity, the staining step adds complexity, cost and time, and introduces time-dependent variables.
A recent development in TPN, known as the “stain-free” approach, possesses even greater sensitivity at low protein levels while reducing the complexity, cost and time associated with staining. The technology uses a tri-halo compound that binds to proteins during gel electrophoresis, and allows researchers to directly visualize and quantify proteins both in gels and on blots within minutes. As this innovative approach eliminates the staining step, stain-free TPN is more cost effective, time effective and robust than its conventional counterpart.
Myth #2: X-ray film is the best method for detecting western blots
X-ray film remains the most widely accepted and commonly used detection technique for visualizing chemiluminescent blots, which are the most popular type of western blot, owing to film's high sensitivity, resolution and affordability.
Yet one major challenge with x-ray film is that it is easily saturated by chemiluminescent signals from the blot, which precludes quantitation. In fact, it's this ease of signal saturation that gives scientists the (false) impression that film is more sensitive than digital imaging.
Oversaturation also isn't obvious when it occurs. When working with weakly expressed proteins, researchers often opt for longer exposure times at the expense of oversaturating the stronger signals, such as those produced by an overabundance of HKPs. To prevent saturation effects, researchers must characterize the film's linear dynamic range for a particular antibody under the relevant testing conditions; the same dilution-series linearity check sketched earlier applies here.
A digital alternative
Recently developed digital imaging methods provide a wider linear dynamic range and lower limits of detection, thereby addressing the saturation issue. One recent study, using a two-fold dilution series of a protein lysate, found that a digital imaging system delivered a dynamic range nearly an order of magnitude greater than film (0.04 to 2.5 ng versus 0.04 to 0.31 ng) for the protein probed. Advances in digital imaging are also making this detection method increasingly reliable and affordable. Its costs are now comparable to film, especially when factoring in the price of the developer, ongoing running costs and the rising price of film itself.
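Taking the study's quoted endpoints at face value, the arithmetic behind "nearly an order of magnitude" is straightforward; the snippet below is simply a check of that ratio (only the endpoints come from the cited study, the rest is illustration).

```python
# Quick check of the quoted dynamic-range figures.
film_range    = (0.04, 0.31)   # ng, quantifiable range with x-ray film
digital_range = (0.04, 2.5)    # ng, quantifiable range with a digital imager

film_fold    = film_range[1] / film_range[0]        # ~7.8-fold
digital_fold = digital_range[1] / digital_range[0]  # ~62.5-fold

print(f"film: {film_fold:.1f}-fold, digital: {digital_fold:.1f}-fold, "
      f"ratio ~ {digital_fold / film_fold:.1f}x")   # ~8x, i.e. nearly an order of magnitude
```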
Advances in western blotting over the past decade, in both normalization and detection methods, mean researchers can quantify western blots more reliably. It's up to scientists to critically re-examine the tools they currently use and make choices that will improve their results. Once this happens, confidence in western blot data will inevitably follow.