Failed Empire

Chronicling the collapse of a failed society

The Decline Effect Demonstrates Strength of the Scientific Method, Not Weakness

There is a fascinating article up at the New Yorker [h/t Susie Madrak] dealing with the disturbing trend of a “decline effect” across many types of scientific research.  The article is rather lengthy but worth the read.

The gist of it is that newly discovered phenomena tend to be well supported by research in the early days, but become undermined by newer studies as time goes on.  The author cites the example of “verbal overshadowing,” a concept which claims that people who try to verbally describe an experience immediately afterwards will be less likely to accurately remember it.  The original study overwhelmingly demonstrated that verbal overshadowing exists.  As the years progressed, however, the effect became less and less pronounced in successive studies, until even the original researcher was unable to replicate his own findings.

The article asserts that similar occurrences are happening across a broad swathe of scientific research, from psychology to biology to pharmacology.  It suggests that the causes of the decline effect are largely grounded in pervasive biases among the experimenters.  That is, the beliefs and expectations of the scientist inadvertently alter the outcome of the experiment.  Although this phenomenon has been widely recognized, the article implies that it is far more pervasive than many would like to admit.  In fact, it even goes so far as to suggest that the scientific community is suppressing discussion of the decline effect, an accusation with profound ramifications for a society as deeply dependent on science as ours.

Perhaps the most fascinating aspect of the article was the implication that the decline effect could be attributed to the fact that we know so very little about existence itself:

In the late nineteen-nineties, John Crabbe, a neuroscientist at the Oregon Health and Science University, conducted an experiment that showed how unknowable chance events can skew tests of replicability. He performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.

The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.

The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise. The hyperactivity of those coked-up Edmonton mice wasn’t an interesting new fact—it was a meaningless outlier, a by-product of invisible variables we don’t understand.
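To get a feel for how much apparent structure pure noise can fake, here is a minimal Python sketch.  This is my own illustration, not Crabbe’s data or the article’s; every number in it is invented.  Twenty hypothetical labs all measure the same true effect under the same protocol, yet small samples and animal-to-animal variability alone make some labs look like dramatic outliers:

```python
# A toy simulation (not Crabbe's data; all numbers invented): every lab
# measures the same true effect, yet sampling noise alone makes some
# labs look like striking outliers.
import random

random.seed(1)

TRUE_EFFECT = 650   # hypothetical true extra centimetres moved
NOISE_SD = 900      # assumed mouse-to-mouse variability
N_MICE = 10         # mice per lab
N_LABS = 20         # labs that are identical on paper

def lab_mean():
    """Average measured effect in one lab: the truth plus per-mouse noise."""
    samples = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_MICE)]
    return sum(samples) / len(samples)

means = sorted(lab_mean() for _ in range(N_LABS))
print(f"lowest lab mean:  {means[0]:6.0f} cm")
print(f"median lab mean:  {means[len(means) // 2]:6.0f} cm")
print(f"highest lab mean: {means[-1]:6.0f} cm")
```

With only ten mice per lab, the spread between the luckiest and unluckiest lab can easily rival the effect itself, and that is before any real hidden variables enter the picture.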

I suspect that both explanations are accurate.  The biases of scientists undoubtedly do impact the outcomes of their research, and the intricate interactions between countless millions of variables in the world around us are simply far beyond our current depth of knowledge.  However, I do not believe that these shortcomings, or the existence of the decline effect, reflect negatively upon the scientific process.  If anything, it is quite the opposite.

The scientific method is designed to be self-correcting, and in essence this is exactly what we are observing.  Our current research is clearly inadequate in many respects, often generating unreliable results.  This seems to be due to innate flaws in the research designs themselves, as well as our woeful lack of understanding of the workings of the universe.  But the built-in mechanisms for error correction within the scientific method are remedying these shortcomings.  We have identified the existence of the decline effect, and we can now begin taking steps to overcome it.  The universal application of double-blind studies, for example, would help to reverse much of the influence of experimenter biases.
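One commonly cited statistical mechanism behind this kind of decline, sketched below with my own arbitrary numbers rather than any real study’s, is selective publication combined with regression to the mean: if only the most striking early results make it into print, the published effect size starts out inflated and then appears to shrink as later replications, published regardless of outcome, drift back toward the true value.

```python
# A toy model (my illustration, arbitrary numbers) of the decline effect
# arising from selective publication plus regression to the mean.
import random

random.seed(7)

TRUE_EFFECT = 0.2   # hypothetical modest true effect (arbitrary units)
NOISE_SD = 0.5      # assumed study-to-study sampling noise
THRESHOLD = 0.6     # only early results at least this striking get published

def run_study():
    """One study's estimated effect: the truth plus sampling noise."""
    return random.gauss(TRUE_EFFECT, NOISE_SD)

# Early literature: only the results that clear the bar see print.
early = [e for e in (run_study() for _ in range(1000)) if e >= THRESHOLD]

# Later replications: published whatever they find.
later = [run_study() for _ in range(1000)]

print(f"mean early published effect: {sum(early) / len(early):.2f}")
print(f"mean replication effect:     {sum(later) / len(later):.2f}")
print(f"true effect:                 {TRUE_EFFECT:.2f}")
```

Nothing in nature changes between the early studies and the replications; only the filter on what gets reported does, which is precisely the kind of error the method’s self-correction can address once it has been identified.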

The important lesson to be learned here is not that the scientific method is flawed, or that it has somehow failed us.  On the contrary, we should take solace from every occasion such as this where a source of error is identified, addressed, and resolved, since these occasions only serve to strengthen the best tool we have in the quest for greater knowledge and understanding.
