Quote, June 15, 2012 — Questions and doubts can be valid, rather than just an indication of ignorance and delusion

“so in the academy and elsewhere the story of declining confidence in science is seen as reflecting a declining confidence in reason itself — and evidence of the rising tide of stupidity against which we enlightened few must ceaselessly battle.

But are things really so simple?”

-Walter Russell Mead, “Unsettling Science”, Via Meadia / The American Interest, June 10, 2012

Short answer to Mead’s question “are things really so simple?”: No.

Facts and studies are regularly cited by people arguing over all kinds of topics, whether it is why people do what they do, what a new government policy should be, or whether a theoretical product could actually be built.

Behind these citations is the belief that if science says something, then it is a fact. You can argue with opinions, but you can’t argue with facts, and science is all about facts. Therefore you can’t argue with science.

So when Person A says “Science says this is the case, it’s a fact!” and Person B says “I am still unconvinced,” many people will assume Person B is anti-science, superstitious, fanatical, delusional, in denial about reality, or in some other way wrong. Because science can’t be wrong.

But as Mead argues and documents in his article, “science” is performed, reviewed, and reported on by people. And people are not perfect. People can exaggerate, make mistakes, and intentionally lie.

Overall, Mead sees several ways science has come to have a less-than-perfect reputation:

  • Exaggerated and overly enthusiastic reporting: “Every rat that lives another week is reported as a breakthrough and a possible cancer cure”. I’ve noticed the same thing, and there are also a huge number of studies covered in the popular press where the sample size was very small. Trying to make grand sweeping statements about all of human nature and all of Western (or even American) culture based on a study involving 50 people in America might be a bit of a stretch (see the back-of-the-envelope calculation after this list).
  • People who want to use science as a way to push a certain theory (rather than investigating whether that theory is true and being open to the possibility it’s not), and who become so convinced their theory is right that they begin to twist the data or ignore anything that doesn’t agree with it. Sometimes this is intentional — meaning there is deliberate massaging of the data, picking “representative samples” that are representative only of the scientist’s viewpoint, or even outright falsification of results — and sometimes unintentional: the scientist believes so strongly that they are right that they are blind to their own data.
  • Sometimes there is just plain incompetence, or an entire field has become so influenced by a particular viewpoint that it is blind to its own shortcomings. I’ve seen many criticisms of economists along these lines: economics has become so enmeshed in complicated mathematical models that the field as a whole (with the exception of a few renegades) has forgotten the individual people who make up an economy and whose decisions are one of the main factors determining how that economy is doing.
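
To put a rough number on that sample-size complaint, here is a quick back-of-the-envelope sketch (my own illustration, not anything from Mead’s article) of the 95% margin of error for a simple yes/no survey question, using the standard normal approximation. The survey scenario and sample sizes are hypothetical:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion measured on n people.

    p = 0.5 is the worst case (it maximizes p * (1 - p)), and z = 1.96 is
    the standard normal critical value for a 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (50, 500, 5000):
    print(f"n = {n:4d}: +/- {margin_of_error(n) * 100:.1f} percentage points")

# Output:
# n =   50: +/- 13.9 percentage points
# n =  500: +/- 4.4 percentage points
# n = 5000: +/- 1.4 percentage points
```

So even under ideal conditions, a 50-person study can only pin down a simple percentage to within about 14 points either way — and that already assumes the 50 people were randomly sampled from the population being generalized about.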

Peer review is supposed to catch a lot of these problems, but here again we run into the unavoidable reality that “peer review” is done by a researcher’s “peers”, who are themselves people, and therefore just as fallible as the researchers they are reviewing.

There have been a few cases, such as the one documented in the Climategate e-mails, where researchers intentionally tried to influence the peer review process by pressuring editorial staff at various journals to use only reviewers friendly to a certain point of view and hostile to any opposing views. But one of the larger critiques of peer review is that fields and journals have become so highly specialized it can be difficult to assemble a group of peers for a review who will be truly independent: the field is so small that everyone knows everyone else, including each other’s writing styles, current projects, and opinions on the topics under investigation. A paper under review is therefore anonymous in neither author nor reviewer, and that lets in personal bias, whether intentional or unintentional.

Mead’s article is very well-written (as most of his articles are), and it links to some other good articles about scientific mistakes and outright scientific fraud. He concludes:

“Serious soul-searching and house-cleaning must take place if the academy is to rehabilitate its reputation. Standards must be tightened, publication of experimental data must be made mandatory and peer review in the soft sciences must mean something. We hope that the documented loss of public trust in science serves as the much-needed wake-up call for reform, because until our elites acknowledge that they have a problem, there can be no solution. That acknowledgement begins with the acceptance of a truth as simple as it is deeply disquieting:

Marc Hauser [a Harvard University professor who studied evolutionary psychology and resigned in 2011 after an investigation found he had falsified data] wasn’t some kind of one in a million exception. He just got caught.”
