Trying to create mountains...

James Annan has a very nice post to sci.env that I'll quote in its entirety. Hey, it beats thinking for myself. Although, to briefly think for myself, I should say that I have rather less sympathy for M&M than James does. His original is available through google here. This is all in the context of McIntyre waffling about "due diligence":

James says:

Steve McIntyre has found a molehill and is doing his best to make a mountain out of it. I do not mean to be unduly critical of him in those words - I understand the frustration that can occur when one finds what appears to be a significant problem, only to be brushed off in a manner that seems to be rude and dismissive. IMO (and IME), scientists are probably no better and no worse than other types of people in this respect, they have their own egos and prejudices and do not like to be told that they are wrong. My own experience in this area is already in the public domain and does not need repeating again.

Although it is only natural that McIntyre should try to talk up the importance of his work, he seems to completely misunderstand the scientific process in his talk of audit trails and replication. Sure, work should be reproducible, and it is embarrassing for those who find errors in their work or, what is worse, have errors pointed out by others. Peer review is indeed a rather superficial check on the validity of the work, and can certainly be subverted by a determined effort at dishonesty. But scientific research is already subject to a far more relevant and stringent test than he advocates. It is an intensely competitive and adversarial process, with rivals continually trying to improve on each other's work. One could even characterise this as "prove each other wrong", but generally it takes the form of incremental advances that modify the previous results, rather than completely overturning them. Results that are strongly divergent from the existing status quo will certainly be carefully checked in subsequent research. But, except in the most exceptional cases, merely checking that a rival had done their sums right is very unlikely to reap any real benefits - even if some error or inaccuracy is found in the calculation or description, it may well not impact significantly on their results [1], and if no error is found, then this replication still provides no assessment of the validity of the underlying assumptions and methodology of the work. However, the alternative - which is how science actually works - of developing new and improved methodologies, more accurate data sets and better models actually provides a much more rigorous check of the correctness of the underlying assumptions and conclusions of earlier research, which is, after all, the main goal.

I have no direct knowledge of the IPCC process, but McIntyre's picture of climate research consisting of a cosy coterie of pals all working towards supporting a "consensus" and patting each other on the back certainly doesn't ring true with me. The "consensus", such as it is, represents the equilibrium in a dynamic tension with different people pulling in different directions. Taking the example of the climate's equilibrium response to 2xCO2, the consensus view of ~2-6C is not because everyone is trying to agree on this range, but because no-one has yet found any credible cause for disagreement, despite numerous alternative models and methods (the range itself represents the amount of disagreement, to a certain extent). We can see in e.g. the recent climateprediction.net results, and the comment published on realclimate.org, evidence of the dynamic tension underlying that consensus view.

So while I have some sympathy for McIntyre's cause, I disagree with his conclusions. While his molehill should not just be ignored, it must also be kept in perspective.


[1] It may be worth noting James's Law of computer bugs - the undiscovered bug probably doesn't matter. FWIW, I found a bug in code I used for a recent publication, and correcting it just makes the results marginally more accurate. The bugs that made the method fail completely were corrected at a much earlier stage :-)


Luboš Motl said...

It makes no sense to generate these completely neutral universal cliches about possible competition.

Obviously the competition did not work in this case if Mann et al. was able to promote their completely wrong paper for so many years.

This simple observation makes this philosophical babbling irrelevant.

William M. Connolley said...

Oh Lubos... you are incorrigible, and so predictable. Describing MBH as completely wrong is so unwise.

Also: this is your last warning re civility.

Anonymous said...

How interesting that you pull up Lubos for "incivility" for describing the MBH99 paper as "completely wrong".

It appears to be the reflexive attitude of AGW promoters to censor even mild criticisms of MBH and treat the responses of critics as if they were heretics to some medieval fundamentalist belief.

You may not realise it, but science has no time for shibboleths and "no go areas".

You can censor on one website but you cannot censor the internet nor the ongoing inquiry of science into everything claimed but yet to be proven. Censorship is for moral and intellectual cowards who are afraid of scientific inquiry.

William M. Connolley said...

No, I told Lubos he was unwise in his description of MBH. I pulled him up for incivility for "philosophical babbling" - fairly mild for most people, but Lubos is a serial offender.

Incidentally, you are quite wrong to say that science has no place for no-go areas: flat-earthism and creationism/ID are out. Over the MBH issue, I don't see any censorship.

Anonymous said...

Over the MBH issue, I don't see any censorship.

That's because you've already deleted it. It's as simple as that. Going through the comments in this blog, you've already gone for the censorious black pen.

Just imagine if there was another you who threatened expulsion every time you poisoned the well with references to "septics" or "flat earthers" or "creationists".

I think the reason you're so defensive is because somewhere in the back of your mind, where scientific ethics still lurk, you know how badly Michael Mann has behaved and has continued to behave. You know fine well that science is founded not on consensus or the political weight given to PhDs, but on empirical reliability and replicability.

That's why MBH99 is so bad, because it cannot be fully replicated. And it cannot be replicated because Michael Mann refuses to disclose his complete methodology.

The rest is chatter.