Which brings in this picture. Black is a 200-year control run, with the g95 compiler on a 4-processor Opteron system (using 3 procs for most of the time). Blue is a rather older run on an Athlon system under the antique Fujitsu/Lahey compiler. Red is an in-between run on Opteron with the Portland Group compiler (pgi). All are seasonal data, differenced from 100-year means of an "official" control run. What you'll notice is that the red run has a distinct climate drift, which is enough to make it unusable. Blue looks OK; black has been run out long enough to be sure it's OK. The grey shaded bit is some kind of 95% confidence limit based on the variability of the 100-year "official" run.
Quite why the Opteron/pgi run drifts I don't know. It's 99.999% the same code as the other runs (differing only in whatever it took to make the compiler accept it). Most likely there is some compiler bug in there; but I will probably never know.
By eye, the 200 year run has no drift. By line fitting, the results are:
0- 50: [ 0.00168505, 0.00451390]
0-100: [ 0.00052441, 0.00159977]
0-150: [-0.00050959, 0.00009494]
where I've shown the (95%) confidence intervals for a line fit over the first 50, 100, and 150 years. Which shows up the internal variability quite nicely. If I'd just taken the first 50 years I might have believed in a drift of 0.3 °C/century, which is small but not perhaps totally negligible. By 100 years the "drift" has a central value of 0.1 °C/century, which would be negligible. Out to 150 years there is no statistical trend. Out to 200, a trivial cooling. Note, BTW, that all these sig estimates are rather thrown-together and should be a bit wider to take proper account of autocorrelation.
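For the curious, here's a minimal sketch of the sort of slope-plus-confidence-interval calculation involved. The series below is synthetic (an imposed drift plus white noise), not the actual model output, and I've used a normal-approximation critical value of 1.96; as noted above, a proper treatment would widen the intervals to account for autocorrelation.

```python
import math
import random

def trend_ci(y, tcrit=1.96):
    """Approximate 95% confidence interval on the OLS slope of
    annual series y (units per year). Uses a normal critical value;
    Student-t would be slightly wider for short series, and serial
    correlation would widen it further still."""
    n = len(y)
    x = range(n)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    intercept = ybar - slope * xbar
    # residual variance with n-2 degrees of freedom
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r * r for r in resid) / (n - 2)
    se = math.sqrt(s2 / sxx)
    return slope - tcrit * se, slope + tcrit * se

# Illustrative series: imposed drift of 0.01 degC/yr plus white noise.
random.seed(0)
series = [0.01 * t + random.gauss(0.0, 0.05) for t in range(200)]
for span in (50, 100, 150, 200):
    lo, hi = trend_ci(series[:span])
    print(f"0-{span:3d}: [{lo:+.5f}, {hi:+.5f}]")
```

With a real drift imposed, the interval narrows and converges on the true slope as the fit window lengthens; with drift-free noise it wanders about zero at short windows, which is exactly the 50-year effect described above.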