Scary scaling

A while ago - back in 2002 I suppose - I heard vague refs to a paper about "scaling" which somehow demonstrated that global climate models fail to reproduce real climate when tested against observations. Since this was being posted to sci.env by the usual nutters I didn't pay too much attention, and as far as I can see neither did anyone else, though it occasionally recurs. For one thing, the original article was published in Phys Rev Lett, which I (and I think most climate folk) don't read; and pdfs weren't scattered across the web quite as freely in those days. And for another, whatever they were saying was so abstruse as to appear meaningless (even the nutters didn't push it much, because they had no idea what it was about either).

However, someone who isn't a nutter (thanks Nick! But I was right: it's the Israelis) has re-drawn it to my attention, and even provided me with it on paper, so I've read it. You can too: it's Global Climate Models Violate Scaling of the Observed Atmospheric Variability by R. B. Govindan, Dmitry Vyushin, Armin Bunde, Stephen Brenner, Shlomo Havlin and Hans-Joachim Schellnhuber. And it did get some attention: e.g. from Nature (subs req) (reputable of course, but sometimes over-excitable). But... is it any good?

Weeeeeelllll... probably not. This is yet more of the fitting-power-laws-to-things stuff. They use "detrended fluctuation analysis" (DFA), which I don't understand in detail, but that doesn't matter: we'll just read the results. So... Govindan et al. do their DFA on observations from 6 (rather oddly chosen) stations, and on 6 GCMs. The first oddness is their choosing Prague, Kasan, Seoul, Luling (Texas), Vancouver and Melbourne as representative of the world. Never mind. They get A ~ 0.65 for these stations. Don't worry too much about what A is; it's related to the memory of the system: A ~ 0.5 is no memory (white noise); A ~ 1 is long memory (red noise). They assert boldly that this 0.65 is therefore a Universal Value. They discover that the GCMs, forced by GHGs only, by contrast get A ~ 0.5. Which, say Govindan et al., means that the GCMs overestimate the trends. Just to make sure you won't miss this, they repeat the same at the end. But... this is not news. The fact that GCMs forced only by GHGs overestimate the trends is in the TAR (like just about everything else you need to know about climate change, it's in the SPM, as fig 4). When you add in sulphates, the A from the models increases somewhat (to 0.56-0.62 ish); but that's arguably still too low. So what's up?
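For the curious, first-order DFA isn't actually all that mysterious: integrate the anomalies, detrend the resulting profile in windows of length s, and read A off the log-log slope of the fluctuation F(s) against s. A minimal sketch (my own Python, not the authors' code; function names are mine):

```python
import numpy as np

def dfa_exponent(x, scales):
    """First-order DFA: integrate the anomalies, remove the local linear
    trend in windows of size s, and fit F(s) ~ s^A on a log-log plot.
    A ~ 0.5: no memory (white noise); A ~ 1: long memory (red noise)."""
    profile = np.cumsum(x - np.mean(x))          # integrated anomalies
    flucts = []
    for s in scales:
        n_win = len(profile) // s
        t = np.arange(s)
        f2 = []
        for i in range(n_win):
            seg = profile[i * s:(i + 1) * s]
            coef = np.polyfit(t, seg, 1)         # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))      # RMS fluctuation F(s)
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

# white noise should duly come out at A ~ 0.5
rng = np.random.default_rng(42)
a_white = dfa_exponent(rng.standard_normal(50000), [16, 32, 64, 128, 256])
```

On real station data you would first remove the annual cycle; here white noise comes out close to A ~ 0.5, as advertised.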

Which is where we turn to... Fraedrich and Blender, Scaling of Atmosphere and Ocean Temperature Correlations in Observations and Climate Models. Also in PRL. Who argue that G et al. are wrong: their Universal Value of A ~ 0.65 is not universal at all. They do a much wider analysis: instead of just a few stations, they use a gridded dataset across as much of the globe as they can. And they find (surprise!) exactly what you would expect: over the oceans, high A (~ 0.9) and over the continental interiors, low A (~ 0.5) and in between, mixed A (~ 0.65). Why is this exactly what you expect? Because the ocean has a long memory but the land doesn't. And... if you draw the same plot in a GCM (ECHAM4/HOPE) you get a remarkably similar pattern. So they come to a quite opposite conclusion: the DFA analysis actually shows the GCM performing rather well. And they conclude: The main results of this Letter follow in brief: (i) The exponent A ~ 0.65 is predominantly confined to coasts and land regions under maritime influence. (ii) Coupled atmosphere-ocean models are able to reproduce the observed behavior up to decades. (iii) Long time memory on centennial time scales is found only with a comprehensive ocean model. That last point arises because they tried the same analysis with a slab ocean and with fixed ocean; unsurprisingly, the scaling doesn't work in those cases.

F+B also picked their own seemingly odd station, Krasnojarsk, as a continental-interior station, and showed (their fig 1) a scaling of A ~ 0.5 between 1y and decadal scales. At this point Govindan drops out, but some of the original authors reply, saying that (i) the scaling isn't 0.5 at K; and (ii) it isn't 0.5 at other interior points either (they pick yet another scatter of random stations). F+B reply that (i) oh yes it is; and (ii) maybe it's the fitting interval: they use 1-15 years; the others are using 150-2500 days. On (i), looking at the pics, I'm with F+B and I can't see what the others are up to.

F+B, incidentally, argue that a control-run GCM (ie no external forcing) is quite good enough to get the long-timescale correlations, and that other forcing doesn't much help (for these purposes at least; you might perhaps have argued that adding in solar and volcanic forcing and stuff might help further). In Blender, R. and K. Fraedrich, 2004: Comment on "Volcanic forcing improves atmosphere-ocean coupled general circulation model scaling performance" by D. Vyushin, I. Zhidkov, S. Havlin, A. Bunde, and S. Brenner, Geophys. Res. Letters, 31 (22), L22502, DOI: 10.1029/2004GL021317, they criticise Vyushin (one of the et al. with G) for suggesting that volcanic forcing helps, on the grounds that it simply isn't needed to get these A values right.

So after all that, what do we end up with, and what have we learnt? Assuming F+B are more right (and I think they probably are, based on what I've read so far) we've learnt very little. The fact that T increases are bigger sans aerosols is bleedin' obvious; as is the longer memory of the oceans. We have a validation of the GCMs by another measure, but a rather abstruse measure and not an obviously useful one.


Blogger Lumo said...

This comment has been removed by a blog administrator.

8:39 pm  
Blogger Lumo said...

This comment has been removed by a blog administrator.

9:06 pm  
Blogger CapitalistImperialistPig said...

Belette - I don't know what Lubos had to say in his erased comments, but I would like someone to address a comment he made on his blog: Oceans or continents can change the (dimensionful) timescales of exponentially decaying processes or the overall size of the temperature fluctuations, but they should not change the (dimensionless) critical exponents of the power laws.

You, and the original authors, both say gamma = .5 corresponds to white noise, and that larger values correspond to memory, which seems to contradict Lumo, but could you make the argument a bit more directly? What exactly is Lumo missing here?

3:36 am  
Blogger Arun said...

Lubos provides a link to http://arxiv.org/abs/physics/0305080

5:04 am  
Blogger Arun said...

I also find this, emphasis mine:


Power-Law Persistence in the Atmosphere: Analysis and Applications

Authors: Armin Bunde, Jan Eichner, Rathinaswamy Govindan, Shlomo Havlin, Eva Koscielny-Bunde, Diego Rybski, Dmitry Vjushin


We review recent results on the appearance of long-term persistence in climatic records and their relevance for the evaluation of global climate models and rare events. The persistence can be characterized, for example, by the correlation C(s) of temperature variations separated by s days. We show that, contrary to previous expectations, C(s) decays for large s as a power law, C(s) ~ s^(-gamma). For continental stations, the exponent gamma is always close to 0.7, while for stations on islands gamma is around 0.4. In contrast to the temperature fluctuations, the fluctuations of the rainfall usually cannot be characterized by long-term power-law correlations but rather by pronounced short-term correlations. The universal persistence law for the temperature fluctuations on continental stations represents an ideal (and uncomfortable) test-bed for the state-of-the-art global climate models and allows us to evaluate their performance. In addition, the presence of long-term correlations leads to a novel approach for evaluating the statistics of rare events.

5:07 am  
Blogger Arun said...

There is a description of the DFA algorithm in section II of this:


How does one apply this to the weather, on the scale of days to years? How does one take out the annual periodicity, for instance?

5:34 am  
Blogger Arun said...

I think Belette's A is (1 - gamma/2).

Thus physics/0208019, which finds gamma = 0.4 for small-island weather stations, has A = 0.8, which tends to agree with F+B about the oceans.
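The conversion Arun is using is simple enough to write down (a trivial sketch; the function name is mine):

```python
def a_from_gamma(gamma):
    # DFA exponent from the correlation exponent C(s) ~ s^(-gamma): A = 1 - gamma/2
    return 1.0 - gamma / 2.0

# gamma = 0.4 (island stations)      -> A = 0.8  (oceanic, long memory)
# gamma = 0.7 (continental stations) -> A = 0.65 (the claimed "universal" value)
```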

6:00 am  
Blogger Belette said...

Arun, CIP - the idea that the ocean has a "memory" is a commonplace, and the name you want is Hasselmann (Tellus, 1976, p473 is probably the first). The idea being that the ocean integrates short term noise to give you a red spectrum.
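A toy version of Hasselmann's mechanism, for concreteness (my own sketch, not from the paper; the damping value is arbitrary, chosen only to make the reddening visible):

```python
import numpy as np

# Hasselmann (1976) picture: a slow ocean temperature T integrates fast
# atmospheric white-noise forcing, with weak damping -- an AR(1) process.
rng = np.random.default_rng(1)
forcing = rng.standard_normal(50000)   # fast "weather" noise
damping = 0.01                         # weak relaxation back to equilibrium
t = np.zeros_like(forcing)
for i in range(1, len(forcing)):
    t[i] = (1 - damping) * t[i - 1] + forcing[i]

# The integrated series has a red spectrum: power piles up at low frequencies.
power = np.abs(np.fft.rfft(t - t.mean())) ** 2
low, high = power[1:100].mean(), power[-100:].mean()
```

The low-frequency end of the spectrum comes out orders of magnitude above the high-frequency end, even though the forcing itself is flat (white).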

As to what Lubos is missing - it's more what he is assuming. The very idea that there is this universal power law is deeply dodgy, and probably simply wrong - I suggest reading the F+B paper for that.

9:56 am  
Blogger Wolfgang said...


It seems that you and Lubos are talking about different things.

You write that exponent = 1/2 is white noise, but this is of course not true for Lubos' exponent.

(A previous comment mentions the same issue.)

In addition, it seems to me that
your post is inconsistent:

"A ~ 0.5 is no memory (white noise); A ~ 1 is long memory (red noise). [..]
They discover that the GCMs [..] get A ~ 0.5. Which [..] means that the GCMs overestimate the trends. [..] But... this is not news."

Either A = 0.5 is white noise or trends but not both.

4:41 pm  
Blogger EliRabett said...

Arun - they killed the annual cycle by finding an average temperature T_ave for each day, and then working with DELTA T_i = T_i - T_ave.
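That recipe is easy to make concrete (a sketch in Python; the function name is mine, and leap days are ignored for simplicity):

```python
import numpy as np

def daily_anomalies(temps, days_per_year=365):
    """Remove the annual cycle: compute T_ave for each calendar day across
    all years, then work with the anomalies T_i - T_ave."""
    temps = np.asarray(temps, dtype=float)
    n_years = len(temps) // days_per_year
    grid = temps[:n_years * days_per_year].reshape(n_years, days_per_year)
    t_ave = grid.mean(axis=0)        # climatology: mean for each day-of-year
    return (grid - t_ave).ravel()    # DELTA T_i = T_i - T_ave

# a pure annual cycle should come out as all-zero anomalies
days = np.arange(10 * 365)
cycle = 10.0 + 5.0 * np.sin(2 * np.pi * days / 365)
anoms = daily_anomalies(cycle)
```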

Power laws are numerology without a theoretical basis. What worries me about these exercises is that the data clearly has curvature (on a log-log plot!) for most of the stations, and that means that the fit to an exponent depends on where you start your fit.

4:42 pm  
Blogger Belette said...

Wolfgang - well, I started this! If Lubos wants to use a different nomenclature that's his (and perhaps his readers'!) problem.

Govindan (and F+B) are both using S(f) ~ f^-B, with B = 2A - 1 (they use A as the primary measure). So A = 0.5 is B = 0. And the relation to gamma is as Arun says. It's on p2 of F+B with refs.
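Spelled out, the exponent bookkeeping in that paragraph is just (function name mine):

```python
def spectral_exponent(a):
    # power-spectrum exponent from the DFA exponent: S(f) ~ f^-B, B = 2A - 1
    return 2.0 * a - 1.0

# A = 0.5 (white noise)             -> B = 0: flat spectrum
# A = 0.65 (G et al.'s stations)    -> B = 0.3
# A = 1.0 (red noise)               -> B = 1
```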

As for overestimating the trends... I'm just reporting what G et al. say. I'm not quite sure why they think A = 0.5 indicates overestimation... you're welcome to read the paper and explain their reasoning if you like! My point was that GHGs only is well known to overestimate the trends, so there is nothing new there.

Eli - I don't think this is numerology. The F+B paper shows that the A's follow a physically plausible pattern: what you would expect, in fact. And (F+B fig 1) they clearly are straight (ish) lines over a fair portion of the time range - 1-15y.

5:03 pm  
Blogger Arun said...

From F+B "Long time memory on centennial time scales is found only with a comprehensive ocean model".

Are the oceans the reason we can meaningfully talk of climate?

2:44 am  
Blogger Belette said...

Arun - the point about the ocean as opposed to the land (though I'm sure RP Sr would dispute this...) is that being a large fluid reservoir it has a "memory" and can thus support variability on long timescales. Whereas the land, being fairly boring (at least as represented in GCMs) can't.

So we would have a climate if the ocean temperatures were fixed to be annually-repeating (as they can be in atmos-only GCM runs); but we can have a climate with rather more interesting long-term fluctuations with an ocean too.

Potentially the same could be true at even longer timescales with large amounts of land ice (LGM; D-O cycles) though this is less clear.

10:25 am  
