Somewhere in the discussion around and after a post of mine, Junkscience is junk, I realised that people just don't understand repeatability in GCMs, or indeed much about them at all. This may be in part because they have assumed "telling us precisely which model you used, which initialisation files and what forcing values you supplied (so we can indulge that repeatability thing" (from http://www.junkscience.com/MSU_Temps/Model_Request.htm) made some kind of sense. Sadly I now can't find where the comments were, so I'll try to reconstruct it.
So... although I'm fairly sure much of this is the same for most modern atmosphere-ocean GCMs, I'm actually talking specifically about [[HadCM3]].
There are two sorts of repeatability. In the first, you run the model again and get *exactly* the same results, down to the last bit: this is called bit-reproducibility. In the second, you run the model again and get *scientifically* the same result (the same climate, and probably the same response to forcing within statistical error), but the exact details of the weather differ. Because the climate is chaotic (in the sense that small initial perturbations rapidly amplify) and GCMs reproduce this well, if your model diverges even slightly from bit-reproducibility it will quickly diverge strongly, because the details of the individual weather will end up totally different. But the climate (the statistics of the weather) will be the same.
Which is a good time to point out that GCMs are really weather forecast models run at lower resolution but for much longer. When you *see* GCM results you normally see global-mean annual-mean data. Don't be confused by this (as some clearly are) into thinking this is what the model directly outputs. It directly outputs temperatures at each gridpoint (96*73 in the case of HadAM3 at standard resolution; in fact at 19 atmospheric levels too; and all the other model variables as well) at each timestep (1/2 hour), and this is then averaged up into what you see.
For scientific purposes, bit-reproducibility is not necessary. The weather isn't supposed to match any individual events anyway, so you don't care if you get different weather. But for computational purposes, it's rather useful. Firstly, if you're looking for the bug that caused a model crash, it's pretty hard to find if the model isn't going to follow the same path when run again. Secondly, the model runs on multiple processors. Bit-reproducibility means it will follow exactly the same path even if the number of processors is different, or if the decomposition is 2*4 instead of 4*2. Also, a good test of the correctness of the MPP decomposition is to run the model on different numbers of processors and check you get the same answer.
But bit-reproducibility is only possible on the same processor type, with the same compiler (probably the same version) and the same compiler options, and exactly the same code, and exactly the same input constants, and exactly the same start files, which are themselves enormous. And of course you have to be competent, which rules out Junkscience right from the start. In theory, from the papers and documentation you could write a version of HadCM3 that would be scientifically equivalent. You have no hope of writing one that is bit-equivalent. It's also possible to run the model in non-bit-reproducible mode if you want to (it's slightly faster; for bit-reproducibility you need to manage the cross-processor calculations in a particular way to get them to come out exactly the same).
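A toy illustration of why those cross-processor sums need managing (this is Python, and has nothing to do with the actual model code): floating-point addition isn't associative, so summing the same numbers in a different order - which is exactly what a different processor decomposition does - can change the last bits.

```python
# Floating-point addition is not associative: the same numbers summed
# with a different grouping can differ in the last bit.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c   # one grouping of the sum
right = a + (b + c)  # another
print(left == right)  # False: the two groupings differ in the last bit

# The same thing happens when a global sum is split across processors:
# a 1-processor serial sum vs 2-processor partial sums combined at the end.
vals = [(-1.0) ** i / (i + 3.1) for i in range(10000)]
serial = sum(vals)                            # "1 processor"
two_proc = sum(vals[:5000]) + sum(vals[5000:])  # "2 processors"
print(serial - two_proc)  # typically a few ulps, not exactly zero
```

Harmless in itself; but feed a last-bit difference into a chaotic model and the weather diverges, which is why bit-reproducible codes force a fixed summation order.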
And sometimes you don't want bit-reproducibility. The figure that JS are too stupid to see (SPM fig 4) shows an ensemble of model runs deliberately started from different initial conditions so that the weather would be different in each, to get a feel for the range of natural variability. If you started a model (assumed to be 100% correct) back at 1860 with 99.9999% accurately known initial conditions, you wouldn't expect it to track the individual years accurately (which is why those stupid sci-fi novels about time-travel, where they wander around being careful not to touch anything, and only get into trouble when they accidentally crush a butterfly, are nonsense: just being there, standing in the way of the air currents, is enough to totally change the weather and hence all of history). Hence to know if your model is right, you need an ensemble of runs to bracket the natural variability (and you may get unlucky and find that the real world took an unlikely branch).
There has been some debate on climateprediction.net about this. The Nature paper claims that where identical runs are handed out, "most give identical results. Where they do not, they are usually very similar".
Participants, however, found on comparing well over 50 pairs that most were very similar but none were identical. Some of the pairs compared had identical processor, processor speed and operating system version. Rerunning the same model on the same PC does seem to produce identical files, while on different PCs the results are usually different (though AFAIK only a few people have tested this). A couple of examples of very large differences have been found, like:
but I think we understand why these unrealistic cooling climates occur.
Part of the discrepancy between "most" identical and none identical may relate to the Nature paper using the first results returned, which would all be from fast modern PCs running Windows. The examples examined by participants are on BOINC and would have a wider range of PCs and operating systems.
I am not disagreeing with what you have written, just showing that even a nobody like me can provide a reference and recount examples which largely back up what you are saying. (The article seemed short of authoritative references :P )
It still seems odd to me that there should be such a large discrepancy between "most" and none being identical. Can you explain?
Excellent post. There are not enough explanations of how GCMs work, which leads to lots of public misunderstanding (it looks like a lot of people believe you can just download a GCM, click on a few options and run it on your PC while surfing the web).
I've clarified your typically crackpot opinions about science on my blog. You have quite a lot to learn about the basic principles of science and the use of computers for scientific purposes in particular.
Hi Lumos. Well, I attempted to explain to you what people in the field know: that even given all the things JS demanded, the run wouldn't be exactly repeatable. And that there is no reason to want it to be.
Sadly, for a presumably intelligent person, your comprehension of anything climate related is poor. I really think your mindset is so fixed that your internal filters reject anything resembling the truth.
Still, it's funny - you've done 3 (?) posts on me so far. By Hobbes' views, that's pretty respectful of you.
John Fleck said -
In a post on my blog a few days ago, I singled out Lumo - unfairly in his view - for his penchant for dismissing his opponents in this debate for being venal, or stupid or both:
In response, he suggested that I was probably not venal, just stupid, and he reiterated his belief in William's stupidity:
I thought that his response rather made my point, far more usefully than I ever could have. And again, here in the Stoat comments, he demonstrates his mastery over a certain style of rhetoric.
Hi John. Until you get your comments fixed you're welcome here. Well, you'll be welcome after too.
As to Lubos: it's dishonest of him to harp on about a factor error that I fixed before he even read the post (and the error wasn't even in the latent heat, which I had right; I'm not sure if Lubos has figured out what the real error was).
No matter really: while he is at that level of desperation he is, as you said, proving a point quite nicely, if not the one he intends (or is it? perhaps it's all a cunning ploy to make us climate folk look good by comparison... :-)
crandles - interesting comment. If the results of supposedly identical runs were not identical, then I don't see why they should be even particularly similar, except climatologically. I'm used to Linux; these are mostly for Windows; possibly Windows fudges stuff... but that seems an unlikely explanation.
Certainly HadXM3 is bit-identical when run on different processors on the same cluster (we have Athlon 1600s and 2100s; the results are, I'm fairly sure, identical).
cp.net is not-quite hadsm3, though, because of all that they had to do to make it run under Windows (I guess; I'm not familiar with any of the details).
So... interesting. Where is the debate at cp.net? I found the discussion boards http://www.climateprediction.net/board/ but not anything obvious there.
The discussion got spread around a bit, partly due to the PHP message boards being down for 2 months.
Some started here and continued here. There is also this, which included the none-identical-in-47-comparisons result.
I am sure there is more. I remember calculating the sum of the squares of the differences over time between the same run on different computers. It didn't surprise me that using just the seasonal averages, which were what was easily available, gave insufficient resolution to see how quickly the models diverged before the differences stabilised at a modelled-natural-variability level.
Revisiting these threads, I think I now remember that we have found some identical results. These were all with AMD processors. I don't know how many pairs were looked at to find these.
I think climatologically similar is what was meant. Those threads include plotting differences on maps, and the areas look pretty random to me. The GMST seems to stabilise at the same level, with a bit more variation in the precipitation.
>cp.net is not-quite hadsm3
HADSM3 is written in Fortran. Most of what has been added is control stuff in C to implement the downloading application, uploading of files, trickles, checkpointing etc. I imagine they are careful to ensure they mess with the Fortran code as little as possible. However, there are some changes. The main one I am aware of is that cp.net uses 32-bit singles instead of 64-bit precision.
Different maths libraries have been suggested. If one PC uses a different function to calculate a result, the rounding might end up different. Then butterfly sensitivity takes over, giving different regional weather while the GMST stays similar. I haven't got to the bottom of when different maths libraries may be used on different PCs. "Most" versus none identical seems a big gap, though.
Maybe you are just lucky to be using Athlons that use the same maths library. I wonder if you would find a difference if you used a P4, or whether it is only 32-bit cp.net that gets these differences, or whether Linux is immune.
I remember long discussions on comp.arch about floating point arithmetic gotchas, but can't locate them. There is stuff like this around
and I'd imagine there is some way of testing the systems on which simulations are run for their compliance to the various IEEE floating point standards.
"Just being there, standing in the way of the air currents, is enough to totally change the weather and hence all of history."
This is nonsense of the often-quoted sort: "A butterfly flapping its wings in Hong Kong (or somewhere else) can cause a tornado on the other side of the world". Of course it does not work like that. Small perturbations die out.
Different maths libraries look like the best explanation so far. Even a very fractionally different implementation of sin or sqrt or anything would be enough to remove bit-comparability. I don't think the 32/64-bit stuff matters (as long as you use the same for all runs, of course; I have tested bit-comparability at 32 bit for HadAM3). Our environment propagates exactly the same software to all nodes.
I don't know whether the IEEE standard is strict enough to specify down to the last bit.
Note that the famous pentium bug *wouldn't* be a problem for repeatability, as long as you're all using the same chip.
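To illustrate how little it takes, here's a toy sketch (Python, and an arbitrary chaotic map - the logistic map - rather than anything from the model): merely rounding to 32 bits after each step is enough to decorrelate two otherwise-identical runs.

```python
import struct

def f32(x):
    """Round a double to the nearest 32-bit float (returned as a double)."""
    return struct.unpack('f', struct.pack('f', x))[0]

def step(x):
    # The logistic map x -> 4x(1-x): a standard chaotic toy, standing in
    # for one model timestep (nothing to do with the real model equations).
    return 4.0 * x * (1.0 - x)

x64 = 0.3        # trajectory kept in 64-bit precision throughout
x32 = f32(0.3)   # same start, but rounded to 32 bits after every step
maxdiff = 0.0
for _ in range(200):
    x64 = step(x64)
    x32 = f32(step(x32))
    maxdiff = max(maxdiff, abs(x64 - x32))
print(maxdiff)   # O(1): precision alone is enough to decorrelate the runs
```

A last-bit-different sin or sqrt would do exactly the same job as the rounding here; the climatology of the two runs would still agree, of course.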
Anon - perturbations, even the tiniest, amplify (I'm not aware of any proof that they *must* amplify, but in practice they seem to). OTOH the butterfly stuff is often misused: a butterfly flapping its wings will certainly lead to totally different global weather in a month's time, and within a year is rather likely to cause a tornado to appear somewhere it wouldn't otherwise have done; but of course other tornadoes that would have appeared, won't. The mistake-in-interpretation that I think people make is to think that the *energy* for the tornado comes from the butterfly, which is obviously wrong.
Belette - Re: amplification of perturbations.
Isn't it more precise to say that it's unpredictable whether perturbations will amplify or not? If they always amplified, there would be no predictability in the system. Climate is a complex non-linear system with many positive and negative feedbacks. Positive feedbacks amplify, negative feedbacks damp. The predominance of negative feedbacks on the largest scales gives the system its predictability.
PS - I suggest you keep even Lumo's rude comments. They increase blog traffic and revenue!
Perturbations again: the general feature of physical systems is robustness and stability, not chaos and unpredictability. Furthermore, physical action is highly local; small perturbations simply don't propagate that far. (EM radiation from a source dropping off as 1/r^2 could be one of many examples.) The butterfly thinking is wrong - there simply aren't any conceivable physical channels to propagate the "butterfly effect" to the other side of the world, or far into the future for that matter. Believing in the butterfly effect is like believing in astrology, that far-off constellations of stars and planets can affect things on Earth. There simply aren't any physical channels to propagate such effects. So what about chaos theory? If you have a nonlinear system of equations of motion modelling a would-be physical system, you might get extreme sensitivity to initial conditions. So what! This says nothing about propagating effects in space and time. Indeed, it says the very opposite: the loss of predictability tells you that the occurrence of a tornado on the other side of the world cannot be traced to a particular butterfly flapping its wings in some other particular far-off place. The locality of physical action prevents such tracing of effects - indeed it does not occur. Tornadoes are caused by fairly local circumstances.
CIP - as far as I can tell, no: all small perturbations amplify (I guess it can't be literally all; it must be possible, at least in theory, to design one that would die).
wrt perturbations, we're not really into the feedbacks area: the stuff I'm talking about is small changes to, say, surface pressure: they would amplify within a pure dynamics model of the atmosphere. If we switch from GCMs to non-linear equations, then there must be perturbations that tend to decay, I suppose... but if you tweak one model variable you will (Fourier analysis) be tweaking a whole pile of modes in general, some of which will grow.
Within NWP, as far as I can tell, it's simply taken for granted, as long established, that these things grow.
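The classic toy demonstration of this growth is Lorenz's 1963 system. A sketch (Python, nothing to do with any GCM): two runs differing by one part in 10^8 in one variable end up completely decorrelated.

```python
def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One fourth-order Runge-Kutta step of the Lorenz-63 equations."""
    def f(x, y, z):
        return s * (y - x), x * (r - z) - y, x * y - b * z
    k1 = f(x, y, z)
    k2 = f(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], z + 0.5 * dt * k1[2])
    k3 = f(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], z + 0.5 * dt * k2[2])
    k4 = f(x + dt * k3[0], y + dt * k3[1], z + dt * k3[2])
    return tuple(v + dt * (p1 + 2 * p2 + 2 * p3 + p4) / 6
                 for v, p1, p2, p3, p4 in zip((x, y, z), k1, k2, k3, k4))

a = (1.0, 1.0, 20.0)           # "control" run
p = (1.0 + 1e-8, 1.0, 20.0)    # "perturbed" run: one part in 10^8 in x
maxdiff = 0.0
for _ in range(3000):          # 30 model time units at dt = 0.01
    a = lorenz_step(*a)
    p = lorenz_step(*p)
    maxdiff = max(maxdiff, abs(a[0] - p[0]))
print(maxdiff)  # grows to O(10): the runs have completely decorrelated
```

The statistics of the two runs (the "climate") stay the same; only the "weather" diverges, which is the whole point of the post.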
Ah, Lubos: I do keep the merely rude ones :-(
Anon - as far as physical channels go, of course there are: the atmosphere itself. A small perturbation won't reach across the globe instantly, of course: it needs time to grow and spread, like any other dynamic effect.
I think you are making a common mistake about cause-and-effect. Any one tornado has multiple "causes": mainly, that the atmospheric conditions were about right. But there are others that trigger it, and anyway what "caused" those conditions? I agree that there is a rather irritating way people have of saying a butterfly can "cause" a tornado, and you'd be fully entitled to say "yes, and so did a whole host of other things". There is only one real world, so it's impossible to do multiple runs, so it's impossible to track these back exactly.
In a model, that's not so: you do one run, and keep the result. You do another, with only a tiny perturbation from the first in its initial conditions, and get another result. Which will include atmospheric disturbances that weren't in the first (not tornadoes, since GCMs generally don't have the resolution to see them). You could then meaningfully say that the small disturbance had "caused" the larger ones.
"The butterfly thinking is wrong - there simply aren't any conceivable physical channels to propagate the "butterfly effect" to the other side of the world, or far into the future for that matter. Believing in the butterfly effect is like believing in astrology"
No it isn't! One molecule hits another molecule, those 2 hit 2 others, those 4 hit 8 ...
Question is whether the size of the effects increases. Ever heard of Brownian motion? While that is with pollen floating on water, I don't really see any difference. Pollen is much larger than a molecule. Pollen could cause an animal to sneeze, and I am sure animal interactions would be chaotic. It is reasonably obvious the effects would propagate in the real world. It is less clear with a climate model, but lots of people have done experiments, including me, and found small changes do grow.
I suppose the great storm on Jupiter shows that it may be possible to set up a system where large changes generally disappear over time. On earth we don't seem to get such steady systems.
"No it isn't! One molecule hits another molecule, those 2 hit 2 others, those 4 hit 8 ...
Question is whether the size of the effects increases. Ever heard of Brownian motion? While that is with pollen floating on water, I don't really see any difference. Pollen is much larger than a molecule."
Brownian motion has nothing to do with the non-existent butterfly effect. If the analogy were right, we would expect the motion of the pollen to move the bucket containing the floating pollen itself. We never observe that. Effects don't amplify in that way.
Btw, the butterfly effect could very well be designated a piece of pseudo-science. It is, for one thing, not falsifiable.
Within GCMs, it's provable, by direct demonstration.
Perhaps you'd have better luck posting over at RP Sr's blog.
Belette – You are correct of course: small-amplitude perturbations always amplify in an unstable dynamical system. I had confused myself by conflating short-term predictability with long-term predictability. Communing briefly with Eugenia Kalnay's Atmospheric Modeling, Data Assimilation and Predictability clarified matters for me a bit. (She has a nice historical discussion of Lorenz's original "Butterfly Effect" talk, and even of the Ray Bradbury SF story that prefigured it, "A Sound of Thunder".) She also discusses the underlying math and physics.
Nonlinear dynamical systems can have some long term (statistical) predictability if the dimension of the phase space their solutions explore (their strange attractor) is small compared to the whole phase space. Climatology, in this sense, consists of mapping the strange attractor of the climatic dynamical system, and how it is affected by changes in inputs (like CO2 and particulates), parameterizations, etc.
Does that sound more correct?
So now there are two physically non-intuitive things about climate models, in my opinion, and they confuse me.
1. Small average change that lies behind huge change in balance.
2. Butterfly effect
Examples (all AFAIK)
1. The tropical regions absorb more solar radiation than they radiate into space - an energy surplus - while at the upper latitudes there is an energy deficit. So, there is a net flow of energy from the tropics towards the poles.
Now, if I believe Imbrie and Imbrie, when the northern hemisphere was in an ice age, with many times as much area under ice as today - from New England to Washington and north, and likewise in Europe and Asia - the Caribbean was just 2K cooler! I.e., I imagine with all that ice the area with an energy deficit was much larger, and the energy flow was much different from today, and yet the tropical sea was less than 1% cooler!
I still feel that my previously made point is correct - that huge changes in energy flows hide behind small changes in average temperatures.
2. If a butterfly flaps its wings, the effects should quite quickly damp out. Suppose I'm in, say, the Houston Astrodome, and a butterfly flaps its wings at the 50-yard line. IMO, that dissipates rather quickly, and it won't change anything at the boundary of the field. I think fluctuations have to be big enough - and if you're finding the butterfly effect in your GCMs it is a sign that your gridsize is still too coarse, so the metaphorical butterfly is many kilometres across and its wingflap is truly a large fluctuation compared to a real butterfly wingflap.
I'd say that most physicists would want to understand why physical intuition is defied before they feel they understand why/how the climate models work.
Gosh, this is all a bit harder work than I expected. I thought I was just saying the obvious. Ho hum!
But just to show you how easy it is to get lost, RP Sr does get hopelessly lost on his own blog. Well, maybe not hopelessly: there is still time yet...
CIP - OK. I think we're in agreement (BTW: the sci-fi author I thought I was knocking was, I thought, Asimov: the story I vaguely remember is about going back in time, onto a special walkway (so as not to damage the plants, ho ho) and shooting a dinosaur that was due to die anyway. And coming back to discover the Prez was different).
Tim Palmer has done some nice stuff on a "regime change" approach to climate change, saying (for example) that one way the climate could get warmer is to flip into a more-ENSOs phase, rather than just generally warming. But I think he would admit that this is just speculation. He also had some stuff showing (via the Lorenz equations) that if you wrote a "forced Lorenz equation" you could show that although the system was chaotic, the response, climatologically (ie, averaging), was predictable. That also showed a "regime shift" structure: the attractors don't move, but the relative population does.
Also (oh no...) you might like this much earlier post on climate predictability.
Arun: (1) I'm not sure I fully understand you. Firstly, calling a 2K temperature change "less than 1%" is not reasonable, because you're referring it to 0K. More reasonable might be to relate it to the pole-to-equator temperature difference. I'm not sure, either, that the area with the energy deficit grows all that much. Insolation changes are small - you mean the albedo effect, I suppose? Possibly R = kT^4 comes into it... I'm not sure.
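Quick arithmetic on the T^4 point (just Stefan-Boltzmann, assuming ~300K for the tropical sea surface; nothing model-specific): a 2K change is about 0.7% in T, but radiated flux goes as T^4, so the fractional flux change is roughly four times that.

```python
T0, dT = 300.0, 2.0   # rough tropical sea-surface temperature, and the 2K change

frac_T = dT / T0                           # fractional change in T: ~0.0067
frac_flux = (T0 + dT) ** 4 / T0 ** 4 - 1   # fractional change in sigma*T^4
print(frac_T)      # ~0.7%
print(frac_flux)   # ~2.7%, close to the linearised 4 * dT / T0
```

So even referred to absolute temperature, the flux change is small in percentage terms; the interesting comparison remains with the pole-to-equator gradient.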
(2) Butterflies again. I think I see a misunderstanding: I'm *not* saying that the butterfly causes some wave that grows and amplifies. If we go back to waves on ponds, the butterfly is a thrown stone that goes plonk, and ever-diminishing ripples spread out. Yes. The effect is different: because the dynamics are (so?) non-linear, the small perturbations nonetheless affect the way other waves interact with each other. It's not resolution-dependent.
1. Yes, I mean that with ice a large area is reflecting 80-90% of the sunlight; the 2K is small in the T^4 law.
2. Butterfly effect - I'm not thinking of amplification or anything - just, an extra butterfly flap in Florida in January should make virtually no change to the trajectory of Katrina.
I find it surprising how people are generally hostile to the concept of chaos.
Did Roger not read the caption under figure 6 that he referred to which said "However, any two initially nearby trajectories in the attractor do not remain nearby"?
If you think chaos is difficult to create or only affects things in exceptional circumstances, try the following:
Open a spreadsheet. Enter 3 in B1, 1.1 in A3, 1.2 in A4 and 1.201 in A5. In cell B3 enter =A3-(A3-1)*$B$1*A3*A3. Copy cell B3 to the range B3:IV5. Now graph those 3 series of numbers and admire a chaotic system.
Try different pairs of similar numbers in A4 and A5. Is using 1.2 and 1.201 an exceptional case or does the same happen from almost all pairs of numbers?
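For anyone without a spreadsheet handy, the same experiment sketched in Python (the formula is exactly the spreadsheet one above):

```python
R = 3.0  # the $B$1 constant

def step(x):
    # the spreadsheet formula: =A3-(A3-1)*$B$1*A3*A3
    return x - (x - 1.0) * R * x * x

# the three starting values from A3, A4, A5
runs = {start: [start] for start in (1.1, 1.2, 1.201)}
for xs in runs.values():
    for _ in range(250):  # roughly columns B..IV of the spreadsheet
        xs.append(step(xs[-1]))

# 1.2 and 1.201 start only 0.001 apart, but the difference soon
# grows to the full size of the attractor:
diffs = [abs(u - v) for u, v in zip(runs[1.2], runs[1.201])]
print(max(diffs))
```

Plot the three series and you get the same chaotic wiggles as the spreadsheet graph.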
Why do people seem to reject chaos theory without carrying out simple experiments? My hypothesis: people are conditioned by maths teaching to expect simple problems to have simple answers, and complex answers to arise only from complex questions. The problem is that people do not experience random problems. Maths teaching involves being given questions that have reasonably easy answers, so they have only seen a very biased sample of possible problems.
If people did see a random selection of problems then chaos would arise much more frequently. So people don't expect it due to the biased sample of problems they have seen.
crandles - Hostility to chaos is not the problem. The problem is that it's highly nonintuitive. Poincaré discovered it in 1900, but it took almost 70 years for mainstream physics to accept it. It's easier to get a feel for it now that simple computer experiments can demonstrate it.
Arun - Sensitive dependence on initial conditions is not strange to a physicist, it *is* physics (and mathematics). It's called dynamical systems theory.
It's not easy to think of that butterfly wing flap propagating across the world, but the key idea is very old: "for want of a nail, the shoe was lost, for want of a shoe the horse was lost, for want of .... the kingdom was lost."
The most crucial idea is instability. A band of winds headed West to East may be in unstable equilibrium - nudge it ever so slightly one way or the other and the dynamics will amplify the deviation. A somewhat similar process generates Rossby Waves, the long waves in the jet stream that are the big midlatitude weather makers. The differential heating of the equator and the poles is the power source that drives these amplifiers.
Lubos should certainly understand these ideas, even if JS doesn't, so I don't know what the heck his excuse is for his latest rant.
Congratulations to Mann and Bradley once again. Their work with Hughes was the topic of the winning article of the prestigious Dutch journalism award, see here. The 12-page article described why the work, underlying the Kyoto protocol, is nothing other than flawed statistics. You must feel proud to have such colleagues!
Belette - I have read the cited article, and it looks persuasive to me. The points that resonated for me were:
1) The algorithms of Mann et al are not statistically neutral, but select for the hockey stick shape.
2) They misrepresented their methodology
3) They were sloppy in documenting their data and unresponsive when asked about it.
4) Results undermining their conclusions were systematically censored.
Any one of these would be a serious scientific error, and (1) and (4) are damning if true. I have read some of Mann's responses, but I've never seen him squarely address any of these issues. Has he?
CIP - there is a lot of plausible-looking disinformation around. I haven't read the pdf - might have something more interesting to say when I have.
1. I don't think so: my experiments on this are: . See also RC, point 2 - M&M focus on PC1, but that's wrong: the PCs are just being used to data-reduce a large number of time series. PC1 is not the end result. See also point 6 - M&M clearly got their analysis wrong.
2. Again, I don't think so.
3. Their documentation wasn't perfect, and neither is anyone else's. They were initially responsive, but understandably didn't want to hand-hold people bent on attacking them.
4. Provably not, since M&M were published. MBH don't have the power to censor stuff, obviously, only journal editors do.
Finally... all this audit stuff. There is another very interesting temperature series, the MSU mid-troposphere series. The S+C version was much loved by the septics for showing cooling. Then it started showing warming. Then it got revised, revised again, and now shows quite a lot of warming, apparently because S+C made a sign error, though they don't admit this. Yet... where are all these septics screaming for audit? There is quite a bit of senate testimony from S+C that is now, demonstrably, wrong.
I tried posting on RPSr's blog, maybe it will appear eventually...
All perturbations do not grow initially - only the projection onto the growing Lyapunov vector(s) will grow; the projections onto LVs with growth rates of less than 1 will decay. However, a random perturbation _will_ have a non-zero projection onto the growing LVs, and although the shrinking components may mean that it initially decays in magnitude overall, the growing component will eventually dominate and cause the model trajectories to diverge. This might well take longer than a particular forecast duration to become noticeable, which is one reason why people search for the most rapidly growing perturbations (eg via bred vector and SV methods) to generate ensemble forecasts. But on a long enough time scale, the runs will eventually diverge.
Running an NWP model with slightly perturbed initial conditions will show this clearly enough (note that if the perturbation is small enough, it could vanish due to rounding errors, but this is entirely due to limited digital precision).
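A deliberately crude linear caricature of this (fixed growing and shrinking directions with factors 2 and 0.2, standing in for the real Lyapunov vectors): a perturbation lying almost entirely along the shrinking direction first decays in norm, then the tiny growing component takes over.

```python
import math

GROW, SHRINK = 2.0, 0.2    # growth factors of the two directions per step

def norm(v):
    return math.hypot(v[0], v[1])

# perturbation almost entirely along the shrinking direction,
# with a tiny component (1e-6) along the growing one
v = (1e-6, 1.0)
norms = [norm(v)]
for _ in range(30):
    v = (GROW * v[0], SHRINK * v[1])
    norms.append(norm(v))

print(norms[5] < norms[0])    # True: the overall magnitude decays at first...
print(norms[30] > norms[0])   # True: ...but the growing component eventually dominates
```

In a real model the directions rotate from step to step, but the moral is the same: a generic perturbation always ends up on the growing side.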
James - thank you. That is the precise expression of what I was groping towards with "but if you tweak one model variable you will (Fourier analysis) be tweaking a whole pile of modes, in general, some of which will grow".
On RP: I can't see your comment yet, though it seems that Gavin has piled in. I suspect that RP is going to retreat in a cloud of ink, he still seems to have weather/climate mixed.
Isn't one problem with climate models that they are too crude? If you model a 100x100x100 grid (or whatever) with non-linear equations for certain parameters at the grid points, you get chaotic behaviour, unpredictability, et cetera. This is obviously (to me at least) far from real-world climate. One cubic cm of air contains on the order of 10^23 molecules; multiply that by the volume of the atmosphere and you get the real number of degrees of freedom (times 6) for the atmospheric system (roughly). That's the context of the real butterfly effect (one (well, many) molecule(s) hitting more, etc). It remains to show that *that* system contains the non-linearities required.
James - So if you tailor your perturbation to have components only in the subspace of shrinking Lyapunov vectors, will it damp out, or will it be rotated to project onto amplified LV subspaces in subsequent time steps?
Belette - On point 4, the reference is not to any censorship of M&M, but to the claim that MBH systematically excluded from their results a whole set of data, which, if included, change the result.
On point 1, the rather offensively named document you cite has its own ambiguities. I don't understand the most important graph, the one that says "what if we just look at all the data, without PCA". What processing has been applied to the data shown? It is clearly not raw data, but some kind of average. What are the weights, and how were they chosen? If the result is so clear without PCA, what was the point of the PCA?
Apparently neutral statisticians seem to agree that the MBH normalization and algorithms get "hockey sticks" out of red noise.
I think this issue might be a good one for the kind of "science court" I have suggested on my blog.
There is no hostili
Sorry about the last. I have no hostility to chaos, and I'm aware of sensitivity to initial conditions. Nevertheless, I'm skeptical of the butterfly effect, and I'll sketch out why.
Suppose we're simulating the solution of a bunch of coupled non-linear partial differential equations, at a specified precision (say 64 bits of accuracy) and resolution, in some volume of space, for some period of time.
In Run 1, I can choose a compact region of space, and record the history of what is going on at the boundary of that region in a specific simulation run. In Run 2, I can then equally well do my simulation in the space external to the region I chose by using the boundary conditions history I found in Run 1.
Yes, because of precision and resolution limitations, and because this is a chaotic system, my Run 2 simulation using the boundary conditions will differ from the original Run 1 simulation in details; but it should be describing the same physics.
But now I'm in a position to ask what if? type questions.
I now ask about an alternate history, where something different happens within the compact region of space that I chose - namely, a butterfly wing flap. Call this simulation Run 3. Now, if I can show that the butterfly wing flap does not result in a change in boundary conditions for Run 3 from those for Run 2 within the numerical precision and resolution of my simulation, then the butterfly wing flap is irrelevant, and Run 2 and Run 3 should produce the same results.
My argument is that on physical grounds my butterfly in the middle of the Houston astrodome cannot produce a significant change in the history at the boundary of the astrodome, where I hope I've made clear what significant means.
Notice that we don't have to simulate the whole world to figure out whether a butterfly wing flap is significant: physics is local, as always, and one can estimate the effect of a perturbation within a volume at the boundary, and if it is smaller than the precision and resolution of one's calculation, then it is irrelevant.
There are at least two other reasons why I'm skeptical of the butterfly effect, though they are much weaker objections than the one I just posted.
I'll pose the first as a question - is it true that for chaotic systems, renormalization group ideas don't work?
The second is that in a physical system (as opposed to a numerical simulation of that system on a computer), scale is significant. In the first place, we have various partial differential equations in macroscopic quantities like fluid temperature, pressure, density that we can use for climatology because we are able to average over the molecular scale and ignore the discrete, quantum nature of matter. If the physics says that fluctuations at all scales matter, then the PDEs themselves are physically invalid. So, clearly, there is some length scale below which fluctuations are physically irrelevant.
The question then is, are fluctuations important at all macroscopic length scales, or does the immunity of macro-results from micro-fluctuations extend somewhat beyond the microscopic length scale?
I completely agree with the "arun" who obviously knows the concepts and tools of real physics. In particular, "physics is local".
I've blogged here "anonymous" for a day, let me choose the name: Mr Jones
"Mr Jones said... "
Lubos? Is that you?
It's easy to prove and exhibit butterfly like effects in mathematical models. These are very coarse approximations of reality, but there is good reason to believe that similar effects occur in the real world - the same kinds of nonlinearities are there, many physical phenomena behave like this.
The point is not that any particular butterfly wing flap causes any particular tornado - the point is that very small changes have large effects on the long term evolution of the system.
This is counterintuitive if you are used to thinking in terms of equilibrium thermodynamics - but this is a nonequilibrium system. The large scale atmospheric phenomena we see - hurricanes, frontal systems - are amplified versions of phenomena that were originally very small in scale. Somewhat similarly, inflationary cosmologists assume that galaxy clusters are amplified versions of tiny quantum fluctuations of the early universe.
CIP - OK, I've now gone as far into that Dutch article as I care to. It's not even an attempt at balance, but blatant propaganda. Errrmm... that was obvious, wasn't it?
I got as far as: Even Geophysical Research Letters, an eminent scientific journal, now acknowledges a serious problem with the prevailing climate reconstruction by Mann and his colleagues. This is a complete misrepresentation: publication doesn't imply a journal policy! There are plenty of other obviously biased statements before that, though.
I have no idea what data MBH are supposed to have censored. It doesn't sound very plausible. If it's from that article, it's just repeating one of M&M's fancies, but I don't know which.
The "dummies guide" isn't offensively named: aren't you aware of the long series of books introducing people at a basic level to topics?
Re fig 5: why don't you think that is raw data? (OK, it clearly isn't raw tree rings; it's raw tree-ring-or-other-proxy-turned-into-temperature.) Why do you think it's averaged?
Why do you use PCA (or EOF, if you prefer)? For data reduction. There are a large number of series. If you just average them all, you overweight areas with lots of data. So (as I understand it) PCA is used to combine series in (comparatively) data-rich areas.
I haven't heard about the neutral statisticians: but again, if that's from the article, it's unlikely to be trustworthy.
Science court: I think the MSU series would be a better first case.
Arun - your argument seems to boil down to My argument is that on physical grounds my butterfly in the middle of the Houston astrodome cannot produce a significant change at the history of the boundary of the astrodome.
This I think is just wrong. To repeat what you may have missed: I'm *not* saying that the butterfly causes some wave that grows and amplifies. If we go back to waves on ponds, the butterfly is a thrown stone that goes plonk, and ever-diminishing ripples spread out. The effect is different: because the dynamics are (so?) non-linear, the small perturbations nonetheless affect the way other waves interact with each other. It's not resolution dependent.
If we were thinking of a completely still atmosphere with a butterfly-scale perturbation, then you are correct. But we're not. As CIP has said, this is a dynamic equilibrium and the rules are different.
Again, in a model, the "butterfly effect" is *provably correct*: small perturbations really do grow.
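To make that concrete, here's a minimal toy sketch (plain Python; the Lorenz-63 system with its classic parameters, *not* HadCM3 or any real GCM, and the step size and run length are just illustrative). Two runs whose initial conditions differ by one part in 10^10 end up on completely different trajectories:

```python
def step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)            # control run
b = (1.0 + 1e-10, 1.0, 1.0)    # "butterfly": perturbed by one part in 10^10

max_diff = 0.0
for _ in range(10000):         # integrate both runs out to t = 50
    a = step(*a)
    b = step(*b)
    max_diff = max(max_diff, abs(a[0] - b[0]))

print(max_diff)  # the tiny perturbation has grown to the size of the attractor
```

The growth rate is set by the leading Lyapunov exponent (roughly 0.9 per time unit for these standard parameters), so the 10^-10 perturbation saturates at attractor size well before the end of the run.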
How well tested is this in the real world? Well, all this arises in the context of numerical weather prediction, in which models match reality fairly well.
As far as scales go, as far as I know it extends down all the way to the depth to which the fluid dynamics equations are useful.
If you want to go into this in more detail, James has given you the keyword, viz, Lyapunov exponents, and that should find you more. You could even send him a message and ask him to expound a bit.
CIP said, "The point is not that any particular butterfly wing flap causes any particular tornado" - this is probably true, though I've tried to explain the sense in which a butterfly can - but it seems the effect is very easy to misunderstand if you think of it like that. Probably a better way to think of it is "sensitive dependence on initial conditions", which is the same thing, but easier to understand.
Also: the Dutch junk is now being debated on sci.env. Lubos has made a transparent attempt to post it under an alias (or maybe his real name is Pedro, who knows?); we all hope he gets better at it.
Roger Coppock points out, there, one of the arguments I'd forgotten: viz, how can a protocol agreed in 1997 be supposed to be based on science that was published in 1998 and after? The article is junk from its beginning; whether it's junk all the way through to the end I can't say, since I didn't finish it!
William: wrt the Crok article, not long ago you were criticizing Lubos for wearing 'blinders', yet now you won't even finish an article you're attempting to criticize? Less than impressive.
"Lubos? Is that you?"
No, I'm not Lubos. But I just lost a rather long text explicating my point of view! Must've been the butterfly effect!
See if I can recreate it some other time if the discussion goes on.
Hi CSea, fairish point, I disagree, as follows:
The Dutch thing was put forward as journalism - indeed, prize-winning journalism. I read enough of it to be sure it wasn't: it's just the same old M&M stuff, repackaged, and only lightly repackaged at that.
I've raised two specific problems with the article: (1) it asserts that 1998 onwards research underpins a treaty written in 1997. This is obvious nonsense. (2) It asserts that simple publication of an article means that GRL supports that article. Also wrong.
Point (1) isn't a minor matter. It's the whole lead-in to the thing, indeed the title page. How can something that starts so badly ever come right? (as I said about the US invasion of Iraq).
Ahem: One might more correctly say Anglo invasion, although I suppose that might unfairly impugn all of the small English-speaking countries that had nothing to do with this international crime. Could Bush and his neo-con claque have pulled this off without the fig-leaf (to say nothing of the dodgy dossier) provided by Tony?
William: Point 1, I'll agree, but,
the article is an English translation from Dutch, which might have altered the wording a bit. You are correct, but the treaty was still open for signing until 99, and the MBH98,99 graph had been frequently used as a Kyoto visual 'sales pitch' (if you will) for ratification by many gov's and NGO's until it finally came into effect. Even today on http://unfccc.int/essential_background/feeling_the_heat/items/2917.php we see "The 1990s appear to have been the warmest decade of the last Millennium, and 1998 the warmest year." I apologize if I seem like I'm grasping at straws; suppose I'm just reluctant to concede, as IMO it's a rather small point, and possibly not relevant to the original text (unless you are reading the Dutch version) (I can't read Dutch so I can't say).
2. In the article Crok also noted the 'positive response' from the referees. About a month after Crok's article was published, the AGU issued a press release citing the article as one of the 10 GRL highlights. (see: http://www.agu.org/sci_soc/prrl/jh050309.html) I imagine it's possible Crok might have been privy to information about the selection ahead of time, but I don't know the policies, when selections are made, etc... so that's just a guess.
Anyway, I'm rambling but it'd be nice if you had at least read the entire article before criticizing.
(not that I don't put my blinders up as well, and end up not finishing articles because of that myself)
SB - "fig leaf" is probably correct; but I suspect they would have gone ahead without "us" anyway.
CSea - I guess we're all reading the English version. As to the dates: at the very best the article is being careless. However, the text you quote "The 1990s appear to have been the warmest decade of the last Millennium, and 1998 the warmest year" is true of *all* the reconstructions, not just MBH. This isn't a minor matter: the septics like to keep this personal, and they like to focus on MBH, so they never mention the other reconstructions. See here.
Referees - M&M have been rather cavalier in their treatment of supposedly confidential referees comments.
AGU - so they have: weird.
Reading the article: you forget, I didn't write a post about this thing, I was just asked for my opinion on it. Which opinion is: seen it all before, guv. As far as I read, it's not journalism, it's repackaged M&M.
I'll happily hunt down all I can on Lyapunov exponents and so on, 'fore I post again on this topic.
But to clarify my argument further, e.g. with molecules as billiard balls: the microscopic dynamics is chaotic, but that doesn't keep us from doing fluid dynamics in regimes that are not chaotic; the sensitive dependence on initial conditions of molecular trajectories is utterly irrelevant at a longer length scale. Similarly, the sensitive dependence on initial conditions of air and energy flows on the millimeter scale may get utterly washed out in kilometer scale averaging.
Anyway, you're the experts, I'm here to learn, so it's time to hit the books....
Arun - only read the Lyapunov stuff if you're interested. I think you have missed a key point: the sensitivity is only for some types of flow. For the flows in the atmosphere at least, we're thinking about flows in dynamic equilibrium.
If you take a nice simple fluid flow situation like air flowing smoothly in a cavity with one side wall heated and the other cooled, then there is a stable solution and perturbations will indeed die away.
But in that case, there is no "weather" in the system, or rather the weather is always equal to the climate and is fixed.
In more complex situations like the atmosphere things are always in flux.
"Reading the article: you forget, I didn't write a post about this thing, I was just asked for my opinion on it."
You're right, and this is OT anyway, I apologize and I'll try to keep it short.
Re: the "septics"? C'mon, I don't think a brush that big is really necessary. But if you are referring to MnM in particular, the focus certainly isn't only on MBH. McIntyre recently reported having an abstract (for a presentation, I imagine) accepted at the US CCSP Workshop next month entitled, "More on Hockey Sticks: the Case of Jones et al". On his blog he's detailed legion questions and/or complaints with respect to the full studies or datasets of many of the global reconstructions.
As far as I've seen, the *focus* of M&M has definitely been on MBH. But I think even they have realised that they are into diminishing returns. On the few occasions I've dipped into their blog, I find legions of trivia.
But we can agree to differ on their value: time will tell, perhaps. For now, that's enough M&M.
It's late night in my part of the world, and why not take the opportunity to throw some more oil on the fire (as we say) without being unintentionally rude though. The butterfly effect rubbed out my previous post, so some time I might try and recreate it. But for now a sociological comment: When I first came across the chaos stuff (book by Prigogine, I think) it was very belligerent: ordinary (Mr Jones-type) physicists were just so narrow-minded – they were deluding themselves with the belief that the world was linear, whereas, as was now obvious (he he he), it was highly non-linear. The old physics was just wrong – scrap the linear PDE’s, the only cases the old-fashioned physicist could solve and therefore stuck to. Well, in this way it went on and on.
Now this did not square with my own impressions. I had studied physics as an engineering student, and then done a PhD in theoretical physics. I’d learned to solve linear PDE’s, but also – and this is the point – had regular courses on non-linear systems in control theory. Ok, they were difficult to solve and there were no general methods, but they were nevertheless a well known part of engineering and physics. The Mr Jones’s knew about this and knew about the proper context, and had known about it all along (not a very exciting story though).
So I came away with the impression that the guys who latched on to the chaos stuff were the guys who couldn’t even solve the linear PDE’s, and now got an excuse for not caring about it. Since no-one could solve the non-linear stuff, everyone was in the same boat so to speak. But by preaching non-linearity, you could scorn the Mr Jones’s.
But as everyone who has studied Dylan lyrics in detail knows, they are a double-edged sword:
“You never turned around to see the frowns,
on the jugglers and the clowns,
when they all come down and did tricks for you.”
Anyway, I was just wondering: anyone with the same impressions as mine?
Mr Jones - there is something to what you say. Whatever the intrinsic nonlinearities of the world, in many cases linear theory works. And indeed people often get carried away with the "fun" side of chaos.
But, it remains true that many aspects of fluid dynamics and weather *are* strongly non-linear. And the butterfly effect / sensitive dependence on initial conditions is one of them. Producing linear approximations to the appropriate equations and solving those, or finding the nature of the solutions, is hard and interesting.
But, the interesting point is that (perhaps in somewhat the same way in which complex molecular dynamics integrates up to Boyle's law) some aspects of climate do appear to be linear, or at least smooth. After all there *is* a climate - it doesn't vary wildly. And (with internal noise too) it does respond apparently smoothly to imposed perturbations, within limits (see the is-climate-stable post).
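The flip side of the butterfly effect can be seen in the same toy system (again a hypothetical Lorenz-63 sketch in Python, not a GCM; the spin-up and averaging lengths are just illustrative): two runs with slightly different starting points have completely different "weather" at any instant, yet essentially the same "climate", i.e. long-run statistics:

```python
def step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def climate(state, nspin=20000, nsteps=200000):
    """Time-mean of z after a spin-up period: a crude 'climate' statistic."""
    for _ in range(nspin):            # discard the spin-up
        state = step(*state)
    total = 0.0
    for _ in range(nsteps):
        state = step(*state)
        total += state[2]
    return total / nsteps

m1 = climate((1.0, 1.0, 1.0))            # control run
m2 = climate((1.0 + 1e-10, 1.0, 1.0))    # perturbed: different weather...
print(m1, m2)                            # ...but nearly the same climate
```

The instantaneous states of the two runs are totally decorrelated by the end, but the two time-means agree to within sampling noise, which is the toy version of "same climate, different weather".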
Mr. Jones - Chaos is now well accepted, but that wasn't true when numerical weather prediction models were first developed. Even though Poincaré had elucidated the key principles in 1900, I don't think anybody realized that these non-linear effects would be crucial in numerical weather prediction until Lorenz demonstrated it in the 1960s or so.
Belette - I am quite disappointed that you did not read far enough in the article to understand the "censored data" comment.
I found the "For Dummies" article disappointing because, unlike (at least some of) the real "for Dummies" books, it didn't deliver on its promise to explain things in a way Aunt, Grandmother, or whomsoever could understand. As far as I could tell it didn't explain PCAs or multiproxy analysis at all.
I have worked with SVDs and PCAs and I couldn't follow their explanation. I have used these concepts to deal with well scattered data in spaces of high dimensionality. In such cases, the first PC is a vector in the direction that resolves the largest amount of the variance of the data. In the case of the "multi-proxy" analysis, I don't understand what the data space these vectors live in is.
When I say that the data in the "for Dummies" article has obviously been processed, I'm not complaining because I don't see tree rings, I'm complaining because I don't see "data points" but only some curves that I imagine to be averages of those points. I would expect the real data to be quite noisy, so replacing it with simple curves hides both the quantity and quality of the data.
I realize that you are neither MBH nor responsible for them, but if you casually dismiss criticism of their work, you ought to at least learn the points at issue.
CIP - "I am quite disappointed that you did not read far enough in the article to understand the "censored data" comment."
OK, I wouldn't do it for Lubos, but I'll do it for you. And I have. And I'm baffled. M&M seem to be complaining that MBH used the bristlecone pine data. This appears to be the opposite of "censored". If they had deliberately omitted this data, then that would be censorship.
I even read to the end of the article, which contains the appalling: "...does not allow us to draw any conclusions about its extent, relative to the past thousand years, which remains as much a mystery now as it was before Mann’s article in 1998." This is obviously untrue. MBH was the first study. Since then, there are many others, all of which reach broadly similar conclusions (in the sense that the IPCC 2001 text can be applied to all of them). The statement from the article above is just a bald-faced lie.
As to the Dummies article... for the data points, http://www.realclimate.org/FalseClaimsMcIntyreMcKitrick_html_7955bc86.png looks fairly noisy. Why don't you think that's the real (proxy) data? The actual data is available, though, if you want to check this for yourself; it's easy enough.
PCA (that I would call EOF): presumably you follow point 1 well enough? And point 2? And... well, why guess. Which bit is unclear? The data space is time series of temperature. I thought that was obvious. There are multiple realisations of these time series. The PCA is selecting the most variance amongst these.
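For anyone who wants to see the variance-selection idea in miniature, here's a hypothetical toy example (plain Python; the "proxies", noise levels and all the numbers are made up for illustration, nothing to do with the actual MBH data or algorithm): several noisy series sharing one underlying signal, with the leading principal component extracted by power iteration on their covariance matrix:

```python
import math
import random

random.seed(0)
n = 200

# One underlying "temperature" signal: a slow trend plus an oscillation
signal = [math.sin(2 * math.pi * t / 50) + 0.01 * t for t in range(n)]

# Five "proxies" = the common signal plus independent noise of varying size
proxies = [[s + random.gauss(0, 0.2 * (i + 1)) for s in signal]
           for i in range(5)]

# Centre each series and form the 5x5 covariance matrix
means = [sum(p) / n for p in proxies]
X = [[p[t] - m for t in range(n)] for p, m in zip(proxies, means)]
cov = [[sum(X[i][t] * X[j][t] for t in range(n)) / n for j in range(5)]
       for i in range(5)]

# Power iteration for the leading eigenvector: the direction of most variance
v = [1.0] * 5
for _ in range(100):
    w = [sum(cov[i][j] * v[j] for j in range(5)) for i in range(5)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

# PC1 = the proxies combined with the eigenvector weights
pc1 = [sum(v[i] * X[i][t] for i in range(5)) for t in range(n)]

# Correlation of PC1 with the buried signal (a PC's sign is arbitrary)
sm = sum(signal) / n
num = sum((signal[t] - sm) * pc1[t] for t in range(n))
den = math.sqrt(sum((s - sm) ** 2 for s in signal) * sum(p * p for p in pc1))
print(abs(num / den))  # strong correlation: PC1 tracks the shared signal
```

The eigenvector entries play the role of the PC loadings, and PC1 ends up correlating strongly with the common signal even though no single proxy is clean - which is the sense in which the first PC "selects the most variance" shared across the series.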
I'm checking out for the time being. As you all understood (I guess) my comments were perturbations to see how the system responded.