2005-12-29
Clever bees
I suppose this really belongs on the bee blog, but anyway... from Science: honeybees, who have 0.01% of the neurons that humans do, can recognize and remember individual human faces (thanks to JF for the tip).
2005-12-22
Bryan Lawrence has a blog
I discover that Bryan Lawrence (head of BADC) has a blog: http://home.badc.rl.ac.uk/lawrence/blog. A mixture of climate and software... should be perfect for JF.
The Economist on Climate Change (sigh)
The last few issues of the Economist have seen a few climate change type articles. One of leaping penguins even made the front cover (headline "Don't Despair: grounds for hope on global warming"); however, the grounds for optimism they find are thin: some grassroots action, and hints of voters changing their minds. The Economist (of course) isn't a very good source for the science of GW; it's written by economist-types (oddly enough), not scientists. And they have their biases: mostly a free-market liberalism which makes them rather dislike the idea of anything that won't fit within that framework and which might badly strain it. I quite like their general tone usually: I have pinned up in my office two of their front covers arguing for greater immigration, just next to a nearly interchangeable one from Socialist Worker, which I found ironic; one of the first I saw was in favour of same-sex marriage: the Economist, whilst very free-market, is by no means std.right-wing.
As a side note, the most recent edition notes that Lee Raymond is going, which could well be good news.
The Economist has a good reputation in general, and is widely read by business-politician type folks, so we have to care what they say and how they say it. In particular, first paragraphs matter, because many people won't read past them. Which is why the 7th of Dec (or, in the paper version, 10-17th Dec; thanks CH) article is so bad:
THE climate changes. It always has done and it always will. In the past 2m years the temperature has gone up and down like a yo-yo as ice ages have alternated with warmer interglacial periods. Reflecting this on a smaller scale, the 10,000 years or so since the glaciers last went into full-scale retreat have seen periods of relative cooling and warmth lasting from decades to centuries. Against such a noisy background, it is hard to detect the signal from any changes caused by humanity's increased economic activity, and consequent release of atmosphere-warming greenhouse gases such as carbon dioxide.
This is std.septic.sh*t*. Not because it's false, but because it's misleading. Try this:
People die. They always have and they always will... therefore we shouldn't worry about whether to fund the health service, or worry about cars on the roads or terrorists; it's just more or less death.
A more honest intro would reflect the std.consensus: that the recent climate change is likely to be unusual and likely to have been caused by people.
The rest of the article isn't too bad: somewhat skeptic (note we've got the k back now) but not too bad. E.g.:
The third finding is the resolution of an inconsistency that called into question whether the atmosphere was really warming. This was a disagreement between the temperature trend on the ground, which appeared to be rising, and that further up in the atmosphere, which did not. Now, both are known to be rising in parallel.
Parallel is wrong, to be picky: the tropospheric trend should be larger, and is.
In case you're wondering, #1 was that it's been warm recently, and #2 was the Arctic. #4 is detection of warming in the oceans; #5 is a bit dodgy in their words: The fifth is the observation in reality of a predicted link between increased sea-surface temperatures and the frequency of the most intense categories of hurricane, typhoon and tropical storm. If I were you, I'd read RC. #6 is the THC (again you want RC).
After a slightly dodgy solar bit, they continue with That the climate is warming now seems certain. And though the magnitude of any future warming remains unclear, human activity seems the most likely cause. The question is what, if anything, can or should be done. And that's a fair question. Too rapid or too great a warming, though, risks serious, unpleasant and in some cases irreversible changes, such as the melting of large parts of the Greenland and Antarctic ice caps. There is, to put it politely, a lively debate about how far the temperature can rise before things get really nasty and how much carbon dioxide would be needed to drive the process. Unfortunately, existing models of the climate are not accurate enough to resolve this dispute with the precision that policymakers would like. Again, pretty good, apart from that last bit (to me it implies that the poor old policymakers are just sitting there wondering when the GCMs will tell them what to do, which is nonsense: they all have agendas of their own).
Then lastly If greenhouse-gas emissions are to be capped, however, a mixture of political will and technological fixes will be needed. Seems fair to me, but we're heading out of my territory with that, so I'll just observe that political will seems distinctly missing, to me. I'm aiming for a post on Montreal soon.
2005-12-19
Connolley has done such amazing work...
Back to wikipedia... Nature has an article on wikipedia vs Britannica. It was an interesting exercise, and as the most notable climatologist on wiki :-) they interviewed me, which led to the sidebar article "Challenges of being a Wikipedian" (see the Nature article; click on the "challenges" link near the bottom). It contains the rather nice quote from Jimbo Wales "Connolley has done such amazing work and has had to deal with a fair amount of nonsense" (does Lumo still read this?).
What Nature did was to take a number (50; of which 42 came back usefully) of wiki and Britannica articles, and send them out to experts for review. There was a fairly severe constraint on this: that the articles had to be of comparable length in the two sources; which is why I think no climate change type articles were done. I strongly suspect that if you try to find anything about, say, the satellite temperature record in Britannica it will either be entirely missing or badly out of date. The list of articles is here.
There were 8 serious errors found in total, four in each source. Then we move on to more minor inaccuracies. The oddest thing about this is that the average number of errors per article in Britannica is 3 and in wiki 4; and Nature (genuinely) expected us to be *pleased* about this, as though being nearly as good as Britannica was something to be happy about! I rather suspect that this may be due to the choice of articles to some extent. The GW articles don't contain many errors (except the septic cr*p, sadly we can't get rid of it all :-().
The most pleasing part, though, is the accompanying editorial which actively encourages scientists to contribute (James, are you listening?): Nature would like to encourage its readers to help. The idea is not to seek a replacement for established sources such as the Encyclopaedia Britannica [oh yes it is... WMC], but to push forward the grand experiment that is Wikipedia, and to see how much it can improve. Select a topic close to your work and look it up on Wikipedia. If the entry contains errors or important omissions, dive in and help fix them. It need not take too long. And imagine the pay-off: you could be one of the people who helped turn an apparently stupid idea into a free, high-quality global resource.
[Update: there is a Nature blog here and this includes a list of the errors in the EB and wiki versions; see-also [[Wikipedia:External_peer_review/Nature_December_2005/Errors]]. The Nature blog's report on Jimbo's visit is interesting too. And [[Wikipedia:Requests_for_arbitration/Climate change dispute 2#Removal of the revert parole imposed on William_M._Connolley]] is nice to have...]
News from NZ...
I'm back. And to celebrate, here is a story from a local paper over there. My apologies to all the good folk of NZ: this is not a fair reflection of your country, but it is very funny; I'm thinking of sending it in to Private Eye.
Also, a joke: what do you call a woman who stands between goalposts? A: Annette. And by odd coincidence, Mt Annette was the peak I climbed from the Mueller hut. More on that in the photo-essay to follow "soon".
Also, a note: I'm switching comments to "only registered users" just as soon as I can work out how to do it. I'm a bit fed up with anonymous trolls, named trolls are so much better...
[Update: BL points out the obvious: that Cairns is in Australia (the "West Island") not NZ. Oops. I knew that... He also says that he would have posted that here, except I insist on only registered users. So for the moment, I'm turning that off again, since he is the second person to somewhat dislike that feature, and on reflection I don't need it]
2005-12-08
Dunedin: sea ice conf
I haven't blogged much about this conf. Mostly because there is no wireless access (dinosaurs...) but also because much of it is deeply technical sea ice stuff of rather limited general interest. Which is in itself a point of interest: the amount of climate change related stuff is small. A few people have shown the std.pic of Arctic September ice, which shows decline (no-one has shown a similar one for the Ant) but only a few people have done anything with it (trying to look at the changes in different ice types: first-year, multi-year; but then this is tricky). Also there are only a handful of papers using climate models.
So what is it actually about? A major theme is sea ice depth (ice fraction is fairly easy (err, though see my brilliant presentation demonstrating that there are problems even there); depth is much harder). Lots of people are using satellites (radiometer on ERS-2; laser on ICESat), or helicopter, ship or surface-borne methods to estimate ice depth. The problem is that while it's fairly easy to drill a hole in the ice and measure depth at a point, getting an area value is much harder. EM (electro-magnetic) sensors can detect the water level (on a ship they need to be only 3 m above the ice; hung from a helicopter they can be 10 m above the ice, with the heli another 20 m higher up, which apparently makes for exciting flying). So those get you transects. From satellite, you can measure ice freeboard (radar) or top of snow (laser), if you can find enough leads to reference the values to a sea level; this is a major problem. Also measurements from underneath: the late lamented Autosub; some stuff from military submarines (their CTDs didn't work so they used the entire sub as a billion-pound CTD) which at true cost would be incredibly expensive, but since they are there anyway (not really clear what they *are* doing) they can do some science, even if their sounding kit is a bit dodgy for science.
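As a back-of-envelope illustration of why the sea-level reference matters so much, here is a sketch of the hydrostatic freeboard-to-thickness conversion that underlies the satellite approach. The density values are my own round-number assumptions (and snow load is ignored), not figures from any of the talks:

```python
# Hydrostatic freeboard-to-thickness conversion: a floating slab's weight
# equals the weight of the water it displaces, so
#   rho_ice * h = rho_water * (h - freeboard)
# which rearranges to h = freeboard * rho_water / (rho_water - rho_ice).
# Densities below are illustrative assumptions.
RHO_WATER = 1027.0  # kg/m^3, typical sea water
RHO_ICE = 917.0     # kg/m^3, typical sea ice (snow ignored)

def thickness_from_freeboard(freeboard_m: float) -> float:
    """Ice thickness implied by a measured freeboard, assuming
    hydrostatic equilibrium and no snow on top."""
    return freeboard_m * RHO_WATER / (RHO_WATER - RHO_ICE)

print(round(thickness_from_freeboard(0.3), 2))  # ~2.8 m of ice for 0.3 m of freeboard
```

Note the amplification factor of roughly nine: a 10 cm error in the sea-level reference becomes nearly a metre of thickness error, which is why finding enough leads is such a problem.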
Apart from that, various things: properties of ice; today a pile of talks about the ice formation mechanism, which isn't really my thing: platelets and congelation and frazil and so on. Special mention for the chap running a molecular simulation of ice formation: with 1,500 molecules, his simulation of the 9 ns it takes to freeze took 4 days of processing; his value for the freezing temp is 271 (+/- 9) K, which he regards as extremely accurate. How ice freezes from underneath; new ice dynamics models; measurements from campaigns; etc etc.
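To give a feel for the scale of that simulation, the wall-clock-to-simulated-time ratio implied by the figures above works out like this (a trivial arithmetic check, using only the numbers quoted):

```python
# 4 days of wall-clock processing to simulate 9 ns of physical time.
WALLCLOCK_S = 4 * 24 * 3600   # 4 days in seconds
SIMULATED_S = 9e-9            # 9 nanoseconds in seconds

slowdown = WALLCLOCK_S / SIMULATED_S
print(f"{slowdown:.2e}")  # ~3.84e+13: each simulated second costs ~10^13 real seconds
```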
Last night we had the conf dinner in Larnach "Castle", a magnificent but truly fake building, more of a manor house or small chateau. And we had the piping in of the haggis and the address to same. And the drinking till midnight.
Note to PH: yes the foxgloves are non-native. But they are lovely.
Road deaths and terrorism
Continuing an old theme, but this time with some actual numbers, via Nature:
114 deaths per million people occurred in road crashes in 29 countries in the developed world during 2001.
0.293 deaths per million people were caused by terrorism each year in the same countries in 1994–2003.
390:1 is the ratio of road deaths to deaths from terrorism.
Source: N. Wilson and G. Thomson Injury Prevention 11, 332–333 (2005).
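The quoted ratio is easy to verify from the two per-million figures above:

```python
# Quick arithmetic check on the Nature numbers quoted above.
road_deaths_per_million = 114      # road crashes, 29 developed countries, 2001
terror_deaths_per_million = 0.293  # terrorism, per year, same countries, 1994-2003

ratio = road_deaths_per_million / terror_deaths_per_million
print(round(ratio))  # ~389, i.e. roughly the 390:1 quoted
```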
And if I didn't mention it before, I'm currently in Dunedin because of this.
2005-12-05
NZ: eternal sunset
I'm in NZ, Dunedin. And it turns out they do have internet here. To prove it, here are some pics of the flight over.
Heathrow to LA was good: we had a long slow sunset as we headed north, sunrise as we went due W, then sunset again into LA. It's a shame they don't have a better quality "photography" window in the back somewhere. We got to see Greenland (briefly; the W side; the E side was in cloud) and sea ice over Hudson Bay (see pix: this is the first sea ice I've ever seen in real life; it's from 11,000 m) and the vast expanses of frozen Canada. LA to Dunedin is 12+ hours; thankfully I managed to sleep through much of it.
Trivia: on the flight into LA, we had all-plastic cutlery. Out of LA, we got metal forks and spoons.
2005-12-01
Catherine Bennett can FOAD
As if the gulf stream stuff wasn't enough to wind me up, the Grauniad published Catherine Bennett Climate march, but will it work?: Going on the climate change protest this Saturday is like marching for niceness - and just as ineffectual. Though the first headline is only in the online edition. The article itself is just blather; she has nothing to say; I interpret it to mean that she has grown too old and fat to bother, and has nothing better to do than mock people who do care.
Anyway, on that temperate note, I'll sign off for the moment, and probably for the next two weeks, unless NZ is connected to the internet.
ps: thanks to those who commented on the poster, I corrected most of the typos.
"Alarm over dramatic weakening of Gulf Stream"?
By now you'll all have read the RC post: Decrease in Atlantic circulation? (see, that's where I got my question mark from); which is about Bryden et al. in Nature. RC, of course, has a nice science analysis; I just wanted to compare it to the Grauniad story: Alarm over dramatic weakening of Gulf Stream. Now the headline is nonsense, because the Nature paper itself sez: the northwards transport in the Gulf Stream across 25°N has remained nearly constant. Something more complicated is going on (the return flow is shallower, hence warmer, hence overall heat transport N is less because more is coming back S), which I'm not going to explain because (a) it's over at RC and (b) I haven't read the paper properly yet (I look forward to doing so tomorrow during my long flight). It looks to me like it was too complicated for the Grauniad sci writers.
There are I think various caveats to interpreting this: most notably (as RC notes), that if this really has already happened, you might expect some signal in the SSTs which doesn't seem to have been seen.
Also, the Nature article and the Nature commentary are a bit selective in their reading of GCM results to support this.
[Update: also see James Annan's take and wise words re Nature]
2005-11-29
Reading the entrails: New Nukes?
The BBC says:
Blair says nuclear choice needed
Tony Blair says "controversial and difficult" decisions will have to be taken over the need for nuclear power to tackle the UK energy crisis.
The prime minister told the Liaison Committee, made up of the 31 MPs who chair Commons committees, any decision will be taken in the national interest.
He is said to believe nuclear power can improve the security of the UK's energy supply and also help on climate change.
A government review of energy options is expected to be announced next week.
I like the any decision will be taken in the national interest. This fails the try-negating it test: any decision will be taken against the national interest is unsayable. So ItNI means "prepare for an unpopular decision".
Although there is always a techno-industrial lobby in favour of Nukes, I'd guess that and also help on climate change may be quite accurate. Blur has been talking about Kyoto options and, as I noted, I think the govt has realised we're (they're?) not going to hit our targets. So he needs to pull something out of the hat. These nukes won't do it: they won't be on-stream by 2012 unless they arrive rather fast; but they could probably be folded into the plans if pushed.
So... is this a runner? Lots of people don't like nukes: Greenpeace protesters have disrupted a speech used by Tony Blair to launch an energy review which could lead to new nuclear power stations in the UK. Two protesters climbed up into the roof of the hall where Mr Blair was due to address the Confederation of British Industry conference. After a 48-minute delay, Mr Blair made his speech in a smaller side-hall. Forcing Blur off into a side-hall is a success, and will have annoyed him a lot.
But the "debate" about their (de)merits is as poor as ever: at least judging from radio 4 this morning. We had someone who doesn't like nukes, and then Bernard Ingham who does (I think, like Bellamy, out of an unstated assumption that it's Nukes or Windfarms and he doesn't like windfarms). The green chap said Nukes are uneconomic; BI said they are economic. I rather suspect that they aren't, under current conditions: our present Nukes barely manage to stay afloat even with all their building costs written off; and I don't see piles of commercial applications waiting to be built. Of course some of this is due to the endless wrangling, which costs; and how to cost the long-term storage is obviously a bit of a poser since no-one yet knows how it will be done.
One of the arguments that the Green side is starting to push is that Nukes aren't that good for CO2: that over their lifecycle, they emit lots, comparable with coal/gas. I rather doubt that makes sense. I've never seen the figures. If it *is* true then it would account for the economics being so bad. If anyone has them, do please leave a comment.
You'll have noticed that I haven't explicitly given my opinion on this, though which side I lean towards should be clear enough. I excuse this by it being far from my expertise: I'm not sure why you should want my opinion. I offer this observation, though: through the years on sci.env I have observed that the people in favour of Nukes invariably know more about them, and those against know little. Blur is likely to be an exception to this, though.
Sea ice: what I do in my spare time
Fairly soon now I'm off to NZ (oh dear, my CO2 burden...) to present some sea ice work. The poster part of it is nz-hadcm3.pdf. I have a day or two left, so feel free to point out typos and gross scientific errors.
The theme of the work is upgrading the sea ice dynamics in HadCM3, which has occurred just in time for it to be replaced by HadGEM. Never mind, we learnt a lot in the process. Mostly we learnt how hard it is to force the sea ice to behave itself in a coupled model.
The poster (in theory) says it all, so I won't explain at length here: but feel free to ask questions...
The theme of the work is upgrading the sea ice dynamics in HadCM3, which has occurred just in time for it to be replaced by HadGEM. Never mind, we learnt a lot in the process. Mostly we learnt how hard it is to force the sea ice to behave itself in a coupled model.
The poster (in theory) says it all, so I won't explain at length here: but feel free to ask questions...
2005-11-28
Topping Punts
There is an air of "tipping points" about. This is an idea (possibly coined by Schellnhuber) where "the balance of particular systems has reached the critical point at which potentially irreversible change is imminent, or actually occurring". That quote, somewhat bizarrely, comes from the Books and Arts section of Nature (here), which is a slightly dodgy regular section where they make a feeble stab at pretending the "two cultures" ever talk to each other.
And so the piccy is S's attempt to find an "icon" for climate change. But (ibid) "the issues surrounding climate change are extraordinarily complex. Can an image be found that is both simple and good science? Given the contentious nature of the debates, particularly in the United States, it is unwise to offer hostages to fortune by parading vulnerable predictions". I don't think the image is simple: is it good science?
But first of all, what about the "tipping points" concept anyway? I've previously pushed the idea that the climate is stable (in the absence of perturbation). You could argue, quite plausibly, that we shall soon have emitted enough CO2 to raise the T enough that we will be committed to melting Greenland. Perhaps that counts as a tipping point. But it's slow. It's on the map as "instability of the Greenland ice sheet", which is an odd way of phrasing it but has the "virtue" of implying speed.
But enough quibbling. The one I reacted badly to was "Antarctic ozone hole". It's an environmental icon, but hardly a tipping point: as far as it's known it's reversible, and on a long slow trend to being reversed (err, as long as GW doesn't cool the stratosphere too much...).
As to all the rest... I dunno, its a bit vague isn't it? I'm not sure I'm too keen on this search for an icon stuff.
A couple of BTW's to finish off: (1) I'm down to wiggly worm, so it looks like status is based on snapshot rather than accumulated - must get posting again. (2) I'm off conferencing for a while at the end of the week, so will be dropping further down. (3) I may get assimilated by the Borg in the near future anyway... Mark seems to have self-assimilated.
2005-11-27
Campaign against Climate Change - march Dec 3rd
A foray into explicit politics: promotion for the Campaign against Climate Change and the London march on Dec 3rd.
2005-11-26
The Parker Paper
The Parker UHI paper (see [[Urban Heat Island]]) from Nature 2004 (and the Peterson 2003) strengthens the TAR contention that the UHI isn't important; and perhaps negligible. Now RP Sr has taken a shot at it. Unfortunately his paper is... difficult. You can take his word for what it says if you like, but I'd rather not. Happily, RP is so confident of his position that he has followed up with a whinge about Nature rejecting him, which includes the reviewers' responses, of which "Pielke has failed to adequately assess whether there are any trends in windiness in the Parker data set. Parker stratified by wind conditions, both at rural and urban sites, so any trends in windiness (even if this were possible in a stratified data set) would occur both at rural and urban sites. To suggest that there would be different turbulent mixing at rural and urban sites would then require differences in trends in temperature to be found, which is exactly what Parker found not to be the case. The logic presented in Pielke's comment is circular and incorrect" is the briefest.
One day I may actually read it, or meet someone who has. Until then I don't have a good way to assess it.
2005-11-25
Its cold and Scott Adams gets whacked by Dogbert
Today we had the first (and who knows, maybe the only) snow of winter. Just a flurry; nothing settled, sadly.
Meanwhile, although I really like Dilbert, it looks like Scott Adams needs a whack from Dogbert to chase out the demons of stupidity aka ID/Creationism: via some rather circuitous routes I found Stein and Wolfgang.
And while I'm here, there is a nice blog starting by Robert Friedman about his trip to the South Pole. Take the virtual tour!
Grauniad: Sea level rise doubles in 150 years
Yes, back to the familiar old topic: bashing science coverage in the papers. This time that old lefty favourite, the Grauniad, which has an article on "Sea level rise doubles in 150 years". Who have discovered that "Global warming is doubling the rate of sea level rise around the world... The oceans will rise nearly half a metre by the end of the century... Scientists believe the acceleration is caused mainly by... fossil fuel burning... during the past 5,000 years, sea levels rose at a rate of around 1mm each year, caused largely by the residual melting of icesheets from the previous ice age. But in the past 150 years, data from tide gauges and satellites show sea levels are rising at 2mm a year".
To which the obvious reply is "is this supposed to be news"? Slightly garbled of course (satellites say 3mm/y; the longer time tide gauge record is ~2 mm/y; see the wiki [[Sea Level Rise]] page and the refs to the IPCC therein). The other interesting bit of garbling is the 1mm/y over the last 5kyr... the TAR says "Based on geological data, global average sea level may have risen at an average rate of about 0.5 mm/yr over the last 6,000 years and at an average rate of 0.1 to 0.2 mm/yr over the last 3,000 years". So, *if* they haven't garbled it, the story is that the folk from Rutgers University have upped the estimates of SLR over the last 5kyr. But I'd bet on garbling myself. The abstract from Science is here but I can't read the full contents (I had an offer of a sub for $99/y and am considering taking it up...) but it seems to be more interested in the Myr timescale.
The Grauniad also covers the latest EPICA stuff, but thats much better covered over at RC so you should go there for that.
[Update: thanks to the kindness of not one but two readers, I now have a copy of the article from Science. In true blog style, I've quickly skimmed it far enough to discover that the 1 mm/y over the last 5kyr is a bit of a sideshow, and fortunately for you, it's in the supplementary online material freely available. They say "Sea-level rise slowed at about 7 to 6 ka (fig. S1). Some regions experienced a mid-Holocene sea-level high at 5 ka, but we show that global sea level has risen at ~1 mm/year over the past 5 to 6 ky." So I must apologise to the Grauniad: no garbling.
So how do we reconcile that to the pic I show (which is from wiki, not the Miller paper)? Both show a rise of about 15m over the last 8 kyr. The wiki pic has that very steep (15 to 4-) from 8 to 7 kyr; then much shallower. The Miller article fig S1 starts a bit deeper and has a much more uniform slope. Since the Miller data is almost entirely from one area and appears to contradict what I think I already know, I'll stick with wiki and the TAR for now. But informed comment is welcome. I do find it a teensy bit surprising that the Miller paper doesn't comment on the discrepancy between their Holocene results and "accepted wisdom": its possible I have the AW wrong.
Update 2 (minor): switch href on the figure to the wiki page]
2005-11-23
Stability in a control run of HadCM3
One of the things I do is port [[HadCM3]] to new platforms (although I shouldn't over emphasise my role in that: much of the hard work of portabilising it was done at the Hadley Centre; nonetheless new platforms throw up new problems). HadCM3 was written for a Cray T3E; its known to be stable when run without forcing for thousands of years on that platform. There is a portable version of the model, which requires a little bit of effort to make it run on new platforms. The first thing to do is make it compile; the second to make it run through the first timestep; the third through the first meaning period; and then hopefully all that remains is to check that it is stable.
Which brings in this picture. Black is a 200 year control run, with the g95 compiler on a 4-processor Opteron system (using 3 procs for most of the time). Blue is a rather older run on an Athlon system under the antique fujitsu/lahey compiler. Red is an in between run on Opteron with the Portland Group compiler (pgi). All are seasonal data, differenced from 100 year means of an "official" control run. What you'll notice is that the red run has a distinct climate drift, which is enough to make it unusable. Blue looks OK; black has been run out long enough to be sure its OK. The grey shaded bit is some kind of 95% confidence limit based on the variability of the 100 year "official" run.
Quite why the Opteron/pgi run drifts I don't know. It's 99.999% the same code as the other runs (differing only in whatever it took to make the compiler accept it). Most likely there is some compiler bug in there; but I will probably never know.
By eye, the 200 year run has no drift. By line fitting, the results are:
0- 50: [ 0.00168505, 0.00451390]
0-100: [ 0.00052441, 0.00159977]
0-150: [-0.00050959, 0.00009494]
0-200: [-0.00060853,-0.00015253]
where I've shown the (95%) confidence intervals for a line fit over the first 50, 100, 150 and 200 years. Which shows up the internal variability quite nicely. If I'd just taken the first 50 years I might have believed in a drift of 0.3 oC/century, which is small but not perhaps totally negligible. By 100 years the "drift" has a central value of 0.1 oC/Century which would be negligible. Out to 150 years there is no statistical trend. Out to 200, a trivial cooling. Note, BTW, that all these sig estimates are rather thrown-together and should be a bit wider to take proper account of autocorrelation.
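For the curious, the sort of line fitting involved is easy to sketch. This is a hypothetical toy example, not the actual analysis code: `trend_ci` and the synthetic series are mine, and the effective-sample-size adjustment is just one crude way of widening the intervals for autocorrelation.

```python
import numpy as np

def trend_ci(y, dt=0.25, z=1.96):
    """OLS trend (units per year) of samples y taken every dt years,
    with a ~95% confidence interval. Lag-1 autocorrelation of the
    residuals is folded in via an effective sample size -- a crude
    widening only, not a full autocorrelation treatment."""
    n = len(y)
    tc = np.arange(n) * dt
    tc = tc - tc.mean()
    slope = np.dot(tc, y - y.mean()) / np.dot(tc, tc)
    resid = y - y.mean() - slope * tc
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    neff = n * (1.0 - r1) / (1.0 + r1) if r1 > 0 else n
    se = np.sqrt(np.dot(resid, resid) / (neff - 2.0) / np.dot(tc, tc))
    return slope - z * se, slope + z * se

# Fit over lengthening windows of seasonal (4/year) data, as in the table:
# a synthetic 200-year series with a known 0.001/yr trend plus white noise.
rng = np.random.default_rng(0)
series = 0.001 * np.arange(800) * 0.25 + 0.05 * rng.standard_normal(800)
for years in (50, 100, 150, 200):
    lo, hi = trend_ci(series[: years * 4])
    print(f"0-{years:3d}: [{lo:+.5f}, {hi:+.5f}]")
```

As with the model runs, the short-window intervals wander around before the long-window fit pins the trend down.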
2005-11-22
The mirror world
RP has what I regard as a posting full of mistakes: Reflections on the Challenge (my post The Big Picture refers). And he doesn't get any better in the comments.
One of mine got through. This one, below, got stopped for "questionable content" - judge for yourself - so since I have my own blog I'll post it here.
The words in []'s are ones I experimented with deleting in the hope of getting past the content filters. No such luck.
Roger - you're still getting it wrong; Tom Rees is essentially right.
You say "So your position now is that the hockey stick was in 2001 a key study in making the case for attribution. That is, that without the hockey stick the case for attribution in 2001 would have been somewhat weaker? I disagree."
No. I didn't say *key*. But I *do* say that without MBH the attribution case in the TAR would have been *somewhat weaker* (but not *very much weaker*). [Good grief], you can just read the thing (surely you're familiar with it): http://www.grida.no/climate/ipcc_tar/wg1/007.htm. Which makes it clear that MBH is part of, but by no means the whole of, the attribution case.
Yes, MBH wasn't in the SAR, but then as the TAR sez "Since the SAR, progress has been made" and MBH was part of that progress.
If you want to position yourself as some kind of referee in this [bizarre] process, you need to be much clearer about the structure of things.
2005-11-17
2005-11-14
Testing the Fidelity of Methods Used in Proxy-Based Reconstructions of Past Climate
There's an interesting new paper just out in J Climate, Testing the Fidelity of Methods Used in Proxy-Based Reconstructions of Past Climate by Michael E. Mann, Scott Rutherford, Eugene Wahl & Caspar Ammann (hat tip to John Fleck).
[Update: the actual article is now available: thanks John & Mike]
Two widely used statistical approaches to reconstructing past climate histories from climate 'proxy' data such as tree-rings, corals, and ice cores, are investigated using synthetic 'pseudoproxy' data derived from a simulation of forced climate changes over the past 1200 years. Our experiments suggest that both statistical approaches should yield reliable reconstructions of the true climate history within estimated uncertainties, given estimates of the signal and noise attributes of actual proxy data networks.
This is similar to (but I think there is more than... I really should finish reading it before I post...) von S's Science thing of last year, of which it sayeth:
One study by Von Storch et al. (2004--henceforth 'VS04'), however, concludes that a substantial bias may arise in proxy-based estimates of long-term temperature changes using CFR methods. VS04 based this conclusion on experiments using a simulation of the GKSS coupled model (similar experiments described by VS04 using an alternative simulation of the HadCM3 coupled model showed little such bias). The GKSS simulation was forced with unusually large changes in natural radiative forcing in past centuries [the peak-to-peak solar forcing changes on centennial timescales (~1 W/m2) were about twice that used in other studies (e.g. Crowley, 2000) and much larger than the most recent estimates (~0.15 W/m2--see Lean et al., 2002; Foukal et al., 2004)]. A substantial component of the low-frequency variability in the GKSS simulation, furthermore, appears to have been a 'spin-up' artifact: the simulation was initialized from a very warm 20th century state at AD 1000, prior to the application of preanthropogenic radiative forcing, leading to a long-term drift in mean temperature (Goosse et al., 2005).... These arguably unrealistic features in the GKSS simulation make the simulation potentially inappropriate for use in testing climate reconstruction methods.
We shall see.
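The pseudoproxy idea itself is easy to caricature. Below is my own toy illustration of the general approach, not the method of any of the papers above: take a "true" history, degrade it into noisy proxies, calibrate against the recent part, and see how much of the rest you recover. All the numbers (proxy count, signal-to-noise ratio, calibration window) are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_proxies = 1200, 20

# A "true" hemispheric temperature history: red noise plus a 20C-style ramp.
true = np.cumsum(0.02 * rng.standard_normal(n_years))
true[-100:] += np.linspace(0.0, 0.8, 100)

# Pseudoproxies: the true signal plus white noise at a chosen SNR.
snr = 0.5
noise = (true.std() / snr) * rng.standard_normal((n_years, n_proxies))
proxies = true[:, None] + noise

# A simple composite-plus-scale reconstruction, calibrated against the
# last 150 "instrumental" years only.
comp = proxies.mean(axis=1)
cal = slice(n_years - 150, n_years)
C = np.cov(comp[cal], true[cal])
beta = C[0, 1] / C[0, 0]
recon = true[cal].mean() + beta * (comp - comp[cal].mean())

# Skill outside the calibration window:
err = np.sqrt(np.mean((recon[:-150] - true[:-150]) ** 2))
print(f"out-of-calibration RMS error: {err:.3f}")
```

The interesting experiments then vary the forcing, the noise model, and the method, which is exactly where the papers disagree.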
Momentum
No, not another in the butterfly series, you'll be pleased to hear. Eli wants to know about momentum in GCMs. Specifically, "how momentum is transferred from the Earth to the atmosphere as it rotates". Well, as far as GCMs are concerned the rotation of the earth is a lower boundary condition and it's fixed (in the real world, variations in the atmosphere's angular momentum, from exchanges with the earth, do cause tiny but detectable changes in the solid earth's rotation rate. But these changes are so tiny that for GCM purposes they should be neglected).
However in a GCM the atmosphere does exchange momentum fluxes with the earth (which affect the atmos if not the earth) and with the ocean (which *do* affect the ocean and are the main cause of the various oceanic currents). In the boundary layer above the earth (or ocean) the exchange is represented by Monin-Obukhov similarity theory, which I won't go into (BL met is a thing in itself), but the momentum exchange is proportional to the near-surface windspeed; the roughness length of the underlying surface (a combination of the real roughness of the surface as you would measure it, enhanced to represent the form drag from orography below resolved scales if your model supports that); and a parameter, call it C, related to the stability of the atmosphere: very stable conditions (i.e. strong inversions) have little coupling of sfc to atmos and hence small C (theoretically, zero for very strong inversions), while an unstable (convecting) atmos has lots of coupling and a large C. Above this there is some friction between the various atmospheric layers leading to momentum exchanges. As well as this there are some other terms: the form drag of mountain ranges leads to more mom flux (up to half the total I think?). And Gravity Wave Drag, which represents the effects of momentum transfer from surface orography to breaking gravity waves high up (300 hPa?).
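To make the bulk form of that exchange concrete, here is a minimal sketch of the neutral-stability case only (so the stability parameter C above is left out entirely). The function names are mine and the roughness lengths are just typical textbook values, not any particular model's.

```python
import math

def neutral_drag_coefficient(z0, z=10.0, kappa=0.4):
    """Neutral drag coefficient from the log wind profile:
    C_D = (kappa / ln(z / z0))^2, for measurement height z (m)
    and surface roughness length z0 (m)."""
    return (kappa / math.log(z / z0)) ** 2

def surface_stress(wind, z0, rho=1.2):
    """Bulk momentum flux tau = rho * C_D * U^2, in N m^-2,
    for a 10 m wind speed (m/s) and air density rho (kg m^-3)."""
    return rho * neutral_drag_coefficient(z0) * wind ** 2

# Ocean (z0 ~ 1e-4 m) vs forest (z0 ~ 1 m), same 8 m/s wind:
for name, z0 in [("ocean ", 1e-4), ("forest", 1.0)]:
    print(name, neutral_drag_coefficient(z0), surface_stress(8.0, z0))
```

The forest drag coefficient comes out roughly 25 times the ocean one, which is the same roughness contrast that shows up as low near-surface wind speeds over the continents.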
But quite apart from that, there is another interesting thing. My picture shows the near-surface (10m) winds from HadCM3 (it would look almost the same in the re-analyses, if you're silly enough not to trust GCMs...). BTW, I apologise for the lack of anything drawn on top of the positive colours: I've no idea why the IDL Z-buffer insists on this: any IDL gurus out there? It's an annual mean - it would look somewhat different in different seasons. No matter. The contours are the zonal (EW) component and the horizontal wind arrows are drawn on top. The most obvious feature (apart from the low speeds over the continents: they are much rougher than the oceans; and perhaps the strong southern ocean westerlies) is the tropical easterlies: this is an inescapable dynamical consequence of the earth's rotation and the heating at the equator: air rises there, hence there must be equatorwards flow near the surface, hence (Coriolis) these winds are deflected towards the west; hence the band of easterlies from 30N to 30S. Now (supposing you believe in conservation of angular momentum) this necessarily implies average *westerlies* over the rest of the globe, since we know that on average the atmosphere is neither slowing down nor speeding up. This then touches on does-the-Ferrel-cell-exist kind of stuff: because although there are good dynamical reasons (so people tell me...) for the mid-latitude westerlies, the actual reasons behind them are quite complex.
More UK CO2 emissions
"Speed limit crackdown to cut emissions" says today's Grauniad. Who are they fooling? UK car drivers have grown to expect to be able to violate speeding laws on the motorway with impunity: it will take more guts than this government has to try to enforce them.
It was drawn up by Elliot Morley, minister for climate change (did you know we have a minister for climate change?) at the Department for Environment, Food and Rural Affairs, and is being discussed (read: watered down) by the cabinet committee on energy and the environment, which is expected to publish a revised (read: watered down) version early next year.
Marked restricted, the review document says: "The government needs to strengthen its domestic credibility on climate change" (ah, they've noticed that have they? Good)...
The review lists 58 possible measures to save an extra 11m-14m tons of carbon pollution each year, which it calls the government's "carbon gap". One of the options, a new obligation to mix renewable biofuels into petrol for vehicles, was announced last week (that one seemed distinctly dodgy). Stricter enforcement of the 70 mph limit, the document says, would save 890,000 tons of carbon a year - more than the biofuels obligation and many other listed measures put together.
Andrew Howard of the AA Motoring Trust said: "They would have to win a lot of hearts and minds to convince the public that this wasn't just a revenue generating exercise. It also raises some big questions about whether speed enforcement for environmental rather than road safety reasons should be an offence for which motorists get points on their licence."
See? the usual suspects are piling in favour of the poor downtrodden motorists inalienable right to break the law.
But there is more, because Government sets out challenge for greener Britain contains various policy options and how much they would save. Of the "frontrunners" one is an order of magnitude bigger than the rest: Extend UK participation in EU carbon trading scheme (4.2). Now I may be doing them a disservice, but what I think (in fact I'm practically sure) they mean by this is, don't actually produce less CO2, but buy permits to emit it. Of the "emerging" category, the two biggest are Introduce ways to store carbon pollution underground (0.5-2.5) (i.e., don't produce any less, just...) and Force energy suppliers to use more offshore wind turbines (Up to 1). Which would actually save CO2. In the "difficult" category the biggest is Change road speed limits (1.7) - a surprisingly large number.
All in all, I think they would *like* to reduce our CO2 emissions but don't have the determination required to even seriously try to do it. Too many sound bites, too little action.
Your comment was denied for questionable content.
Over at Jennifer Marohasy on politics and the environment there was some kind of debate over the stupid HoL economics-of-IPCC report. Belatedly, I thought I'd join in. So I posted the comment below, but got back Your comment was denied for questionable content... shades of an earlier post!
So I shall post it here, and you can judge. This version has a few words like "tedious" and "nitpicking" removed, but still it fails. Can anyone guess what the problem is?
Am I too late to join this exciting debate?
Early on, someone said: The hockeystick, and the hockeystick alone, was the reason for the claims that this was the warmest century in the last long time.
But if you actually read the IPCC TAR (does anyone?) it says "Globally, it is very likely that the 1990s was the warmest decade and 1998 the warmest year in the instrumental record, since 1861" and "the increase in temperature in the 20th century is likely to have been the largest of any century during the past 1,000 years. It is also likely that, in the Northern Hemisphere, the 1990s was the warmest decade and 1998 the warmest year". What it *doesn't* say is that the 20C was the warmest.
The amusing thing, of course, is that everything the TAR said about the hockey stick remains valid for all the reconstructions subsequently published (see http://mustelid.blogspot.com/2005/10/increase-in-temperature-in-20th.html).
Continuing, someone challenged Mann to say why this hockey stick debate really really matters. Well the answer is: it doesn't really. See http://mustelid.blogspot.com/2005/11/big-picture.html
Oh, and as for all the SRES stuff... its tedious. If these poor dear marginalised economists want to produce their own CO2 projections... why don't they just do so?
2005-11-13
Arctic temperature trends and data sparsity
Whilst browsing the wilder shores of skepticism (well, I'd just been to Ikea and needed some light relief...) I came across the inaccurately titled "Reality in Arctic temperature trends". Scroll down about 1/4 of the way to the 1880-2004 temperature plot. So... temperatures higher in 1935-1945 than now? Interesting! And using CRU data too. How come... Well, one funny thing is that he calls this "A sobering dose of reality" - presumably forgetting that elsewhere he has attacked the Jones data as the spawn of the Devil. A second funny thing is that he is using [70,90]... [60,90] is more usual. Would you get the same results for [60,90]? And are there really many stations between [70,90] in the early period?
I'm sure you can guess the answers, and its "no" to both. Have a look at my pic (but be careful, there are lots of lines...). The top graph is [70,90]. The bottom graph is [60,90]. Both show the area-averaged temperature anomaly (in black; the 13-month running mean is in blue) from the HadCRUT2v dataset, in 100's of oC, which is why the left hand scale is 100 times bigger than you think it ought to be. Both plots have the same general shape, but for the wider area the current (last 10 years, mean given by red bar) temps are higher than for the 1935-1945 average. But even for [70,90] the temps in 1935-45 are only marginally higher than now - about 0.1 oC - hardly "much higher than today" as our septic claims.
But... look at the green lines. The lower green line on each plot is the fraction of the area covered by obs, on the right-hand scale. So for [60,90] about 40% of the area is observed, since 1960. In the 1940's, about 30%. For [70,90] about 20% is observed, recently (though with a huge annual cycle: far more people about in summer!) and less than 10% in the 1940's. Our septic complains that the "Arctic Climate Impact Assessment (ACIA) start their temperature records in 1960". Errm yes, well that might well be a good idea. Perhaps the ACIA people actually bothered to look at the data rather than just area-averaging it.
In fact, to my not-great-surprise, the ACIA people do indeed look at temperatures before 1960 (hint: if a septic sez something is true, it's probably false...) and even draw nice maps of the trends at various time intervals: see the ACIA sci report, p36 and after. But they note the data sparsity problems early on.
The total *number* of filled 5x5 degree gridboxes is the upper green line on each plot, and the scale is (conveniently) the [0,400] of the upper half of the temperature scale (has your mind exploded yet?) *except* that for [70,90] that would be too small to see so I've multiplied it by 10 (boom!). So at the time of that huge (and rather suspicious...) jump in the upper plot at 1919, there were only 5 (=50/10) filled boxes. For [60,90] there are nearly 200 filled boxes, recently.
Just looking at fraction-of-obs can be a bit dry, so here are maps of gridboxes filled (with their anomaly values, now in sensible units) for July 1919, 1940 and 2000. Note that using July maximises the filled boxes for the year. It's pretty obvious that 1919 is *very* sparse; 1940 is sparse; but even 2000 isn't exactly packed, north of 70; though it's pretty good from 70 to 60 (oh, the black circles are 60 and 70 N, of course).
So... what do we learn from all this (apart from never trust the septics, but we knew that already)? We learn that plucking a dataset off the shelf and playing with it and only showing the end result may well mislead... we learn that you should be cautious with sparse data.
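For the curious, the sort of area-averaging-with-coverage bookkeeping above is easy to sketch. This is a toy version, not the actual HadCRUT2v processing; the gridbox values and sparse-coverage layout below are invented for illustration:

```python
import numpy as np

def band_stats(anom, lat_edges=(70, 90)):
    """Toy area-weighted average over a latitude band of a 5x5 grid.

    anom: 2D array (nlat x nlon) of anomalies; NaN means no observation.
    Boxes are weighted by cos(latitude) of their centre: a 5x5 degree
    box near the pole covers far less area than one near 60N.
    """
    lats = np.arange(lat_edges[0] + 2.5, lat_edges[1], 5.0)  # box centres
    w = np.cos(np.radians(lats))[:, None] * np.ones((1, anom.shape[1]))
    obs = ~np.isnan(anom)
    frac = w[obs].sum() / w.sum()              # fraction of band area observed
    mean = np.nansum(anom * w) / w[obs].sum()  # average over observed boxes only
    return mean, frac, int(obs.sum())          # plus count of filled boxes

# 1919-style sparse coverage: the [70,90] band is 4 rows x 72 columns
# of 5x5 boxes, and only 5 of them have any data (values made up)
rng = np.random.default_rng(0)
field = np.full((4, 72), np.nan)
field[0, :5] = rng.normal(0.0, 1.0, 5)
mean, frac, nbox = band_stats(field)
```

With only 5 filled boxes the "band average" is really just the average of one small corner, which is the whole point: the area-averaged number looks authoritative, but the coverage fraction tells you how little is behind it.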
Weaselly behaviour
Via Wolfgang via CIP, I learn of Scott Adams Weasel Poll - weaseliest individual is Bush and weaseliest org is the White House. Reporting it as "finding supplies" when white people loot does creditably in the weaselly behaviour category. Though if you ask me SA can't draw weasels for toffee (his look like rats); and his weasel day mustelid is actually a ferret.
Also, this is a good recent one...
2005-11-11
Scary scaling
A while ago - back in 2002 I suppose - I heard vague refs to a paper about "scaling" which somehow demonstrated global climate models fail to reproduce real climate when they are tested against observed conditions. Since this was being posted to sci.env by the usual nutters I didn't pay too much attention, and as far as I can see neither did anyone else; though it occasionally recurs. For one thing, the original article was published in Phys Rev Lett which I (and I think most climate folk) don't read; and pdfs weren't scattered across the web quite as freely in those days. And for another, whatever they were saying was so abstruse as to appear meaningless (even the nutters didn't push it much, because they had no idea what it was about either).
However, someone who isn't a nutter (thanks Nick! But I was right: it's the Israelis) has re-drawn it to my attention, and even provided me with it on paper, so I've read it. You can too: it's Global Climate Models Violate Scaling of the Observed Atmospheric Variability by R. B. Govindan, Dmitry Vyushin, Armin Bunde, Stephen Brenner, Shlomo Havlin and Hans-Joachim Schellnhuber. And it did get some attention: e.g. from Nature (subs req) (reputable of course, but sometimes over-excitable). But... is it any good?
Weeeeeelllll... probably not. This is yet more of the fitting-power-laws-to-things stuff. They use "detrended fluctuation analysis" (DFA), which I don't understand, but that doesn't matter, we'll just read the results. So... Govindan et al. do their DFA on observations from 6 (rather oddly chosen) stations; and 6 GCMs. The first oddness is their choosing Prague, Kasan, Seoul, Luling (Texas), Vancouver and Melbourne as representative of the world. Never mind. They get A ~ 0.65 for these stations. Don't worry too much about what A is; it's related to the memory of the system: A ~ 0.5 is no memory (white noise); A ~ 1 is long memory (red noise). They assert boldly that this 0.65 is therefore a Universal Value. They discover that the GCMs, forced by GHGs only, by contrast get A ~ 0.5. Which, say Govindan et al., means that the GCMs overestimate the trends. Just to make sure that you won't miss this, they repeat the same at the end. But... this is not news. The fact that GCMs forced only by GHGs overestimate the trends is in the TAR (like just about everything else you need to know about climate change, it's in the SPM, as fig 4). When you add in sulphates, the A from the models increases somewhat (to 0.56-0.62 ish); but that's arguably still too low. So what's up?
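For what it's worth, DFA itself is simple enough to sketch. This is my rough understanding of the standard first-order algorithm, not whatever exact implementation Govindan et al. used; the scale choices are arbitrary:

```python
import numpy as np

def dfa_exponent(x, scales=None):
    """First-order detrended fluctuation analysis.

    Returns the scaling exponent A: ~0.5 for white noise (no memory),
    approaching 1 for strongly persistent (long-memory) series.
    """
    x = np.asarray(x, float)
    y = np.cumsum(x - x.mean())  # integrate to get the "profile"
    if scales is None:
        scales = np.unique(np.logspace(1, np.log10(len(x) // 4), 15).astype(int))
    flucts = []
    for n in scales:
        nseg = len(y) // n
        segs = y[:nseg * n].reshape(nseg, n)
        t = np.arange(n)
        # detrend each window with a linear fit, keep the mean-square residual
        ms = []
        for s in segs:
            c = np.polyfit(t, s, 1)
            ms.append(np.mean((s - np.polyval(c, t)) ** 2))
        flucts.append(np.sqrt(np.mean(ms)))  # RMS fluctuation F(n)
    # the exponent A is the slope of log F(n) against log n
    A, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return A

rng = np.random.default_rng(42)
A_white = dfa_exponent(rng.normal(size=4000))  # expect roughly 0.5
```

Feed it a memoryless series and you get A near 0.5; the whole Govindan-vs-F+B argument is about what value real station temperatures, and GCM output, give on which timescales.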
Which is where we turn to... Fraedrich and Blender, Scaling of Atmosphere and Ocean Temperature Correlations in Observations and Climate Models. Also in PRL. Who argue that G et al. are wrong: their Universal Value of A ~ 0.65 is not universal at all. They do a much wider analysis: instead of just a few stations, they use a gridded dataset across as much of the globe as they can. And they find (surprise!) exactly what you would expect: over the oceans, high A (~ 0.9) and over the continental interiors, low A (~ 0.5) and in between, mixed A (~ 0.65). Why is this exactly what you expect? Because the ocean has a long memory but the land doesn't. And... if you draw the same plot in a GCM (ECHAM4/HOPE) you get a remarkably similar pattern. So they come to a quite opposite conclusion: the DFA analysis actually shows the GCM performing rather well. And they conclude: The main results of this Letter follow in brief: (i) The exponent A ~ 0.65 is predominantly confined to coasts and land regions under maritime influence. (ii) Coupled atmosphere-ocean models are able to reproduce the observed behavior up to decades. (iii) Long time memory on centennial time scales is found only with a comprehensive ocean model. That last point arises because they tried the same analysis with a slab ocean and with fixed ocean; unsurprisingly, the scaling doesn't work in those cases.
F+B also picked their own seemingly odd station, Krasnojarsk, as a continental interior station, and showed (their fig 1) a scaling of A ~ 0.5 between 1y-decadal scales. At this point Govindan drops out, but some of the original authors reply, saying that (i) the scaling isn't 0.5 at K; and (ii) it isn't 0.5 at other interior points too (they pick yet another scatter of random stations). F+B reply, that (i) Oh yes it is (ii) maybe it's the fitting interval: they use 1-15 years; the others are using 150-2500 days. On (i), looking at the pics, I'm with F+B and I can't see what the others are up to.
F+B, incidentally, argue that a control-run GCM (ie no external forcing) is quite good enough to get the long-timescale correlations, and that other forcing doesn't much help (for these purposes at least; you might perhaps have argued that adding in solar forcing and volcanic and stuff might help further). In Blender, R. and K. Fraedrich, 2004: Comment on "Volcanic forcing improves atmosphere-ocean coupled general circulation model scaling performance" by D. Vyushin, I. Zhidkov, S. Havlin, A. Bunde, and S. Brenner, Geophys. Res. Letters, 31 (22), L22502. DOI: 10.1029/2004GL021317 they criticise Vyushin (one of the et al. with G) for suggesting that volcanic helps, on the grounds that it simply isn't needed to get these A values right.
So after all that, what do we end up with, and what have we learnt? Assuming F+B are more right (and I think they probably are, based on what I've read so far) we've learnt very little. The fact that T increases are bigger sans aerosols is bleedin' obvious; as is the longer memory of the oceans. We have a validation of the GCMs by another measure, but a rather abstruse measure and not an obviously useful one.
I'm an "Adorable Little Rodent"!
I've finally succumbed and got a blog counter (from blogpatrol). It's off down the side underneath the ads... I started it at 40k, which is where google adsense says I am (approximately); but now everyone can see not just me. Out of compliment to CIP (where I got the idea) I chose the same style as him. CIP also has a rather more gracious way with words than me, so I'll use his:
I would just like to thank all of you for stopping by. I'm especially grateful to those who leave a comment, even if it is just to tell me I'm wrong, crazy and/or stupid! Especially if you explain your reasoning.
I do indeed thank you for stopping by... but just to prove that I am less gracious than CIP I'll add that comments are only welcome provided you are polite.
However the title of this post refers to my place in the TTLB ecosystem (see the sidelink somewhere) where I have moved up from "Slithering Reptile" (back in September; then, CIP was only a Flippery Fish, now he's been promoted to Crawly Amphibian!). I wonder if there is a mustelid category, though Adorable Rodent is close-ish.
2005-11-10
Timing of Dansgaard-Oeschger cycles
A tentative post this, unlike my usual strident opinions :-)
The starting point is Timing of abrupt climate change: A precise clock by Stefan Rahmstorf (GRL, 2003), and also recent RC: chaos and climate (check the comments). When I first read the GRL paper I somewhat distrusted it. I'm not sure why. The basic idea of that paper is that the [[Dansgaard-Oeschger events]], which occur with approximately 1,500 year spacings in the last glacial, really are regularly spaced, albeit with occasional "misses". This somewhat overturns what I thought was the conventional wisdom, which is that the D-O events are responses to the Laurentide ice sheet internal instabilities, or somesuch, and if so would only be quasi-periodic.
One problem with that view is that if they *are* truly on a clock, then that probably requires an astronomical clock, nothing on earth being regular enough. In today's Nature Braun et al (inc Rahmstorf) propose a Possible solar origin of the 1,470-year glacial climate cycle demonstrated in a coupled model, which they get from combinations of the De Vries (210) and Gleissberg (86.5) cycles. I only mention that to draw it to your attention; I have no opinion as yet.
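The arithmetic behind that combination, if you're wondering, is just near-commensurability: 1,470 years is very nearly a whole number of both cycles. A back-of-envelope check, using the periods as quoted above:

```python
# De Vries and Gleissberg solar cycle periods in years, as quoted above
de_vries, gleissberg = 210.0, 86.5

n_devries = 1470 / de_vries        # exactly 7 De Vries cycles
n_gleissberg = 1470 / gleissberg   # ~16.99, i.e. almost exactly 17 Gleissberg cycles
```

So every ~1,470 years the two cycles come back into phase, which is the sort of beat a nonlinear system could conceivably lock onto.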
You only get the nice 1,470 spacing if you use the GISP core, and only for the first 50 kyr of it. Which is maybe why I was suspicious... it smacked of choosing your data carefully. But now, having overplotted this stuff a few times, I've come to appreciate that the GISP and GRIP timescales aren't the same. And (so it is claimed) the layer counting for the first 50 kyr of GISP makes it most accurate. Quite likely.
My own little contribution is the plot here. Sorry about the garish colours. It's a [[wavelet]] decomposition of the same delta-O-18 data. To do that I had to regrid the data onto a regular 10 year time grid, which is why that plot is lying about the timescale: for "year" read "decade". One plot is GISP. The other is GRIP. I forget which: if you really know your data you can discover which is which. Your clue, if you need one, is to look near the 40 kyr date. However, on this plot at least, GRIP and GISP look fairly similar. My own view is that, seen this way, the data look quite noisy.
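The regridding step is the only non-obvious bit: the raw delta-O-18 samples are irregular in time, so before any wavelet (or Fourier) analysis you interpolate onto a regular grid. A minimal sketch, with made-up numbers and simple linear interpolation (the real cores deserve more care than this):

```python
import numpy as np

# Irregularly spaced samples: age in years BP vs delta-O-18 (values invented)
age = np.array([40012.0, 40031.0, 40055.0, 40068.0, 40090.0, 40110.0])
d18o = np.array([-38.1, -37.5, -39.0, -38.4, -37.9, -38.6])

# Regular 10-year target grid spanning the data
grid = np.arange(40020.0, 40101.0, 10.0)

# np.interp wants increasing x; ages here already increase down-core
regular = np.interp(grid, age, d18o)
```

Once it's on a regular grid, each grid step is one "sample", which is exactly why the plot's "year" axis is really decades.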
Errm, and that's it for now. Sorry there's no conclusion!
Rabett vs Pielke
Not everyone reads comments, so I point you to some interesting stuff in the latest RC post, in particular this by Eli Rabett criticising RP Jr's position: What you are doing here, and in your publications, and on Prometheus is to assert ownership of a series of issues, the latest of which is hurricane damage due to climate change. Your incessant self citation is a clear indication.... Strong stuff, and there is more. I look forward to the extended exchange.
2005-11-09
Politics: good news at last: Blur illiberalism routed briefly
"The prime minister has suffered a humiliating defeat" says R4 news at 6. Ho ho, schadenfreude, etc etc.
At last a bit of good news over the terrorist panic. MPs have finally stood up and told Blair to f*ck off over the proposal to hold people for 90 days without trial. So R4 5 o'clock news tells me, and this seems to confirm.
The margin is larger than expected: 31 votes. So the farce of recalling Brown from Israel to pack the lobby was a waste of time and money too.
In one aspect, though, the illiberals are already winning: the debate (insofar as anyone is seriously debating this rather than pontificating) is over how far the period should be extended from the current 14 days (more quietly, the news tells us that the HoC has just voted in favour of 28 days. Sigh. Celebrating too early... I would have suspected that 90 was all a cunning plot to get 28 days through quietly, except Blair nailed himself to the mast a bit too thoroughly for that). It should be about cutting it back down from 14.
But really, all this vast panic over terrorism is stupid. Car drivers kill far more people than terrorists do, but kill someone with a car and you probably won't get a 90 day sentence even if found guilty.
I'm curious: how long could you be held in the US (outside Guantanamo, of course) without being charged?
Vote for us!
Go on... click on the image... it will take you to the vote site, then you can vote for RealClimate, hurrah. Or click here for the current results... quick, click now...
2005-11-08
The Abdication of Oversight?
RP (Jr) has an interesting post The Abdication of Oversight. He begins by noting that Barton got his fingers burnt for his nonsense of last year (I paraphrase...) and this was one reason why Barton has wimped out of a follow up.
But he continues:
Providing ample evidence that the politicization of science by politicians is a bipartisan pastime, Congressman Dennis Kucinich (D-OH) and 150 fellow Democrats have introduced a rarely used "resolution of inquiry" to explore whether the Bush Administration has been hiding evidence that the current hurricane season has been caused by global warming. Kucinich said in a press release last week:
"The American public deserve to know what the President knew about the effects climate change would have, and will continue to have, on our coasts. This Administration, and Congress, can no longer afford to overlook the overwhelming evidence of the devastating effect of global climate change. It is essential for our preparedness that we understand global climate change and take serious and immediate actions to slow its effects."
And challenges us all to condemn this as nonsense.
So... while I disagree with RP over some of the nuances of the hurricane issue (see Hurricanes and Global Warming - Is There a Connection? for my/RC's views) I would be happy to say that looking for a global warming signal in hurricanes is definitely the wrong place to start. Hurricanes are a noisy signal, hurricane damage is even worse: the least noisy signal is the temperature signal, and that's the obvious place to look. Because of the particular track that Katrina took (and probably because levee money had been siphoned off to pay for a stupid war, but that's another matter...) it did an inordinate amount of damage. With a slightly different track (and there is no way to predict the exact track from GW) we would have a somewhat over-active season but no particularly exciting events.
The motion (as described above) is on completely the wrong track (ho ho) and looks like bandwagon-jumping after an "exciting" event: from a climate science point of view, their motion should be about something different. The real Bush failure is the failure to acknowledge the considerable degree of certainty of the attribution of recent, well observed, climate change to anthropogenic factors. Bush/Republicans/Skeptics/Whoever need to start by acknowledging the existing warming as real (Bush has done this, but quietly and weakly) and stop quibbling about it; admit that the current best science attributes most of the warming to us and stop overplaying the uncertainty; and then have a proper policy-relevant type debate about what to do; in the meantime the scientist types can go back to quietly refining estimates of attribution and future warming.
Oh, and on a completely different topic: I liked this from CIP and point you to JA's latest failure to get the skeptics (Bill Gray) to ante up.
2005-11-07
Sh*t* from Lindzen
Lindzen is a bit of a contrarian, but I had thought he mainly kept his skepticism within the bounds of reason and deserved his "k". I now find I'm wrong: I recently found Lindzen's testimony for the House of Lords. It's so bad it's funny. Consider:
Lord Kingsdown: Can I just go on to ask you how far your view of the role of water vapour is shared by other scientists?
Professor Lindzen: That is shared universally.
What utter bilge! Lindzen is out on a limb on his Iris Hypothesis, which has by now been discarded by just about everyone. He's welcome to like his own research himself, of course, but pretending that anyone else does is dishonest. The rest of it is cr*p too.
2005-11-03
How (coupled AO) GCMs work
Having done extensive research (a quick google search that threw up this excellent and well-referenced post but nothing much else; and reading comments at RC and elsewhere) it's pretty clear to me that (a) almost no-one outside the immediate community knows how coupled ocean-atmosphere GCMs work and are used in climate modelling and prediction (or "projection" as the IPCC calls it); and (b) this may be because there are no webpages on it. If you fancy reading some GCM source code, then this will get you GISS ModelE; or this for HadCM3. But you're unlikely to learn much from it unless you're *very* determined.
So I'm going to write up a post on it. What I hope to do is produce a first draft here, publish it, get feedback from you lot on bits that are unclear (or mistaken? no...; still the ocean bit is thin) or missing, and update it until adequate. Or until I get bored. Also please comment if you can find a better description elsewhere. This from the Met Office is an example of something that's not much use...
For definiteness, I'm going to talk about coupled-atmos-ocean GCMs (AOGCMs, though I'll probably just say GCMs) which are the heavyweight tool for climate prediction. You can't do that with an atmos-only model. And the only ones I'm at all familiar with are HadCM3/HadGEM.
AOGCMs have two main components (atmosphere and ocean of course) and two more minor components (sea ice and land surface). I suppose sea ice modellers (me!) or land surface folk might complain about me calling them minor. Delete the word if it offends you. Traditionally the land surface scheme sits inside the atmos model, and might well be considered part of it. The sea ice scheme might well sit inside the ocean model. Mostly.
Those are (I think) the essential bits. You can also have various optional bits (for example carbon cycle or atmospheric chemistry) but those are not needed. One very common mistake is to think that GCMs predict CO2 levels. Most don't. Most are run with observed CO2 (if post-dicting the past) or prescribed CO2 (either from an economic model or an idealised 1% increase, say) is predicting the future. Even a carbon cycle model would be run with prescribed anthro CO2 inputs. Most GCMs don't contain a glacier or icesheet model either, because the scales are incompatible: glaciers are too small, and ice sheets have millenial scales (HadCM3 has been run with a Greenland ice sheet, but only once, it took ages, and I think it was specially speeded up).
I'll add a forcings section at the bottom.
I'll only say a bit about this. This seems quite helpful.
For the atmosphere and the ocean the basic fluid dynamics equations need to be converted from their continuous (partial differential equation) form and discretised so that they can be handled by numerical approximation. For the atmosphere, this can take the form of a spectral or finite difference decomposition. I'm not going to talk about the spectral stuff, cos it will only confuse, and the end results are not much different. For the oceans you can't use spectral stuff anyway. What happens then is that instead of a continuous equation d(f)/dt=g(x,t) you end up with something like f_{x,t+dt}=f_{x,t}+G({x,t},{x-dx,t},{x+dx,t})... I'm handwaving for effect here (apart from anything else in a GCM the x's are 3D (lat, long and height)). The point is to end up with an expression for the values at time t+dt, in terms of things at time t (or use an implicit solution...). But anyway, this gives you two important parameters to choose: the timestep, dt; and the spatial discretisation dx.
Typical values for the atmosphere are 1/2 hour (or less) for the timestep; and 300 km for the horizontal; and about 20-40 levels in the vertical (not evenly spaced). At least for HadCM3 the ocean timestep is longer (1h) and the spatial less (1.25 degrees, about 100 km).
Space and time steps are related by the CFL criterion: as the space steps get smaller so must the time, to avoid instability. Note that there is resource/accuracy trade of in the timestepping: longer timesteps allow the model to run faster; shorter timesteps allow more accurate integration. In practice, I think, people take the largest timestep compatible with stability, since errors elsewhere mean the loss of accuracy from as large as possible timestep doesn't matter.
This pic gives you some idea of the grid cell size; this has refs and stuff.
The atmosphere sort-of divides into two components: a dynamical core to handle the discretisation of the fluid dynamics; and a pile of physical parametrisations to handle things (clouds, for example) that don't get a fluid-dyn representation. Also radiation.
So the dynamical core handles the integration (i.e., getting from one time step to the next) of [u,v] (horizontal velocity and the various vertical levels) and p* (surface pressure) and omega (vertical velocity). Once the winds are known, other variables (q, moisture) can be advected around. It is generally reckoned that the GCM type scale (200-300 km gridpoints) is enough to resolve most of the energetic scales in the atmosphere.
At some point the bottom layer of the atmosphere needs to exchange fluxes (momentum and heat and moisture) with the surface, which is where the surface exchange scheme comes in, which counts as part of the atmosphere. Models typically have their lowest level at a few 10's of meters, which requires a parametrisation of the boundary layer exchange, point by point.
The radiation code handles the short wave (visible; solar) fluxes and the long-wave (infra-red) fluxes separately (since there is little overlap). The vertical column above each grid point is treated separately from the ones next door (since the cells are 100's of km wide but only 10's of km high, edge effects get neglected). SW comes in at the top, gets reflected, diffused, absorbed and generally bounced around of the atmos, the clouds and the sfc. Similarly the LW bounces around but also has sources. The radiation code, effectively, is the bit where enhanced CO2 (or other GHG forcing) gets inserted, by affecting the transmissivity of the atmosphere. In the real world radiation has a continuous spectrum (with lines in it...); in line-by-line codes thousands of lines and continua are specified; in GCM type codes each of the SW and LW radiation codes will deal with a small (~10) number of bands which amalgamate important lines and continua. Radiation codes are expensive: HadAM3 only calls the SW radiation 8 times a day.
There are separate schemes for the convective clouds and "large scale" clouds. LS clouds are those that are effectively resolved: if a grid box cools enough to get the cloud scheme invoked, then clouds form (once upon a time, this happened if the RH got above 100% (or perhaps 95%, with some ramping); nowadays I think its more complex). Convective clouds require a parametrisation: again this has evolved: once if a part of the column was convectively unstable it got overturned; now much more complex schemes exist. There is a lot of scope for different schemes I think. Ppn gets to fall as rain or snow according to temperature; it may re-evaporate on the way down if it falls through a dry layer. Once you have the clouds they need to feed into the radiation scheme. Clouds may be true model prognostics or get diagnosed at each timestep.
I know less about the ocean code. The ocean is different in that it has boundaries. It also has (in the real world) more energy at smaller spatial scales and so is rather harder to get down to a resolution which properly resolves it. But still there is a dynamical core which solves for the transport.
Radiation pretty well gets absorbed in the upper layers so is less interesting than in the atmos case. Convection is rather less common, and mostly associated with brine rejection from sea ice (?), which needs parametrisation just like cumulus convection in the atmosphere.
Unlike the atmosphere, which exchanges interesting fluxes with the land surface, the bottom of the ocean isn't very interesting.
Sea ice is effectively an interface between atmos and ocean and insulates one from the other. It gets pushed by wind stress from the atmosphere, ocean-ice drag underneath, coriolis force and internal stresses (its is usually modelled as an (elastic) viscous plastic substance; the details of this are really quite interesting but complex).
In the Hadley models, it exists on the same grid as the ocean model. By affecting the albedo it also affects the ice-ocean interaction. If it has a different roughness length to the ocean, it will affect the momentum fluxes too.
Sea ice effectively splits into "dynamics" (moving it around) and "thermodynamics" (heat transfer through it, melting/freezing, albedo, etc).
The land surface scheme needs to allow us to calculate the fluxes of heat, moisture and momentum with the atmosphere; and the radiative fluxes. Fortunately it doesn't move so doesn't need any dynamics so is often not a separate model.
Fluxes of heat are done by calculating the temperature at various depths in the soil which gets you a surface temperature, together with a surface roughness length (which depends on...). Fluxes of moisture are done by knowing the "soil" moisture based on some more or less sophisticated scheme (see PILPS) which will also affect the way falling precipitation is handled. This includes representations of evapotranspiration etc etc. Momentum just needs a roughness length, the stability or otherwise of the atmos BL, and possibly some representation of the unresolved orography.
Most such schemes have prescribed vegetation; but more exciting ones can have interactive vegetation schemes (the UKMO is called TRIFFID).
Any of this gets affected by an overlying snow cover; which obviously affects the albedo, but also insulates.
This will be short, since I suspect you can find it better elsewhere.
A fully coupled model needs an initialisation state (usually 1860 or thereabouts if its to be used from simulating 20C and the future, to avoid cold start and stuff), prescribed CO2 (and other minor GHG) concentrations which vary through time; solar forcing (variable or not); volcanic and other aerosols. It may also get a varying land use. And thats about it (did I forget anything?). The point being to let it get on with it.
People occaisionally suggest that it would be a good idea to run them in semi-NWP mode and assimilate weather obs along the way, so that they track 20C temps as accurately as possible and predict the future as well as poss. This is plausible (in some ways) but not done.
At the end of all this, you end up with values of temperature, velocity, humidity, cloud at 2*105 atmospheric gridpoints (or thereabouts) together with more in the ocean and many another variable besides, every half hour, for 200 years (or however long). Assuming you bothered to save them.
Oddly enough that level of detail is often not what you want. So the first thing to look at tends to be an area-average (often global) and time average (monthly; yearly) values of one variable of particular interest.
So I'm going to write up a post on it. What I hope to do is produce a first draft here, publish it, get feedback from you lot on bits that are unclear (or mistaken? no...; still the ocean bit is thin) or missing, and update it until adequate. Or until I get bored. Also please comment if you can find a better description elsewhere. This from the Met Office is an example of something that's not much use...
For definiteness, I'm going to talk about coupled-atmos-ocean GCMs (AOGCMs, though I'll probably just say GCMs) which are the heavyweight tool for climate prediction. You can't do that with an atmos-only model. And the only ones I'm at all familiar with are HadCM3/HadGEM.
Components
AOGCMs have two main components (atmosphere and ocean of course) and two more minor components (sea ice and land surface). I suppose sea ice modellers (me!) or land surface folk might complain about me calling them minor. Delete the word if it offends you. Traditionally the land surface scheme sits inside the atmos model, and might well be considered part of it. The sea ice scheme might well sit inside the ocean model. Mostly.
Those are (I think) the essential bits. You can also have various optional bits (for example a carbon cycle or atmospheric chemistry) but those are not needed. One very common mistake is to think that GCMs predict CO2 levels. Most don't. Most are run with observed CO2 (if post-dicting the past) or prescribed CO2 (either from an economic model or an idealised 1% increase, say) if predicting the future. Even a carbon cycle model would be run with prescribed anthro CO2 inputs. Most GCMs don't contain a glacier or ice sheet model either, because the scales are incompatible: glaciers are too small, and ice sheets have millennial timescales (HadCM3 has been run with a Greenland ice sheet, but only once, it took ages, and I think it was specially speeded up).
I'll add a forcings section at the bottom.
Discretisation and resolution
I'll only say a bit about this. This seems quite helpful.
For the atmosphere and the ocean the basic fluid dynamics equations need to be converted from their continuous (partial differential equation) form and discretised so that they can be handled by numerical approximation. For the atmosphere, this can take the form of a spectral or finite difference decomposition. I'm not going to talk about the spectral stuff, cos it will only confuse, and the end results are not much different. For the oceans you can't use spectral stuff anyway. What happens then is that instead of a continuous equation d(f)/dt=g(x,t) you end up with something like f_{x,t+dt}=f_{x,t}+G({x,t},{x-dx,t},{x+dx,t})... I'm handwaving for effect here (apart from anything else in a GCM the x's are 3D (lat, long and height)). The point is to end up with an expression for the values at time t+dt, in terms of things at time t (or use an implicit solution...). But anyway, this gives you two important parameters to choose: the timestep, dt; and the spatial discretisation dx.
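To make the handwaving concrete, here is a toy version of that discretisation (not taken from any real GCM): d(f)/dt = g becomes an explicit update on a 1-D periodic grid, with g chosen to be simple diffusion and all the numbers invented for illustration.

```python
import numpy as np

nx, dx, dt = 64, 300e3, 1800.0   # 64 points, 300 km spacing, 30-minute step
kappa = 1.0e6                    # eddy diffusivity in m^2/s (made-up value)

# initial condition: a smooth bump
f = np.exp(-((np.arange(nx) - nx / 2) ** 2) / 50.0)

def step(f):
    # f_{x,t+dt} = f_{x,t} + dt * G(f at x, x-dx, x+dx), as in the text;
    # np.roll supplies the periodic neighbours
    lap = (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2
    return f + dt * kappa * lap

for _ in range(100):
    f = step(f)
# the bump spreads out; the total amount of f is conserved
```

Here dt*kappa/dx**2 works out to 0.02, comfortably inside this scheme's stability limit of 0.5; push dt much higher and the integration blows up, which is exactly the timestep/spacing constraint at issue.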
Typical values for the atmosphere are 1/2 hour (or less) for the timestep; 300 km for the horizontal; and about 20-40 levels in the vertical (not evenly spaced). At least for HadCM3 the ocean timestep is longer (1h) and the spatial resolution finer (1.25 degrees, about 100 km).
Space and time steps are related by the CFL criterion: as the space steps get smaller so must the time steps, to avoid instability. Note that there is a resource/accuracy trade-off in the timestepping: longer timesteps allow the model to run faster; shorter timesteps allow more accurate integration. In practice, I think, people take the largest timestep compatible with stability, since errors elsewhere mean the loss of accuracy from taking as large a timestep as possible doesn't matter.
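To put numbers on the CFL point (illustrative only; real models juggle several wave speeds and use semi-implicit tricks to relax the limit):

```python
# CFL: the fastest signal must not cross more than one grid cell per
# timestep, so dt <= dx / c. Halve dx and you must halve dt too.
def max_stable_dt(dx_m, c_ms):
    return dx_m / c_ms

# e.g. gravity waves at ~300 m/s on a 300 km grid:
dt_coarse = max_stable_dt(300e3, 300.0)   # 1000 s
dt_fine = max_stable_dt(150e3, 300.0)     # 500 s: half the spacing, half the step
```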
This pic gives you some idea of the grid cell size; this has refs and stuff.
Atmosphere
The atmosphere sort-of divides into two components: a dynamical core to handle the discretisation of the fluid dynamics; and a pile of physical parametrisations to handle things (clouds, for example) that don't get a fluid-dyn representation. Also radiation.
So the dynamical core handles the integration (i.e., getting from one time step to the next) of [u,v] (horizontal velocity at the various vertical levels) and p* (surface pressure) and omega (vertical velocity). Once the winds are known, other variables (q, moisture) can be advected around. It is generally reckoned that the GCM-type scale (200-300 km gridpoints) is enough to resolve most of the energetic scales in the atmosphere.
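The advection step can be sketched like this (a first-order upwind scheme, chosen for clarity rather than because any particular GCM uses it): given a wind u from the dynamical core, a tracer such as q is shuffled downwind each timestep.

```python
import numpy as np

def advect_upwind(q, u, dx, dt):
    # first-order upwind advection on a periodic 1-D grid;
    # assumes u > 0 so the upwind neighbour is the one behind
    return q - u * dt / dx * (q - np.roll(q, 1))

q = np.zeros(32)
q[8:12] = 1.0                       # a blob of moisture
for _ in range(10):
    q = advect_upwind(q, u=10.0, dx=100e3, dt=1800.0)
# the blob drifts downwind (and smears out: upwind schemes are diffusive)
```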
At some point the bottom layer of the atmosphere needs to exchange fluxes (momentum and heat and moisture) with the surface, which is where the surface exchange scheme comes in, which counts as part of the atmosphere. Models typically have their lowest level at a few 10's of meters, which requires a parametrisation of the boundary layer exchange, point by point.
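That point-by-point exchange is usually a bulk-aerodynamic formula of roughly this shape (the exchange coefficient and all the numbers below are illustrative; real schemes make the coefficient depend on stability and roughness length):

```python
def sensible_heat_flux(rho, cp, C_H, wind, T_sfc, T_air):
    # bulk formula: flux proportional to wind speed times the
    # surface-air temperature difference; W/m^2, positive upward
    return rho * cp * C_H * wind * (T_sfc - T_air)

# a warm surface under a moderate breeze:
H = sensible_heat_flux(rho=1.2, cp=1004.0, C_H=1.2e-3,
                       wind=8.0, T_sfc=290.0, T_air=288.0)
# roughly 23 W/m^2 upward
```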
The radiation code handles the short wave (visible; solar) fluxes and the long-wave (infra-red) fluxes separately (since there is little overlap). The vertical column above each grid point is treated separately from the ones next door (since the cells are 100's of km wide but only 10's of km high, edge effects get neglected). SW comes in at the top, gets reflected, diffused, absorbed and generally bounced around off the atmosphere, the clouds and the sfc. Similarly the LW bounces around but also has sources. The radiation code, effectively, is the bit where enhanced CO2 (or other GHG forcing) gets inserted, by affecting the transmissivity of the atmosphere. In the real world radiation has a continuous spectrum (with lines in it...); in line-by-line codes thousands of lines and continua are specified; in GCM-type codes each of the SW and LW radiation codes will deal with a small (~10) number of bands which amalgamate important lines and continua. Radiation codes are expensive: HadAM3 only calls the SW radiation 8 times a day.
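The band idea can be caricatured like this (every number invented): each band gets one bulk optical depth per column, transmission follows Beer's law, and enhanced CO2 shows up as a thickening of the optical depth in the CO2-affected bands.

```python
import numpy as np

tau_bands = np.array([0.1, 1.5, 0.4])   # per-band column optical depths

def transmitted_fraction(tau):
    # Beer's law for the direct beam: I/I0 = exp(-tau)
    return np.exp(-tau)

trans = transmitted_fraction(tau_bands)

# "enhanced CO2" as the text describes it: thicken the band CO2 lives in
tau_more_co2 = tau_bands + np.array([0.0, 0.2, 0.0])
trans_more_co2 = transmitted_fraction(tau_more_co2)
# transmission drops only in the affected band
```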
There are separate schemes for the convective clouds and "large scale" clouds. LS clouds are those that are effectively resolved: if a grid box cools enough to get the cloud scheme invoked, then clouds form (once upon a time, this happened if the RH got above 100% (or perhaps 95%, with some ramping); nowadays I think it's more complex). Convective clouds require a parametrisation: again this has evolved: once, if a part of the column was convectively unstable it got overturned; now much more complex schemes exist. There is a lot of scope for different schemes I think. Precipitation gets to fall as rain or snow according to temperature; it may re-evaporate on the way down if it falls through a dry layer. Once you have the clouds they need to feed into the radiation scheme. Clouds may be true model prognostics or get diagnosed at each timestep.
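The "once upon a time" large-scale scheme is simple enough to write down (as a sketch; modern schemes are more complex): cloud fraction ramps from 0 at some critical RH up to 1 at saturation.

```python
import numpy as np

def cloud_fraction(rh, rh_crit=0.95):
    # linear ramp: no cloud below rh_crit, full cover at saturation
    return np.clip((rh - rh_crit) / (1.0 - rh_crit), 0.0, 1.0)

cloud_fraction(0.90)    # too dry: clear box
cloud_fraction(0.975)   # half way up the ramp: partly cloudy
cloud_fraction(1.00)    # saturated: overcast
```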
Ocean
I know less about the ocean code. The ocean is different in that it has boundaries. It also has (in the real world) more energy at smaller spatial scales and so is rather harder to get down to a resolution which properly resolves it. But still there is a dynamical core which solves for the transport.
Radiation pretty well gets absorbed in the upper layers so is less interesting than in the atmos case. Convection is rather less common, and mostly associated with brine rejection from sea ice (?), which needs parametrisation just like cumulus convection in the atmosphere.
Unlike the atmosphere, which exchanges interesting fluxes with the land surface, the bottom of the ocean isn't very interesting.
Sea ice
Sea ice is effectively an interface between atmos and ocean and insulates one from the other. It gets pushed by wind stress from the atmosphere, ocean-ice drag underneath, the Coriolis force and internal stresses (it is usually modelled as an (elastic) viscous-plastic substance; the details of this are really quite interesting but complex).
In the Hadley models, it exists on the same grid as the ocean model. By affecting the albedo it also affects the ice-ocean interaction. If it has a different roughness length to the ocean, it will affect the momentum fluxes too.
Sea ice effectively splits into "dynamics" (moving it around) and "thermodynamics" (heat transfer through it, melting/freezing, albedo, etc).
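The thermodynamic half can be sketched with the classic slab model (the physical constants are standard; the setup is simplified to the point of caricature): conductive heat loss through the ice is balanced by freezing at the base, so thin ice under a cold atmosphere grows fastest.

```python
k_ice = 2.0      # thermal conductivity of sea ice, W/m/K
rho_i = 900.0    # ice density, kg/m^3
L_f = 3.34e5     # latent heat of fusion, J/kg

def growth_rate(h, T_top, T_bottom=-1.8):
    # quasi-steady conduction through a slab of thickness h (m);
    # the flux extracted at the base freezes new ice there
    flux = k_ice * (T_bottom - T_top) / h        # W/m^2
    return flux / (rho_i * L_f)                  # m/s of new ice

thin = growth_rate(h=0.5, T_top=-20.0)
thick = growth_rate(h=2.0, T_top=-20.0)
# thin ice grows four times as fast as ice four times thicker
```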
Land surface
The land surface scheme needs to allow us to calculate the fluxes of heat, moisture and momentum with the atmosphere; and the radiative fluxes. Fortunately it doesn't move, so it doesn't need any dynamics, and so is often not a separate model.
Fluxes of heat are done by calculating the temperature at various depths in the soil which gets you a surface temperature, together with a surface roughness length (which depends on...). Fluxes of moisture are done by knowing the "soil" moisture based on some more or less sophisticated scheme (see PILPS) which will also affect the way falling precipitation is handled. This includes representations of evapotranspiration etc etc. Momentum just needs a roughness length, the stability or otherwise of the atmos BL, and possibly some representation of the unresolved orography.
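A minimal multi-level soil temperature update, under cheerfully simplified assumptions (uniform layers, fixed properties, an insulated bottom layer) and with made-up coefficients:

```python
import numpy as np

def soil_step(T, sfc_flux, dz=0.2, dt=3600.0, kappa=7e-7, rho_c=2.0e6):
    # T: temperatures (K) of soil layers, top first; sfc_flux in W/m^2.
    # Heat diffuses between layers; the top layer also absorbs the
    # net surface flux. The bottom layer is left untouched (insulated).
    dT = np.zeros_like(T)
    dT[1:-1] = kappa * (T[:-2] - 2 * T[1:-1] + T[2:]) / dz**2
    dT[0] = sfc_flux / (rho_c * dz) + kappa * (T[1] - T[0]) / dz**2
    return T + dt * dT

T = np.full(4, 283.0)
T = soil_step(T, sfc_flux=100.0)
# 100 W/m^2 for an hour warms the 20 cm top layer by ~0.9 K
```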
Most such schemes have prescribed vegetation; but more exciting ones can have interactive vegetation schemes (the UKMO one is called TRIFFID).
Any of this gets affected by an overlying snow cover; which obviously affects the albedo, but also insulates.
Forcing
This will be short, since I suspect you can find it better elsewhere.
A fully coupled model needs an initialisation state (usually 1860 or thereabouts if it's to be used for simulating the 20C and the future, to avoid cold start problems and stuff); prescribed CO2 (and other minor GHG) concentrations which vary through time; solar forcing (variable or not); volcanic and other aerosols. It may also get a varying land use. And that's about it (did I forget anything?). The point being to let it get on with it.
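For the idealised 1% scenario the arithmetic is worth seeing (the logarithmic forcing fit, with its 5.35 W/m^2 coefficient, is the standard approximation of the era; the rest is just compound interest):

```python
import math

def co2_after(years, c0=280.0, rate=0.01):
    # 1% per year compounding from a pre-industrial baseline
    return c0 * (1.0 + rate) ** years

def co2_forcing(c, c0=280.0):
    # standard logarithmic fit for CO2 radiative forcing, W/m^2
    return 5.35 * math.log(c / c0)

years_to_double = math.log(2) / math.log(1.01)       # ~70 years to 2xCO2
forcing_2x = co2_forcing(co2_after(years_to_double))  # ~3.7 W/m^2
```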
People occasionally suggest that it would be a good idea to run them in semi-NWP mode and assimilate weather obs along the way, so that they track 20C temps as accurately as possible and predict the future as well as possible. This is plausible (in some ways) but not done.
Output
At the end of all this, you end up with values of temperature, velocity, humidity and cloud at 2×10^5 atmospheric gridpoints (or thereabouts), together with more in the ocean and many another variable besides, every half hour, for 200 years (or however long). Assuming you bothered to save them.
Oddly enough that level of detail is often not what you want. So the first thing to look at tends to be area-averaged (often global) and time-averaged (monthly; yearly) values of one variable of particular interest.
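Computing that first-look number is a one-liner with a trap in it: on a lat-lon grid each row must be weighted by cos(latitude), because the cells shrink toward the poles. A sketch:

```python
import numpy as np

def global_mean(field, lats_deg):
    # field: (nlat, nlon) array; rows weighted by cos(latitude)
    weights = np.cos(np.radians(lats_deg))
    return np.average(field.mean(axis=1), weights=weights)

lats = np.linspace(-87.5, 87.5, 36)        # centres of a 5-degree grid
field = np.full((36, 72), 288.0)           # uniform test field, K
gm = global_mean(field, lats)              # 288.0, as it must be
```

With a realistic field (cold poles, warm tropics) the unweighted mean is biased cold, because the many small polar cells count as much as the big tropical ones.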
2005-11-02
Blur on climate change
Our glorious leader has been saying things about the politics of climate change again, which have been generally interpreted as weakening of Kyoto-type stuff. See the Grauniad. In fact he has been talking mostly about post-Kyoto (ie post 2012) so in some ways the question must be: why should he bother? He will be well out of it by then. BTW, I've ventured to spice this post up with a nice picture of a cloud from the Pictures blog by way of advertising.
Let's quote the Grauniad:
He said when the Kyoto protocol expires in 2012, the world would need a more sensitive framework for tackling global warming. "People fear some external force is going to impose some internal target on you ... to restrict your economic growth," he said. "I think in the world after 2012 we need to find a better, more sensitive set of mechanisms to deal with this problem." His words come in the build-up to UN talks in Montreal this month on how to combat global warming after Kyoto. "The blunt truth about the politics of climate change is that no country will want to sacrifice its economy in order to meet this challenge," he said.
Well, so far so politics so who cares? FOE do (different article):
Tony Juniper, director of Friends of the Earth, said: "We need to understand immediately what he means by that. His role at the moment is pivotal. He's the only world leader who's pushing climate change as an issue that has to be dealt with. So what he says is going to carry particular weight and he's basically just rewritten the history of climate change politics."
First of all, the request for clarification is unlikely to be met: ambiguity is what is being aimed for. Second, if TB really is "the only world leader who's pushing climate change" then nothing will be done: one against so many obviously won't work. Third, I don't understand the re-writing bit: this is future not past.
The article continues...
Mr Blair has been seen as a strong supporter of the Kyoto protocol and was thought to be keen on working towards finding a successor to the treaty... As part of his support, the prime minister made tackling climate change his priority for the presidency of G8 and the EU this year, describing it as a greater threat to the world than terrorism.
Terrorism: did he? I thought that was Bob May. Although since terrorism kills so few people, ranking X above terrorism as a threat hardly says much about the importance of X. Anyway, this now lines up a possible explanation: that Blair is angling to lead the post-Kyoto organisation in retirement from being PM. Far-fetched perhaps. Anyway, although Blur has been seen as a Kyoto supporter, that's mostly rhetoric and practical action is thin (Stoat passim). Nothing came out of G8 (Stoat passim). We (the UK) have Kyoto targets that look unattainable and voluntary additional targets (reaffirmed in the last election) that look even less attainable.
Second minister resigns for second time
David Blunkett has resigned again. So that's him *and* Mandelson who have resigned twice. Not bad for a government that promised to be whiter than white. The grauniad quotes Blur as saying: Mr Blunkett left office "with no stain of impropriety against him whatsoever" which is... err... why he resigned, of course (just like Mandelson). And in a stunning piece of irrelevance Blunkett apparently said having investments and holding shares in modern Britain is not a crime. Stuff like that makes it hard to be sympathetic.
But... little sympathy as I have, the "crimes" here seem to be far less than those of people in Bush's administration. And the witch hunting (was it?) bears a certain resemblance to the M&M stuff.
Meanwhile (and probably more importantly) the terrible terrorism bill goes through by one vote (actually its not through yet: the even more controversial detention without trial for 3 months is yet to come and may well go down, hurrah).
Coming soon: Blair on climate change.
2005-11-01
The Big Picture
It's become pretty clear that many people are losing sight of the wood for the trees, or even the twigs, in the latest rounds of the Hockey Stick Wars. Fortunately some of the more intelligent watchers of the debates have realised they need help. So here it is.
What is the Big Picture? From the point of view of climate change, the top level is
The world is getting warmer, we're causing it, and it will continue to get warmer in the future. This is pretty well universally agreed on now.
Going down a level, the point at issue is then the various palaeoclimatic reconstructions of the [[temperature record of the past thousand years]] (or, now, two thousand). Here the important point is ...the increase in temperature in the 20th century is likely to have been the largest of any century during the past 1,000 years... and so on: which you'll doubtless recognise as a quote from the TAR. But more than that, all the headline points that the TAR made about the MBH record it used are true of all the other reconstructions too. So all the nonsense about whether the fall of the Hockey Stick would disprove global warming or whatever is just nonsense. Because there is plenty of backup. The other point that the septics do their best to push is the idea that all the attribution of climate change arises from the palaeo reconstructions. That too is nonsense, & discussed at RC. Or just read the TAR.
Going another level down, we come to the various arguments about the details of the Hockey Stick. That's the level of the recent RC post Hockey sticks: Round 27, where we discuss two recent GRL papers. This is interesting stuff - if you're keen on statistics. If you're not, and you're baffled by the claims and counter claims, then you have two options: hop back up a level, because you've got to a level too specialised for your understanding; or improve your understanding. Don't misunderstand me: there is a lot of interesting work to be done at this level. There are, as shown by the graph, a whole pile of records that agree on the main points but disagree in detail. Resolving this is an active and valuable area of research. If you're interested in policy, though, you've gone too far down. Go back.
Some people think that the debate over the so-called "hockey stick" temperature reconstruction is a distraction from the development and promulgation of climate policy. And I agree (though I would replace "policy" with "science" cos I'm more interested in the science). And this is what we've been saying in the recent comments at RC. So if anyone were, hypothetically, to enquire why *others* should continue to care about it... Why is this fight important to the rest of us? the answer is: you shouldn't. It isn't. There: that was easy.
Oops: I forgot something and blew my dramatic ending. Sigh. There is (yet another) odd inversion about: the idea that if we were to switch from, say, MBH (less variance) to Moberg (more) that would somehow imply a reduction in expected future warming. That is completely wrong. If the past temperatures varied more, it implies a *higher* sensitivity to forcing, and therefore a *higher* future change.
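A zero-dimensional caricature of that argument (all numbers invented purely for illustration): if past temperature swings DT were driven by some known past forcing F, the implied sensitivity is DT/F, and future warming for a given future forcing scales with it.

```python
def implied_sensitivity(past_dT, past_F):
    # K per W/m^2 inferred from a reconstruction's variance
    return past_dT / past_F

def future_warming(lam, future_F=3.7):
    # 3.7 W/m^2 is the canonical forcing for doubled CO2
    return lam * future_F

lam_small = implied_sensitivity(0.4, 1.0)   # MBH-like: small past variance
lam_large = implied_sensitivity(0.8, 1.0)   # Moberg-like: larger variance
# larger past variance -> larger implied sensitivity -> *more* future warming
```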
[Updated to fix broken href; nothing new to see; move along now folks... :-)]
2005-10-31
Our pumpkin
2005-10-30
In praise of moderation
One of the reasons the various blog fora are valuable is because they moderate comments: which is to say, some are deleted/not posted. The obvious advantage is that this keeps out spam from the porno and online poker folk, rude and abusive nonsense, and the nutters. The obvious disadvantage is that none of the nutters think they are nutters and tend to whine about censorship. This is difficult: no-one is going to spend time and effort writing interesting comments if they think they will be blocked. But my spam/trolling/incivility threshold may be different to yours.
To use an example that's deliberately not the one you're thinking of: over at RC recently, one John Dodds has been pushing his own wacky theories of the GHE (or lack thereof...); see comments on this thread - 16, 18, etc. RC has probably been too gentle in handling him (nonetheless we (I) got bored in the end). The problem with that stuff is that it disrupts the flow of sense.
James Annan suggests posting to sci.environment. If you have no proper newsfeed, go to groups.google.com and it's fairly easy. Sci.env has the advantage (?) of being unmoderated, and of course it could aggregate stuff across many blogs, instead of the balkanised blogscape that exists. Why not try it? If your comments here are sufficiently valuable, post them there too, and watch them drown under a sea of junk. An interesting reverse experiment is being tried by mt, who has tried copying a posting from sci.env to his blog to try to have a more sensible debate there, but it doesn't seem to be working.
Refs
* Your comment was denied for questionable content...
* “like being inside Hansens head”
* Moderation at The Conversation - ATTP
2005-10-29
Shaken by Tossers. Or not.
Eli Rabbett is laying into Taken by Storm. He has a link to their "briefing" if you really want to read it (I found this via Tim Lambert, who provides you with a link to his earlier demolition).
But what I wanted to discuss was not their arguments, but why they have been totally ignored. And they have been. Not even the knee-jerk septics have done more than nod in their direction.
Firstly, because their idea that global mean temperature is meaningless is such obvious pap that anyone can see that.
Secondly, because their arguments to try to prop up this assertion are sufficiently convoluted that there is not the slightest chance of non-specialists understanding them.
The latter I think is important. If you want to sound off as a GW septic, you need a good sound bite that appears to make sense. Waving your glass at a party saying "of course, global temperature doesn't exist, because..." and then trying to spout pages of gobbledegook that you can't remember is just going to make you sound like a fool.
The obvious contrast is with the M&M attack on the MBH hockey stick stuff. Here, instead of trying to replace the bleedin' obvious with some subtle convolutions, they are trying to attack a fairly subtle technique and pick nits in it. This is far easier to sell, and it has.
2005-10-28
Exxonmobil tittle-tattle: www.europeanvoice.com
EuropeanVoice (which appears to be a business of The Economist) is running a meeting "Climate Change Now: what can Europe deliver?". The blurb sayeth:
With climate change at the top of the G8 and UK Presidency agendas, how can real progress be made in achieving the necessary reductions in CO² [sic] emissions? What can governments and industry do now to deliver cleaner solutions to our energy and transport needs?
The most recent scientific assessments suggest that the climate is changing even faster than previously thought and the pressure is even greater to develop and deliver new technologies which can dramatically reduce greenhouse gases now and in the short- to medium-term.
What more should governments be doing through fiscal or other financial measures to support R&D and innovation and help industry boost the process?
In the run-up to the COP-11 Kyoto talks in Montreal, this conference aims to assess the EU's key objectives going into the talks and policy-makers' and industry's response to the challenges.
Nothing especially weird there (though t' Economist would normally downplay "recent scientific assessments suggest that the climate is changing even faster than previously thought and the pressure is even greater to develop and deliver new technologies") until you notice that the sponsor is... Exxonmobil. Exxon used to be heavily anti-GW (this from 2000 is distinctly mendacious, but still, I must admit, on the cautious side); now it's hard to find their views. This from 2003 pushes uncertainty and says nothing really about the state of the science; not much change from 2001. And by 2005 nothing much has changed: they have no position at all on whether the world has warmed or how much it might in the future.
Perhaps they are dipping their toe in the water... to see how warm it is?
2005-10-26
Butterflies: notes for a post
[Updated: see end]
This is more in the nature of notes for a proper post, but I'll put it here. If you want to read some sense, check James Annan: still flapping. If you want some nonsense, then follow the link therein to RP :-)
But first: something completely different:
So, I took HadAM3 (64 bit version) and did two runs: one standard, and one where I perturbed the surface pressure at a single grid point (I forget where: somewhere in the Arctic I think) by a tiny amount (1e-10, or was it 1e-12?).
The graph below shows the growth in global-mean area-weighted RMS of the difference of the MSLP field between the two runs:
The x-axis is time in days (48 timesteps per day): 0-5 (top); then 0-15 (mid); then 0-89 (bottom). By day about 25, and definitely by day 60, the difference between the runs has saturated: their weather is totally different, so no further growth in RMS occurs. The 5-10 day oscillations in the last month are, I think, what you'd expect to see from weather evolution.
The y-axis is in Pascals. 100 Pascals is an hPa, ie 1 mb. Standard weather charts tend to plot pressure in contours of 4 hPa, so in real weather terms the diffs are sort-of negligible out to about day 15 (although this is a global value, so locally there will probably be bigger values).
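For the curious, "global-mean area-weighted RMS" is nothing fancy. Here's a sketch, assuming the usual cos(latitude) weighting on a regular lat-lon grid (the model's actual diagnostics may differ in detail; the grid and pressure values below are made up):

```python
import numpy as np

# Area-weighted global RMS difference of two fields on a regular lat-lon
# grid, weighting each row by cos(latitude) -- a common convention.

def global_rms_diff(field_a, field_b, lats_deg):
    """RMS of (field_a - field_b), with rows weighted by cos(latitude)."""
    weights = np.cos(np.deg2rad(lats_deg))[:, None] * np.ones_like(field_a)
    sq_diff = (field_a - field_b) ** 2
    return np.sqrt((weights * sq_diff).sum() / weights.sum())

# Tiny example: two 3x4 "pressure" fields (Pa) differing by 2 Pa everywhere.
lats = np.array([-60.0, 0.0, 60.0])
a = np.full((3, 4), 101325.0)
b = a + 2.0
print(global_rms_diff(a, b, lats))  # 2.0 -- uniform diff, so weights cancel
```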
There are clearly different phases in the difference growth. After day 25-ish there is a slow rise to saturation. From day about 10 to about 25 there is exponential growth. From day 2 to 10 ish there isn't much growth. And from the start to about day 1 there is another exponential phase.
If you look carefully, you'll see that the diff appears to be zero for the first few timesteps. This isn't really true: the model output (as opposed to its internal variables) has been converted from 64- to 32-bit floats, and the difference is identically zero at 32 bits (but not 64).
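You can see this effect in a couple of lines of numpy (the 101325 Pa base value is illustrative): a 1e-10 Pa perturbation on a ~1e5 Pa field survives at 64 bits, but is far below the ~7 significant decimal digits a 32-bit float can hold, so the truncated output diff comes out identically zero:

```python
import numpy as np

# A surface pressure of ~1e5 Pa perturbed by 1e-10 Pa: float64 resolves
# the difference, but float32 (~7 significant decimal digits) cannot,
# so the 32-bit model *output* shows an exactly-zero diff at first.
p = 101325.0
pert = 1e-10

diff64 = np.float64(p + pert) - np.float64(p)
diff32 = np.float32(p + pert) - np.float32(p)

print(diff64)  # tiny but nonzero
print(diff32)  # 0.0 -- the perturbation is below float32 resolution
```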
Meanwhile, it's interesting to look at the pattern of the diffs.
The top pic is about day 4 (in the not-much-happening phase). The middle, day 15 (in the exp growth). The bottom, day 31 (saturated). Note that the pics have a different contour interval. By days 15/31 we're into "real meteorology" and hence the MSLP field is most different in the extratropics, as it always is (it's tropical dynamics, folks). The fact that the biggest diffs on day 4 are in the tropics (is this convection being jigged?) says we're in a different regime, but I'm a bit unsure.
So there we are for now. Over to you, James.
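If you want to play with the same growth-then-saturation behaviour without a GCM, a toy chaotic system will do. Here's a sketch using Lorenz-63 with forward Euler as a stand-in (textbook parameters; this is only playing, and says nothing about HadAM3 itself): perturb one variable by 1e-10, run twin integrations, and watch the RMS difference grow exponentially and then saturate once the "weather" has decorrelated.

```python
import numpy as np

# Twin-run divergence in a toy chaotic system (Lorenz-63, forward Euler),
# standing in for the HadAM3 experiment: a 1e-10 perturbation grows
# roughly exponentially, then saturates at the attractor scale.

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

a = np.array([1.0, 1.0, 20.0])
b = a.copy()
b[0] += 1e-10              # the tiny initial perturbation

diffs = []
for step in range(4000):   # 40 model time units
    a, b = lorenz_step(a), lorenz_step(b)
    diffs.append(np.sqrt(np.mean((a - b) ** 2)))

# Early values are tiny; late values sit at the saturated attractor scale.
print(diffs[500], diffs[1500], diffs[3500])
```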
[The original post was Oct 15; I've updated the timestamp.]
Update: I've been playing with the timestep-dependence of this. In the pic below, the black line (solid) is run "a" (std) minus run "b" (small pert); black dashed is a-c, where "c" is a different small pert. Blue is the same, but with all three runs done at 1/2 the timestep. Red also, but at 1/4 the timestep. The std timestep is half an hour. There are plenty of caveats here: firstly, this is really only playing. Secondly, all I did was change the value marked timestep (actually, the value marked "steps per period", from 48 to 96 to 192) without checking that anything was going hideously wrong elsewhere. Thirdly, if you compare this to the previous plot you'll notice that it's spikier: because it's 6-h data (instantaneous), not timestep data. Fourthly, even if nothing is going hideously wrong, changing the timestep does give a different model (is the climatology the same? I don't know. Maybe). For example, the atmosphere is fourier-filtered at the poles for CFL reasons, and a smaller timestep means less area is filtered.
Also, to answer JA's question: does the pert grow one-box-per-timestep (ie, unphysically)? Well, no. It grows faster than that, because, ha ha, the perturbation is within the filtering zone, so it grows into the entire northern polar cap within a few timesteps.
But: the plot below shows that the initial growth is *slower* with a smaller timestep (well, even this is complex: the 1/2 timestep run grows faster up to day 1; but then the "plateau" level it reaches between days 2 and 10 is lower; and the 1/4 timestep plateau is lower again). Does this mean that a very, very small timestep (which, arguably, physical reality has) would have a very low (zero?) plateau? I must try some even shorter timesteps. But then you end up in territory the model was never designed for, and it becomes rather dodgy.
Having 3 different runs at each timestep (more would be nice of course) allows you to see that during the day 15-30 growth phase, the timestep doesn't matter. But earlier, it clearly does.
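The timestep experiment can be mimicked in the same toy spirit: repeat the twin-run divergence test at dt, dt/2 and dt/4 over the same total integration time and compare the early growth. Again, this is only playing - a Lorenz-63 toy with forward Euler (my stand-in, not HadAM3) has no polar filtering or physics, so it can't settle the HadAM3 question:

```python
import numpy as np

# Twin-run divergence at several timesteps (same total time), in a toy
# Lorenz-63 system with forward Euler. Purely illustrative: the point is
# only that early difference growth can depend on the timestep.

def rms_diff_after(t_end, dt, pert=1e-10,
                   sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    def step(s):
        x, y, z = s
        return s + dt * np.array([sigma * (y - x), x * (rho - z),
                                  x * y - beta * z])
    a = np.array([1.0, 1.0, 20.0])
    b = a + np.array([pert, 0.0, 0.0])
    for _ in range(int(round(t_end / dt))):
        a, b = step(a), step(b)
    return np.sqrt(np.mean((a - b) ** 2))

# Early-phase diff at t = 5, for dt, dt/2, dt/4:
for dt in (0.01, 0.005, 0.0025):
    print(dt, rms_diff_after(5.0, dt))
```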
Curious. I wonder what it means...
ps: sorry James. I'll get back to it :-(