Wrapping up the e-Infrastructures and climate change conference

Thursday, May 19, 2011

Coffee’s been drunk, interviews have been filmed and much has been discussed over a beer (or two), but where have our few days in Trieste left us? Filippo Giorgi, Head of Earth System Physics at ICTP, helpfully wrapped up the meeting for us in the final plenary session.
“Why do we need large infrastructures for climate change?” Giorgi asked the assembled room. You can read this question two ways, of course: what kind of things do we want them to do, and, more fundamentally, do we need them at all for this kind of work?
To answer both of these questions, he took a look back over the previous sessions and reminded us that the Earth is one of the most complex systems in nature – probably exceeded in complexity only by the human brain. Climate can be influenced by factors that are human in origin, such as aerosols, greenhouse gases and changes in land use, but also by events that are entirely natural, including volcanic eruptions and solar variation.
Back in the 70s, when I first (dimly!) remember ‘global warming’ making the news, global models for climate change considered only a few elements, such as carbon dioxide and rainfall. Today these models include a dizzying range of interconnecting inputs, including interactive vegetation, sulphates, rivers, air chemistry and many more. To double the resolution of a model you need roughly ten times the computing power, and according to Moore’s Law that kind of increase only comes along every 5 or 6 years.
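To see where that five-or-six-year figure comes from, here’s a back-of-the-envelope sketch. It takes the factor of ten per resolution doubling as given from the talk; the 18-month (or 2-year) doubling time for computing power is my own reading of Moore’s Law, not something quoted in the session.

```python
import math

# Rule of thumb from the talk: doubling a model's resolution needs
# roughly ten times the computing power (more grid cells in each
# direction, plus a shorter timestep).
compute_factor_per_doubling = 10

# Assumed Moore's Law doubling time for computing power, in years
# (18 months is one common reading; 2 years is another).
for doubling_time in (1.5, 2.0):
    years_needed = math.log2(compute_factor_per_doubling) * doubling_time
    print(f"Doubling time {doubling_time} yr -> "
          f"~{years_needed:.1f} years per resolution doubling")

# Prints roughly 5 years (18-month doubling) or 6.6 years (2-year doubling),
# which is where the '5 or 6 years' figure comes from.
```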
As we heard from the Africa Adaptation Programme yesterday, it’s extreme events such as floods, droughts and so on that cause the most damage in both human and economic terms. So for climate scientists, it’s important to understand the ‘long tail’, or what happens at the fringes of normal climate behaviour – this is why increasing the resolution of your models at a local level is so crucial.
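To make that resolution point concrete, here’s a toy illustration of my own (not something shown in the session): a synthetic, long-tailed ‘rainfall’ field averaged onto a coarser grid loses exactly the local extremes that matter.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 'rainfall' field on a fine grid: mostly light values with a few
# heavy, localised downpours (a long-tailed distribution).
fine = rng.gamma(shape=0.5, scale=10.0, size=(128, 128))

# Coarse-grain by averaging 8x8 blocks, mimicking a lower-resolution model
# that only sees grid-box means.
coarse = fine.reshape(16, 8, 16, 8).mean(axis=(1, 3))

print(f"Fine-grid maximum:   {fine.max():.1f}")
print(f"Coarse-grid maximum: {coarse.max():.1f}")
# The coarse grid's extremes are much weaker: averaging smooths away
# exactly the local 'long tail' events that cause the damage.
```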
For politicians, as anyone who followed the media storm around the COP events in Copenhagen and Cancun will remember, it’s the uncertainty of these models that causes the problems. Small changes in the assumptions you make before you run a simulation can lead to huge discrepancies in the predictions for the years ahead. And there are also competing models, so not every climate scientist will arrive at the same answer from the same starting point.
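Here’s a toy example, again mine rather than anything presented in Trieste, of how a tiny change in starting assumptions can snowball in a simple nonlinear system:

```python
# Toy illustration (not a climate model): a simple nonlinear system run
# twice with almost identical starting assumptions.
def run(x0, r=3.9, steps=50):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)   # logistic map, a classic chaotic system
    return x

a = run(0.500000)
b = run(0.500001)  # a change in the sixth decimal place
print(f"Run A: {a:.4f}")
print(f"Run B: {b:.4f}")
print(f"Difference after 50 steps: {abs(a - b):.4f}")
# The tiny difference in the input grows into a completely different answer,
# which is why ensembles of runs (and of models) are used to quantify
# uncertainty rather than relying on a single 'best' simulation.
```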
So to come back to the original question, why do we need e-Infrastructures? According to Giorgi, there are two ways of approaching the problem. You can effectively build an LHC for climate science, the ‘billion dollar’ approach: ultra-high-resolution models that consider a huge variety of factors but make predictions over shorter time scales. Or you can take a large multi-model approach, which gives you a lower-resolution but longer-term view using less intensive computing power. You also need to consider how to scale global models down to that crucial local level.
There is a range of platforms to choose from depending on the approach you take: earth simulators, volunteer computing such as climateprediction.net, the PRACE HPC network, or grid computing as offered by EGI and others. These are all still up for discussion, but Giorgi’s point was to make sure that you fit the computing to the question you want to answer, rather than make your models fit the ‘big iron’ you might happen to have to hand.
And finally, a cautionary note that is being heard across many areas of science at the moment – data. Climate science generates vast amounts of it. Where do you store it and how do you share it? To be meaningful, analysis needs common formats, agreed metadata, a common set of variables and visualisation tools. “The problem of data may be even larger than getting computing time,” warned Giorgi.
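As an aside, here’s a minimal sketch of what ‘common formats and agreed metadata’ can look like in practice. NetCDF files with CF-style attributes are one widely used choice in climate science; the talk itself didn’t prescribe any particular tooling, so treat the specifics below as my assumption.

```python
from netCDF4 import Dataset  # pip install netCDF4
import numpy as np

# Minimal sketch of a self-describing climate file: NetCDF with CF-style
# metadata (one common choice; not prescribed by the talk).
with Dataset("example_tas.nc", "w") as ds:
    ds.Conventions = "CF-1.6"
    ds.title = "Example near-surface air temperature series"

    ds.createDimension("time", 10)
    time = ds.createVariable("time", "f8", ("time",))
    time.units = "days since 2011-01-01"
    time.calendar = "standard"

    tas = ds.createVariable("tas", "f4", ("time",))
    tas.standard_name = "air_temperature"  # agreed variable name
    tas.units = "K"                        # agreed units

    time[:] = np.arange(10)
    tas[:] = 273.15 + np.random.default_rng(0).normal(15, 3, 10)

# Because the format, variable names and units are agreed up front, other
# groups' tools can read and compare the file without guessing what it holds.
```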