Tuesday, July 19, 2011

Under the Salt Lake City clouds for TeraGrid and OGF32

This week I'm at TeraGrid '11 and OGF32 in Salt Lake City. Living in the Netherlands, I thought I was used to flat, open landscapes, but the vista here is on a completely different scale.

Flying in on Sunday after a dash up from the South of France on Saturday, I unfortunately missed a lot of OGF32, but was able to join the workshop on Science Agency uses of Clouds and Grids on Monday. It's been a pretty intensive day, with 21 different snapshot presentations, so it's a difficult workshop to summarise (particularly when my body clock now thinks it's roughly Tuesday). So here are a few snapshots that I picked up on in between caffeine hits during the breaks...

David Wallom of Oxford University updated us on the SIENA roadmap effort, and pointed out a key quote for the standards community from Neelie Kroes, VP of the European Commission:

"International standardisation efforts will also have a huge impact on cloud computing. Open specifications are key in creating competitive and flourishing markets that deliver what customers need. Europe can play a big role here – building on, for example, the SIENA initiative and its development of a 'standardisation roadmap for clouds and grids for e-Science and beyond'."

SIENA has surveyed the standards work being done by the various DCI projects, and is now working on a gap analysis. Their roadmap to interoperable infrastructures is available to download.

Ruth Pordes of Open Science Grid introduced OSG and outlined their Virtual Organisation structure - I was intrigued to hear about some of their multi-disciplinary VOs, which seem to be a growing trend. OSG is also considering how the cyberinfrastructure landscape will change now that XSEDE, under NSF's XD programme, is replacing TeraGrid. Could they have a role as cloud brokers?

EGI's Steven Newhouse talked about the federation of virtualised resources from an EGI context. Discussions are now focusing on key usage scenarios: running a predefined VM image; running "my" VM image (with the user's own data); deciding which virtualised resource to use; and managing accounting across resource providers, including monitoring the reliability/availability of these resources and notification of VM state changes.

Daniel Katz of the University of Chicago gave us a rundown of the open challenges for production DCIs. The goal is to deliver maximum science, but the discussion is always around sustainability. We need to achieve useful work, but ideally with someone else paying for it! Another issue is how we can measure delivered science: we can track papers and citations, but these are blunt instruments for measuring impact. A further challenge is to develop tools that allow the infrastructure to deliver maximum science. Currently we do this well on a case-by-case basis, but offering scientists an off-the-shelf set of interoperable tools is still a bit of a dream.

Kate Keahey, Argonne National Laboratory, showed us the Nimbus cloud project, which is working with hybrid clouds, i.e. combinations of private, community and public clouds. Nimbus allows users to build turnkey dynamic virtual clusters on these resources, and to try out applications that don't work on the grid, such as very complex, non-portable software. According to Kate, cloud outsourcing is now no longer a choice. The benefits of clouds are their economies of scale, flexible access to different resources and lack of operational overheads, but before picking a cloud you have to consider a host of factors - is it scalable, easy to use, cost effective?

That said, clouds are definitely changing the patterns of how people work.

The TeraGrid '11 event proper starts tomorrow. The hot topic in the US at the moment is the transformation of NSF's TeraGrid program, which has provided cyberinfrastructure resources to the research community for more than 10 years, into XSEDE - the Extreme Science and Engineering Discovery Environment.

More tomorrow (or whatever day it is where you are!)