Grid computing - a newcomer's perspective
Friday, August 14, 2009

I’m somewhat of a newcomer to this grid-computing business. I arrived in Switzerland last month to spend a couple of months writing for International Science Grid This Week. I don’t know very much about computing (I’ve never tried to program anything in my life), and I only heard of ‘the grid’ when I learned about the internship!
But it sparked my interest, and I’m really glad to have the chance to be part of the GridTalk project. In my short time here I have become increasingly interested in grid computing (and clouds too), and I’m thrilled to be learning more every day about what it means for science. I’m a convert to all things grid!
I’m still getting my head round it all. I can’t remember any of the countless acronyms apart from EGEE, and that’s only because it’s written on every poster in the office (though the acronym soup on GridPP is a very handy resource!). I can’t quite pin down the difference between grids and clouds yet either, but the GridBriefing on the subject helps. In fact, I would recommend all the GridBriefings to any grid newcomer trying to get to grips with the main issues.
As well as being a great place to learn about grid computing, CERN is also a lot of fun. I went to visit the service cavern for the CMS experiment today. Unfortunately the detector itself was closed up, because they are currently running the CRAFT-09 (Cosmic Run At Four Tesla) test. The magnet itself is not switched on, but they are running CMS 24 hours a day for six weeks, using cosmic rays to test both the detectors themselves and the whole chain of data processing across the different grid sites. Two days out of every week are spent running CMS at 100 kHz to see how much improvement has been made in calibrating the detectors and the grid.
The service cavern was interesting enough to see in its own right, though. A friendly guy called Apollo showed me round all the servers and computers they have there, as well as the control room (I was pleased to see the emergency shutdown mechanism is actually a big red button) and the smaller control room down in the cavern itself, which they call the “counting room”, a name left over from the days of bubble chambers, when the data arrived slowly enough to be counted by hand.
The sheer number of servers needed is a remarkable sight. There is one rack devoted to safety precautions (detecting smoke, fire, gases etc.) based in the cavern itself, whilst an exact duplicate runs concurrently upstairs as a back-up. For the CMS detector itself there are two floors containing hundreds of dedicated servers, rack after rack with bundles of wires of every color on show. I expected all of the cables to be carefully organized and bundled together; some are, but others are as tangled as Christmas tree lights. Some of the servers have stickers proudly announcing which country they come from. It seems each server has someone assigned to look after it; Apollo showed us the three that are “his” (the cables in these were very tidy!).
You have to wear headphones if you are working down in the service cavern for a long time. The noise is loud but not unbearable; still, Apollo tells us that working in it tires you out very quickly. It’s also surprisingly cool (temperature-wise) down there.
Seeing the amount of computer hardware dedicated to a single LHC detector, at the Tier 0 site alone, really put the magnitude of grid computing into perspective. Until arriving here, I never really appreciated the importance of computing, both for the CERN experiments and for science more broadly.
I’ve only got to know it over the last couple of weeks, but I’m already convinced that the grid is very important, both now and for the future.
1 comment:
I wouldn't worry, Seth; I still have massive problems with all the acronyms! (I think most people do.) Thanks for the post. I hope you remembered to take a photo of yourself in a hard hat with the detector :)