EMI data systems roadmap

Monday, April 11, 2011

While, from the EGI side, this is the User Forum, focused on the end user (and how to support them), the EMI middleware folks have a few meetings co-located. I snuck into the session on the data side of EMI. The first talk was a good overview, so that's what I'm blogging about.
There are three storage engines within EMI: the Disk Pool Manager (DPM), dCache, and StoRM (Storage Resource Manager). DPM is deployed at most WLCG sites, and dCache stores most of the WLCG data. Both work with a set of disk servers, and dCache can drive tape robots too. StoRM is the new kid, snapping at the heels of the other two; it runs over a cluster filesystem such as Lustre or GPFS, which makes a set of disk servers look like a single large one.
In addition to the storage engines, there is the File Transfer Service (FTS), which moves data around in a controlled manner, and the AMGA metadata catalogue, to help find the interesting data.
We're expecting the first release from EMI shortly. There will be three main releases, one a year, over the next three years. EMI-1 is at the release-candidate stage, with the final release expected within a couple of weeks. EMI-2 and EMI-3 should follow in turn over the next couple of years, so these plans are a mixture of short-term and longer-term goals.
Much of the talk was really a slide or two about each of several different features that are in the works.
WebDAV gives end users a filesystem-like interface, and a client for it is built into most operating systems. dCache and StoRM should have this in EMI-1; DPM's is promised for EMI-3, maybe earlier.
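Just to show how little plumbing this needs: everything below is stock HTTP tooling. A minimal sketch, assuming a dCache-style WebDAV door; the endpoint URL, paths and credential locations are made-up examples.

```python
# Minimal sketch: talking WebDAV to a storage element with plain HTTP tooling.
# Endpoint, paths and credentials below are hypothetical examples.
import requests

ENDPOINT = "https://se.example.org:2880/data"   # hypothetical WebDAV door
PROXY = "/tmp/x509up_u1000"                     # proxy cert+key in one PEM file
CA_PATH = "/etc/grid-security/certificates"     # CA certificate directory

# PROPFIND is WebDAV's "list directory" verb; Depth: 1 means one level down.
resp = requests.request("PROPFIND", ENDPOINT + "/myvo/",
                        cert=PROXY, verify=CA_PATH,
                        headers={"Depth": "1"})
print(resp.status_code)   # 207 Multi-Status on success

# A plain GET fetches a file, exactly as from any web server.
data = requests.get(ENDPOINT + "/myvo/run42.root",
                    cert=PROXY, verify=CA_PATH).content
print(len(data), "bytes")
```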
Parallel NFS (pNFS). NFS is an old standby, but at large scale it has always been held back by going through a single server. Server redirection has been one of the key things needed to handle the flood of data at WLCG scale, and pNFS gives a standards-based way of doing that within a single site. And because it's POSIX, smart caching of the data by the OS comes for free. dCache has an implementation in EMI-1; StoRM doesn't implement it itself, so you get pNFS if the underlying filesystem supports it; and DPM will support it in EMI-2 (read and write are in beta at the moment).
This is proper cutting-edge stuff: kernel support is needed, so it takes a special kernel, and maintaining that is tricky. Unless some vendor backports it, it might take until SL6.2 for it to be ubiquitous.
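The nice bit is what application code looks like once a site does have it mounted: nothing special at all. A trivial sketch, assuming a hypothetical mount point:

```python
# Once pNFS is mounted, access is plain POSIX I/O -- the kernel handles the
# parallel data paths and the caching. The path below is a made-up example.
import os

path = "/pnfs/example.org/data/myvo/run42.root"

st = os.stat(path)            # ordinary metadata call
print("size:", st.st_size)

with open(path, "rb") as f:   # ordinary read; the data may actually be
    header = f.read(4096)     # striped across several disk servers
print("first bytes:", header[:16].hex())
```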
Replacing Globus GSI with conventional SSLv3. GSI couples the transfer with the delegation, which has both advantages and disadvantages. With SSLv3, delegation has to be handled explicitly. However, the payback from standards is large, including esoteric things like hardware acceleration of the crypto, and more direct benefits like multiple client implementations and easier development thanks to existing toolkits.
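To illustrate the toolkit point, here's a minimal sketch of opening a secured connection with nothing but Python's stock ssl module. Host and credential paths are made up; and note that where GSI would carry the delegation inside this handshake, here it would have to be a separate, explicit step afterwards.

```python
# Minimal sketch: a client connection over the standard SSL/TLS stack rather
# than GSI. Host, port and credential paths are hypothetical examples.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_verify_locations(capath="/etc/grid-security/certificates")
ctx.load_cert_chain(certfile="/tmp/x509up_u1000")  # proxy: cert+key, one file

with socket.create_connection(("se.example.org", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="se.example.org") as tls:
        print(tls.version(), tls.cipher())
        # With GSI, proxy delegation happens as part of the handshake.
        # With plain SSL/TLS it must be a separate protocol step on top.
```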
Storage accounting is, in principle, straightforward: who is using what on a storage element, reported in a consistent and useful way. We haven't had this until now, but it's needed as utilisation rises closer to capacity, at which point a site needs to know who's using what.
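For flavour, here's a hypothetical sketch of the sort of record a storage element would need to emit. The field names are my own invention, not any agreed format (as the next paragraph notes, there is no standard yet).

```python
# Hypothetical storage-accounting record: who owns how much, where, and when.
# All field names here are illustrative; no standard format exists yet.
import json
import time

record = {
    "site": "UKI-EXAMPLE",                        # made-up site name
    "storage_system": "dpm",                      # which storage engine
    "group": "myvo",                              # VO owning the space
    "user": "/DC=org/DC=example/CN=Some User",    # made-up user DN
    "bytes_used": 123456789012,                   # capacity consumed
    "file_count": 48210,                          # number of files stored
    "measure_time": int(time.time()),             # when this was sampled
}
print(json.dumps(record, indent=2))
```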
A couple of key areas have no standard: delegation with SSLv3, and storage accounting. So EMI is working with the OGF to build standards in these areas.
Argus is the EMI security policy mechanism, which can do lots of complicated and interesting things in a way that is easy to use. Argus integration is starting with blacklisting, to ban a user from a site; that way a single action can apply across all the storage and compute resources at the site.
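Conceptually it's one central decision point that every service at the site consults, instead of per-service ban lists. A toy sketch of that shape only; this is not the real Argus client API (Argus speaks XACML via a PEP daemon):

```python
# Toy sketch of a central authorisation point -- NOT the real Argus API.
# One entry added to the central list bans the user from every service.
BANNED_SUBJECTS = {
    "/DC=org/DC=example/CN=Compromised User",   # hypothetical banned DN
}

def is_permitted(subject_dn: str, action: str, resource: str) -> bool:
    """Every storage and compute service asks this one question."""
    if subject_dn in BANNED_SUBJECTS:
        return False
    return True   # default-permit, just to keep the toy example small

print(is_permitted("/DC=org/DC=example/CN=Compromised User",
                   "read", "srm://se.example.org/myvo/file"))   # False
```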
Data client libraries: gLite and ARC have similar libraries, so these are merging, to reduce the maintenance overhead.
Glue 2.0 migration. EMI-1 will publish the Glue 1.3 subset of data, but in Glue 2.0 form (i.e. the move has begun). EMI-2 will have the full Glue 2.0 stack.
One of the more interesting points was around the catalogue synchronisation problem. Applications write to storage and then, separately, tell the catalogue about it. This is not atomic, so there needs to be a tighter coupling between the SE and the catalogue. The approach taken is message passing, so that the LFC can listen to events from the SE. That way, when something changes on the SE, or it notices a problem, the catalogue can update its records. Other catalogues can also listen in, for those user communities that use them.
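A toy sketch of that idea, with an in-process queue standing in for a real message broker; the event fields are illustrative, not whatever format the SEs actually emit:

```python
# Toy sketch of SE -> catalogue messaging. An in-process queue stands in for
# a real message broker, and the event fields are made-up examples.
import queue

events = queue.Queue()

# Storage element side: announce each namespace change as an event.
events.put({"type": "put",
            "lfn": "/myvo/run42.root",
            "replica": "srm://se.example.org/myvo/run42.root"})

# Catalogue side: consume events and patch the records to match reality.
catalogue = {}   # lfn -> list of replica URLs
while not events.empty():
    ev = events.get()
    replicas = catalogue.setdefault(ev["lfn"], [])
    if ev["type"] == "put":
        replicas.append(ev["replica"])
    elif ev["type"] == "delete" and ev["replica"] in replicas:
        replicas.remove(ev["replica"])

print(catalogue)   # {'/myvo/run42.root': ['srm://se.example.org/...']}
```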
Phew! That's a lot of ground covered; but it's good to get a feel for where the tools are heading.
1 comment:
Thanks for this update. It's extremely useful to know what the EMI plans are for data management. We're planning a data curation initiative here in SA. Are we going to see something like GFAL becoming more standard with its merge into whatever ARC did/does?