
Friday, October 19, 2012

Multiscale modelling at the peta and exascale


Fractals are famous for looking the same at all magnifications – as you zoom into the picture, it remains the same at smaller and smaller scales. Other systems in nature, such as materials for new electronic devices or blood vessels in the brain, are not so self-similar. If you delve into these at different scales, the picture can change dramatically. For example, to model a brain aneurysm, medics can image the topology of the swelling in the cerebral blood vessel in fine detail using HPC, move up a scale to set that in the context of the brain’s circulatory network to work out how to bypass the affected area, and finally look at the position of the aneurysm and its effect on the whole body. Similarly, there is a huge potential market for developing new organic electronic devices through multiscale modelling, expected to add up to 100 billion Euros by 2015. Solving these sorts of problems was the focus of a session on multiscale modelling using e-Infrastructures, led by Andrew Emerson of CINECA and the MMM@HPC project.

Nature tends to present us with a continuum of length and time scales – in practice, modelling work focuses on a discrete set of scales because researchers need to use a wide variety of application codes, and these have different scalability. Due to the size of the datasets and the complexity involved, multiscale modelling often takes researchers into the realm of petascale computing, i.e. high performance computers capable of one quadrillion floating point operations per second. These systems have very high parallelism, but power consumption and heat dissipation become important engineering constraints. Typically, petascale computers use a large number of low-power cores, or accelerators with high performance and low power consumption (e.g. GPUs), or a hybrid of both. This is what gives you the all-important high number of flops per watt.
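To get a feel for what flops per watt means in practice, here is a back-of-the-envelope sketch in Python. The peak performance and power figures are hypothetical, chosen purely to illustrate the arithmetic rather than to describe any particular machine.

```python
# Rough flops-per-watt arithmetic for a hypothetical petascale machine.
# Both numbers below are illustrative assumptions, not figures for any real system.

peak_flops = 1.0e15    # 1 petaflop/s = 10^15 floating point operations per second
power_watts = 2.0e6    # assume a 2 MW power draw, purely for illustration

efficiency = peak_flops / power_watts
print(f"Energy efficiency: {efficiency / 1e9:.1f} gigaflops per watt")
# -> 0.5 gigaflops per watt for these illustrative numbers
```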

The problem is that most of the codes stop scaling as communication between the cores becomes the rate-determining step. You also need to make sure that the codes you are using scale up to a high enough number of cores – to book time on a supercomputer there is often a minimum job size, in terms of core count, that you have to meet.
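To see why communication eventually dominates, the toy model below splits the runtime into a compute part that shrinks as cores are added and a communication part that grows with the core count. The constants are arbitrary illustrative choices, not measurements from any real code, but the shape of the curve is the point: past a certain core count the speedup stops improving and then falls.

```python
# Toy strong-scaling model: total time = compute time (shrinks with cores)
#                                      + communication time (grows with cores).
# The constants are hypothetical and only serve to show the trend.

def time_on_cores(cores, compute_work=1.0e4, comm_cost_per_core=1.0e-3):
    compute = compute_work / cores            # perfectly parallel part
    communicate = comm_cost_per_core * cores  # overhead that grows with core count
    return compute + communicate

for cores in (64, 256, 1024, 4096, 16384):
    t = time_on_cores(cores)
    speedup = time_on_cores(1) / t
    print(f"{cores:6d} cores: time {t:9.2f}, speedup {speedup:8.1f}x")
```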
The problems get even greater with exascale, which could lead to computers with the same power consumption as a small town. Projects like Mont-Blanc are focusing on using ARM processors to reduce power consumption, with the aim of using 30 times less power than current computers. EESI (the European Exascale Software Initiative) is bringing together industry and academia to drive the transition from petascale to exascale, as is the DEEP project (Dynamical Exascale Entry Platform).

Clearly this is an area where the multiscale modelling community will be keeping a close eye on developments. The case for exascale computing, rather than just more petascale machines, still needs to be made clear, and it will pose interesting questions for the community – will exascale merely give more capability to tackle larger data sets for similar problems? Or is there a whole new set of questions that exascale could answer, giving top-to-bottom answers to multiscale problems we haven't even thought of yet? And where might this take us in the future?
