Tuesday, September 28, 2010

Turning your brain blue at ICT2010

In the last 60 years, computing power has increased a million-billion-fold. By 2040, never mind exascale computing (10 to the power 18 operations per second), we'll be up at the yottascale, 10 to the power 24. Trillions of 0s and 1s are generated every day. Built with existing technology, an exascale computer would draw 3 gigawatts – the aim is to use just 20 MW.
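To put numbers on that gap, here's a quick back-of-envelope calculation of my own (in Python, just restating the figures above):

```python
# Quick arithmetic on the figures above (illustrative only).
exa, yotta = 10**18, 10**24
print(f"yottascale is {yotta // exa:,}x exascale")        # 1,000,000x

gigawatts_today, megawatts_goal = 3e9, 20e6               # in watts
print(f"efficiency gap to close: {gigawatts_today / megawatts_goal:.0f}x")  # 150x
```

So the power budget has to improve by a factor of 150 even as the machines themselves get a million times bigger.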
Henry Markram of the imaginatively named Blue Brain project at EPFL thinks we should use our brains to tackle this problem – not just to think the issues through, but literally to make computers more brain-like. We already expect our interactions with computers to be as human-like as possible, so it's a natural direction to take.
Despite its massive processing power, the brain uses about as much energy as a weak light bulb – just 30 watts. The brain's CPUs are its neurons – and neurons are in fact slow by comparison, a billion times slower than a processor. They are, however, hugely parallel, with trillions of connections, and the brain shuts down any parts you're not using to save energy, reacting in milliseconds.
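For a sense of scale, here's some rough arithmetic of my own – the connection count is a standard textbook estimate, not a figure from the talk:

```python
# Rough arithmetic, not figures from the talk: ~1e14 connections (synapses)
# is a standard textbook estimate. "A billion times slower" than a ~1 GHz
# processor puts a neuron at roughly one event per second.
connections   = 1e14   # textbook estimate (assumption)
events_per_hz = 1.0    # ~1 Hz per connection, per the "billion times slower" figure
brain_watts   = 30.0

events_per_second = connections * events_per_hz          # ~1e14 events/s
print(f"~{events_per_second:.0e} events/s on {brain_watts:.0f} W")
print(f"= {events_per_second / brain_watts:.1e} events per joule")
```

Slow parts, massive parallelism, tiny power bill – that's the trick chip designers would love to copy.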
These are all principles that chip designers can take on board: slower individual chips, but vastly increased numbers of them. The problem is that if you put millions of chips together, the potential for failures and errors skyrockets. Your brain can gracefully lose 10,000 neurons a day without batting a metaphorical eyelid – in theory, you get wiser as you get older. Unlike chips, neurons are diverse and heterogeneous; they match the solution to the problem. They also have many, many connections to each other, so there is no single point of failure. As neurons deteriorate, the signal weakens but doesn't disappear altogether.
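Here's a toy sketch of that graceful degradation (my illustration, nothing to do with actual Blue Brain code): read a signal out over thousands of redundant connections and watch what happens as connections die.

```python
import random

# Toy sketch: a "signal" carried by many redundant, heterogeneous connections.
# Killing connections weakens the output gradually; there is no single point
# of failure and no hard cliff.
random.seed(0)
N = 10_000
weights = [random.uniform(0.5, 1.5) for _ in range(N)]   # diverse units
baseline = sum(weights)

for dead_fraction in (0.0, 0.1, 0.3, 0.5):
    alive = [w for w in weights if random.random() > dead_fraction]
    signal = sum(alive) / baseline
    print(f"{dead_fraction:4.0%} of connections lost -> signal at {signal:.0%}")
```

Lose 10% of the connections and the signal dims by roughly 10% – it never simply vanishes, which is exactly what a million-chip machine would need.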
Neurons also have lessons to teach us about memory and storage – information is fragmented and distributed across millions of them; no one neuron holds the key to your most treasured memory. Each one holds tiny fragments of many memories. The secret is in combining these back together – and the brain reuses old memories to store new ones. If you've already stored 'a' and 't', you don't need to store them again – just reuse them.
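In software terms this looks a lot like deduplication. A toy sketch (my analogy, not the Blue Brain model): store memories as references into a shared pool of fragments, so new memories reuse fragments that are already there.

```python
# Toy deduplicated "memory": each memory is a list of references into a
# shared fragment pool, so fragments already stored ('a', 't', ...) are
# reused rather than stored again.
pool = {}      # fragment -> id
memories = {}  # memory name -> list of fragment ids

def store(name, fragments):
    memories[name] = [pool.setdefault(f, len(pool)) for f in fragments]

def recall(name):
    by_id = {i: f for f, i in pool.items()}
    return "".join(by_id[i] for i in memories[name])

store("cat", "cat")
store("tact", "tact")                 # reuses 'c', 'a', 't'; stores nothing new
print(len(pool), "fragments shared by", len(memories), "memories")  # 3 ... 2
print(recall("tact"))                 # "tact", rebuilt from shared fragments
```

Recall is then a matter of recombining shared pieces – which is also why no single fragment going missing destroys a whole memory.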
As anyone who has ever waited for a big file to download will know, we also have a bandwidth problem – you can't cheat; you have to transfer all the data. The brain gets round this by ignoring large chunks of the data your senses absorb every day – it actually only sends you hints. As the receiver of the signals from your eyes and ears, you may think you're getting the whole picture, but your brain imagines most of what you experience as the world around you. Every now and then it picks up hints to check that this imaginary version bears some relationship to reality – the rest is millions of years of evolution, plus your own experiential learning.
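This is essentially what predictive coding schemes (and video codecs) do. A toy sketch, again mine and purely illustrative: the receiver keeps its own model of the signal, and the sender only transmits "hints" where reality diverges from that model.

```python
# Toy "send only hints" scheme: the receiver's model here is simply the
# previous frame; only samples that differ from the model get transmitted.
def transmit(frames, threshold=1.0):
    model = list(frames[0])
    yield ("full", model[:])              # bootstrap: send the first frame whole
    for frame in frames[1:]:
        hints = [(i, v) for i, v in enumerate(frame)
                 if abs(v - model[i]) > threshold]
        for i, v in hints:                # both ends apply the same update
            model[i] = v
        yield ("hints", hints)

frames = [[0, 0, 0, 0],
          [0, 0, 5, 0],
          [0, 0, 5, 0],
          [9, 0, 5, 0]]
for kind, payload in transmit(frames):
    print(kind, payload)
# full [0, 0, 0, 0]
# hints [(2, 5)]   <- only the change crosses the wire
# hints []         <- nothing changed, nothing sent
# hints [(0, 9)]
```

Most of the time nothing, or almost nothing, needs to be sent – the model fills in the rest, just as your brain does.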
If computers could visualise data at the same time as processing it, as the brain does, some exciting possibilities could emerge: interactive holographic environments, real and imagined worlds for business, research or entertainment. With a better understanding of how the brain pulls off this trick, truly intelligent robots could become possible – able to learn, adapt, and develop behaviour and cognition.
As panel member Professor John Wood said in reaction to this presentation – spooky!