Developing neuroscience knowledge with the Human Brain Project


ORIGINAL: Science Omega
by Richard Walker
02 July 2013
The Human Brain Project Spokesman Richard Walker tells Editor Lauren Smith why he believes the initiative is capable of producing important new basic science, enabling medical discoveries and allowing new technologies…

In January, the European Commission announced two EU Future and Emerging Technologies (FET) flagship programmes, each allocated €1bn in funding to drive forward radical scientific research across the continent and enable greater understanding of key elements in our society. One focus area was the wonder material graphene; the other the vast Human Brain Project (HBP), which hopes to gain profound insight into the nature of humanity, to develop new treatments for brain diseases and to progress revolutionary computing technologies.

In the first of a two-part special on the latter, Richard Walker of the HBP explains to Editor Lauren Smith the vast scope, expectations and hopes being placed on this initiative, looking this time particularly at the development of neuroscience understanding and of treatments for brain disease.

The HBP will build on, and subsume, what has already been learnt from the Blue Brain Project, which, like the HBP, is led by Henry Markram at the École polytechnique fédérale de Lausanne (EPFL) in Switzerland, and from other neuroscience-related models.

“The original idea did come through the Blue Brain Project,” Walker begins. “And the Blue Brain did in turn come from a wealth of experience in electrophysiology. Markram started specifically looking at this area in the 1990s and made many important discoveries. It became fairly obvious at that point that the quantity of experimental work necessary to really understand the brain was just enormous. Different routes were being taken by people working in different parts of the world, addressing different measures, looking at various problems and using various species. So putting all of this together to try and get a coherent picture was not possible.”

Markram considered the idea of using brain modelling and simulation as an integration tool: putting everything that is known into a model that represents the brain at different levels of detail, so that each new piece of knowledge acts as a new constraint on the model, and even the unknown parts become more tightly constrained as work goes on.

The flagship concept
This fledgling idea was developed through a Swiss national initiative from 2005 to 2011, with the goal of providing a proof of concept and illustrating the feasibility of the approach. The researchers involved built the basic tools needed to integrate data into biologically detailed models at multiple levels.

“In parallel,” Walker explains, “Europe was looking for new approaches to funding research. It came up with the flagship concept, to provide a very large amount of money for long-term, visionary research. Markram was part of developing the concept of these flagships, and so we saw the programme as a way of making a real leap in the scale of what we are doing.”

The bigger international undertaking that would become the HBP looks to expand beyond the rat brain that was the focus of the earlier initiative, to the scope of the human brain. “Whereas Blue Brain looked exclusively at neurons,” he says, “we wanted to get down to the molecular level, which is fundamental as this is where, for instance, disease happens. In doing that, we were also able to look at the applications of brain research. In the HBP, brain simulation is only one-third of it. The other two-thirds cover medical research – actually using our models to get new insight into brain diseases and how to treat them – and information technology – using our knowledge of the brain to build new computing technologies.”

The power of ICT
The first stage of the overall plan is to consolidate the massive volume of data that already exists in the relevant fields. Walker explains that the best way to do this is by exploiting the power of information and communications technology (ICT). The first step is to introduce six ICT platforms:

  • for neuroinformatics, 
  • brain simulation, 
  • high performance computing, 
  • medical informatics, 
  • neuromorphic computing, and 
  • neurorobotics.

“These will be tools that we can use, on the one hand to collect the data and on the other to put it into models, to provide the necessary computing power, to provide medical data (about the healthy brain), to feed into developing technologies and to add our brain models to robotics,” Walker says.

“Our plan is to build these platforms and then make them available to the scientific community. The first version of these platforms is due to be ready in month 30, and from then on scientists will be able to apply to use our platforms, just as you can apply for observation time at a telescope. Proposals will be peer reviewed and checked to make sure that they are feasible with the platform and cost-effective; then the successful groups will be able to use our platforms to do experiments. We will not tell scientists what research they should do with the instruments, but we will help them to do their research using our instruments.”

Once in place, it is anticipated that brain modelling will have a wide-reaching impact on pharmaceutical research and, in turn, support the development of new treatments for brain conditions over the decades to come.

“In our medical research, the first thing we want to do is federate data from hospitals that are participating in our study,” Walker outlines. “Today, if you have a brain scan the doctor will look at it, diagnose you, and the scan will then go into the hospital archive for 10 or 15 years before it is destroyed. No research is going to use that data, which is immensely valuable. This is a very bad use of taxpayers’ money.”

“What we would like to do is anonymise the data from each brain scan so that it cannot be traced back to the individual, and then mine the data for biological signatures of disease. So, if you have a particular form of Alzheimer’s, we could look at how your brain differs, compared with someone who has a different form of Alzheimer’s, or with someone who is healthy. That is very valuable in itself, as today it is very hard to objectively diagnose people with any brain disease.”
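The anonymise-then-mine workflow Walker describes can be sketched in a few lines. Everything below is a hypothetical illustration, not the HBP's actual pipeline: the record fields, the salt, and the toy "scan-derived features" are all invented for the example.

```python
import hashlib

def anonymise(record, salt="illustrative-salt"):
    """Replace direct identifiers with a one-way pseudonym.

    'patient_id' and the salt are hypothetical names for illustration;
    a real pipeline would follow the project's data-governance rules.
    """
    pseudonym = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()[:12]
    return {"pseudonym": pseudonym, "features": record["features"]}

def signature_distance(a, b):
    """Euclidean distance between two scan-derived feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Toy feature vectors (made-up units), e.g. regional volume and thickness.
records = [
    {"patient_id": "A-001", "features": [2.1, 3.0]},  # disease pattern 1
    {"patient_id": "A-002", "features": [2.0, 3.1]},  # disease pattern 1
    {"patient_id": "B-104", "features": [1.2, 2.2]},  # disease pattern 2
]
anon = [anonymise(r) for r in records]

# Grouping scans whose feature vectors lie close together is a crude
# stand-in for mining "biological signatures" of disease subtypes.
close = signature_distance(anon[0]["features"], anon[1]["features"])
far = signature_distance(anon[0]["features"], anon[2]["features"])
print(close < far)  # the two pattern-1 scans resemble each other more
```

In practice the feature extraction and clustering would be far richer, but the principle is the same: once identifiers are stripped, scans can be compared and grouped by objective measurements rather than by symptoms.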

Towards objective testing
As he explains, current diagnosis is usually done in terms of symptoms, but very often people with the same symptoms may have very different problems in their brains and, conversely, people with the same problem may present to a doctor with very different symptoms. It is hoped that, within two or three years of the project commencing, there should be sufficient data to start being able to distinguish between patients with objectively different diseases.

“This has a very practical implication for pharmaceutical companies,” Walker comments. “Today, clinical trials compare the impact of drugs in control subjects and in people who are sick. But you will actually find that a large number of the controls are sick but no one knows it, and a large proportion of the people who are sick are not sick with the disease that the pharmaceutical company thinks they have. Once we have objective tests, we can select the people to participate in clinical trials much better, making it more likely that we will have positive results from the trials and find more effective treatments. We might be able to re-purpose drugs that already exist, which is a very cheap solution because they are already known to be safe, or we can find new ones.”

Once these differences are better understood, the next step would be to model those variations and make comparisons with the model of the healthy brain. This should vastly enhance mechanistic understanding of the causes of brain disease – from environmental influences, to secondary effects, illness and drug use, for example.

“Untangling all of the potential causes is incredibly complicated,” he suggests. “If we’ve got a brain model, we can do experiments that we can’t do with real patients. You may suspect it’s a particular thing that’s had an effect, so if we take it out of the model, we can see if the disease goes away, or whether we have found a secondary effect that doesn’t matter so much. We can also use the models to get a handle on treatments. For treatment of an individual with a particular disease and symptoms, there are a huge number of treatment combinations available. Unless we have an indication that a certain option may work, it would be completely unethical to try this regime on patients, as it may be too dangerous. For many brain diseases we can’t do tests on animals, since we don’t have animals that suffer from delusions, for example. On the model, we can do it free of risk, so we will have a tool for the industry to test new treatments.”

No miracle cures
Walker is insistent that they are not promising miracle cures, highlighting that even if a new drug is found that appears to cure a major condition, such as Alzheimer’s, it would still take a decade or more for it to be used in a clinical setting, given the extensive amount of animal and human testing required before it is authorised for use. HBP is not looking to create a system to replace all of the existing processes, but to make them more effective, using simulation to drastically reduce the time and money that is wasted testing things that are never going to work.

The substantial funding that the project has been awarded, to the tune of €1bn over 10 years, reflects the economic and social expectations placed on the outcomes, but Walker feels it is reasonably proportional to what they are trying to achieve.

“Although certainly considerable, if you think of how much it costs to design a new car, which is somewhat simpler than designing a brain, our budget is not so huge,” he says. “A manufacturer can put a billion dollars simply into the design of a new engine. So we do need to keep things in perspective, but of course in terms of science funding this is extraordinarily large.”

“We believe that we are going to produce very important new basic science, enable medical discoveries and allow new technologies. So the impact is going to be very large, but we also want to warn people against excessive expectations. This will take a very long time – that is the nature of real scientific research. If you know that you’re going to have an impact in one or two years’ time, it’s not research, it is development!”

Risking failure
There are an immense number of variables involved in understanding brain function and translating it into computational models. As with any major project, there always remains a risk that the project's lofty aims could fail.

“If there is no risk of failure, it is not research,” Walker states. “In our research we are making a lot of hypotheses about how things function and we cannot guarantee that those hypotheses are right. We have to test them. Today, we have quite a good comprehension of some of the basic mechanics of the brain. We know a lot about how neurons and synapses function, how they change, and some of the basic mechanisms of learning. Cognitive neuroscientists know an awful lot about which areas of the brain light up when you do certain things, such as talking, moving and making decisions.”

“But there is a gap. We have very little understanding of how the low-level functioning of neurons (think of them as the transistors in a smartphone) links to high-level activity, like an app running on the device. We don’t understand that link, and we want to resolve that through modelling and simulation. We are going to model the brain circuitry and the neurons one by one, with each having an individual behaviour, and so on. What we don’t know today is how much detail is actually needed to reproduce brain function. The proof will come when we have a model with a certain level of detail that actually exhibits the function we are interested in. Until we reach that point, there is a risk that our hypotheses could indeed be wrong.”

As Walker concludes, people have been trying to understand how humans think and how the brain works for millennia. “We can’t promise that we’re going to succeed in all of this, but we do have a handle on how to do it. It is really amazing that this is no longer an unapproachable problem – it is a difficult problem, but we do begin to see how we could resolve it.”

In the next edition of Science Omega Review, we discuss this issue further, exploring the potential the project offers for supercomputing.
Richard Walker
Project Spokesman
Human Brain Project
www.humanbrainproject.eu

[This article was originally published on 1st July 2013 as part of Science Omega Review Europe 02]

Read more: http://www.scienceomega.com/article/1154/developing-neuroscience-knowledge-human-brain-project
