Category: Hippocampus


Scientists Just Found Evidence That Neurons Can Communicate in a Way We Never Anticipated

By Hugo Angel

Andrii Vodolazhskyi/Shutterstock.com

A new brain mechanism hiding in plain sight. Researchers have discovered a brand new mechanism that controls the way nerve cells in our brain communicate with each other to regulate learning and long-term memory.

The fact that a new brain mechanism has been hiding in plain sight is a reminder of how much we have yet to learn about how the human brain works, and what goes wrong in neurodegenerative disorders such as Alzheimer’s and epilepsy.

“These discoveries represent a significant advance and will have far-reaching implications for the understanding of 

  • memory, 
  • cognition, 
  • developmental plasticity, and 
  • neuronal network formation and stabilisation,”  

said lead researcher Jeremy Henley from the University of Bristol in the UK.

“We believe that this is a groundbreaking study that opens new lines of inquiry which will increase understanding of the molecular details of synaptic function in health and disease.”

The human brain contains around 100 billion nerve cells, and each of those makes about 10,000 connections – known as synapses – with other cells.

That’s a whole lot of connections, and each of them is strengthened or weakened depending on different brain mechanisms that scientists have spent decades trying to understand.
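As a quick sanity check on that scale, using the article's round figures (~100 billion neurons, ~10,000 synapses each):

```python
# Rough scale of the brain's wiring, using the article's round figures.
neurons = 100 * 10**9          # ~100 billion nerve cells
synapses_per_neuron = 10_000   # ~10,000 connections each
total_synapses = neurons * synapses_per_neuron
print(f"~{total_synapses:.0e} synapses")  # on the order of 10^15 (a quadrillion)
```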

Until now, one of the best-known mechanisms for increasing the strength of information flow across synapses was long-term potentiation, or LTP.

LTP intensifies the connection between cells to make information transfer more efficient, and it plays a role in a wide range of neurodegenerative conditions –  

  • too much LTP, and you risk disorders such as epilepsy,  
  • too little, and it could cause dementia or Alzheimer’s disease.

As far as researchers were aware, LTP was controlled by the activation of special proteins called NMDA receptors.

But now the UK team has discovered a brand new type of LTP that’s regulated in an entirely different way.

After investigating the formation of synapses in the lab, the team showed that this new LTP mechanism is controlled by molecules known as kainate receptors, instead of NMDA receptors.

“These data reveal a new and, to our knowledge, previously unsuspected role for postsynaptic kainate receptors in the induction of functional and structural plasticity in the hippocampus,” the researchers write in Nature Neuroscience.

This means we’ve now uncovered a previously unexplored mechanism that could control learning and memory.

“Untangling the interactions between the signal receptors in the brain not only tells us more about the inner workings of a healthy brain, but also provides a practical insight into what happens when we form new memories,” said one of the researchers, Milos Petrovic from the University of Central Lancashire.

“If we can preserve these signals, it may help protect against brain diseases.”

Not only does this open up a new research pathway that could lead to a better understanding of how our brains work, but if researchers can find a way to target these new pathways, it could lead to more effective treatments for a range of neurodegenerative disorders.

It’s still early days, and the discovery will now need to be verified by independent researchers, but it’s a promising new field of research.

“This is certainly an extremely exciting discovery and something that could potentially impact the global population,” said Petrovic.

The research has been published in Nature Neuroscience.

ORIGINAL: IFLScience

By FIONA MACDONALD
20 FEB 2017

First Human Tests of Memory Boosting Brain Implant—a Big Leap Forward

By Hugo Angel

“You have to begin to lose your memory, if only bits and pieces, to realize that memory is what makes our lives. Life without memory is no life at all.” — Luis Buñuel Portolés, Filmmaker
Image Credit: Shutterstock.com
Every year, hundreds of millions of people experience the pain of a failing memory.
The reasons are many:

  • traumatic brain injury, which haunts a disturbingly high number of veterans and football players; 
  • stroke or Alzheimer’s disease, which often plagues the elderly; or 
  • even normal brain aging, which inevitably touches us all.
Memory loss seems to be inescapable. But one maverick neuroscientist is working hard on an electronic cure. Funded by DARPA, Dr. Theodore Berger, a biomedical engineer at the University of Southern California, is testing a memory-boosting implant that mimics the kind of signal processing that occurs when neurons are laying down new long-term memories.
The revolutionary implant, already shown to help memory encoding in rats and monkeys, is now being tested in human patients with epilepsy — an exciting first that may blow the field of memory prosthetics wide open.
To get here, however, the team first had to crack the memory code.

Deciphering Memory
From the very onset, Berger knew he was facing a behemoth of a problem.
“We weren’t looking to match everything the brain does when it processes memory, but to at least come up with a decent mimic,” said Berger.
“Of course people asked: can you model it and put it into a device? Can you get that device to work in any brain? It’s those things that lead people to think I’m crazy. They think it’s too hard,” he said.
But the team had a solid place to start.
The hippocampus, a region buried deep within the folds and grooves of the brain, is the critical gatekeeper that transforms memories from short-lived to long-term. In dogged pursuit, Berger spent most of the last 35 years trying to understand how neurons in the hippocampus accomplish this complicated feat.
“At its heart, a memory is a series of electrical pulses that occur over time that are generated by a given number of neurons,” said Berger. “This is important — it suggests that we can reduce it to mathematical equations and put it into a computational framework,” he said.
Berger hasn’t been alone in his quest.
By listening to the chatter of neurons as an animal learns, teams of neuroscientists have begun to decipher the flow of information within the hippocampus that supports memory encoding. Key to this process is a strong electrical signal that travels from CA3, the “input” part of the hippocampus, to CA1, the “output” node.
“This signal is impaired in people with memory disabilities,” said Berger, “so of course we thought if we could recreate it using silicon, we might be able to restore — or even boost — memory.”

Bridging the Gap
Yet the brain’s memory code proved extremely tough to crack.
The problem lies in the non-linear nature of neural networks: signals are often noisy and constantly overlap in time, which leads to some inputs being suppressed or accentuated. In a network of hundreds and thousands of neurons, any small change could be greatly amplified and lead to vastly different outputs.
“It’s a chaotic black box,” laughed Berger.
With the help of modern computing techniques, however, Berger believes he may have a crude solution in hand. His proof?
Use his mathematical theorems to program a chip, and then see if the brain accepts the chip as a replacement — or additional — memory module.
Berger and his team began with a simple task using rats. They trained the animals to push one of two levers to get a tasty treat, and recorded the series of CA3 to CA1 electronic pulses in the hippocampus as the animals learned to pick the correct lever. The team carefully captured the way the signals were transformed as the session was laid down into long-term memory, and used that information — the electrical “essence” of the memory — to program an external memory chip.
They then injected the animals with a drug that temporarily disrupted their ability to form and access long-term memories, causing the animals to forget the reward-associated lever. Next, implanting microelectrodes into the hippocampus, the team pulsed CA1, the output region, with their memory code.
The results were striking — powered by an external memory module, the animals regained their ability to pick the right lever.
Encouraged by the results, Berger next tried his memory implant in monkeys, this time focusing on a brain region called the prefrontal cortex, which receives and modulates memories encoded by the hippocampus.
Placing electrodes into the monkeys’ brains, the team showed the animals a series of semi-repeated images, and captured the prefrontal cortex’s activity when the animals recognized an image they had seen earlier. Then, with a hefty dose of cocaine, the team inhibited that particular brain region, which disrupted the animals’ recall.
Next, using electrodes programmed with the “memory code,” the researchers guided the brain’s signal processing back on track — and the animal’s performance improved significantly.
A year later, the team further validated their memory implant by showing it could also rescue memory deficits due to hippocampal malfunction in the monkey brain.

A Human Memory Implant
Last year, the team cautiously began testing their memory implant prototype in human volunteers.
Because of the risks associated with brain surgery, the team recruited 12 patients with epilepsy, who already have electrodes implanted into their brain to track down the source of their seizures.
“Repeated seizures steadily destroy critical parts of the hippocampus needed for long-term memory formation,” explained Berger. So if the implant works, it could benefit these patients as well.
The team asked the volunteers to look through a series of pictures, and then recall which ones they had seen 90 seconds later. As the participants learned, the team recorded the firing patterns in both CA1 and CA3 — that is, the input and output nodes.
Using these data, the team extracted an algorithm — a specific human “memory code” — that could predict the pattern of activity in CA1 cells based on CA3 input. Compared to the brain’s actual firing patterns, the algorithm generated correct predictions roughly 80% of the time.
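The workflow described here (record paired CA3 and CA1 activity, fit a model of the transformation, then score its predictions on held-out data) can be sketched with synthetic numbers. The team’s actual model captures the non-linear dynamics discussed above; the simple linear fit, the cell counts, and every figure below are illustrative stand-ins, not the study’s method or data.

```python
import random

random.seed(0)

# Hypothetical stand-in data: firing rates of 3 "CA3" cells predicting
# 1 "CA1" cell across 400 time bins, generated from a known linear rule
# plus noise. (The real transformation is non-linear and involves many
# more cells; this is only a sketch of the fit-then-predict idea.)
n_bins = 400
ca3 = [[random.gauss(2.0, 1.0) for _ in range(3)] for _ in range(n_bins)]
true_w = [0.8, -0.3, 0.5]
ca1 = [sum(w * x for w, x in zip(true_w, row)) + random.gauss(0, 0.2)
       for row in ca3]

# Fit weights on the first 300 bins by stochastic gradient descent.
w = [0.0, 0.0, 0.0]
for _ in range(300):
    for row, y in zip(ca3[:300], ca1[:300]):
        err = sum(wi * xi for wi, xi in zip(w, row)) - y
        w = [wi - 0.01 * err * xi for wi, xi in zip(w, row)]

# Score on the 100 held-out bins: fraction of predictions within 0.5 Hz.
hits = sum(abs(sum(wi * xi for wi, xi in zip(w, row)) - y) < 0.5
           for row, y in zip(ca3[300:], ca1[300:]))
print(f"held-out prediction accuracy: {hits}%")
```

The point of the sketch is the evaluation protocol: the model is judged, as in the study, by how well it predicts output activity it was never trained on.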
“It’s not perfect,” said Berger, “but it’s a good start.”
Using this algorithm, the researchers have begun to stimulate the output cells with an approximation of the transformed input signal.
“We have already used the pattern to zap the brain of one woman with epilepsy,” said Dr. Dong Song, an associate professor working with Berger. But he remained coy about the result, saying only that, although promising, it’s still too early to tell.
Song’s caution is warranted. Unlike the motor cortex, with its clear structured representation of different body parts, the hippocampus is not organized in any obvious way.
“It’s hard to understand why stimulating input locations can lead to predictable results,” said Dr. Thomas McHugh, a neuroscientist at the RIKEN Brain Science Institute. It’s also difficult to tell whether such an implant could save the memory of those who suffer from damage to the output node of the hippocampus.
“That said, the data is convincing,” McHugh acknowledged.
Berger, on the other hand, is ecstatic. “I never thought I’d see this go into humans,” he said.
But the work is far from done. Within the next few years, Berger wants to see whether the chip can help build long-term memories in a variety of different situations. After all, the algorithm was based on the team’s recordings of one specific task — what if the so-called memory code is not generalizable, instead varying based on the type of input that it receives?
Berger acknowledges that it’s a possibility, but he remains hopeful.
“I do think that we will find a model that’s a pretty good fit for most conditions,” he said. “After all, the brain is restricted by its own biophysics — there’s only so many ways that electrical signals in the hippocampus can be processed.”
“The goal is to improve the quality of life for somebody who has a severe memory deficit,” said Berger. “If I can give them the ability to form new long-term memories for half the conditions that most people live in, I’ll be happy as hell, and so will most patients.”
ORIGINAL: Singularity Hub

Brain waves may be spread by weak electrical field

By Hugo Angel

The research team says the electrical fields could be behind the spread of sleep and theta waves, along with epileptic seizure waves (Credit: Shutterstock)
Mechanism tied to waves associated with epilepsy
Researchers at Case Western Reserve University may have found a new way information is communicated throughout the brain.
Their discovery could lead to identifying possible new targets to investigate brain waves associated with memory and epilepsy and better understand healthy physiology.
They recorded neural spikes traveling at a speed too slow for known mechanisms to circulate throughout the brain. The only explanation, the scientists say, is the wave is spread by a mild electrical field they could detect. Computer modeling and in-vitro testing support their theory.
“Others have been working on such phenomena for decades, but no one has ever made these connections,” said Steven J. Schiff, director of the Center for Neural Engineering at Penn State University, who was not involved in the study. “The implications are that such directed fields can be used to modulate both pathological activities, such as seizures, and to interact with cognitive rhythms that help regulate a variety of processes in the brain.”
Scientists Dominique Durand, Elmer Lincoln Lindseth Professor in Biomedical Engineering at Case School of Engineering and leader of the research, former graduate student Chen Sui and current PhD students Rajat Shivacharan and Mingming Zhang, report their findings in The Journal of Neuroscience.
“Researchers have thought that the brain’s endogenous electrical fields are too weak to propagate wave transmission,” Durand said. “But it appears the brain may be using the fields to communicate without synaptic transmissions, gap junctions or diffusion.”
How the fields may work
Computer modeling and testing on mouse hippocampi (the central part of the brain associated with memory and spatial navigation) in the lab indicate the field begins in one cell or group of cells.
Although the electrical field is of low amplitude, the field excites and activates immediate neighbors, which, in turn, excite and activate immediate neighbors, and so on across the brain at a rate of about 0.1 meter per second.
Blocking the endogenous electrical field in the mouse hippocampus and increasing the distance between cells in the computer model and in-vitro both slowed the speed of the wave.
These results, the researchers say, confirm that the propagation mechanism for the activity is consistent with the electrical field.
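The distance effect can be illustrated with a deliberately simple toy model (ours, not the paper’s): if the field’s drive on the next cell falls off with the square of distance, the time for each hop grows faster than the hop length, so wider spacing means a slower wave. The coupling constant below is chosen only to land near the observed ~0.1 meter per second; none of the parameters come from the study.

```python
# Toy chain model of field-mediated propagation (illustrative only):
# each active cell depolarizes its neighbour at a rate ~ coupling / d^2,
# so the time per hop is threshold * d^2 / coupling, and the wave speed
# is coupling / (threshold * d) -- slower when cells are farther apart.

def wave_speed(spacing_m, coupling=1e-6, threshold=1.0):
    hop_time = threshold * spacing_m**2 / coupling   # seconds per hop
    return spacing_m / hop_time                      # metres per second

base = wave_speed(10e-6)   # ~10 micrometre cell spacing
wide = wave_speed(20e-6)   # doubled spacing, as in the in-vitro test
print(f"baseline: {base:.2f} m/s, doubled spacing: {wide:.2f} m/s")
```

This reproduces the qualitative finding: increasing the distance between cells slows the wave, which is what made the field explanation consistent with the data.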
Because sleep waves and theta waves – which are associated with forming memories during sleep – and epileptic seizure waves travel at about 1 meter per second, the researchers are now investigating whether the electrical fields play a role in normal physiology and in epilepsy.
If so, they will try to discern what information the fields may be carrying. Durand’s lab is also investigating where the endogenous spikes come from.
ORIGINAL: Eurekalert
14-JAN-2016

Memory capacity of brain is 10 times more than previously thought

By Hugo Angel

Data from the Salk Institute shows brain’s memory capacity is in the petabyte range, as much as entire Web

LA JOLLA—Salk researchers and collaborators have achieved critical insight into the size of neural connections, putting the memory capacity of the brain far higher than common estimates. The new work also answers a longstanding question as to how the brain is so energy efficient and could help engineers build computers that are incredibly powerful but also conserve energy.
“This is a real bombshell in the field of neuroscience,” said Terry Sejnowski from the Salk Institute for Biological Studies. “Our new measurements of the brain’s memory capacity increase conservative estimates by a factor of 10 to at least a petabyte (10^15 bytes, or about 1,000 terabytes), in the same ballpark as the World Wide Web.”
Our memories and thoughts are the result of patterns of electrical and chemical activity in the brain. A key part of the activity happens when branches of neurons, much like electrical wire, interact at certain junctions, known as synapses. An output ‘wire’ (an axon) from one neuron connects to an input ‘wire’ (a dendrite) of a second neuron. Signals travel across the synapse as chemicals called neurotransmitters to tell the receiving neuron whether to convey an electrical signal to other neurons. Each neuron can have thousands of these synapses with thousands of other neurons.
“When we first reconstructed every dendrite, axon, glial process, and synapse from a volume of hippocampus the size of a single red blood cell, we were somewhat bewildered by the complexity and diversity amongst the synapses,” says Kristen Harris, co-senior author of the work and professor of neuroscience at the University of Texas, Austin. “While I had hoped to learn fundamental principles about how the brain is organized from these detailed reconstructions, I have been truly amazed at the precision obtained in the analyses of this report.”
Synapses are still a mystery, though their dysfunction can cause a range of neurological diseases. Larger synapses—with more surface area and vesicles of neurotransmitters—are stronger, making them more likely to activate their surrounding neurons than medium or small synapses.
The Salk team, while building a 3D reconstruction of rat hippocampus tissue (the memory center of the brain), noticed something unusual. In some cases, a single axon from one neuron formed two synapses reaching out to a single dendrite of a second neuron, signifying that the first neuron seemed to be sending a duplicate message to the receiving neuron.
At first, the researchers didn’t think much of this duplicity, which occurs about 10 percent of the time in the hippocampus. But Tom Bartol, a Salk staff scientist, had an idea: if they could measure the difference between two very similar synapses such as these, they might glean insight into synaptic sizes, which so far had only been classified in the field as small, medium and large.
In a computational reconstruction of brain tissue in the hippocampus, Salk scientists and UT-Austin scientists found the unusual occurrence of two synapses from the axon of one neuron (translucent black strip) forming onto two spines on the same dendrite of a second neuron (yellow). Separate terminals from one neuron’s axon are shown in synaptic contact with two spines (arrows) on the same dendrite of a second neuron in the hippocampus. The spine head volumes, synaptic contact areas (red), neck diameters (gray) and number of presynaptic vesicles (white spheres) of these two synapses are almost identical. Credit: Salk Institute
To do this, researchers used advanced microscopy and computational algorithms they had developed to image rat brains and reconstruct the connectivity, shapes, volumes and surface area of the brain tissue down to a nanomolecular level.
The scientists expected the synapses would be roughly similar in size, but were surprised to discover the synapses were nearly identical.
“We were amazed to find that the difference in the sizes of the pairs of synapses were very small, on average, only about 8 percent different in size,” said Tom Bartol, one of the scientists. “No one thought it would be such a small difference. This was a curveball from nature.”
Because the memory capacity of neurons is dependent upon synapse size, this eight percent difference turned out to be a key number the team could then plug into their algorithmic models of the brain to measure how much information could potentially be stored in synaptic connections.
It was known before that the range in sizes between the smallest and largest synapses was a factor of 60 and that most are small.
But armed with the knowledge that synapses of all sizes could vary in increments as little as eight percent between sizes within a factor of 60, the team determined there could be about 26 categories of sizes of synapses, rather than just a few.
“Our data suggests there are 10 times more discrete sizes of synapses than previously thought,” says Bartol. In computer terms, 26 sizes of synapses correspond to about 4.7 “bits” of information. Previously, it was thought that the brain was capable of just one to two bits for short and long memory storage in the hippocampus.
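The 4.7-bit figure follows directly from the 26 categories: distinguishing one of N equally likely sizes carries log2(N) bits of information.

```python
import math

# Information per synapse if each of the study's ~26 size categories
# is treated as one distinguishable state.
n_sizes = 26
bits = math.log2(n_sizes)
print(f"{n_sizes} size categories -> {bits:.1f} bits")   # ~4.7 bits

# The older picture of only a few strength levels stores far less:
print(f"4 categories -> {math.log2(4):.0f} bits")
```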
“This is roughly an order of magnitude of precision more than anyone has ever imagined,” said Sejnowski.
What makes this precision puzzling is that hippocampal synapses are notoriously unreliable. When a signal travels from one neuron to another, it typically activates that second neuron only 10 to 20 percent of the time.
“We had often wondered how the remarkable precision of the brain can come out of such unreliable synapses,” says Bartol. One answer, it seems, is in the constant adjustment of synapses, averaging out their success and failure rates over time. The team used their new data and a statistical model to find out how many signals it would take a pair of synapses to get to that eight percent difference.
The researchers calculated that
  • for the smallest synapses, about 1,500 events cause a change in their size/ability (20 minutes) and
  • for the largest synapses, only a couple hundred signaling events (1 to 2 minutes) cause a change.
“This means that every 2 or 20 minutes, your synapses are going up or down to the next size,” said Bartol. “The synapses are adjusting themselves according to the signals they receive.”
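A back-of-envelope version of that averaging argument (ours, not the paper’s statistical model): treat each signal as a coin flip with release probability p. The uncertainty of the running average shrinks as sqrt(p(1-p)/n), and asking when it drops below 8% of p gives the order of magnitude of events required.

```python
import math

def events_needed(p, precision=0.08):
    """Signals needed before the average release rate is known to within
    `precision` (as a fraction of p): sqrt(p*(1-p)/n) <= precision * p."""
    return math.ceil((1 - p) / (precision**2 * p))

# An unreliable synapse (~20% release) needs more events to average over
# than a more reliable one -- the same hundreds-to-thousands range as
# the study's estimates for small and large synapses.
print(events_needed(0.2))
print(events_needed(0.5))
```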
From left: Terry Sejnowski, Cailey Bromer and Tom Bartol. Credit: Salk Institute
“Our prior work had hinted at the possibility that spines and axons that synapse together would be similar in size, but the reality of the precision is truly remarkable and lays the foundation for whole new ways to think about brains and computers,” says Harris. “The work resulting from this collaboration has opened a new chapter in the search for learning and memory mechanisms.” Harris adds that the findings suggest more questions to explore, for example, if similar rules apply for synapses in other regions of the brain and how those rules differ during development and as synapses change during the initial stages of learning.
“The implications of what we found are far-reaching. Hidden under the apparent chaos and messiness of the brain is an underlying precision to the size and shapes of synapses that was hidden from us.”
The findings also offer a valuable explanation for the brain’s surprising efficiency. The waking adult brain generates only about 20 watts of continuous power—as much as a very dim light bulb. The Salk discovery could help computer scientists build ultra-precise but energy-efficient computers, particularly ones that employ deep learning and neural nets techniques capable of sophisticated learning and analysis, such as speech, object recognition and translation.
“This trick of the brain absolutely points to a way to design better computers,” said Sejnowski. “Using probabilistic transmission turns out to be as accurate and require much less energy for both computers and brains.”
Other authors on the paper were Cailey Bromer of the Salk Institute; Justin Kinney of the McGovern Institute for Brain Research; and Michael A. Chirillo and Jennifer N. Bourne of the University of Texas, Austin.
The work was supported by the NIH and the Howard Hughes Medical Institute.
ORIGINAL: Salk.edu
January 20, 2016

Peeking into the brain’s filing system

By admin

Aspects of a single memory can be scattered throughout the outer “cortex” of the brain
Storing information so that you can easily find it again is a challenge. From purposefully messy desks to indexed filing cabinets, we all have our preferred systems. How does it happen inside our brains?
Somewhere within the dense, damp and intricate 1.5kg of tissue that we carry in our skulls, all of our experiences are processed, stored, and – sometimes more readily than others – retrieved again when we need them.
It’s what neuroscientists call “episodic memory” and for years, they have loosely agreed on a model for how it works. Gathering detailed data to flesh out that model is difficult.
But the picture is beginning to get clearer and more complete.
A key component is the small, looping structure called the hippocampus, buried quite deep beneath the brain’s wrinkled outer layer. It is only a few centimetres in length but is very well connected to other parts of the brain.
People with damage to their hippocampus have profound memory problems and this has made it a major focus of memory research since the 1950s.
Quick learning
It was in the hippocampus, and some of its neighbouring brain regions, that scientists from the University of Leicester got a glimpse of new memories being formed, in a study published this week.
Single brain cells in the hippocampus can form associations very rapidly
They used a rare opportunity to record the fizz and crackle of single human brain cells at work, in epilepsy patients undergoing brain surgery.
Individual neurons that went crazy for particular celebrities, like Clint Eastwood, could be “trained” to respond to, for example, the Statue of Liberty as well – as soon as the patients were given a picture of Clint in front of the statue.
It seemed that single brain cells, in the hippocampus, had been caught in the act of forming a new association. And they do it very fast.
But that outer wrapping of the brain – the cortex – is also important. It is much bigger than the hippocampus and does myriad jobs, from sensing the world to moving our limbs.
When we have a particular experience, like a trip to the beach, different patches of the cortex are called up to help us process different elements: recognising a friend, hearing the seagulls, feeling the breeze.
So traces of that experience are rather scattered across the cortex. To remember it, the brain needs some sort of index to find them all again.
And that, neuroscientists generally agree, is where the hippocampus comes in.
“Think of the [cortex] as a huge library and the hippocampus as its librarian,” wrote the prominent Hungarian neuroscientist Gyorgy Buzsaki in his 2006 book Rhythms of the Brain.

Does the brain have a librarian?
The elements of our day at the beach might litter the cortex like specific books along miles of shelving; the hippocampus is able to link them together and – if all goes well – pull them off the shelf when we want to reminisce.
Completing patterns
Another brand new study, out this week in the journal Nature Communications, looks inside the brain using fMRI imaging to see this filing system in action.
By getting people to learn and remember imaginary scenarios while inside a brain scanner, Dr Aidan Horner and his colleagues at University College London collected the first firm evidence for “pattern completion” in the human hippocampus.
Pattern completion is the mechanism behind a phenomenon we all recognise, when one particular aspect of a memory – the smell of salt in the air, perhaps – brings all the other aspects flooding back.
“If you have an event that involves the Eiffel tower, your friend and, say, a pink balloon… I can show you a picture of the Eiffel tower, and you remember not only your friend, but also the pink balloon,” Dr Horner told the BBC.
While his volunteers had just this sort of experience inside the scanner, Dr Horner saw interplay between different parts of the cortex, associated with different parts of a memory, and the hippocampus.
The brain activity flowed in a way that showed “pattern completion” was indeed underway – and the cortex and the hippocampus were working just like the library and the librarian in Prof Buzsaki’s analogy.

The hippocampus (darker brown) is centrally located and very well connected
“If I cue you with the location, and I get you to explicitly retrieve the person, what we also see is activation in the region that’s associated with the object for that event,” Dr Horner explained. “So even though it’s task-irrelevant, you don’t have to retrieve it, it seems that we still bring that object to mind.
“And the extent to which we see that activation in the ‘object’ region correlates with the hippocampal response. So that suggests that it’s the hippocampus that’s doing the pattern completion, retrieving all these elements.
“It’s able to act as an index, I suppose, by linking these things together – and doing it very, very quickly, that’s the key thing.”
If the cortex were left to make its own connections between the fragments of a memory, he added, it would be far too slow.
“That’s clearly not a system we want, if we’re going to remember a specific event that happens once in a lifetime.”
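The librarian-and-index idea maps onto a classic computational model: an auto-associative network, in which elements stored together can be recalled from a partial cue. The Hopfield-style sketch below is a standard textbook illustration of pattern completion, not the mechanism or data from the studies above.

```python
# Minimal Hopfield-style auto-associative network: store a pattern, then
# recover the whole of it from a partial cue (pattern completion).

def train(patterns):
    """Hebbian weights: units active together get positive coupling."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, steps=3):
    """Repeatedly update every unit from the weighted sum of the others."""
    s = list(cue)
    for _ in range(steps):
        s = [1 if sum(wij * sj for wij, sj in zip(row, s)) >= 0 else -1
             for row in w]
    return s

# One "event" as a +/-1 pattern: units 0-1 code the location (Eiffel
# tower), 2-3 the person (your friend), 4-5 the object (pink balloon).
event = [1, -1, 1, 1, -1, 1]
w = train([event])

# Cue with the location units only (unknown units set to 0): the network
# fills in the person and the object.
print(recall(w, [1, -1, 0, 0, 0, 0]))   # -> [1, -1, 1, 1, -1, 1]
```

Cueing with only the “location” units recovers the full stored pattern, which is the essence of pattern completion: partial input, complete memory.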
Beat this: Episodic memory is a key challenge for artificial intelligence systems
Dr Horner said the findings also dovetail nicely with the single-neuron, celebrity-spotting results from the Leicester study.
“We can look across the cortex and the hippocampus, and we can relate it to recollection. But what they can do is say: look, these cells [in the hippocampus] have learned really quickly.
“So that’s the sort of underlying neural basis of what we’re looking at, at a slightly broader scale.”
Science, it seems, is finally managing to unpick the way our brains record our lives. It is a remarkable, beautiful, fallible system.
Building some sort of memory storage like this is regarded as one of the next key challenges for researchers trying to build intelligent machines.
Our own memories, for all their flaws, are a hard act to follow.
ORIGINAL: BBC
By Jonathan Webb, Science reporter, BBC News
5 July 2015