Category: Memory


A Giant Neuron Has Been Found Wrapped Around the Entire Circumference of the Brain

By Hugo Angel,

Allen Institute for Brain Science

This could be where consciousness forms. For the first time, scientists have detected a giant neuron wrapped around the entire circumference of a mouse’s brain, and it’s so densely connected across both hemispheres, it could finally explain the origins of consciousness.

Using a new imaging technique, the team detected the giant neuron emanating from one of the best-connected regions in the brain, and say it could be coordinating signals from different areas to create conscious thought.

This recently discovered neuron is one of three that have been detected for the first time in a mammal’s brain, and the new imaging technique could help us figure out if similar structures have gone undetected in our own brains for centuries.

At a recent meeting of the Brain Research through Advancing Innovative Neurotechnologies initiative in Maryland, a team from the Allen Institute for Brain Science described how all three neurons stretch across both hemispheres of the brain, but the largest one wraps around the organ’s circumference like a “crown of thorns”.
You can see them highlighted in the image at the top of the page.

Lead researcher Christof Koch told Sara Reardon at Nature that they’ve never seen neurons extend so far across both regions of the brain before.
Oddly enough, all three giant neurons happen to emanate from a part of the brain that’s shown intriguing connections to human consciousness in the past – the claustrum, a thin sheet of grey matter that could be the most connected structure in the entire brain, based on volume.

This relatively small region is hidden beneath the inner surface of the neocortex near the centre of the brain, and communicates with almost all regions of the cortex to achieve many higher cognitive functions, such as

  • language,
  • long-term planning, and
  • advanced sensory tasks such as seeing and hearing.

“Advanced brain-imaging techniques that look at the white matter fibres coursing to and from the claustrum reveal that it is a neural Grand Central Station,” Koch wrote for Scientific American back in 2014. “Almost every region of the cortex sends fibres to the claustrum.”

The claustrum is so densely connected to several crucial areas in the brain that Francis Crick, of DNA double helix fame, referred to it as a “conductor of consciousness” in a 2005 paper co-written with Koch.

They suggested that it connects all of our external and internal perceptions together into a single unifying experience, like a conductor synchronises an orchestra, and strange medical cases in the past few years have only made their case stronger.

Back in 2014, a 54-year-old woman checked into the George Washington University Medical Faculty Associates in Washington, DC, for epilepsy treatment.

This involved gently probing various regions of her brain with electrodes to narrow down the potential source of her epileptic seizures, but when the team started stimulating the woman’s claustrum, they found they could effectively ‘switch’ her consciousness off and on again.

Helen Thomson reported for New Scientist at the time:
“When the team zapped the area with high frequency electrical impulses, the woman lost consciousness. She stopped reading and stared blankly into space, she didn’t respond to auditory or visual commands and her breathing slowed.

As soon as the stimulation stopped, she immediately regained consciousness with no memory of the event. The same thing happened every time the area was stimulated during two days of experiments.”

According to Koch, who was not involved in the study, this kind of abrupt and specific ‘stopping and starting’ of consciousness had never been seen before.

Another experiment in 2015 examined the effects of claustrum lesions on the consciousness of 171 combat veterans with traumatic brain injuries.

They found that claustrum damage was associated with the duration, but not frequency, of loss of consciousness, suggesting that it could play an important role in the switching on and off of conscious thought, but another region could be involved in maintaining it.

And now Koch and his team have discovered extensive neurons in mouse brains emanating from this mysterious region.

In order to map neurons, researchers usually have to inject individual nerve cells with a dye, cut the brain into thin sections, and then trace the neuron’s path by hand.

It’s a surprisingly rudimentary technique for a neuroscientist to have to perform, and given that it destroys the brain in the process, it’s not one that can be used regularly on human brains.

Koch and his team wanted to come up with a technique that was less invasive, and engineered mice that could have specific genes in their claustrum neurons activated by a specific drug.

“When the researchers fed the mice a small amount of the drug, only a handful of neurons received enough of it to switch on these genes,” Reardon reports for Nature.

“That resulted in production of a green fluorescent protein that spread throughout the entire neuron.” The team then took 10,000 cross-sectional images of the mouse brain, and used a computer program to create a 3D reconstruction of just three glowing cells.

We should keep in mind that just because these new giant neurons are connected to the claustrum doesn’t mean that Koch’s hypothesis about consciousness is correct – we’re a long way from proving that yet.

It’s also important to note that these neurons have only been detected in mice so far, and the research has yet to be published in a peer-reviewed journal, so we need to wait for further confirmation before we can really delve into what this discovery could mean for humans.

But the discovery is an intriguing piece of the puzzle that could help us make sense of this crucial but enigmatic region of the brain, and how it could relate to the human experience of conscious thought.

The research was presented at the 15 February meeting of the Brain Research through Advancing Innovative Neurotechnologies initiative in Bethesda, Maryland.

ORIGINAL: ScienceAlert

BEC CREW
28 FEB 2017

Scientists Just Found Evidence That Neurons Can Communicate in a Way We Never Anticipated

By Hugo Angel,

Andrii Vodolazhskyi/Shutterstock.com

A new brain mechanism hiding in plain sight. Researchers have discovered a brand new mechanism that controls the way nerve cells in our brain communicate with each other to regulate learning and long-term memory.

The fact that a new brain mechanism has been hiding in plain sight is a reminder of how much we have yet to learn about how the human brain works, and what goes wrong in neurodegenerative disorders such as Alzheimer’s and epilepsy.

“These discoveries represent a significant advance and will have far-reaching implications for the understanding of

  • memory, 
  • cognition, 
  • developmental plasticity, and 
  • neuronal network formation and stabilisation,”  

said lead researcher Jeremy Henley from the University of Bristol in the UK.

“We believe that this is a groundbreaking study that opens new lines of inquiry which will increase understanding of the molecular details of synaptic function in health and disease.”

The human brain contains around 100 billion nerve cells, and each of those makes about 10,000 connections – known as synapses – with other cells.

That’s a whole lot of connections, and each of them is strengthened or weakened depending on different brain mechanisms that scientists have spent decades trying to understand.
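
The scale of those numbers is easy to check with a back-of-envelope calculation (the figures are the article’s rough estimates, not precise counts):

```python
# Rough estimates from the text: ~100 billion neurons,
# each making ~10,000 synaptic connections.
neurons = 100e9
synapses_per_neuron = 10e3

total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.0e}")  # → 1e+15, a quadrillion connections
```
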

Until now, one of the best known mechanisms to increase the strength of information flow across synapses was known as LTP, or long-term potentiation.

LTP intensifies the connection between cells to make information transfer more efficient, and it plays a role in a wide range of neurodegenerative conditions –  

  • too much LTP, and you risk disorders such as epilepsy,
  • too little, and it could cause dementia or Alzheimer’s disease.

As far as researchers were aware, LTP was usually controlled by the activation of special proteins called NMDA receptors.

But now the UK team has discovered a brand new type of LTP that’s regulated in an entirely different way.

After investigating the formation of synapses in the lab, the team showed that this new LTP mechanism is controlled by molecules known as kainate receptors, instead of NMDA receptors.

“These data reveal a new and, to our knowledge, previously unsuspected role for postsynaptic kainate receptors in the induction of functional and structural plasticity in the hippocampus,” the researchers write in Nature Neuroscience.

This means we’ve now uncovered a previously unexplored mechanism that could control learning and memory.

“Untangling the interactions between the signal receptors in the brain not only tells us more about the inner workings of a healthy brain, but also provides a practical insight into what happens when we form new memories,” said one of the researchers, Milos Petrovic from the University of Central Lancashire.

“If we can preserve these signals, it may help protect against brain diseases.”

Not only does this open up a new research pathway that could lead to a better understanding of how our brains work, but if researchers can find a way to target these new pathways, it could lead to more effective treatments for a range of neurodegenerative disorders.

It’s still early days, and the discovery will now need to be verified by independent researchers, but it’s a promising new field of research.

“This is certainly an extremely exciting discovery and something that could potentially impact the global population,” said Petrovic.

The research has been published in Nature Neuroscience.

ORIGINAL: IFLScience

By FIONA MACDONALD
20 FEB 2017

Google’s AI can now learn from its own memory independently

By Hugo Angel,

An artist’s impression of the DNC. Credit: DeepMind
The DeepMind artificial intelligence (AI) being developed by Google‘s parent company, Alphabet, can now intelligently build on what’s already inside its memory, the system’s programmers have announced.
Their new hybrid system – called a differentiable neural computer (DNC) – pairs a neural network with the vast data storage of conventional computers, and the AI is smart enough to navigate and learn from this external data bank.
What the DNC is doing is effectively combining external memory (like the external hard drive where all your photos get stored) with the neural network approach of AI, where a massive number of interconnected nodes work dynamically to simulate a brain.
“These models… can learn from examples like neural networks, but they can also store complex data like computers,” write DeepMind researchers Alexander Graves and Greg Wayne in a blog post.
At the heart of the DNC is a controller that constantly optimises its responses, comparing its results with the desired and correct ones. Over time, it’s able to get more and more accurate, figuring out how to use its memory data banks at the same time.
Take a family tree: after being told about certain relationships, the DNC was able to figure out other family connections on its own – writing, rewriting, and optimising its memory along the way to pull out the correct information at the right time.
Another example the researchers give is a public transit system, like the London Underground. Once it’s learned the basics, the DNC can figure out more complex relationships and routes without any extra help, relying on what it’s already got in its memory banks.
In other words, it’s functioning like a human brain, taking data from memory (like tube station positions) and figuring out new information (like how many stops to stay on for).
Of course, any smartphone mapping app can tell you the quickest way from one tube station to another, but the difference is that the DNC isn’t pulling this information out of a pre-programmed timetable – it’s working out the information on its own, and juggling a lot of data in its memory all at once.
The approach means a DNC system could take what it learned about the London Underground and apply parts of its knowledge to another transport network, like the New York subway.
The system points to a future where artificial intelligence could answer questions on new topics, by deducing responses from prior experiences, without needing to have learned every possible answer beforehand.
Credit: DeepMind

Of course, that’s how DeepMind was able to beat human champions at Go – by studying millions of Go moves. But by adding external memory, DNCs are able to take on much more complex tasks and work out better overall strategies, its creators say.

“Like a conventional computer, [a DNC] can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data,” the researchers explain in Nature.
In another test, the DNC was given two bits of information: “John is in the playground,” and “John picked up the football.” With those known facts, when asked “Where is the football?“, it was able to answer correctly by combining memory with deep learning. (The football is in the playground, if you’re stuck.)
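
That kind of fact-chaining can be sketched in a few lines. This is only a toy illustration of the idea – writing facts to an external memory and answering a query by combining them – and not DeepMind’s actual DNC architecture, which learns its read and write operations rather than having them hand-coded:

```python
# Toy external memory: facts are written as (subject, relation) keys.
memory = {}

def write(subject, relation, value):
    memory[(subject, relation)] = value

def read(subject, relation):
    return memory.get((subject, relation))

# The two facts from the test above:
write("John", "location", "playground")
write("football", "carried_by", "John")

def where_is(thing):
    # A directly stored location wins.
    location = read(thing, "location")
    if location:
        return location
    # Otherwise chain facts: the thing is wherever its carrier is.
    carrier = read(thing, "carried_by")
    return read(carrier, "location") if carrier else None

print(where_is("football"))  # → playground
```

The answer “playground” is never stored against “football” – it emerges only by chaining the two stored facts, which is the behaviour the DNC learns rather than being programmed with.
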
Making those connections might seem like a simple task for our powerful human brains, but until now, it’s been a lot harder for virtual assistants, such as Siri, to figure out.
With the advances DeepMind is making, the researchers say we’re another step forward to producing a computer that can reason independently.
And then we can all start enjoying our robot-driven utopia – or technological dystopia – depending on your point of view.
ORIGINAL: ScienceAlert
By DAVID NIELD

14 OCT 2016

Google’s Deep Mind Gives AI a Memory Boost That Lets It Navigate London’s Underground

By Hugo Angel,

Photo: iStockphoto

Google’s DeepMind artificial intelligence lab does more than just develop computer programs capable of beating the world’s best human players in the ancient game of Go. The DeepMind unit has also been working on the next generation of deep learning software that combines the ability to recognize data patterns with the memory required to decipher more complex relationships within the data.

Deep learning is the latest buzz word for artificial intelligence algorithms called neural networks that can learn over time by filtering huge amounts of relevant data through many “deep” layers. The brain-inspired neural network layers consist of nodes (also known as neurons). Tech giants such as Google, Facebook, Amazon, and Microsoft have been training neural networks to learn how to better handle tasks such as recognizing images of dogs or making better Chinese-to-English translations. These AI capabilities have already benefited millions of people using Google Translate and other online services.
But neural networks face huge challenges when they try to rely solely on pattern recognition without having the external memory to store and retrieve information. To improve deep learning’s capabilities, Google DeepMind created a “differentiable neural computer” (DNC) that gives neural networks an external memory for storing information for later use.
“Neural networks are like the human brain; we humans cannot assimilate massive amounts of data and we must rely on external read-write memory all the time,” says Jay McClelland, director of the Center for Mind, Brain and Computation at Stanford University. “We once relied on our physical address books and Rolodexes; now of course we rely on the read-write storage capabilities of regular computers.”
McClelland is a cognitive scientist who served as one of several independent peer reviewers for the Google DeepMind paper that describes development of this improved deep learning system. The full paper is presented in the 12 Oct 2016 issue of the journal Nature.
The DeepMind team found that the DNC system’s combination of the neural network and external memory did much better than a neural network alone in tackling the complex relationships between data points in so-called “graph tasks.” For example, they asked their system to either simply take any path between points A and B or to find the shortest travel routes based on a symbolic map of the London Underground subway.
An unaided neural network could not even finish the first level of training, based on traveling between two subway stations without trying to find the shortest route. It achieved an average accuracy of just 37 percent after going through almost two million training examples. By comparison, the neural network with access to external memory in the DNC system successfully completed the entire training curriculum and reached an average of 98.8 percent accuracy on the final lesson.
The external memory of the DNC system also proved critical to success in performing logical planning tasks such as solving simple block puzzle challenges. Again, a neural network by itself could not even finish the first lesson of the training curriculum for the block puzzle challenge. The DNC system was able to use its memory to store information about the challenge’s goals and to effectively plan ahead by writing its decisions to memory before acting upon them.
In 2014, DeepMind’s researchers developed another system, called the neural Turing machine, that also combined neural networks with external memory. But the neural Turing machine was limited in the way it could access “memories” (information) because such memories were effectively stored and retrieved in fixed blocks or arrays. The latest DNC system can access memories in any arbitrary location, McClelland explains.
The DNC system’s memory architecture even bears a certain resemblance to how the hippocampus region of the brain supports new brain cell growth and new connections in order to store new memories. Just as the DNC system uses the equivalent of time stamps to organize the storage and retrieval of memories, human “free recall” experiments have shown that people are more likely to recall certain items in the same order as first presented.
Despite these similarities, the DNC’s design was driven by computational considerations rather than taking direct inspiration from biological brains, DeepMind’s researchers write in their paper. But McClelland says that he prefers not to think of the similarities as being purely coincidental.
“The design decisions that motivated the architects of the DNC were the same as those that structured the human memory system, although the latter (in my opinion) was designed by a gradual evolutionary process, rather than by a group of brilliant AI researchers,” McClelland says.
Human brains still have significant advantages over any brain-inspired deep learning software. For example, human memory seems much better at storing information so that it is accessible by both context or content, McClelland says. He expressed hope that future deep learning and AI research could better capture the memory advantages of biological brains.
 
DeepMind’s DNC system and similar neural learning systems may represent crucial steps for the ongoing development of AI. But the DNC system still falls well short of what McClelland considers the most important parts of human intelligence.
“The DNC is a sophisticated form of external memory, but ultimately it is like the papyrus on which Euclid wrote the Elements. The insights of mathematicians that Euclid codified relied (in my view) on a gradual learning process that structured the neural circuits in their brains so that they came to be able to see relationships that others had not seen, and that structured the neural circuits in Euclid’s brain so that he could formulate what to write. We have a long way to go before we understand fully the algorithms the human brain uses to support these processes.”
It’s unclear when or how Google might take advantage of the capabilities offered by the DNC system to boost its commercial products and services. The DeepMind team was “heads down in research” or too busy with travel to entertain media questions at this time, according to a Google spokesperson.
But Herbert Jaeger, professor for computational science at Jacobs University Bremen in Germany, sees the DeepMind team’s work as a “passing snapshot in a fast evolution sequence of novel neural learning architectures.” In fact, he’s confident that the DeepMind team already has something better than the DNC system described in the Nature paper. (Keep in mind that the paper was submitted back in January 2016.)
DeepMind’s work is also part of a bigger trend in deep learning, Jaeger says. The leading deep learning teams at Google and other companies are racing to build new AI architectures with many different functional modules—among them, attentional control or working memory; they then train the systems through deep learning.
“The DNC is just one among dozens of novel, highly potent, and cleverly-thought-out neural learning systems that are popping up all over the place,” Jaeger says.
ORIGINAL: IEEE Spectrum
12 Oct 2016

A lab founded by a tech billionaire just unveiled a major leap forward in cracking your brain’s code

By Hugo Angel,

This is definitely not a scene from “A Clockwork Orange.” Allen Brain Observatory
As the mice watched a computer screen, their glowing neurons pulsed through glass windows in their skulls.
Using a device called a two-photon microscope, researchers at the Allen Institute for Brain Science could peer through those windows and record, layer by layer, the workings of their little minds.
The result, announced July 13, is a real-time record of the visual cortex — a brain region shared in similar form across mammalian species — at work. The data set that emerged is so massive and complete that its creators have named it the Allen Brain Observatory.
Bred for the lab, the mice were genetically modified so that specific cells in their brains would fluoresce when they became active. Researchers had installed the brain-windows surgically, slicing away tiny chunks of the rodents’ skulls and replacing them with five-millimeter skylights.
Sparkling neurons of the mouse visual cortex shone through the glass as images and short films flashed across the screen. Each point of light the researchers saw translated, with hours of careful processing, into data: 
  • Which cell lit up? 
  • Where in the brain? 
  • How long did it glow? 
  • What was the mouse doing at the time? 
  • What was on the screen?
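
Each observation can be thought of as one structured record answering exactly those questions. A minimal sketch (the field names and values here are illustrative, not the Allen Institute’s actual schema):

```python
from dataclasses import dataclass

@dataclass
class FluorescenceEvent:
    """One glowing-neuron observation (illustrative fields only)."""
    cell_id: int          # which cell lit up
    brain_location: str   # where in the brain (area and cortical layer)
    duration_ms: float    # how long it glowed
    mouse_behavior: str   # what the mouse was doing at the time
    stimulus: str         # what was on the screen

# A hypothetical record of the kind the pipeline would produce:
event = FluorescenceEvent(
    cell_id=4821,
    brain_location="visual cortex, layer 2/3",
    duration_ms=350.0,
    mouse_behavior="running on wheel",
    stimulus="drifting grating",
)
print(event.cell_id, event.stimulus)
```
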

The researchers imaged the neurons in small groups, building a map of one microscopic layer before moving down to the next. When they were finished, the activities of 18,000 cells from several dozen mice were recorded in their database.

“This is the first data set where we’re watching large populations of neurons’ activity in real time, at the cellular level,” said Saskia de Vries, a scientist who worked on the project at the private research center launched by Microsoft co-founder Paul Allen.
The problem the Brain Observatory wants to solve is straightforward. Science still does not understand the brain’s underlying code very well, and individual studies may turn up odd results that are difficult to interpret in the context of the whole brain.
A decade ago, for example, a widely-reported study appeared to find a single neuron in a human brain that always — and only — winked on when presented with images of Halle Berry. Few scientists suggested that this single cell actually stored the subject’s whole knowledge of Berry’s face. But without more context about what the cells around it were doing, a more complete explanation remained out of reach.
“When you’re listening to a cell with an electrode, all you’re hearing is [its activity level] spiking,” said Shawn Olsen, another researcher on the project. “And you don’t know where exactly that cell is, you don’t know its precise location, you don’t know its shape, you don’t know who it connects to.”
Imagine trying to assemble a complete understanding of a computer given only facts like “under certain circumstances, clicking the mouse makes lights on the printer blink”.
To get beyond that kind of feeling around in the dark, the Allen Institute has taken what Olsen calls an “industrial” approach to mapping out the brain’s activity.
“Our goal is to systematically march through the different cortical layers, and the different cell types, and the different areas of the cortex to produce a systematic, mostly comprehensive survey of the activity,” Olsen explained. “It doesn’t just describe how one cell type is responding or one particular area, but characterizes as much as we can a complete population of cells that will allow us to draw inferences that you couldn’t describe if you were just looking at one cell at a time.”
In other words, this project makes its impact through the grinding power of time and effort.
A visualization of cells examined in the project. Allen Brain Observatory

Researchers showed the mice moving horizontal or vertical lines, light and dark dots on a surface, natural scenes, and even clips from Hollywood movies.

The more abstract displays target how the mind sees and interprets light and dark, lines, and motion, building on existing neuroscience. Researchers have known for decades that particular cells appear to correspond to particular kinds of motion or shape, or positions in the visual field. This research helps them place the activity of those cells in context.
One of the most obvious results was that the brain is noisy, messy, and confusing.
“Even though we showed the same image, we could get dramatically different responses from the same cell. On one trial it may have a strong response, on another it may have a weak response,” Olsen said.
All that noise in their data is one of the things that differentiates it from a typical study, de Vries said.
“If you’re inserting an electrode you’re going to keep advancing until you find a cell that kind of responds the way you want it to,” she said. “By doing a survey like this we’re going to see a lot of cells that don’t respond to the stimuli in the way that we think they should. We’re realizing that the cartoon model that we have of the cortex isn’t completely accurate.”

Olsen said they suspect a lot of that noise emerges from whatever the mouse is thinking about or doing that has nothing to do with what’s on screen. They recorded videos of the mice during data collection to help researchers combing their data learn more about those effects.
The best evidence for this suspicion? When they showed the mice more interesting visuals, like pictures of animals or clips from the film “Touch of Evil,” the neurons behaved much more consistently.
“We would present each [clip] ten different times,” de Vries said. “And we can see from trial to trial many cells at certain times almost always respond – reliable, repeatable, robust responses.”
In other words, it appears the mice were paying attention.
Allen Brain Observatory

The Brain Observatory was turned loose on the internet Wednesday, with its data available for researchers and the public to comb through, explore, and maybe critique.

But the project isn’t over.
In the next year-and-a-half, the researchers intend to add more types of cells and more regions of the visual cortex to their observatory. And their long-term ambitions are even grander.
“Ultimately,” Olsen said, “we want to understand how this visual information in the mouse’s brain gets used to guide behavior and memory and cognition.”
Right now, the mice just watch screens. But by training them to perform tasks based on what they see, he said they hope to crack the mysteries of memory, decision-making, and problem-solving. Another parallel observatory created using electrode arrays instead of light through windows will add new levels of richness to their data.
So the underlying code of mouse — and human — brains remains largely a mystery, but the map that we’ll need to unlock it grows richer by the day.
ORIGINAL: Tech Insider

Jul. 13, 2016

First Human Tests of Memory Boosting Brain Implant—a Big Leap Forward

By Hugo Angel,

You have to begin to lose your memory, if only bits and pieces, to realize that memory is what makes our lives. Life without memory is no life at all.” — Luis Buñuel Portolés, Filmmaker
Image Credit: Shutterstock.com
Every year, hundreds of millions of people experience the pain of a failing memory.
The reasons are many:

  • traumatic brain injury, which haunts a disturbingly high number of veterans and football players; 
  • stroke or Alzheimer’s disease, which often plagues the elderly; or 
  • even normal brain aging, which inevitably touches us all.
Memory loss seems to be inescapable. But one maverick neuroscientist is working hard on an electronic cure. Funded by DARPA, Dr. Theodore Berger, a biomedical engineer at the University of Southern California, is testing a memory-boosting implant that mimics the kind of signal processing that occurs when neurons are laying down new long-term memories.
The revolutionary implant, already shown to help memory encoding in rats and monkeys, is now being tested in human patients with epilepsy — an exciting first that may blow the field of memory prosthetics wide open.
To get here, however, the team first had to crack the memory code.

Deciphering Memory
From the very onset, Berger knew he was facing a behemoth of a problem.
“We weren’t looking to match everything the brain does when it processes memory, but to at least come up with a decent mimic,” said Berger.
“Of course people asked: can you model it and put it into a device? Can you get that device to work in any brain? It’s those things that lead people to think I’m crazy. They think it’s too hard,” he said.
But the team had a solid place to start.
The hippocampus, a region buried deep within the folds and grooves of the brain, is the critical gatekeeper that transforms memories from short-lived to long-term. In dogged pursuit, Berger spent most of the last 35 years trying to understand how neurons in the hippocampus accomplish this complicated feat.
“At its heart, a memory is a series of electrical pulses that occur over time that are generated by a given number of neurons,” said Berger. “This is important – it suggests that we can reduce it to mathematical equations and put it into a computational framework,” he said.
Berger hasn’t been alone in his quest.
By listening to the chatter of neurons as an animal learns, teams of neuroscientists have begun to decipher the flow of information within the hippocampus that supports memory encoding. Key to this process is a strong electrical signal that travels from CA3, the “input” part of the hippocampus, to CA1, the “output” node.
“This signal is impaired in people with memory disabilities,” said Berger, “so of course we thought if we could recreate it using silicon, we might be able to restore – or even boost – memory.”

Bridging the Gap
Yet the brain’s memory code proved to be extremely tough to crack.
The problem lies in the non-linear nature of neural networks: signals are often noisy and constantly overlap in time, which leads to some inputs being suppressed or accentuated. In a network of hundreds and thousands of neurons, any small change could be greatly amplified and lead to vastly different outputs.
“It’s a chaotic black box,” laughed Berger.
With the help of modern computing techniques, however, Berger believes he may have a crude solution in hand. His proof?
Use his mathematical theorems to program a chip, and then see if the brain accepts the chip as a replacement — or additional — memory module.
Berger and his team began with a simple task using rats. They trained the animals to push one of two levers to get a tasty treat, and recorded the series of CA3 to CA1 electronic pulses in the hippocampus as the animals learned to pick the correct lever. The team carefully captured the way the signals were transformed as the session was laid down into long-term memory, and used that information — the electrical “essence” of the memory — to program an external memory chip.
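
The record-then-model step can be sketched in miniature. Berger’s actual model is a nonlinear multi-input, multi-output system fitted to spike trains; the sketch below substitutes the simplest possible stand-in – a linear mapping fitted by least squares on simulated data – purely to show the shape of the idea: record paired CA3 input and CA1 output, then fit a transformation that predicts one from the other.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated recordings: 200 time steps of activity from
# 5 "CA3" input channels and 3 "CA1" output channels.
ca3 = rng.random((200, 5))
true_map = rng.random((5, 3))   # the unknown biological transformation
ca1 = ca3 @ true_map            # the observed CA1 activity

# Fit the CA3 -> CA1 mapping from the recordings alone.
fitted_map, *_ = np.linalg.lstsq(ca3, ca1, rcond=None)

# The fitted mapping could then drive stimulation of CA1
# when the biological pathway is disrupted.
predicted_ca1 = ca3 @ fitted_map
print(np.allclose(predicted_ca1, ca1, atol=1e-8))  # → True
```

Real spike data is noisy and nonlinear, which is why Berger’s team needed far more sophisticated models, but the workflow – capture input-output pairs, fit the transformation, replay it through electrodes – is the same.
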
They then injected the animals with a drug that temporarily disrupted their ability to form and access long-term memories, causing the animals to forget the reward-associated lever. Next, implanting microelectrodes into the hippocampus, the team pulsed CA1, the output region, with their memory code.
The results were striking — powered by an external memory module, the animals regained their ability to pick the right lever.
Encouraged by the results, Berger next tried his memory implant in monkeys, this time focusing on a brain region called the prefrontal cortex, which receives and modulates memories encoded by the hippocampus.
Placing electrodes into the monkeys’ brains, the team showed the animals a series of semi-repeated images, and captured the prefrontal cortex’s activity when the animals recognized an image they had seen earlier. Then, with a hefty dose of cocaine, the team inhibited that particular brain region, which disrupted the animals’ recall.
Next, using electrodes programmed with the “memory code,” the researchers guided the brain’s signal processing back on track — and the animal’s performance improved significantly.
A year later, the team further validated their memory implant by showing it could also rescue memory deficits due to hippocampal malfunction in the monkey brain.

A Human Memory Implant
Last year, the team cautiously began testing their memory implant prototype in human volunteers.
Because of the risks associated with brain surgery, the team recruited 12 patients with epilepsy who already had electrodes implanted in their brains to track down the source of their seizures.
Repeated seizures steadily destroy critical parts of the hippocampus needed for long-term memory formation, explained Berger. So if the implant works, it could benefit these patients as well.
The team asked the volunteers to look through a series of pictures, and then recall which ones they had seen 90 seconds later. As the participants learned, the team recorded the firing patterns in both CA1 and CA3 — that is, the input and output nodes.
Using these data, the team extracted an algorithm — a specific human “memory code” — that could predict the pattern of activity in CA1 cells based on CA3 input. Compared to the brain’s actual firing patterns, the algorithm generated correct predictions roughly 80% of the time.
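Berger's actual algorithm is a nonlinear multi-input, multi-output model; as a rough sketch of the general idea only, one can train a simple per-cell classifier to map binary "CA3" input patterns to "CA1" output patterns and then score its predictions. Everything below (the dimensions, the perceptron rule, the synthetic data) is an assumption for illustration, not the team's method.

```python
import random

# Toy version of the idea (not Berger's actual model): learn a mapping from
# CA3 input patterns to CA1 output patterns, then score how often the model
# predicts the recorded output. The "recordings" here are synthetic,
# generated from a hidden threshold rule that the model must recover.

random.seed(0)
N_IN, N_OUT, N_SAMPLES = 8, 4, 200

hidden_w = [[random.choice([-1, 1]) for _ in range(N_IN)] for _ in range(N_OUT)]

def true_output(x):  # the "real" CA3 -> CA1 transformation (synthetic)
    return [1 if sum(w * xi for w, xi in zip(row, x)) > 0 else 0
            for row in hidden_w]

data = [[random.randint(0, 1) for _ in range(N_IN)] for _ in range(N_SAMPLES)]
targets = [true_output(x) for x in data]

# Train one perceptron per output cell.
W = [[0.0] * N_IN for _ in range(N_OUT)]
b = [0.0] * N_OUT
for _ in range(50):  # epochs
    for x, y in zip(data, targets):
        for j in range(N_OUT):
            pred = 1 if sum(w * xi for w, xi in zip(W[j], x)) + b[j] > 0 else 0
            err = y[j] - pred
            if err:
                W[j] = [w + err * xi for w, xi in zip(W[j], x)]
                b[j] += err

correct = sum(
    1 if (sum(w * xi for w, xi in zip(W[j], x)) + b[j] > 0) == bool(y[j]) else 0
    for x, y in zip(data, targets) for j in range(N_OUT))
accuracy = correct / (N_SAMPLES * N_OUT)
print(f"prediction accuracy: {accuracy:.0%}")
```

The real problem is far harder: actual firing is noisy and nonlinear in time, which is why even an 80% prediction rate is a notable result.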
It’s not perfect, said Berger, but it’s a good start.
Using this algorithm, the researchers have begun to stimulate the output cells with an approximation of the transformed input signal.
“We have already used the pattern to zap the brain of one woman with epilepsy,” said Dr. Dong Song, an associate professor working with Berger. But he remained coy about the result, saying only that although promising, it’s still too early to tell.
Song’s caution is warranted. Unlike the motor cortex, with its clear structured representation of different body parts, the hippocampus is not organized in any obvious way.
“It’s hard to understand why stimulating input locations can lead to predictable results,” said Dr. Thomas McHugh, a neuroscientist at the RIKEN Brain Science Institute. It’s also difficult to tell whether such an implant could save the memory of those who suffer from damage to the output node of the hippocampus.
“That said, the data is convincing,” McHugh acknowledged.
Berger, on the other hand, is ecstatic. “I never thought I’d see this go into humans,” he said.
But the work is far from done. Within the next few years, Berger wants to see whether the chip can help build long-term memories in a variety of different situations. After all, the algorithm was based on the team’s recordings of one specific task — what if the so-called memory code is not generalizable, instead varying based on the type of input that it receives?
Berger acknowledges that it’s a possibility, but he remains hopeful.
“I do think that we will find a model that’s a pretty good fit for most conditions,” he said. After all, the brain is restricted by its own biophysics — there are only so many ways that electrical signals in the hippocampus can be processed, he said.
“The goal is to improve the quality of life for somebody who has a severe memory deficit,” said Berger. “If I can give them the ability to form new long-term memories for half the conditions that most people live in, I’ll be happy as hell, and so will be most patients.”
ORIGINAL: Singularity Hub

Memory capacity of brain is 10 times more than previously thought

By Hugo Angel,

Data from the Salk Institute shows brain’s memory capacity is in the petabyte range, as much as entire Web

LA JOLLA—Salk researchers and collaborators have achieved critical insight into the size of neural connections, putting the memory capacity of the brain far higher than common estimates. The new work also answers a longstanding question as to how the brain is so energy efficient and could help engineers build computers that are incredibly powerful but also conserve energy.
“This is a real bombshell in the field of neuroscience,” said Terry Sejnowski from the Salk Institute for Biological Studies. “Our new measurements of the brain’s memory capacity increase conservative estimates by a factor of 10, to at least a petabyte (10^15 bytes, or 1,000 terabytes) — in the same ballpark as the World Wide Web.”
Our memories and thoughts are the result of patterns of electrical and chemical activity in the brain. A key part of the activity happens when branches of neurons, much like electrical wire, interact at certain junctions, known as synapses. An output ‘wire’ (an axon) from one neuron connects to an input ‘wire’ (a dendrite) of a second neuron. Signals travel across the synapse as chemicals called neurotransmitters to tell the receiving neuron whether to convey an electrical signal to other neurons. Each neuron can have thousands of these synapses with thousands of other neurons.
“When we first reconstructed every dendrite, axon, glial process, and synapse from a volume of hippocampus the size of a single red blood cell, we were somewhat bewildered by the complexity and diversity amongst the synapses,” says Kristen Harris, co-senior author of the work and professor of neuroscience at the University of Texas, Austin. “While I had hoped to learn fundamental principles about how the brain is organized from these detailed reconstructions, I have been truly amazed at the precision obtained in the analyses of this report.”
Synapses are still a mystery, though their dysfunction can cause a range of neurological diseases. Larger synapses—with more surface area and vesicles of neurotransmitters—are stronger, making them more likely to activate their surrounding neurons than medium or small synapses.
The Salk team, while building a 3D reconstruction of rat hippocampus tissue (the memory center of the brain), noticed something unusual. In some cases, a single axon from one neuron formed two synapses reaching out to a single dendrite of a second neuron, signifying that the first neuron seemed to be sending a duplicate message to the receiving neuron.
At first, the researchers didn’t think much of this duplicity, which occurs about 10 percent of the time in the hippocampus. But Tom Bartol, a Salk staff scientist, had an idea: if they could measure the difference between two very similar synapses such as these, they might glean insight into synaptic sizes, which so far had only been classified in the field as small, medium and large.
In a computational reconstruction of brain tissue in the hippocampus, Salk scientists and UT-Austin scientists found the unusual occurrence of two synapses from the axon of one neuron (translucent black strip) forming onto two spines on the same dendrite of a second neuron (yellow). Separate terminals from one neuron’s axon are shown in synaptic contact with two spines (arrows) on the same dendrite of a second neuron in the hippocampus. The spine head volumes, synaptic contact areas (red), neck diameters (gray) and number of presynaptic vesicles (white spheres) of these two synapses are almost identical. Credit: Salk Institute
To do this, researchers used advanced microscopy and computational algorithms they had developed to image rat brains and reconstruct the connectivity, shapes, volumes and surface area of the brain tissue down to a nanomolecular level.
The scientists expected the synapses would be roughly similar in size, but were surprised to discover the synapses were nearly identical.
“We were amazed to find that the difference in the sizes of the pairs of synapses was very small — on average, only about 8 percent different in size,” said Tom Bartol, one of the scientists. “No one thought it would be such a small difference. This was a curveball from nature.”
Because the memory capacity of neurons is dependent upon synapse size, this eight percent difference turned out to be a key number the team could then plug into their algorithmic models of the brain to measure how much information could potentially be stored in synaptic connections.
It was known before that the range in sizes between the smallest and largest synapses was a factor of 60 and that most are small.
But armed with the knowledge that synapse sizes could differ by increments as small as eight percent across that 60-fold range, the team determined there could be about 26 categories of synapse sizes, rather than just a few.
“Our data suggests there are 10 times more discrete sizes of synapses than previously thought,” says Bartol. In computer terms, 26 sizes of synapses correspond to about 4.7 “bits” of information. Previously, it was thought that the brain was capable of just one to two bits for short- and long-term memory storage in the hippocampus.
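The 4.7-bits figure is just the base-2 logarithm of the number of distinguishable sizes:

```python
import math

# The arithmetic behind the figures quoted above: n distinguishable
# synaptic sizes can encode log2(n) bits of information per synapse.

def bits_per_synapse(n_sizes):
    return math.log2(n_sizes)

print(round(bits_per_synapse(26), 1))  # 4.7 bits, the new estimate
print(round(bits_per_synapse(4), 1))   # 2.0 bits, the older "few sizes" view
```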
“This is roughly an order of magnitude of precision more than anyone has ever imagined,” said Sejnowski.
What makes this precision puzzling is that hippocampal synapses are notoriously unreliable. When a signal travels from one neuron to another, it typically activates that second neuron only 10 to 20 percent of the time.
“We had often wondered how the remarkable precision of the brain can come out of such unreliable synapses,” says Bartol. One answer, it seems, is in the constant adjustment of synapses, averaging out their success and failure rates over time. The team used their new data and a statistical model to find out how many signals it would take a pair of synapses to get to that eight percent difference.
The researchers calculated that
  • for the smallest synapses, about 1,500 signaling events (roughly 20 minutes’ worth) are needed to cause a change in their size and strength, and
  • for the largest synapses, only a couple hundred signaling events (1 to 2 minutes’ worth) cause a change.
“This means that every 2 or 20 minutes, your synapses are going up or down to the next size,” said Bartol. “The synapses are adjusting themselves according to the signals they receive.”
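The averaging idea can be sketched in a few lines (a toy model; the event counts echo the article's numbers, everything else is illustrative): a single synapse transmits unreliably, but averaging over many events recovers a stable estimate of its true strength.

```python
import random

# Toy model of an unreliable synapse: it transmits with probability
# p_transmit, and averaging over many signaling events recovers that
# probability with increasing precision.

random.seed(42)

def estimate_strength(p_transmit, n_events):
    successes = sum(1 for _ in range(n_events) if random.random() < p_transmit)
    return successes / n_events

true_p = 0.20                            # fires 20% of the time
few = estimate_strength(true_p, 10)      # a handful of events: noisy
many = estimate_strength(true_p, 1500)   # ~20 minutes of events: stable
print(f"after 10 events: {few:.2f}; after 1500 events: {many:.3f}")
```

With 1,500 events, the estimate reliably lands within a few percent of the true probability, which is the kind of margin an 8-percent size adjustment requires.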
From left: Terry Sejnowski, Cailey Bromer and Tom Bartol. Credit: Salk Institute
“Our prior work had hinted at the possibility that spines and axons that synapse together would be similar in size, but the reality of the precision is truly remarkable and lays the foundation for whole new ways to think about brains and computers,” says Harris. “The work resulting from this collaboration has opened a new chapter in the search for learning and memory mechanisms.” Harris adds that the findings suggest more questions to explore, for example, whether similar rules apply for synapses in other regions of the brain and how those rules differ during development and as synapses change during the initial stages of learning.
The implications of what we found are far-reaching. Hidden under the apparent chaos and messiness of the brain is an underlying precision to the size and shapes of synapses that was hidden from us.
The findings also offer a valuable explanation for the brain’s surprising efficiency. The waking adult brain generates only about 20 watts of continuous power—as much as a very dim light bulb. The Salk discovery could help computer scientists build ultra-precise but energy-efficient computers, particularly ones that employ deep learning and neural nets techniques capable of sophisticated learning and analysis, such as speech, object recognition and translation.
“This trick of the brain absolutely points to a way to design better computers,” said Sejnowski. “Using probabilistic transmission turns out to be just as accurate and to require much less energy, for both computers and brains.”
Other authors on the paper were Cailey Bromer of the Salk Institute; Justin Kinney of the McGovern Institute for Brain Research; and Michael A. Chirillo and Jennifer N. Bourne of the University of Texas, Austin.
The work was supported by the NIH and the Howard Hughes Medical Institute.
ORIGINAL: Salk.edu
January 20, 2016

Bridging the Bio-Electronic Divide

By Hugo Angel,

New effort aims for fully implantable devices able to connect with up to one million neurons
A new DARPA program aims to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. The interface would serve as a translator, converting between the electrochemical language used by neurons in the brain and the ones and zeros that constitute the language of information technology. The goal is to achieve this communications link in a biocompatible device no larger than one cubic centimeter in size, roughly the volume of two nickels stacked back to back.
The program, Neural Engineering System Design (NESD), stands to dramatically enhance research capabilities in neurotechnology and provide a foundation for new therapies.
“Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem,” said Phillip Alvelda, the NESD program manager. “Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics.”
Among the program’s potential applications are devices that could compensate for deficits in sight or hearing by feeding digital auditory or visual information into the brain at a resolution and experiential quality far higher than is possible with current technology.
Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that can communicate clearly and individually with any of up to one million neurons in a given region of the brain.
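A small illustration of why channel-level aggregation is imprecise: an electrode that sees only the summed activity of many neurons cannot distinguish different neuron-level patterns that happen to have the same total.

```python
# A channel that aggregates many neurons reports only their summed
# activity, so distinct neuron-level patterns become indistinguishable.

pattern_a = [1, 0, 1, 0, 1, 0]  # neurons 0, 2, 4 firing
pattern_b = [0, 1, 0, 1, 0, 1]  # a completely disjoint set of neurons

channel_a = sum(pattern_a)  # what an aggregating electrode "sees"
channel_b = sum(pattern_b)
print(channel_a == channel_b)  # True: different patterns, identical readout
```

Communicating individually with each of a million neurons, as NESD proposes, would preserve exactly the per-neuron identity that summation throws away.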
Achieving the program’s ambitious goals and ensuring that the envisioned devices will have the potential to be practical outside of a research setting will require integrated breakthroughs across numerous disciplines including 
  • neuroscience, 
  • synthetic biology, 
  • low-power electronics, 
  • photonics, 
  • medical device packaging and manufacturing, systems engineering, and 
  • clinical testing.
In addition to the program’s hardware challenges, NESD researchers will be required to develop advanced mathematical and neuro-computation techniques to first transcode high-definition sensory information between electronic and cortical neuron representations and then compress and represent those data with minimal loss of fidelity and functionality.
To accelerate that integrative process, the NESD program aims to recruit a diverse roster of leading industry stakeholders willing to offer state-of-the-art prototyping and manufacturing services and intellectual property to NESD researchers on a pre-competitive basis. In later phases of the program, these partners could help transition the resulting technologies into research and commercial application spaces.
To familiarize potential participants with the technical objectives of NESD, DARPA will host a Proposers Day meeting that runs Tuesday and Wednesday, February 2-3, 2016, in Arlington, Va. The Special Notice announcing the Proposers Day meeting is available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-16-16/listing.html. More details about the Industry Group that will support NESD are available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-16-17/listing.html. A Broad Agency Announcement describing the specific capabilities sought will be forthcoming on www.fbo.gov.
NESD is part of a broader portfolio of programs within DARPA that support President Obama’s brain initiative. For more information about DARPA’s work in that domain, please visit: http://www.darpa.mil/program/our-research/darpa-and-the-brain-initiative.
ORIGINAL: DARPA
[email protected]
1/19/2016

Scientists have discovered brain networks linked to intelligence for the first time

By Hugo Angel,

And we may even be able to manipulate them.
For the first time ever, scientists have identified clusters of genes in the brain that are believed to be linked to human intelligence.
The two clusters, called M1 and M3, are networks each consisting of hundreds of individual genes, and are thought to influence our cognitive functions, including
  • memory,
  • attention,
  • processing speed, and
  • reasoning.
Most provocatively, the researchers who identified M1 and M3 say that these clusters are probably under the control of master switches that regulate how the gene networks function. If this hypothesis is correct and scientists can indeed find these switches, we might even be able to manipulate our genetic intelligence and boost our cognitive capabilities.
“We know that genetics plays a major role in intelligence but until now haven’t known which genes are relevant,” said neurologist Michael Johnson, at Imperial College London in the UK. “This research highlights some of the genes involved in human intelligence, and how they interact with each other.”
The researchers made their discovery by examining the brains of patients who had undergone neurosurgery for the treatment of epilepsy. They analysed thousands of genes expressed in the brain and combined the findings with two sets of data: genetic information from healthy people who had performed IQ tests, and from people with neurological disorders and intellectual disability.
Comparing the results, the researchers discovered that some of the genes that influence human intelligence in healthy people can also cause significant neurological problems if they end up mutating.
“Traits such as intelligence are governed by large groups of genes working together – like a football team made up of players in different positions,” said Johnson. “We used computer analysis to identify the genes in the human brain that work together to influence our cognitive ability to make new memories or sensible decisions when faced with lots of complex information. We found that some of these genes overlap with those that cause severe childhood onset epilepsy or intellectual disability.”
The research, which is reported in Nature Neuroscience, is at an early stage, but the authors believe their analysis could have a significant impact – not only on how we understand and treat brain diseases, but one day perhaps altering brainpower itself.
“Eventually, we hope that this sort of analysis will provide new insights into better treatments for neurodevelopmental diseases such as epilepsy, and ameliorate or treat the cognitive impairments associated with these devastating diseases,” said Johnson. “Our research suggests that it might be possible to work with these genes to modify intelligence, but that is only a theoretical possibility at the moment – we have just taken a first step along that road.”
ORIGINAL: Science Alert
PETER DOCKRILL
22 DEC 2015

IBM’s SystemML machine learning system becomes Apache Incubator project

By Hugo Angel,

There’s a race between tech giants to open source machine learning systems and become a dominant platform. Apache SystemML has clear enterprise spin.
IBM on Monday said its machine learning system, dubbed SystemML, has been accepted as an open source project by the Apache Incubator.
The Apache Incubator is an entry to becoming a project of The Apache Software Foundation. The general idea behind the incubator is to ensure code donations adhere to Apache’s legal guidelines and communities follow guiding principles.
IBM said it would donate SystemML as an open source project in June.
What’s notable about IBM’s SystemML milestone is that open sourcing machine learning systems is becoming a trend. To wit:
For enterprises, the upshot is that there will be a bevy of open source machine learning code bases to consider. Google TensorFlow and Facebook Torch are tools to train neural networks. SystemML is aimed at broadening the ecosystem to business use.
Why are tech giants going open source with their machine learning tools?
The machine learning platform that gets the most data will learn faster and then become more powerful. That cycle will just result in more data to ingest. IBM is looking to work the enterprise angle on machine learning. Microsoft may be another entry on the enterprise side, but may not go the Apache route.
In addition, there are precedents to how open sourcing big analytics ideas can pay off. MapReduce and Hadoop started as open source projects and would be a cousin of whatever Apache machine learning system wins out.
IBM’s SystemML, which is now Apache SystemML, is used to create industry specific machine learning algorithms for enterprise data analysis. IBM created SystemML so it could write one codebase that could apply to multiple industries and platforms. If SystemML can scale, IBM’s Apache move could provide a gateway to its other analytics wares.
The Apache SystemML project has included more than 320 patches covering everything from APIs to data ingestion and documentation, more than 90 contributions to Apache Spark, and 15 additional organizations adding to the SystemML engine.
Here’s the full definition of the Apache SystemML project:
SystemML provides declarative large-scale machine learning (ML) that aims at flexible specification of ML algorithms and automatic generation of hybrid runtime plans ranging from single node, in-memory computations, to distributed computations on Apache Hadoop and Apache Spark. ML algorithms are expressed in an R or Python syntax that includes linear algebra primitives, statistical functions, and ML-specific constructs. This high-level language significantly increases the productivity of data scientists as it provides (1) full flexibility in expressing custom analytics, and (2) data independence from the underlying input formats and physical data representations. Automatic optimization according to data characteristics such as distribution on the disk file system, and sparsity as well as processing characteristics in the distributed environment like number of nodes, CPU, memory per node, ensures both efficiency and scalability.
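The declarative style described above can be suggested in plain Python (illustrative only, not actual SystemML DML syntax): the algorithm, here ordinary least squares for a single feature, is written as math, with no reference to how or where the underlying data is stored.

```python
# Illustrative only (plain Python, not SystemML's DML language): the
# algorithm is expressed as closed-form math, independent of the physical
# data representation -- the property the SystemML definition emphasizes.

def linear_regression(xs, ys):
    """Closed-form least squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum(x * y for x, y in zip(xs, ys)) - n * mean_x * mean_y) / (
        sum(x * x for x in xs) - n * mean_x * mean_x)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Data drawn exactly from y = 2x + 1:
print(linear_regression([0, 1, 2, 3], [1, 3, 5, 7]))  # (2.0, 1.0)
```

In SystemML, a specification like this would be compiled into an execution plan, single-node or distributed, chosen automatically from the data's size and sparsity.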
ORIGINAL: ZDNet
November 23, 2015

Network of artificial neurons learns to use language

By Hugo Angel,

A network of artificial neurons has learned how to use language.
Researchers from the universities of Sassari and Plymouth found that their cognitive model, made up of two million interconnected artificial neurons, was able to learn to use language without any prior knowledge.
The model is called the Artificial Neural Network with Adaptive Behaviour Exploited for Language Learning — or the slightly catchier Annabell for short. Researchers hope Annabell will help shed light on the cognitive processes that underpin language development. 
Annabell has no pre-coded knowledge of language, and learned through communication with a human interlocutor. 
“The system is capable of learning to communicate through natural language starting from tabula rasa, without any prior knowledge of the structure of phrases, meaning of words [or] role of the different classes of words, and only by interacting with a human through a text-based interface,” researchers said.
“It is also able to learn nouns, verbs, adjectives, pronouns and other word classes, and to use them in expressive language.”
Annabell was able to learn due to two functional mechanisms — synaptic plasticity and neural gating, both of which are present in the human brain.

  • Synaptic plasticity: refers to the brain’s ability to increase the efficiency of a connection when the two neurons it links are activated simultaneously, and is linked to learning and memory.
  • Neural gating mechanisms: play an important role in the cortex by modulating neurons, behaving like ‘switches’ that turn particular behaviours on and off. When turned on, they transmit a signal; when off, they block the signal. Annabell is able to learn using these mechanisms, as the flow of information fed into the system is controlled in different areas.
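As a toy sketch of how these two mechanisms combine (an illustration, not Annabell's actual implementation; all names and numbers are made up):

```python
# Toy synapse illustrating the two mechanisms described above:
# - synaptic plasticity: the weight grows when the two neurons it links
#   fire together (a simple Hebbian rule)
# - neural gating: a switch that passes or blocks the signal entirely

class GatedSynapse:
    def __init__(self, weight=0.5, rate=0.1):
        self.weight, self.rate, self.gate_open = weight, rate, True

    def transmit(self, pre_active):
        if not self.gate_open:          # gating: signal is blocked
            return 0.0
        return self.weight if pre_active else 0.0

    def hebbian_update(self, pre_active, post_active):
        if pre_active and post_active:  # plasticity: coincident activity
            self.weight += self.rate    # strengthens the connection

s = GatedSynapse()
s.hebbian_update(True, True)
print(s.transmit(True))   # 0.6: the strengthened signal passes
s.gate_open = False
print(s.transmit(True))   # 0.0: same synapse, gated off
```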
“The results show that, compared to previous cognitive neural models of language, the Annabell model is able to develop a broad range of functionalities, starting from a tabula rasa condition,” researchers said in their conclusion.
The current version of the system sets the scene for subsequent experiments on the fluidity of the brain and its robustness. It could lead to the extension of the model for handling the developmental stages in the grounding and acquisition of language.
ORIGINAL: Wired – UK
13 NOVEMBER 15 

How Your Brain Is Wired Reveals the Real You

By Hugo Angel,

The Human Connectome Project finds surprising correlations between brain architecture and behavior
The brain’s wiring patterns can shed light on a person’s positive and negative traits, researchers report in Nature Neuroscience. The finding, published on September 28, is the first from the Human Connectome Project (HCP), an international effort to map active connections between neurons in different parts of the brain.
The HCP, which launched in 2010 at a cost of US$40 million, seeks to scan the brain networks, or connectomes, of 1,200 adults. Among its goals is to chart the networks that are active when the brain is idle; these are thought to keep the different parts of the brain connected in case they need to perform a task.
In April, a branch of the project led by one of the HCP’s co-chairs, biomedical engineer Stephen Smith at the University of Oxford, UK, released a database of resting-state connectomes from about 460 people between 22 and 35 years old. Each brain scan is supplemented by information on approximately 280 traits, such as the person’s age, whether they have a history of drug use, their socioeconomic status and personality traits, and their performance on various intelligence tests.
Axis of connectivity
Smith and his colleagues ran a massive computer analysis to look at how these traits varied among the volunteers, and how the traits correlated with different brain connectivity patterns. The team was surprised to find a single, stark difference in the way brains were connected. People with more ‘positive’ variables, such as more education, better physical endurance and above-average performance on memory tests, shared the same patterns. Their brains seemed to be more strongly connected than those of people with ‘negative’ traits such as smoking, aggressive behaviour or a family history of alcohol abuse.
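The study's actual analysis was a large multivariate one across hundreds of traits, but the core operation, relating a trait score to a connectivity-strength score, can be sketched with plain Pearson correlation on made-up data:

```python
import math

# Pared-down sketch of the idea (the real study ran a much larger
# multivariate analysis): correlate a per-person "positive trait" score
# with a per-person overall connectivity score. All data here is made up.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

trait_score = [2.0, 1.5, 0.3, -0.8, -1.9, -2.1]   # hypothetical scores
connectivity = [0.9, 0.8, 0.5, 0.4, 0.2, 0.1]     # hypothetical strengths
print(round(pearson(trait_score, connectivity), 2))  # ~0.99: strong link
```

A strong positive correlation like this is what the "positive-negative axis" describes, although, as Raichle notes below, correlation alone cannot say which way the causality runs.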
Marcus Raichle, a neuroscientist at Washington University in St Louis, Missouri, is impressed that the activity and anatomy of the brains alone were enough to reveal this ‘positive-negative’ axis. “You can distinguish people with successful traits and successful lives versus those who are not so successful,” he says.
But Raichle says that it is impossible to determine from this study how different traits relate to one another and whether the weakened brain connections are the cause or effect of negative traits. And although the patterns are clear across the large group of HCP volunteers, it might be some time before these connectivity patterns could be used to predict risks and traits in a given individual. Deanna Barch, a psychologist at Washington University who co-authored the latest study, says that once these causal relationships are better understood, it might be possible to push brains toward the ‘good’ end of the axis.
Van Wedeen, a neuroscientist at Massachusetts General Hospital in Boston, says that the findings could help to prioritize future research. For instance, one of the negative traits that pulled a brain farthest down the negative axis was marijuana use in recent weeks. Wedeen says that the finding emphasizes the importance of projects such as one launched by the US National Institute on Drug Abuse last week, which will follow 10,000 adolescents for 10 years to determine how marijuana and other drugs affect their brains.
Wedeen finds it interesting that the wiring patterns associated with people’s general intelligence scores were not exactly the same as the patterns for individual measures of cognition—people with good hand–eye coordination, for instance, fell farther down the negative axis than did those with good verbal memory. This suggests that the biology underlying cognition might be more complex than our current definition of general intelligence, and that it could be influenced by demographic and behavioural factors. “Maybe it will cause us to reconsider what [the test for general intelligence] is measuring,” he says. “We have a new mystery now.”
Much more connectome data should emerge in the next few years. The Harvard Aging Brain Study, for instance, is measuring active brain connections in 284 people aged between 65 and 90, and released its first data earlier this year. And Smith is running the Developing Human Connectome Project in the United Kingdom, which is imaging the brains of 1,200 babies before and after birth. He expects to release its first data in the next few months. Meanwhile, the HCP is analysing genetic data from its participants, which include a large number of identical and fraternal twins, to determine how genetic and environmental factors relate to brain connectivity patterns.
This article is reproduced with permission and was first published on September 28, 2015.
September 28, 2015