Category: Brain


A Giant Neuron Has Been Found Wrapped Around the Entire Circumference of the Brain

By Hugo Angel,

Allen Institute for Brain Science

This could be where consciousness forms. For the first time, scientists have detected a giant neuron wrapped around the entire circumference of a mouse’s brain, and it’s so densely connected across both hemispheres that it could finally explain the origins of consciousness.

Using a new imaging technique, the team detected the giant neuron emanating from one of the best-connected regions in the brain, and say it could be coordinating signals from different areas to create conscious thought.

This recently discovered neuron is one of three that have been detected for the first time in a mammal’s brain, and the new imaging technique could help us figure out if similar structures have gone undetected in our own brains for centuries.

At a recent meeting of the Brain Research through Advancing Innovative Neurotechnologies initiative in Maryland, a team from the Allen Institute for Brain Science described how all three neurons stretch across both hemispheres of the brain, but the largest one wraps around the organ’s circumference like a “crown of thorns”.
You can see them highlighted in the image at the top of the page.

Lead researcher Christof Koch told Sara Reardon at Nature that they’ve never seen neurons extend so far across both regions of the brain before.
Oddly enough, all three giant neurons happen to emanate from a part of the brain that’s shown intriguing connections to human consciousness in the past – the claustrum, a thin sheet of grey matter that could be the most connected structure in the entire brain, based on volume.

This relatively small region is hidden between the inner surface of the neocortex in the centre of the brain, and communicates with almost all regions of the cortex to achieve many higher cognitive functions, such as:

  • language,
  • long-term planning, and
  • advanced sensory tasks such as seeing and hearing.

“Advanced brain-imaging techniques that look at the white matter fibres coursing to and from the claustrum reveal that it is a neural Grand Central Station,” Koch wrote for Scientific American back in 2014. “Almost every region of the cortex sends fibres to the claustrum.”

The claustrum is so densely connected to several crucial areas in the brain that Francis Crick of DNA double helix fame referred to it as a “conductor of consciousness” in a 2005 paper co-written with Koch.

They suggested that it connects all of our external and internal perceptions together into a single unifying experience, like a conductor synchronises an orchestra, and strange medical cases in the past few years have only made their case stronger.

Back in 2014, a 54-year-old woman checked into the George Washington University Medical Faculty Associates in Washington, DC, for epilepsy treatment.

This involved gently probing various regions of her brain with electrodes to narrow down the potential source of her epileptic seizures, but when the team started stimulating the woman’s claustrum, they found they could effectively ‘switch’ her consciousness off and on again.

Helen Thomson reported for New Scientist at the time:
“When the team zapped the area with high frequency electrical impulses, the woman lost consciousness. She stopped reading and stared blankly into space, she didn’t respond to auditory or visual commands and her breathing slowed.

As soon as the stimulation stopped, she immediately regained consciousness with no memory of the event. The same thing happened every time the area was stimulated during two days of experiments.”

According to Koch, who was not involved in the study, this kind of abrupt and specific ‘stopping and starting’ of consciousness had never been seen before.

Another experiment in 2015 examined the effects of claustrum lesions on the consciousness of 171 combat veterans with traumatic brain injuries.

They found that claustrum damage was associated with the duration, but not frequency, of loss of consciousness, suggesting that it could play an important role in the switching on and off of conscious thought, but another region could be involved in maintaining it.

And now Koch and his team have discovered extensive neurons in mouse brains emanating from this mysterious region.

In order to map neurons, researchers usually have to inject individual nerve cells with a dye, cut the brain into thin sections, and then trace the neuron’s path by hand.

It’s a surprisingly rudimentary technique for a neuroscientist to have to perform, and given that they have to destroy the brain in the process, it’s not one that can be done regularly on human brains.

Koch and his team wanted to come up with a technique that was less invasive, and engineered mice that could have specific genes in their claustrum neurons activated by a specific drug.

“When the researchers fed the mice a small amount of the drug, only a handful of neurons received enough of it to switch on these genes,” Reardon reports for Nature.

“That resulted in production of a green fluorescent protein that spread throughout the entire neuron.” The team then took 10,000 cross-sectional images of the mouse brain, and used a computer program to create a 3D reconstruction of just three glowing cells.
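The reconstruction step described here, stacking thousands of cross-sectional images and isolating the fluorescently labelled cells, can be illustrated with a short sketch. This is a toy stand-in rather than the team's actual software; the array sizes, threshold, and function name are all hypothetical.

```python
import numpy as np

def reconstruct_volume(sections, threshold=0.5):
    """Stack 2D cross-sections into a 3D volume and flag the voxels
    bright enough to belong to fluorescently labelled neurons."""
    volume = np.stack(sections, axis=0)   # shape: (n_sections, height, width)
    labelled = volume > threshold         # boolean mask of bright voxels
    return volume, labelled

# Stand-in for the 10,000 imaged sections (here just 100 random frames).
rng = np.random.default_rng(0)
sections = [rng.random((64, 64)) for _ in range(100)]
volume, labelled = reconstruct_volume(sections)
print(volume.shape)  # (100, 64, 64)
```

In the real pipeline, the bright voxels would then be traced through the stack to reconstruct each glowing cell in three dimensions.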

We should keep in mind that just because these new giant neurons are connected to the claustrum doesn’t mean that Koch’s hypothesis about consciousness is correct – we’re a long way from proving that yet.

It’s also important to note that these neurons have only been detected in mice so far, and the research has yet to be published in a peer-reviewed journal, so we need to wait for further confirmation before we can really delve into what this discovery could mean for humans.

But the discovery is an intriguing piece of the puzzle that could help us make sense of this crucial but enigmatic region of the brain, and how it could relate to the human experience of conscious thought.

The research was presented at the 15 February meeting of the Brain Research through Advancing Innovative Neurotechnologies initiative in Bethesda, Maryland.

ORIGINAL: ScienceAlert

BEC CREW
28 FEB 2017

Scientists Just Found Evidence That Neurons Can Communicate in a Way We Never Anticipated

By Hugo Angel,

Andrii Vodolazhskyi/Shutterstock.com

A new brain mechanism hiding in plain sight. Researchers have discovered a brand new mechanism that controls the way nerve cells in our brain communicate with each other to regulate learning and long-term memory.

The fact that a new brain mechanism has been hiding in plain sight is a reminder of how much we have yet to learn about how the human brain works, and what goes wrong in neurodegenerative disorders such as Alzheimer’s and epilepsy.

“These discoveries represent a significant advance and will have far-reaching implications for the understanding of

  • memory, 
  • cognition, 
  • developmental plasticity, and 
  • neuronal network formation and stabilisation,”  

said lead researcher Jeremy Henley from the University of Bristol in the UK.

“We believe that this is a groundbreaking study that opens new lines of inquiry which will increase understanding of the molecular details of synaptic function in health and disease.”

The human brain contains around 100 billion nerve cells, and each of those makes about 10,000 connections – known as synapses – with other cells.

That’s a whole lot of connections, and each of them is strengthened or weakened depending on different brain mechanisms that scientists have spent decades trying to understand.
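The arithmetic behind "a whole lot of connections" is worth making explicit: 100 billion neurons with roughly 10,000 synapses each gives on the order of a quadrillion connections.

```python
neurons = 100e9        # ~100 billion nerve cells
per_cell = 10e3        # ~10,000 synapses per cell
total = neurons * per_cell
print(f"{total:.0e}")  # 1e+15, about a quadrillion synapses
```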

Until now, one of the best-known mechanisms for increasing the strength of information flow across synapses was LTP, or long-term potentiation.

LTP intensifies the connection between cells to make information transfer more efficient, and it plays a role in a wide range of neurodegenerative conditions –  

  • too much LTP, and you risk disorders such as epilepsy,  
  • too little, and it could cause dementia or Alzheimer’s disease.

As far as researchers were aware, LTP is usually controlled by the activation of special proteins called NMDA receptors.

But now the UK team has discovered a brand new type of LTP that’s regulated in an entirely different way.

After investigating the formation of synapses in the lab, the team showed that this new LTP mechanism is controlled by molecules known as kainate receptors, instead of NMDA receptors.

“These data reveal a new and, to our knowledge, previously unsuspected role for postsynaptic kainate receptors in the induction of functional and structural plasticity in the hippocampus,” the researchers write in Nature Neuroscience.

This means we’ve now uncovered a previously unexplored mechanism that could control learning and memory.

“Untangling the interactions between the signal receptors in the brain not only tells us more about the inner workings of a healthy brain, but also provides a practical insight into what happens when we form new memories,” said one of the researchers, Milos Petrovic from the University of Central Lancashire.

“If we can preserve these signals it may help protect against brain diseases.”

Not only does this open up a new research pathway that could lead to a better understanding of how our brains work, but if researchers can find a way to target these new pathways, it could lead to more effective treatments for a range of neurodegenerative disorders.

It’s still early days, and the discovery will now need to be verified by independent researchers, but it’s a promising new field of research.

“This is certainly an extremely exciting discovery and something that could potentially impact the global population,” said Petrovic.

The research has been published in Nature Neuroscience.

ORIGINAL: IFLScience

By FIONA MACDONALD
20 FEB 2017

Where does intelligence come from?

By Hugo Angel,

It is amazing how intelligent we can be. We can construct shelter, find new ways of hunting, and create boats and machines. Our unique intelligence has been responsible for the emergence of civilization.
But how does a set of living cells become intelligent? How can flesh and blood turn into something that can create bicycles and airplanes or write novels?
This is the question of the origin of intelligence.
This problem has puzzled many theorists and scientists, and it is particularly important if we want to build intelligent machines. They still lag well behind us. Although computers calculate millions of times faster than we do, it is we who understand the big picture in which these calculations fit. Even animals are much more intelligent than machines. A mouse can find its way in a hostile forest and survive. This cannot be said for our computers or robots.
The question of how to achieve intelligence remains a mystery for scientists.
Recently, however, a new theory has been proposed that may resolve this very question. The theory is called practopoiesis and is founded on the most fundamental capability of all biological organisms—their ability to adapt.
Darwin’s theory of evolution describes one way our genomes adapt: by creating offspring, new combinations of genes are tested; the good ones are kept and the bad ones are disposed of. The result is a genome better adapted to the environment.
Practopoiesis tells us that somewhat similar trial-and-error adaptation mechanisms operate while an organism grows, while it digests food, and also while it acts intelligently or thinks.
For example, the growth of our body is not precisely programmed by the genes. Instead, our genes perform experiments, which require feedback from the environment and corrections of errors. Only through trial and error can our body grow properly.
Our genes contain an elaborate knowledge of which experiments need to be done, and this knowledge of trial-and-error approaches has been acquired through eons of evolution. We kept whatever worked well for our ancestors.
However, this knowledge alone is not enough to make us intelligent.
To create intelligent behavior such as thinking, decision making, understanding a poem, or simply detecting one’s friend in a crowd of strangers, our bodies require yet another type of trial-and-error knowledge. There are mechanisms in our body that also contain elaborate knowledge for experimenting, but they are much faster. The knowledge of these mechanisms is not collected through evolution but through development over the lifetime of an individual.
These fast adaptive mechanisms continually adjust the big network of our connected nerve cells. These adaptation mechanisms can change in an eye-blink the way the brain networks are effectively connected. It may take less than a second to make a change necessary to recognize one’s own grandmother, or to make a decision, or to get a new idea on how to solve a problem.
The slow and the fast adaptive mechanisms share one thing: they cannot be successful without receiving feedback and thus iterating through several stages of trial and error; for example, testing several possibilities of who that person in the distance could be.
Practopoiesis states that the slow and fast adaptive mechanisms are collectively responsible for creation of intelligence and are organized into a hierarchy. 
  • First, evolution creates genes at a painstakingly slow tempo. Then genes slowly create the mechanisms of fast adaptations.
  • Next, adaptation mechanisms change the properties of our nerve cells within seconds.
  • And finally, the resulting adjusted networks of nerve cells route sensory signals to muscles with the speed of lightning.
  • At the end, behavior is created.
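The hierarchy above can be caricatured in code: a slow outer loop (standing in for evolution) tunes a parameter of a fast inner loop that does its own trial and error. This is only a toy illustration of nested adaptive levels, not the formalism of practopoiesis; the task, step sizes, and trial counts are arbitrary.

```python
import random

random.seed(1)

def trial_and_error(value, target, step, trials):
    """One adaptive level: propose random changes, keep only improvements."""
    for _ in range(trials):
        candidate = value + random.uniform(-step, step)
        if abs(candidate - target) < abs(value - target):
            value = candidate  # feedback: the trial reduced the error
    return value

# Slow level ("evolution") searches over the fast level's step size.
target = 3.0
best_step, best_error = None, float("inf")
for step in (0.01, 0.1, 1.0):
    error = abs(trial_and_error(0.0, target, step, trials=200) - target)
    if error < best_error:
        best_step, best_error = step, error

print(best_step, round(best_error, 3))
```

Note that the slow loop only ever sees the fast loop's final error, much as evolution only sees how well an organism's fast mechanisms ultimately perform.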
Probably the most groundbreaking aspect of practopoietic theory is that our intelligent minds are not primarily located in the connectivity matrix of our neural networks, as has been widely held, but instead in the elaborate knowledge of the fast adaptive mechanisms. The more knowledge our genes store in these quick mechanisms for adapting nerve cells, the more capable we are of adjusting to novel situations, solving problems, and, generally, acting intelligently.
Therefore, our intelligence seems to come from the hierarchy of adaptive mechanisms: from the very slow evolution that shapes the genome, to the quick pace of neural adaptation that expresses knowledge acquired over an individual’s lifetime. Only when these adaptations have been performed successfully can our networks of neurons perform tasks with wonderful accuracy.
Our capability to survive and create originates, then,
  • from the adaptive mechanisms that operate at different levels, and
  • from the vast amounts of knowledge accumulated by each of the levels.
The combined result of all of them together is what makes us intelligent.
May 16, 2016
Danko Nikolić
About the Author:
Danko Nikolić is a brain and mind scientist, running an electrophysiology lab at the Max Planck Institute for Brain Research, and is the creator of the concept of ideasthesia. More about practopoiesis can be read here

First Human Tests of Memory Boosting Brain Implant—a Big Leap Forward

By Hugo Angel,

“You have to begin to lose your memory, if only bits and pieces, to realize that memory is what makes our lives. Life without memory is no life at all.” — Luis Buñuel Portolés, Filmmaker
Image Credit: Shutterstock.com
Every year, hundreds of millions of people experience the pain of a failing memory.
The reasons are many:

  • traumatic brain injury, which haunts a disturbingly high number of veterans and football players; 
  • stroke or Alzheimer’s disease, which often plagues the elderly; or 
  • even normal brain aging, which inevitably touches us all.
Memory loss seems to be inescapable. But one maverick neuroscientist is working hard on an electronic cure. Funded by DARPA, Dr. Theodore Berger, a biomedical engineer at the University of Southern California, is testing a memory-boosting implant that mimics the kind of signal processing that occurs when neurons are laying down new long-term memories.
The revolutionary implant, already shown to help memory encoding in rats and monkeys, is now being tested in human patients with epilepsy — an exciting first that may blow the field of memory prosthetics wide open.
To get here, however, the team first had to crack the memory code.

Deciphering Memory
From the very onset, Berger knew he was facing a behemoth of a problem.
“We weren’t looking to match everything the brain does when it processes memory, but to at least come up with a decent mimic,” said Berger.
“Of course people asked: can you model it and put it into a device? Can you get that device to work in any brain? It’s those things that lead people to think I’m crazy. They think it’s too hard,” he said.
But the team had a solid place to start.
The hippocampus, a region buried deep within the folds and grooves of the brain, is the critical gatekeeper that transforms memories from short-lived to long-term. In dogged pursuit, Berger spent most of the last 35 years trying to understand how neurons in the hippocampus accomplish this complicated feat.
“At its heart, a memory is a series of electrical pulses that occur over time that are generated by a given number of neurons,” said Berger. “This is important — it suggests that we can reduce it to mathematical equations and put it into a computational framework,” he said.
Berger hasn’t been alone in his quest.
By listening to the chatter of neurons as an animal learns, teams of neuroscientists have begun to decipher the flow of information within the hippocampus that supports memory encoding. Key to this process is a strong electrical signal that travels from CA3, the “input” part of the hippocampus, to CA1, the “output” node.
“This signal is impaired in people with memory disabilities,” said Berger, “so of course we thought if we could recreate it using silicon, we might be able to restore — or even boost — memory.”

Bridging the Gap
Yet the brain’s memory code proved to be extremely tough to crack.
The problem lies in the non-linear nature of neural networks: signals are often noisy and constantly overlap in time, which leads to some inputs being suppressed or accentuated. In a network of hundreds and thousands of neurons, any small change could be greatly amplified and lead to vastly different outputs.
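That sensitivity to small changes is the hallmark of nonlinear dynamics, and it can be shown with even a one-dimensional system. The logistic map below is a generic stand-in for a nonlinear network, not a model of the hippocampus: two inputs differing by one part in a billion quickly produce thoroughly different outputs.

```python
def logistic(x, steps, r=4.0):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a, b = 0.2, 0.2 + 1e-9  # near-identical inputs
diffs = [abs(logistic(a, n) - logistic(b, n)) for n in range(1, 41)]
print(diffs[0], max(diffs))  # the gap grows by many orders of magnitude
```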
“It’s a chaotic black box,” laughed Berger.
With the help of modern computing techniques, however, Berger believes he may have a crude solution in hand. His proof?
Use his mathematical theorems to program a chip, and then see if the brain accepts the chip as a replacement — or additional — memory module.
Berger and his team began with a simple task using rats. They trained the animals to push one of two levers to get a tasty treat, and recorded the series of CA3 to CA1 electronic pulses in the hippocampus as the animals learned to pick the correct lever. The team carefully captured the way the signals were transformed as the session was laid down into long-term memory, and used that information — the electrical “essence” of the memory — to program an external memory chip.
They then injected the animals with a drug that temporarily disrupted their ability to form and access long-term memories, causing the animals to forget the reward-associated lever. Next, implanting microelectrodes into the hippocampus, the team pulsed CA1, the output region, with their memory code.
The results were striking — powered by an external memory module, the animals regained their ability to pick the right lever.
Encouraged by the results, Berger next tried his memory implant in monkeys, this time focusing on a brain region called the prefrontal cortex, which receives and modulates memories encoded by the hippocampus.
Placing electrodes into the monkeys’ brains, the team showed the animals a series of semi-repeated images, and captured the prefrontal cortex’s activity when the animals recognized an image they had seen earlier. Then, with a hefty dose of cocaine, the team inhibited that particular brain region, which disrupted the animals’ recall.
Next, using electrodes programmed with the “memory code,” the researchers guided the brain’s signal processing back on track — and the animal’s performance improved significantly.
A year later, the team further validated their memory implant by showing it could also rescue memory deficits due to hippocampal malfunction in the monkey brain.

A Human Memory Implant
Last year, the team cautiously began testing their memory implant prototype in human volunteers.
Because of the risks associated with brain surgery, the team recruited 12 patients with epilepsy, who already have electrodes implanted into their brain to track down the source of their seizures.
“Repeated seizures steadily destroy critical parts of the hippocampus needed for long-term memory formation,” explained Berger. So if the implant works, it could benefit these patients as well.
The team asked the volunteers to look through a series of pictures, and then recall which ones they had seen 90 seconds later. As the participants learned, the team recorded the firing patterns in both CA1 and CA3 — that is, the input and output nodes.
Using these data, the team extracted an algorithm — a specific human “memory code” — that could predict the pattern of activity in CA1 cells based on CA3 input. Compared to the brain’s actual firing patterns, the algorithm generated correct predictions roughly 80% of the time.
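Conceptually, the algorithm learns a mapping from CA3 firing patterns to CA1 firing patterns and is scored by how often its predictions match the recorded output. The sketch below reproduces only that logic on synthetic random data, using a plain least-squares fit in place of the team's far more sophisticated nonlinear model; every variable here is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic firing patterns: 200 trials, 16 "CA3" cells, 8 "CA1" cells.
ca3 = rng.random((200, 16))
true_map = rng.random((16, 8))
ca1 = ca3 @ true_map + 0.1 * rng.standard_normal((200, 8)) > true_map.sum(0) / 2

# Fit a linear CA3 -> CA1 predictor on the first 150 trials.
train, test = slice(0, 150), slice(150, 200)
weights, *_ = np.linalg.lstsq(ca3[train], ca1[train].astype(float), rcond=None)

# Threshold the predictions and score agreement on held-out trials.
predicted = ca3[test] @ weights > 0.5
accuracy = (predicted == ca1[test]).mean()
print(f"prediction accuracy: {accuracy:.0%}")
```

On this toy linear problem the fit scores well above chance; the 80% figure in the article refers to the team's model on real recordings, not to anything this sketch computes.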
“It’s not perfect,” said Berger, “but it’s a good start.”
Using this algorithm, the researchers have begun to stimulate the output cells with an approximation of the transformed input signal.
“We have already used the pattern to zap the brain of one woman with epilepsy,” said Dr. Dong Song, an associate professor working with Berger. But he remained coy about the result, only saying that although promising, it’s still too early to tell.
Song’s caution is warranted. Unlike the motor cortex, with its clear structured representation of different body parts, the hippocampus is not organized in any obvious way.
“It’s hard to understand why stimulating input locations can lead to predictable results,” said Dr. Thomas McHugh, a neuroscientist at the RIKEN Brain Science Institute. It’s also difficult to tell whether such an implant could save the memory of those who suffer from damage to the output node of the hippocampus.
“That said, the data is convincing,” McHugh acknowledged.
Berger, on the other hand, is ecstatic. “I never thought I’d see this go into humans,” he said.
But the work is far from done. Within the next few years, Berger wants to see whether the chip can help build long-term memories in a variety of different situations. After all, the algorithm was based on the team’s recordings of one specific task — what if the so-called memory code is not generalizable, instead varying based on the type of input that it receives?
Berger acknowledges that it’s a possibility, but he remains hopeful.
“I do think that we will find a model that’s a pretty good fit for most conditions,” he said. “After all, the brain is restricted by its own biophysics — there’s only so many ways that electrical signals in the hippocampus can be processed,” he said.
“The goal is to improve the quality of life for somebody who has a severe memory deficit,” said Berger. “If I can give them the ability to form new long-term memories for half the conditions that most people live in, I’ll be happy as hell, and so will be most patients.”
ORIGINAL: Singularity Hub

Research on largest network of cortical neurons to date published in Nature

By Hugo Angel,

Robust network of connections between neurons performing similar tasks shows fundamentals of how brain circuits are wired
Even the simplest networks of neurons in the brain are composed of millions of connections, and examining these vast networks is critical to understanding how the brain works. An international team of researchers, led by R. Clay Reid, Wei Chung Allen Lee and Vincent Bonin from the Allen Institute for Brain Science, Harvard Medical School and Neuro-Electronics Research Flanders (NERF), respectively, has published the largest network to date of connections between neurons in the cortex, where high-level processing occurs, and have revealed several crucial elements of how networks in the brain are organized. The results are published this week in the journal Nature.
A network of cortical neurons whose connections were traced from a multi-terabyte 3D data set. The data were created by an electron microscope designed and built at Harvard Medical School to collect millions of images in nanoscopic detail, so that every one of the “wires” could be seen, along with the connections between them. Some of the neurons are color-coded according to their activity patterns in the living brain. This is the newest example of functional connectomics, which combines high-throughput functional imaging, at single-cell resolution, with terascale anatomy of the very same neurons. Image credit: Clay Reid, Allen Institute; Wei-Chung Lee, Harvard Medical School; Sam Ingersoll, graphic artist
“This is a culmination of a research program that began almost ten years ago. Brain networks are too large and complex to understand piecemeal, so we used high-throughput techniques to collect huge data sets of brain activity and brain wiring,” says R. Clay Reid, M.D., Ph.D., Senior Investigator at the Allen Institute for Brain Science. “But we are finding that the effort is absolutely worthwhile and that we are learning a tremendous amount about the structure of networks in the brain, and ultimately how the brain’s structure is linked to its function.”
“Although this study is a landmark moment in a substantial chapter of work, it is just the beginning,” says Wei-Chung Lee, Ph.D., Instructor in Neurobiology at Harvard Medical School and lead author on the paper. “We now have the tools to embark on reverse engineering the brain by discovering relationships between circuit wiring and neuronal and network computations.”
“For decades, researchers have studied brain activity and wiring in isolation, unable to link the two,” says Vincent Bonin, Principal Investigator at Neuro-Electronics Research Flanders. “What we have achieved is to bridge these two realms with unprecedented detail, linking electrical activity in neurons with the nanoscale synaptic connections they make with one another.”
“We have found some of the first anatomical evidence for modular architecture in a cortical network as well as the structural basis for functionally specific connectivity between neurons,” Lee adds. “The approaches we used allowed us to define the organizational principles of neural circuits. We are now poised to discover cortical connectivity motifs, which may act as building blocks for cerebral network function.”
Lee and Bonin began by identifying neurons in the mouse visual cortex that responded to particular visual stimuli, such as vertical or horizontal bars on a screen. Lee then made ultra-thin slices of brain and captured millions of detailed images of those targeted cells and synapses, which were then reconstructed in three dimensions. Teams of annotators on both coasts of the United States simultaneously traced individual neurons through the 3D stacks of images and located connections between individual neurons.
Analyzing this wealth of data yielded several results, including the first direct structural evidence to support the idea that neurons that do similar tasks are more likely to be connected to each other than neurons that carry out different tasks. Furthermore, those connections are larger, despite the fact that they are tangled with many other neurons that perform entirely different functions.
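The kind of analysis behind that finding can be sketched: given each neuron's preferred stimulus and a matrix of traced connections, compare how often similarly tuned pairs connect versus differently tuned pairs. The data below are synthetic and deliberately built with the reported bias; the numbers are illustrative, not the study's.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 60
preference = rng.integers(0, 4, size=n)  # each neuron's preferred stimulus class

# Synthetic connectome in which like-tuned pairs connect more often.
same = preference[:, None] == preference[None, :]
connected = rng.random((n, n)) < np.where(same, 0.3, 0.1)
np.fill_diagonal(connected, False)  # no self-connections

off_diag = ~np.eye(n, dtype=bool)
rate_same = connected[same & off_diag].mean()
rate_diff = connected[~same].mean()
print(f"same tuning: {rate_same:.2f}, different tuning: {rate_diff:.2f}")
```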
“Part of what makes this study unique is the combination of functional imaging and detailed microscopy,” says Reid. “The microscopic data is of unprecedented scale and detail. We gain some very powerful knowledge by first learning what function a particular neuron performs, and then seeing how it connects with neurons that do similar or dissimilar things.”
“It’s like a symphony orchestra with players sitting in random seats,” Reid adds. “If you listen to only a few nearby musicians, it won’t make sense. By listening to everyone, you will understand the music; it actually becomes simpler. If you then ask who each musician is listening to, you might even figure out how they make the music. There’s no conductor, so the orchestra needs to communicate.”
This combination of methods will also be employed in an IARPA contracted project with the Allen Institute for Brain Science, Baylor College of Medicine, and Princeton University, which seeks to scale these methods to a larger segment of brain tissue. The data of the present study is being made available online for other researchers to investigate.
This work was supported by the National Institutes of Health (R01 EY10115, R01 NS075436 and R21 NS085320); through resources provided by the National Resource for Biomedical Supercomputing at the Pittsburgh Supercomputing Center (P41 RR06009) and the National Center for Multiscale Modeling of Biological Systems (P41 GM103712); the Harvard Medical School Vision Core Grant (P30 EY12196); the Bertarelli Foundation; the Edward R. and Anne G. Lefler Center; the Stanley and Theodora Feldberg Fund; Neuro-Electronics Research Flanders (NERF); and the Allen Institute for Brain Science.
About the Allen Institute for Brain Science
The Allen Institute for Brain Science, a division of the Allen Institute (alleninstitute.org), is an independent, 501(c)(3) nonprofit medical research organization dedicated to accelerating the understanding of how the human brain works in health and disease. Using a big science approach, the Allen Institute generates useful public resources used by researchers and organizations around the globe, drives technological and analytical advances, and discovers fundamental brain properties through integration of experiments, modeling and theory. Launched in 2003 with a seed contribution from founder and philanthropist Paul G. Allen, the Allen Institute is supported by a diversity of government, foundation and private funds to enable its projects. Given the Institute’s achievements, Mr. Allen committed an additional $300 million in 2012 for the first four years of a ten-year plan to further propel and expand the Institute’s scientific programs, bringing his total commitment to date to $500 million. The Allen Institute’s data and tools are publicly available online at brain-map.org.
About Harvard Medical School
HMS has more than 7,500 full-time faculty working in 10 academic departments located at the School’s Boston campus or in hospital-based clinical departments at 15 Harvard-affiliated teaching hospitals and research institutes: Beth Israel Deaconess Medical Center, Boston Children’s Hospital, Brigham and Women’s Hospital, Cambridge Health Alliance, Dana-Farber Cancer Institute, Harvard Pilgrim Health Care Institute, Hebrew SeniorLife, Joslin Diabetes Center, Judge Baker Children’s Center, Massachusetts Eye and Ear/Schepens Eye Research Institute, Massachusetts General Hospital, McLean Hospital, Mount Auburn Hospital, Spaulding Rehabilitation Hospital and VA Boston Healthcare System.
About NERF
Neuro-Electronics Research Flanders (NERF; www.nerf.be) is a neurotechnology research initiative headquartered in Leuven, Belgium, initiated by imec, KU Leuven and VIB to unravel how electrical activity in the brain gives rise to mental function and behaviour. Imec performs world-leading research in nanoelectronics and has offices in Belgium, the Netherlands, Taiwan, USA, China, India and Japan. Its staff of about 2,200 people includes almost 700 industrial residents and guest researchers. In 2014, imec’s revenue (P&L) totaled 363 million euro. VIB is a life sciences research institute in Flanders, Belgium. With more than 1470 scientists from over 60 countries, VIB performs basic research into the molecular foundations of life. KU Leuven is one of the oldest and largest research universities in Europe with over 10,000 employees and 55,000 students.
ORIGINAL: Allen Institute
March 28th, 2016

Brain waves may be spread by weak electrical field

By Hugo Angel,

The research team says the electrical fields could be behind the spread of sleep and theta waves, along with epileptic seizure waves (Credit: Shutterstock)
Mechanism tied to waves associated with epilepsy
Researchers at Case Western Reserve University may have found a new way information is communicated throughout the brain.
Their discovery could lead to identifying possible new targets to investigate brain waves associated with memory and epilepsy and better understand healthy physiology.
They recorded neural spikes traveling at a speed too slow for known mechanisms to circulate throughout the brain. The only explanation, the scientists say, is the wave is spread by a mild electrical field they could detect. Computer modeling and in-vitro testing support their theory.
“Others have been working on such phenomena for decades, but no one has ever made these connections,” said Steven J. Schiff, director of the Center for Neural Engineering at Penn State University, who was not involved in the study. “The implications are that such directed fields can be used to modulate both pathological activities, such as seizures, and to interact with cognitive rhythms that help regulate a variety of processes in the brain.”
Scientists Dominique Durand, Elmer Lincoln Lindseth Professor in Biomedical Engineering at Case School of Engineering and leader of the research, former graduate student Chen Sui and current PhD students Rajat Shivacharan and Mingming Zhang, report their findings in The Journal of Neuroscience.
“Researchers have thought that the brain’s endogenous electrical fields are too weak to propagate wave transmission,” Durand said. “But it appears the brain may be using the fields to communicate without synaptic transmissions, gap junctions or diffusion.”
How the fields may work
Computer modeling and testing on mouse hippocampi (the central part of the brain associated with memory and spatial navigation) in the lab indicate the field begins in one cell or group of cells.
Although the electrical field is of low amplitude, the field excites and activates immediate neighbors, which, in turn, excite and activate immediate neighbors, and so on across the brain at a rate of about 0.1 meter per second.
Blocking the endogenous electrical field in the mouse hippocampus and increasing the distance between cells in the computer model and in-vitro both slowed the speed of the wave.
These results, the researchers say, confirm that the propagation mechanism for the activity is consistent with the electrical field.
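A toy model makes the qualitative claims concrete. The numbers and the 1/d² field decay below are invented for illustration; the actual biophysics is far richer than this sketch.

```python
# Toy model of wave spread by a weak endogenous field (illustrative only).
# Assumption: the field a cell exerts on its neighbour decays as 1/d**2,
# and the neighbour's time-to-threshold is inversely proportional to the
# field, so delay = d**2 / k and speed = d / delay = k / d.

def wave_speed(spacing_m, k=1e-6):
    """Neighbour-to-neighbour wave speed in m/s for a given cell spacing.

    k is a made-up coupling constant chosen so that a ~10 micrometre
    spacing yields roughly the observed 0.1 m/s.
    """
    return k / spacing_m

print(wave_speed(10e-6))  # ~0.1 m/s at 10 um spacing
print(wave_speed(20e-6))  # ~0.05 m/s: wider spacing -> slower wave
```

Note the sketch reproduces the direction of the spacing effect (wider gaps, slower waves) only because the assumed field decay outpaces the extra distance covered per hop, which matches the in-vitro observation qualitatively.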
Because sleep waves and theta waves–which are associated with forming memories during sleep–and epileptic seizure waves travel at about 1 meter per second, the researchers are now investigating whether the electrical fields play a role in normal physiology and in epilepsy.
If so, they will try to discern what information the fields may be carrying. Durand’s lab is also investigating where the endogenous spikes come from.
ORIGINAL: EurekAlert
14-JAN-2016

Memory capacity of brain is 10 times more than previously thought

By Hugo Angel,

Data from the Salk Institute shows brain’s memory capacity is in the petabyte range, as much as entire Web

LA JOLLA—Salk researchers and collaborators have achieved critical insight into the size of neural connections, putting the memory capacity of the brain far higher than common estimates. The new work also answers a longstanding question as to how the brain is so energy efficient and could help engineers build computers that are incredibly powerful but also conserve energy.
“This is a real bombshell in the field of neuroscience,” said Terry Sejnowski from the Salk Institute for Biological Studies. “Our new measurements of the brain’s memory capacity increase conservative estimates by a factor of 10 to at least a petabyte (10^15 bytes, or 1,000 terabytes), in the same ballpark as the World Wide Web.”
Our memories and thoughts are the result of patterns of electrical and chemical activity in the brain. A key part of the activity happens when branches of neurons, much like electrical wire, interact at certain junctions, known as synapses. An output ‘wire’ (an axon) from one neuron connects to an input ‘wire’ (a dendrite) of a second neuron. Signals travel across the synapse as chemicals called neurotransmitters to tell the receiving neuron whether to convey an electrical signal to other neurons. Each neuron can have thousands of these synapses with thousands of other neurons.
“When we first reconstructed every dendrite, axon, glial process, and synapse from a volume of hippocampus the size of a single red blood cell, we were somewhat bewildered by the complexity and diversity amongst the synapses,” says Kristen Harris, co-senior author of the work and professor of neuroscience at the University of Texas, Austin. “While I had hoped to learn fundamental principles about how the brain is organized from these detailed reconstructions, I have been truly amazed at the precision obtained in the analyses of this report.”
Synapses are still a mystery, though their dysfunction can cause a range of neurological diseases. Larger synapses—with more surface area and vesicles of neurotransmitters—are stronger, making them more likely to activate their surrounding neurons than medium or small synapses.
The Salk team, while building a 3D reconstruction of rat hippocampus tissue (the memory center of the brain), noticed something unusual. In some cases, a single axon from one neuron formed two synapses reaching out to a single dendrite of a second neuron, signifying that the first neuron seemed to be sending a duplicate message to the receiving neuron.
At first, the researchers didn’t think much of this duplicity, which occurs about 10 percent of the time in the hippocampus. But Tom Bartol, a Salk staff scientist, had an idea: if they could measure the difference between two very similar synapses such as these, they might glean insight into synaptic sizes, which so far had only been classified in the field as small, medium and large.
In a computational reconstruction of brain tissue in the hippocampus, Salk scientists and UT-Austin scientists found the unusual occurrence of two synapses from the axon of one neuron (translucent black strip) forming onto two spines on the same dendrite of a second neuron (yellow). Separate terminals from one neuron’s axon are shown in synaptic contact with two spines (arrows) on the same dendrite of a second neuron in the hippocampus. The spine head volumes, synaptic contact areas (red), neck diameters (gray) and number of presynaptic vesicles (white spheres) of these two synapses are almost identical. Credit: Salk Institute
To do this, researchers used advanced microscopy and computational algorithms they had developed to image rat brains and reconstruct the connectivity, shapes, volumes and surface area of the brain tissue down to a nanomolecular level.
The scientists expected the synapses would be roughly similar in size, but were surprised to discover the synapses were nearly identical.
“We were amazed to find that the difference in the sizes of the pairs of synapses were very small, on average, only about 8 percent different in size,” said Tom Bartol, one of the scientists. “No one thought it would be such a small difference. This was a curveball from nature.”
Because the memory capacity of neurons is dependent upon synapse size, this eight percent difference turned out to be a key number the team could then plug into their algorithmic models of the brain to measure how much information could potentially be stored in synaptic connections.
It was known before that the range in sizes between the smallest and largest synapses was a factor of 60 and that most are small.
But armed with the knowledge that synapses of all sizes could vary in increments as little as eight percent between sizes within a factor of 60, the team determined there could be about 26 categories of sizes of synapses, rather than just a few.
“Our data suggests there are 10 times more discrete sizes of synapses than previously thought,” says Bartol. In computer terms, 26 sizes of synapses correspond to about 4.7 “bits” of information. Previously, it was thought that the brain was capable of just one to two bits for short and long memory storage in the hippocampus.
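The 4.7-bit figure is straightforward information theory: n distinguishable, roughly equally likely states carry log2(n) bits. A quick check:

```python
import math

sizes = 26                 # distinguishable synapse sizes reported
bits = math.log2(sizes)
print(round(bits, 1))      # -> 4.7 bits per synapse

# The older assumption of one to two bits corresponds to 2-4 states:
print(math.log2(2), math.log2(4))  # -> 1.0 2.0
```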
“This is roughly an order of magnitude of precision more than anyone has ever imagined,” said Sejnowski.
What makes this precision puzzling is that hippocampal synapses are notoriously unreliable. When a signal travels from one neuron to another, it typically activates that second neuron only 10 to 20 percent of the time.
“We had often wondered how the remarkable precision of the brain can come out of such unreliable synapses,” says Bartol. One answer, it seems, is in the constant adjustment of synapses, averaging out their success and failure rates over time. The team used their new data and a statistical model to find out how many signals it would take a pair of synapses to get to that eight percent difference.
The researchers calculated that
  • for the smallest synapses, about 1,500 events cause a change in their size/ability (20 minutes) and
  • for the largest synapses, only a couple hundred signaling events (1 to 2 minutes) cause a change.
“This means that every 2 or 20 minutes, your synapses are going up or down to the next size,” said Bartol. “The synapses are adjusting themselves according to the signals they receive.”
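A rough way to see where numbers of that order come from is a back-of-the-envelope Bernoulli-averaging argument (this is not the paper's statistical model, and the reliability values are assumed): if a synapse transmits with probability p, the running estimate of p after n events has standard error sqrt(p(1-p)/n), and demanding ~8% relative precision fixes n.

```python
import math

def trials_for_precision(p, rel_err=0.08):
    """Transmission events needed before the running estimate of a synapse's
    success probability p is within rel_err * p (one standard error of a
    Bernoulli mean, sqrt(p * (1 - p) / n), set equal to rel_err * p)."""
    return math.ceil((1 - p) / (rel_err ** 2 * p))

# Assumed reliabilities, purely illustrative: weak synapse ~20%, strong ~50%
print(trials_for_precision(0.20))  # 625 events for the less reliable synapse
print(trials_for_precision(0.50))  # 157 events for the more reliable one
```

The outputs land in the same hundreds-to-thousands range as the reported "1,500 events" and "couple hundred" figures, though the paper's actual model also accounts for synapse size.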
From left: Terry Sejnowski, Cailey Bromer and Tom Bartol. Credit: Salk Institute
“Our prior work had hinted at the possibility that spines and axons that synapse together would be similar in size, but the reality of the precision is truly remarkable and lays the foundation for whole new ways to think about brains and computers,” says Harris. “The work resulting from this collaboration has opened a new chapter in the search for learning and memory mechanisms.” Harris adds that the findings suggest more questions to explore, for example, if similar rules apply for synapses in other regions of the brain and how those rules differ during development and as synapses change during the initial stages of learning.
“The implications of what we found are far-reaching. Hidden under the apparent chaos and messiness of the brain is an underlying precision to the size and shapes of synapses that was hidden from us.”
The findings also offer a valuable explanation for the brain’s surprising efficiency. The waking adult brain generates only about 20 watts of continuous power—as much as a very dim light bulb. The Salk discovery could help computer scientists build ultra-precise but energy-efficient computers, particularly ones that employ deep learning and neural nets techniques capable of sophisticated learning and analysis, such as speech, object recognition and translation.
“This trick of the brain absolutely points to a way to design better computers,” said Sejnowski. “Using probabilistic transmission turns out to be as accurate and require much less energy for both computers and brains.”
Other authors on the paper were Cailey Bromer of the Salk Institute; Justin Kinney of the McGovern Institute for Brain Research; and Michael A. Chirillo and Jennifer N. Bourne of the University of Texas, Austin.
The work was supported by the NIH and the Howard Hughes Medical Institute.
ORIGINAL: Salk.edu
January 20, 2016

Bridging the Bio-Electronic Divide

By Hugo Angel,

New effort aims for fully implantable devices able to connect with up to one million neurons
A new DARPA program aims to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. The interface would serve as a translator, converting between the electrochemical language used by neurons in the brain and the ones and zeros that constitute the language of information technology. The goal is to achieve this communications link in a biocompatible device no larger than one cubic centimeter in size, roughly the volume of two nickels stacked back to back.
The program, Neural Engineering System Design (NESD), stands to dramatically enhance research capabilities in neurotechnology and provide a foundation for new therapies.
“Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem,” said Phillip Alvelda, the NESD program manager. “Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics.”
Among the program’s potential applications are devices that could compensate for deficits in sight or hearing by feeding digital auditory or visual information into the brain at a resolution and experiential quality far higher than is possible with current technology.
Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that can communicate clearly and individually with any of up to one million neurons in a given region of the brain.
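A back-of-the-envelope calculation gives a feel for the bandwidth gap; the sampling rate and bit depth below are assumptions for illustration, not NESD specifications.

```python
channels_today = 100          # channels in currently approved interfaces
neurons_target = 1_000_000    # NESD goal: individually addressed neurons

sample_rate_hz = 1_000        # assumed per-channel sampling rate
bits_per_sample = 10          # assumed ADC resolution

def raw_bandwidth_bps(channels):
    """Uncompressed data rate in bits per second for a given channel count."""
    return channels * sample_rate_hz * bits_per_sample

print(raw_bandwidth_bps(channels_today) / 1e6)   # 1.0  (Mbit/s today)
print(raw_bandwidth_bps(neurons_target) / 1e9)   # 10.0 (Gbit/s at 1M neurons)
```

Under these assumptions the jump is four orders of magnitude, which is one way to see why the program treats data compression and transcoding as explicit challenges.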
Achieving the program’s ambitious goals and ensuring that the envisioned devices will have the potential to be practical outside of a research setting will require integrated breakthroughs across numerous disciplines including 
  • neuroscience, 
  • synthetic biology, 
  • low-power electronics, 
  • photonics, 
  • medical device packaging and manufacturing, systems engineering, and 
  • clinical testing.
In addition to the program’s hardware challenges, NESD researchers will be required to develop advanced mathematical and neuro-computation techniques to first transcode high-definition sensory information between electronic and cortical neuron representations and then compress and represent those data with minimal loss of fidelity and functionality.
To accelerate that integrative process, the NESD program aims to recruit a diverse roster of leading industry stakeholders willing to offer state-of-the-art prototyping and manufacturing services and intellectual property to NESD researchers on a pre-competitive basis. In later phases of the program, these partners could help transition the resulting technologies into research and commercial application spaces.
To familiarize potential participants with the technical objectives of NESD, DARPA will host a Proposers Day meeting that runs Tuesday and Wednesday, February 2-3, 2016, in Arlington, Va. The Special Notice announcing the Proposers Day meeting is available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-16-16/listing.html. More details about the Industry Group that will support NESD are available at https://www.fbo.gov/spg/ODA/DARPA/CMO/DARPA-SN-16-17/listing.html. A Broad Agency Announcement describing the specific capabilities sought will be forthcoming on www.fbo.gov.
NESD is part of a broader portfolio of programs within DARPA that support President Obama’s brain initiative. For more information about DARPA’s work in that domain, please visit:http://www.darpa.mil/program/our-research/darpa-and-the-brain-initiative.
ORIGINAL: DARPA
[email protected]
1/19/2016

Scientists have discovered brain networks linked to intelligence for the first time

By Hugo Angel,

Ralwel/Shutterstock.com
And we may even be able to manipulate them.
For the first time ever, scientists have identified clusters of genes in the brain that are believed to be linked to human intelligence.
The two clusters, called M1 and M3, are networks each consisting of hundreds of individual genes, and are thought to influence our cognitive functions, including

  • memory, 
  • attention, 
  • processing speed, and 
  • reasoning.
Most provocatively, the researchers who identified M1 and M3 say that these clusters are probably under the control of master switches that regulate how the gene networks function. If this hypothesis is correct and scientists can indeed find these switches, we might even be able to manipulate our genetic intelligence and boost our cognitive capabilities.
“We know that genetics plays a major role in intelligence but until now haven’t known which genes are relevant,” said neurologist Michael Johnson, at Imperial College London in the UK. “This research highlights some of the genes involved in human intelligence, and how they interact with each other.”
The researchers made their discovery by examining the brains of patients who had undergone neurosurgery for the treatment of epilepsy. They analysed thousands of genes expressed in the brain and combined the findings with two sets of data: genetic information from healthy people who had performed IQ tests, and from people with neurological disorders and intellectual disability.
Comparing the results, the researchers discovered that some of the genes that influence human intelligence in healthy people can also cause significant neurological problems if they end up mutating.
“Traits such as intelligence are governed by large groups of genes working together – like a football team made up of players in different positions,” said Johnson. “We used computer analysis to identify the genes in the human brain that work together to influence our cognitive ability to make new memories or sensible decisions when faced with lots of complex information. We found that some of these genes overlap with those that cause severe childhood onset epilepsy or intellectual disability.”
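The idea of genes "working together" is typically operationalized as co-expression: genes whose expression rises and falls together across samples are grouped into modules. A minimal sketch on synthetic data (not the authors' pipeline) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic expression matrix: 6 genes x 50 samples. Genes 0-2 share one
# driver signal, genes 3-5 share another -- two "teams" of genes.
driver_a, driver_b = rng.normal(size=(2, 50))
expr = np.vstack([driver_a + 0.3 * rng.normal(size=50) for _ in range(3)] +
                 [driver_b + 0.3 * rng.normal(size=50) for _ in range(3)])

corr = np.corrcoef(expr)      # gene-gene co-expression matrix
adj = np.abs(corr) > 0.6      # edges between strongly correlated genes

def components(adj):
    """Connected components of the thresholded graph = candidate modules."""
    n, seen, comps = len(adj), set(), []
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(u for u in range(n) if adj[v, u] and u not in comp)
        seen |= comp
        comps.append(sorted(comp))
    return comps

print(components(adj))  # -> [[0, 1, 2], [3, 4, 5]]: the two planted modules
```

Real analyses use far more genes, weighted networks, and clustering rather than a hard threshold, but the module idea is the same.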
The research, which is reported in Nature Neuroscience, is at an early stage, but the authors believe their analysis could have a significant impact – not only on how we understand and treat brain diseases, but one day perhaps altering brainpower itself.
“Eventually, we hope that this sort of analysis will provide new insights into better treatments for neurodevelopmental diseases such as epilepsy, and ameliorate or treat the cognitive impairments associated with these devastating diseases,” said Johnson. “Our research suggests that it might be possible to work with these genes to modify intelligence, but that is only a theoretical possibility at the moment – we have just taken a first step along that road.”
ORIGINAL: Science Alert
PETER DOCKRILL
22 DEC 2015

Allen Institute researchers decode patterns that make our brains human

By Hugo Angel,

Each of our human brains is special, carrying distinctive memories and giving rise to our unique thoughts and actions. Most research on the brain focuses on what makes one brain different from another. But recently, Allen Institute researchers turned the question around.
“So much research focuses on the variations between individuals, but we turned that question on its head to ask, what makes us similar?” says Ed Lein, Ph.D., Investigator at the Allen Institute for Brain Science. “What is the conserved element among all of us that must give rise to our unique cognitive abilities and human traits?”
Their work, published this month in Nature Neuroscience, looked at gene expression across the entire human brain and identified a surprisingly small set of molecular patterns that dominate gene expression in the human brain and appear to be common to all individuals.
“Looking at the data from this unique vantage point enables us to study gene patterning that we all share,” says Mike Hawrylycz, Ph.D., Investigator at the Allen Institute for Brain Science. “We used the Allen Human Brain Atlas data to quantify how consistent the patterns of expression for various genes are across human brains, and to determine the importance of the most consistent and reproducible genes for brain function.”
Despite the anatomical complexity of the brain and the complexity of the human genome, most of the patterns of gene usage across all 20,000 genes could be characterized by just 32 expression patterns. The most highly stable genes—the genes that were most consistent across all brains—include those that are associated with diseases and disorders like autism and Alzheimer’s and include many existing drug targets. These patterns provide insights into what makes the human brain distinct and raise new opportunities to target therapeutics for treating disease.
Researchers used data from the publicly available Allen Human Brain Atlas to investigate how gene expression varies across hundreds of functionally distinct brain regions in six human brains. They began by ranking genes by the consistency of their expression patterns across individuals, and then analyzed the relationship of these genes to one another and to brain function and association with disease.
While many of these patterns were similar in human and mouse, the dominant genetic model organism for biomedical research, many genes showed different patterns in human. Surprisingly, genes associated with neurons were most conserved across species, while those for the supporting glial cells showed larger differences.
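The consistency ranking can be sketched on toy data: score each gene by how well its regional expression profile correlates across individual brains, then sort. This is an illustration of the idea, not the paper's exact statistic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_regions, n_brains = 5, 20, 6  # toy sizes (the atlas has ~20,000 genes)

# Gene 0 has a conserved regional profile across brains; the rest are noise.
profile = rng.normal(size=n_regions)
expr = rng.normal(size=(n_genes, n_brains, n_regions))
expr[0] = profile + 0.2 * rng.normal(size=(n_brains, n_regions))

def consistency(gene):
    """Mean pairwise correlation of one gene's regional profile across brains."""
    c = np.corrcoef(gene)                 # brains x brains correlation matrix
    iu = np.triu_indices(len(c), k=1)     # upper triangle: distinct brain pairs
    return c[iu].mean()

scores = [consistency(expr[g]) for g in range(n_genes)]
ranking = np.argsort(scores)[::-1]        # most consistent gene first
print(ranking[0])  # -> 0: the conserved gene ranks as most consistent
```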
The researchers also found that the pattern of gene expression in cerebral cortex is correlated with “functional connectivity” as revealed by neuroimaging data from the Human Connectome Project. “It is exciting to find a correlation between brain circuitry and gene expression by combining high quality data from these two large-scale projects,” says David Van Essen, Ph.D., professor at Washington University in St. Louis and a leader of the Human Connectome Project.
“The human brain is phenomenally complex, so it is quite surprising that a small number of patterns can explain most of the gene variability across the brain,” says Christof Koch, Ph.D., President and Chief Scientific Officer at the Allen Institute for Brain Science. “There could easily have been thousands of patterns, or none at all. This gives us an exciting way to look further at the functional activity that underlies the uniquely human brain.”
This research was conducted in collaboration with the Cincinnati Children’s Hospital and Medical Center and Washington University in St. Louis.
The project described was supported by award numbers 1R21DA027644 and 5R33DA027644 from the National Institute on Drug Abuse. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the National Institutes of Health and the National Institute on Drug Abuse.

ORIGINAL: Allen Institute
November 16, 2015

Network of artificial neurons learns to use language

By Hugo Angel,

A network of artificial neurons has learned how to use language.
Researchers from the universities of Sassari and Plymouth found that their cognitive model, made up of two million interconnected artificial neurons, was able to learn to use language without any prior knowledge.
The model is called the Artificial Neural Network with Adaptive Behaviour Exploited for Language Learning — or the slightly catchier Annabell for short. Researchers hope Annabell will help shed light on the cognitive processes that underpin language development. 
Annabell has no pre-coded knowledge of language, and learned through communication with a human interlocutor. 
“The system is capable of learning to communicate through natural language starting from tabula rasa, without any prior knowledge of the structure of phrases, meaning of words [or] role of the different classes of words, and only by interacting with a human through a text-based interface,” researchers said.
“It is also able to learn nouns, verbs, adjectives, pronouns and other word classes and to use them in expressive language.”
Annabell was able to learn due to two functional mechanisms — synaptic plasticity and neural gating, both of which are present in the human brain.

  • Synaptic plasticity: refers to the brain’s ability to increase efficiency when the connection between two neurons are activated simultaneously, and is linked to learning and memory.
  • Neural gating mechanisms: play an important role in the cortex by modulating neurons, behaving like ‘switches’ that turn particular behaviours on and off. When turned on, they transmit a signal; when off, they block the signal. Annabell is able to learn using these mechanisms, as the flow of information into the system is controlled in different areas.
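The two mechanisms can be caricatured in a few lines: a Hebbian update stands in for synaptic plasticity, and a multiplicative gate switches transmission (and hence learning) on or off. This is a sketch of the concepts, not Annabell's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(scale=0.1, size=(4, 4))   # synaptic weights
lr = 0.5                                 # learning rate

def step(pre, gate, w):
    """One gated transmission plus Hebbian update.

    gate = 1.0 lets the signal through; gate = 0.0 blocks it, and because
    the Hebbian rule needs post-synaptic activity, blocking also stops
    learning (simultaneously active pre/post pairs strengthen)."""
    post = gate * (w @ pre)
    w = w + lr * np.outer(post, pre)
    return post, w

pre = np.array([1.0, 0.0, 1.0, 0.0])

post_on, w1 = step(pre, gate=1.0, w=w.copy())
post_off, w2 = step(pre, gate=0.0, w=w.copy())

print(np.allclose(post_off, 0))  # True: a closed gate transmits nothing
print(np.allclose(w2, w))        # True: no activity, no weight change
```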
“The results show that, compared to previous cognitive neural models of language, the Annabell model is able to develop a broad range of functionalities, starting from a tabula rasa condition,” researchers said in their conclusion.
“The current version of the system sets the scene for subsequent experiments on the fluidity of the brain and its robustness. It could lead to the extension of the model for handling the developmental stages in the grounding and acquisition of language.”
ORIGINAL: Wired – UK
13 NOVEMBER 15 

How Your Brain Is Wired Reveals the Real You

By Hugo Angel,

The Human Connectome Project finds surprising correlations between brain architecture and behavior
©iStock.com
The brain’s wiring patterns can shed light on a person’s positive and negative traits, researchers report in Nature Neuroscience. The finding, published on September 28, is the first from the Human Connectome Project (HCP), an international effort to map active connections between neurons in different parts of the brain.
The HCP, which launched in 2010 at a cost of US$40 million, seeks to scan the brain networks, or connectomes, of 1,200 adults. Among its goals is to chart the networks that are active when the brain is idle; these are thought to keep the different parts of the brain connected in case they need to perform a task.
In April, a branch of the project led by one of the HCP’s co-chairs, biomedical engineer Stephen Smith at the University of Oxford, UK, released a database of resting-state connectomes from about 460 people between 22 and 35 years old. Each brain scan is supplemented by information on approximately 280 traits, such as the person’s age, whether they have a history of drug use, their socioeconomic status and personality traits, and their performance on various intelligence tests.
Axis of connectivity
Smith and his colleagues ran a massive computer analysis to look at how these traits varied among the volunteers, and how the traits correlated with different brain connectivity patterns. The team was surprised to find a single, stark difference in the way brains were connected. People with more ‘positive’ variables, such as more education, better physical endurance and above-average performance on memory tests, shared the same patterns. Their brains seemed to be more strongly connected than those of people with ‘negative’ traits such as smoking, aggressive behaviour or a family history of alcohol abuse.
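The reported association can be illustrated with synthetic data: if a single latent axis drives both overall connectivity strength and positive traits, the two measures correlate while unrelated traits do not. The study's actual analysis was a much larger multivariate one; everything below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 460                                   # subjects, as in the released dataset

# Toy data: a latent "positive-negative axis" drives both connectivity
# strength and a positive trait score (education, memory performance, ...).
axis = rng.normal(size=n)
connectivity = axis + 0.5 * rng.normal(size=n)
positive_trait = axis + 0.5 * rng.normal(size=n)
unrelated_trait = rng.normal(size=n)      # a trait off the axis entirely

def corr(a, b):
    """Pearson correlation between two score vectors."""
    return float(np.corrcoef(a, b)[0, 1])

print(round(corr(connectivity, positive_trait), 2))    # strong, ~0.8
print(abs(corr(connectivity, unrelated_trait)) < 0.2)  # True: no shared axis
```

As Raichle's caveat in the article notes, a correlation like this says nothing about whether connectivity causes the traits or vice versa.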
Marcus Raichle, a neuroscientist at Washington University in St Louis, Missouri, is impressed that the activity and anatomy of the brains alone were enough to reveal this ‘positive-negative’ axis. “You can distinguish people with successful traits and successful lives versus those who are not so successful,” he says.
But Raichle says that it is impossible to determine from this study how different traits relate to one another and whether the weakened brain connections are the cause or effect of negative traits. And although the patterns are clear across the large group of HCP volunteers, it might be some time before these connectivity patterns could be used to predict risks and traits in a given individual. Deanna Barch, a psychologist at Washington University who co-authored the latest study, says that once these causal relationships are better understood, it might be possible to push brains toward the ‘good’ end of the axis.
Van Wedeen, a neuroscientist at Massachusetts General Hospital in Boston, says that the findings could help to prioritize future research. For instance, one of the negative traits that pulled a brain farthest down the negative axis was marijuana use in recent weeks. Wedeen says that the finding emphasizes the importance of projects such as one launched by the US National Institute on Drug Abuse last week, which will follow 10,000 adolescents for 10 years to determine how marijuana and other drugs affect their brains.
Wedeen finds it interesting that the wiring patterns associated with people’s general intelligence scores were not exactly the same as the patterns for individual measures of cognition—people with good hand–eye coordination, for instance, fell farther down the negative axis than did those with good verbal memory. This suggests that the biology underlying cognition might be more complex than our current definition of general intelligence, and that it could be influenced by demographic and behavioural factors. “Maybe it will cause us to reconsider what [the test for general intelligence] is measuring,” he says. “We have a new mystery now.”
Much more connectome data should emerge in the next few years. The Harvard Aging Brain Study, for instance, is measuring active brain connections in 284 people aged between 65 and 90, and released its first data earlier this year. And Smith is running the Developing Human Connectome Project in the United Kingdom, which is imaging the brains of 1,200 babies before and after birth. He expects to release its first data in the next few months. Meanwhile, the HCP is analysing genetic data from its participants, which include a large number of identical and fraternal twins, to determine how genetic and environmental factors relate to brain connectivity patterns.
This article is reproduced with permission and was first published on September 28, 2015.
September 28, 2015