Category: Design


An international team of scientists has come up with a blueprint for a large-scale quantum computer

By Hugo Angel,

‘It is the Holy Grail of science … we will be able to do certain things we could never even dream of before’
Courtesy Professor Winfried Hensinger
Quantum computing breakthrough could help ‘change life completely’, say scientists
Scientists claim to have produced the first-ever blueprint for a large-scale quantum computer in a development that could bring about a technological revolution on a par with the invention of computing itself.
Until now quantum computers have had just a fraction of the processing power they are theoretically capable of producing.
But an international team of researchers believe they have finally overcome the main technical problems that have prevented the construction of more powerful machines.
They are currently building a prototype, and a full-scale quantum computer – many millions of times faster than the best currently available – could be built in about a decade.
Such devices work by utilising the almost magical properties found in the world of the very small, where an atom can apparently exist in two different places at the same time.
Professor Winfried Hensinger, head of the Ion Quantum Technology Group at Sussex University, who has been leading this research, told The Independent: “It is the Holy Grail of science, really, to build a quantum computer.
“And we are now publishing the actual nuts-and-bolts construction plan for a large-scale quantum computer.”
It is thought the astonishing processing power unleashed by quantum mechanics will lead to new, life-saving medicines, help solve the most intractable scientific problems, and probe the mysteries of the universe.
“Life will change completely. We will be able to do certain things we could never even dream of before,” Professor Hensinger said.
“You can imagine that suddenly the sky is the limit.
“This is really, really exciting … it’s probably one of the most exciting times to be in this field.”
He said small quantum computers had been built in the past, but only to test the theories.
“This is not an academic study any more, it really is all the engineering required to build such a device,” he said.
“Nobody has really gone ahead and drafted a full engineering plan of how you build one.
“Many people questioned, because this is so hard to make happen, whether it can even be built.
“We show that not only can it be built, but we provide a whole detailed plan on how to make it happen.”
The problem is that existing quantum computers require lasers focused precisely on individual atoms. The larger the computer, the more lasers are required and the greater the chance of something going wrong.
But Professor Hensinger and colleagues used a different technique to monitor the atoms involving a microwave field and electricity in an ‘ion-trap’ device.

“What we have is a solution that we can scale to arbitrary [computing] power,” he said.

Fig. 2. Gradient wires placed underneath each gate zone and embedded silicon photodetector.
(A) Illustration showing an isometric view of the two main gradient wires placed underneath each gate zone. Short wires are placed locally underneath each gate zone to form coils, which compensate for slowly varying magnetic fields and allow for individual addressing. The wire configuration in each zone can be seen in more detail in the inset.
(B) Silicon photodetector (marked green) embedded in the silicon substrate, transparent center segmented electrodes, and the possible detection angle are shown. VIA structures are used to prevent optical cross-talk from neighboring readout zones.
Source: Science Journals — AAAS. Blueprint for a microwave trapped ion quantum computer. Lekitsch et al. Sci. Adv. 2017;3: e1601540 1 February 2017
Fig. 4. Scalable module illustration. One module consisting of 36 × 36 junctions placed on the supporting steel frame structure: Nine wafers containing the required DACs and control electronics are placed between the wafer holding 36 × 36 junctions and the microchannel cooler (red layer) providing the cooling. X-Y-Z piezo actuators are placed in the four corners on top of the steel frame, allowing for accurate alignment of the module. Flexible electric wires supply voltages, currents, and control signals to the DACs and control electronics, such as field-programmable gate arrays (FPGAs). Coolant is supplied to the microchannel cooler layer via two flexible steel tubes placed in the center of the modules.
Source: Science Journals — AAAS. Blueprint for a microwave trapped ion quantum computer. Lekitsch et al. Sci. Adv. 2017;3: e1601540 1 February 2017
Fig. 5. Illustration of vacuum chambers. Schematic of octagonal UHV chambers connected together; each chamber measures 4.5 × 4.5 m² and can hold >2.2 million individual X-junctions placed on steel frames.
Source: Science Journals — AAAS. Blueprint for a microwave trapped ion quantum computer. Lekitsch et al. Sci. Adv. 2017;3: e1601540 1 February 2017

“We are already building it now. Within two years we think we will have completed a prototype which incorporates all the technology we state in this blueprint.

“At the same time we are now looking for industry partners so we can really build a large-scale device that fills a building, basically.
“It’s extraordinarily expensive, so we need industry partners … this will be in the tens of millions, up to £100m.”
Commenting on the research, described in a paper in the journal Science Advances, other academics praised the quality of the work but expressed caution about how quickly it could be developed.
Dr Toby Cubitt, a Royal Society research fellow in quantum information theory at University College London, said: “Many different technologies are competing to build the first large-scale quantum computer. Ion traps were one of the earliest realistic proposals. 
“This work is an important step towards scaling up ion-trap quantum computing.
“Though there’s still a long way to go before you’ll be making spreadsheets on your quantum computer.”
And Professor Alan Woodward, of Surrey University, hailed the “tremendous step in the right direction”.
“It is great work,” he said. “They have made some significant strides forward.”

But he added it was “too soon to say” whether it would lead to the hoped-for technological revolution.

ORIGINAL: The Independent
Ian Johnston Science Correspondent
Thursday 2 February 2017

Microsoft Neural Net Shows Deep Learning Can Get Way Deeper

By Hugo Angel,

Computer vision is now a part of everyday life. Facebook recognizes faces in the photos you post to the popular social network. The Google Photos app can find images buried in your collection, identifying everything from dogs to birthday parties to gravestones. Twitter can pinpoint pornographic images without help from human curators.
All of this “seeing” stems from a remarkably effective breed of artificial intelligence called deep learning. But as far as this much-hyped technology has come in recent years, a new experiment from Microsoft Research shows it’s only getting started. Deep learning can go so much deeper.
This revolution in computer vision was a long time coming. A key turning point came in 2012, when artificial intelligence researchers from the University of Toronto won a competition called ImageNet. ImageNet pits machines against each other in an image recognition contest—which computer can identify cats or cars or clouds more accurately?—and that year, the Toronto team, including researcher Alex Krizhevsky and professor Geoff Hinton, topped the contest using deep neural nets, a technology that learns to identify images by examining enormous numbers of them, rather than identifying images according to rules diligently hand-coded by humans.
 
Toronto’s win provided a roadmap for the future of deep learning. In the years since, the biggest names on the ‘net—including Facebook, Google, Twitter, and Microsoft—have used similar tech to build computer vision systems that can match and even surpass humans. “We can’t claim that our system ‘sees’ like a person does,” says Peter Lee, the head of research at Microsoft. “But what we can say is that for very specific, narrowly defined tasks, we can learn to be as good as humans.”
Roughly speaking, neural nets use hardware and software to approximate the web of neurons in the human brain. This idea dates to the 1980s, but in 2012, Krizhevsky and Hinton advanced the technology by running their neural nets atop graphics processing units, or GPUs. These specialized chips were originally designed to render images for games and other highly graphical software, but as it turns out, they’re also suited to the kind of math that drives neural nets. Google, Facebook, Twitter, Microsoft, and so many others now use GPU-powered AI to handle image recognition and so many other tasks, from Internet search to security. Krizhevsky and Hinton have since joined the staff at Google.
Now, the latest ImageNet winner is pointing to what could be another step in the evolution of computer vision—and the wider field of artificial intelligence. Last month, a team of Microsoft researchers took the ImageNet crown using a new approach they call a deep residual network. The name doesn’t quite describe it. They’ve designed a neural net that’s significantly more complex than typical designs—one that spans 152 layers of mathematical operations, compared to the typical six or seven. It shows that, in the years to come, companies like Microsoft will be able to use vast clusters of GPUs and other specialized chips to significantly improve not only image recognition but other AI services, including systems that recognize speech and even understand language as we humans naturally speak it.
In other words, deep learning is nowhere close to reaching its potential. “We’re staring at a huge design space,” Lee says, “trying to figure out where to go next.”
Layers of Neurons
Deep neural networks are arranged in layers. Each layer is a different set of mathematical operations—aka algorithms. The output of one layer becomes the input of the next. Loosely speaking, if a neural network is designed for image recognition, one layer will look for a particular set of features in an image—edges or angles or shapes or textures or the like—and the next will look for another set. These layers are what make these neural networks deep. “Generally speaking, if you make these networks deeper, it becomes easier for them to learn,” says Alex Berg, a researcher at the University of North Carolina who helps oversee the ImageNet competition.
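To make the layer-by-layer flow concrete, here is a minimal sketch in NumPy (not any production framework): each layer is a matrix multiply plus a nonlinearity, and its output becomes the next layer's input. The layer sizes and random weights are purely illustrative assumptions.

```python
# A minimal sketch (NumPy only): each layer is a matrix multiply plus a
# nonlinearity, and its output becomes the next layer's input.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical layer sizes: a 64-value input flowing through a few layers to 10 class scores.
sizes = [64, 32, 32, 16, 10]
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Pass an input vector through every layer in turn."""
    activation = x
    for W, b in zip(weights, biases):
        activation = relu(activation @ W + b)  # each layer feeds the next
    return activation

image = rng.random(64)       # stand-in for a flattened image
print(forward(image).shape)  # -> (10,)
```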
Constructing this kind of mega-neural net is flat-out difficult.
Today, a typical neural network includes six or seven layers. Some might extend to 20 or even 30. But the Microsoft team, led by researcher Jian Sun, just expanded that to 152. In essence, this neural net is better at recognizing images because it can examine more features. “There is a lot more subtlety that can be learned,” Lee says.
In the past, according to Lee and researchers outside of Microsoft, this sort of very deep neural net wasn’t feasible. Part of the problem was that as your mathematical signal moved from layer to layer, it became diluted and tended to fade. As Lee explains, Microsoft solved this problem by building a neural net that skips certain layers when it doesn’t need them, but uses them when it does. “When you do this kind of skipping, you’re able to preserve the strength of the signal much further,” Lee says, “and this is turning out to have a tremendous, beneficial impact on accuracy.”
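As a rough illustration of that skip idea, the sketch below implements a generic residual-style block in NumPy: the block computes a small correction F(x) and adds it back to its own input, so the signal can pass through largely unchanged when the transformation is not needed. This is a simplified illustration under assumed dimensions, not Microsoft's actual 152-layer network.

```python
# An illustrative residual-style block in NumPy, not Microsoft's network.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, b1, W2, b2):
    """out = relu(x + F(x)), where F is two small learned transformations."""
    f = relu(x @ W1 + b1)
    f = f @ W2 + b2
    return relu(x + f)  # the identity shortcut preserves the signal across the block

rng = np.random.default_rng(1)
dim = 32
x = rng.random(dim)
W1 = rng.normal(scale=0.05, size=(dim, dim))
W2 = rng.normal(scale=0.05, size=(dim, dim))
b1 = np.zeros(dim)
b2 = np.zeros(dim)

# Stacking many such blocks keeps the signal strong even in very deep stacks
# (one set of weights is reused here only to keep the sketch short).
for _ in range(10):
    x = residual_block(x, W1, b1, W2, b2)
print(x.shape)  # -> (32,)
```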
Berg says that this is a notable departure from previous systems, and he believes that other companies and researchers will follow suit.
Deep Difficulty
The other issue is that constructing this kind of mega-neural net is tremendously difficult. Landing on a particular set of algorithms—determining how each layer should operate and how it should talk to the next layer—is an almost epic task. But Microsoft has a trick here, too. It has designed a computing system that can help build these networks.
As Jian Sun explains it, researchers can identify a promising arrangement for massive neural networks, and then the system can cycle through a range of similar possibilities until it settles on the best one. “In most cases, after a number of tries, the researchers learn [something], reflect, and make a new decision on the next try,” he says. “You can view this as ‘human-assisted search.’”
According to Adam Gibson—the chief researcher at deep learning startup Skymind—this kind of thing is getting more common. It’s called “hyperparameter optimization.” “People can just spin up a cluster [of machines], run 10 models at once, find out which one works best and use that,” Gibson says. “They can input some baseline parameter—based on intuition—and the machine kind of homes in on what the best solution is.” As Gibson notes, last year Twitter acquired a company, Whetlab, that offers similar ways of “optimizing” neural networks.
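A minimal sketch of that recipe as a simple random hyperparameter search, assuming a purely hypothetical train_and_score function standing in for real model training: draw a handful of candidate settings, train one model per candidate, and keep whichever scores best.

```python
# A rough sketch of random hyperparameter search; train_and_score is a
# hypothetical stand-in for real model training and evaluation.
import random

def train_and_score(learning_rate, depth):
    """Hypothetical: train a model with these settings and return a validation score."""
    # Placeholder score so the example runs; a real version would train and evaluate a model.
    return 1.0 - abs(learning_rate - 0.01) * 10 - abs(depth - 20) * 0.01

random.seed(0)
best = None
for trial in range(10):  # "run 10 models ... find out which one works best"
    candidate = {
        "learning_rate": random.choice([0.1, 0.03, 0.01, 0.003, 0.001]),
        "depth": random.randint(5, 50),
    }
    score = train_and_score(**candidate)
    if best is None or score > best[0]:
        best = (score, candidate)

print("best settings found:", best[1], "score:", round(best[0], 3))
```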

‘A Hardware Problem’
As Peter Lee and Jian Sun describe it, such an approach isn’t exactly “brute forcing” the problem. “With very very large amounts of compute resources, one could fantasize about a gigantic ‘natural selection’ setup where evolutionary forces help direct a brute-force search through a huge space of possibilities,” Lee says. “The world doesn’t have those computing resources available for such a thing… For now, we will still depend on really smart researchers like Jian.”
But Lee does say that, thanks to new techniques and computer data centers filled with GPU machines, the realm of possibilities for deep learning is enormous. A big part of the company’s task is just finding the time and the computing power needed to explore these possibilities. “This work has dramatically exploded the design space. The amount of ground to cover, in terms of scientific investigation, has become exponentially larger,” Lee says. And this extends well beyond image recognition, into speech recognition, natural language understanding, and other tasks.
As Lee explains, that’s one reason Microsoft is not only pushing to improve the power of its GPU clusters, but exploring the use of other specialized processors, including FPGAs—chips that can be programmed for particular tasks, such as deep learning. “There has also been an explosion in demand for much more experimental hardware platforms from our researchers,” he says. And this work is sending ripples across the wider world of tech and artificial intelligence. This past summer, in its largest-ever acquisition deal, Intel agreed to buy Altera, which specializes in FPGAs.
Indeed, Gibson says that deep learning has become more of “a hardware problem.” Yes, we still need top researchers to guide the creation of neural networks, but more and more, finding new paths is a matter of brute-forcing new algorithms across ever more powerful collections of hardware. As Gibson points out, though these deep neural nets work extremely well, we don’t quite know why they work. The trick lies in finding the complex combination of algorithms that work the best. More and better hardware can shorten the path.
The end result is that the companies that can build the most powerful networks of hardware are the companies that will come out ahead. That would be Google and Facebook and Microsoft. Those that are good at deep learning today will only get better.
ORIGINAL: Wired

Forward to the Future: Visions of 2045

By Hugo Angel,

DARPA asked the world and our own researchers what technologies they expect to see 30 years from now—and received insightful, sometimes funny predictions
Today—October 21, 2015—is famous in popular culture as the date 30 years in the future when Marty McFly and Doc Brown arrive in their time-traveling DeLorean in the movie “Back to the Future Part II.” The film got some things right about 2015, including in-home videoconferencing and devices that recognize people by their voices and fingerprints. But it also predicted trunk-sized fusion reactors, hoverboards and flying cars—game-changing technologies that, despite the advances we’ve seen in so many fields over the past three decades, still exist only in our imaginations.
A big part of DARPA’s mission is to envision the future and make the impossible possible. So ten days ago, as the “Back to the Future” day approached, we turned to social media and asked the world to predict: What technologies might actually surround us 30 years from now? We pointed people to presentations from DARPA’s Future Technologies Forum, held last month in St. Louis, for inspiration and a reality check before submitting their predictions.
Well, you rose to the challenge and the results are in. So in honor of Marty and Doc (little known fact: he is a DARPA alum) and all of the world’s innovators past and future, we present here some highlights from your responses, in roughly descending order by number of mentions for each class of futuristic capability:
  • Space: Interplanetary and interstellar travel, including faster-than-light travel; missions and permanent settlements on the Moon, Mars and the asteroid belt; space elevators
  • Transportation & Energy: Self-driving and electric vehicles; improved mass transit systems and intercontinental travel; flying cars and hoverboards; high-efficiency solar and other sustainable energy sources
  • Medicine & Health: Neurological devices for memory augmentation, storage and transfer, and perhaps to read people’s thoughts; life extension, including virtual immortality via uploading brains into computers; artificial cells and organs; “Star Trek”-style tricorder for home diagnostics and treatment; wearable technology, such as exoskeletons and augmented-reality glasses and contact lenses
  • Materials & Robotics: Ubiquitous nanotechnology, 3-D printing and robotics; invisibility and cloaking devices; energy shields; anti-gravity devices
  • Cyber & Big Data: Improved artificial intelligence; optical and quantum computing; faster, more secure Internet; better use of data analytics to improve use of resources
A few predictions inspired us to respond directly:
  • “Pizza delivery via teleportation”—DARPA took a close look at this a few years ago and decided there is plenty of incentive for the private sector to handle this challenge.
  • “Time travel technology will be close, but will be closely guarded by the military as a matter of national security”—We already did this tomorrow.
  • “Systems for controlling the weather”—Meteorologists told us it would be a job killer and we didn’t want to rain on their parade.
  • “Space colonies…and unlimited cellular data plans that won’t be slowed by your carrier when you go over a limit”—We appreciate the idea that these are equally difficult, but they are not. We think likable cell-phone data plans are beyond even DARPA and a total non-starter.
So seriously, as an adjunct to this crowd-sourced view of the future, we asked three DARPA researchers from various fields to share their visions of 2045, and why getting there will require a group effort with players not only from academia and industry but from forward-looking government laboratories and agencies:

Pam Melroy, an aerospace engineer, former astronaut and current deputy director of DARPA’s Tactical Technologies Office (TTO), foresees technologies that would enable machines to collaborate with humans as partners on tasks far more complex than those we can tackle today:
Justin Sanchez, a neuroscientist and program manager in DARPA’s Biological Technologies Office (BTO), imagines a world where neurotechnologies could enable users to interact with their environment and other people by thought alone:
Stefanie Tompkins, a geologist and director of DARPA’s Defense Sciences Office, envisions building substances from the atomic or molecular level up to create “impossible” materials with previously unattainable capabilities.
Check back with us in 2045—or sooner, if that time machine stuff works out—for an assessment of how things really turned out in 30 years.
# # #
Associated images posted on www.darpa.mil and video posted at www.youtube.com/darpatv may be reused according to the terms of the DARPA User Agreement, available here: http://www.darpa.mil/policy/usage-policy.
Tweet @darpa
ORIGINAL: DARPA
[email protected]
10/21/2015

Robotic insect mimics Nature’s extreme moves

By Hugo Angel,

An international team of Seoul National University and Harvard researchers looked to water strider insects to develop robots that jump off water’s surface
(SEOUL and BOSTON) — The concept of walking on water might sound supernatural, but in fact it is a quite natural phenomenon. Many small living creatures leverage water’s surface tension to maneuver themselves around. One of the most complex maneuvers, jumping on water, is achieved by a species of semi-aquatic insects called water striders that not only skim along water’s surface but also generate enough upward thrust with their legs to launch themselves airborne from it.


In this video, watch how novel robotic insects developed by a team of Seoul National University and Harvard scientists can jump directly off water’s surface. The robots emulate the natural locomotion of water strider insects, which skim on and jump off the surface of water. Credit: Wyss Institute at Harvard University
Now, emulating this natural form of water-based locomotion, an international team of scientists from Seoul National University, Korea (SNU), Harvard’s Wyss Institute for Biologically Inspired Engineering, and the Harvard John A. Paulson School of Engineering and Applied Sciences, has unveiled a novel robotic insect that can jump off of water’s surface. In doing so, they have revealed new insights into the natural mechanics that allow water striders to jump from rigid ground or fluid water with the same amount of power and height. The work is reported in the July 31 issue of Science.
“Water’s surface needs to be pressed at the right speed for an adequate amount of time, up to a certain depth, in order to achieve jumping,” said the study’s co–senior author Kyu Jin Cho, Associate Professor in the Department of Mechanical and Aerospace Engineering and Director of the Biorobotics Laboratory at Seoul National University. “The water strider is capable of doing all these things flawlessly.”
The water strider, whose legs have slightly curved tips, employs a rotational leg movement to aid in its takeoff from the water’s surface, discovered co–senior author Ho–Young Kim, who is Professor in SNU’s Department of Mechanical and Aerospace Engineering and Director of SNU’s Micro Fluid Mechanics Lab. Kim, a former Wyss Institute Visiting Scholar, worked with the study’s co–first author Eunjin Yang, a graduate researcher at SNU’s Micro Fluid Mechanics Lab, to collect water striders and take extensive videos of their movements to analyze the mechanics that enable the insects to skim on and jump off water’s surface.
It took the team several trial and error attempts to fully understand the mechanics of the water strider, using robotic prototypes to test and shape their hypotheses.
“If you apply as much force as quickly as possible on water, the limbs will break through the surface and you won’t get anywhere,” said Robert Wood, Ph.D., who is a co–author on the study, a Wyss Institute Core Faculty member, the Charles River Professor of Engineering and Applied Sciences at the Harvard Paulson School, and founder of the Harvard Microrobotics Lab.
But by studying water striders in comparison to iterative prototypes of their robotic insect, the SNU and Harvard team discovered that the best way to jump off of water is to maintain leg contact on the water for as long as possible during the jump motion.
“Using its legs to push down on water, the natural water strider exerts the maximum amount of force just below the threshold that would break the water’s surface,” said the study’s co-first author Je-Sung Koh, Ph.D., who was pursuing his doctoral degree at SNU during the majority of this research and is now a Postdoctoral Fellow at the Wyss Institute and the Harvard Paulson School.
Mimicking these mechanics, the robotic insect built by the team can exert up to 16 times its own body weight on the water’s surface without breaking through, and can do so without complicated controls. Many natural organisms such as the water strider can perform extreme styles of locomotion – such as flying, floating, swimming, or jumping on water – with great ease despite a lack of complex cognitive skills.
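As a rough, back-of-the-envelope illustration of why surface tension sets that limit, the sketch below compares an assumed robot weight with the approximate upward force the water surface can supply along the wetted length of its legs. All numbers (mass, leg count, wetted length) are illustrative assumptions rather than values from the study, chosen only to land in the same ballpark as the 16-times figure quoted above.

```python
# Back-of-the-envelope estimate with assumed, illustrative numbers (not values
# reported in the study): surface tension acting along the wetted length of the
# legs sets a ceiling on how hard the robot can push before breaking through.
SIGMA = 0.072          # N/m, surface tension of water at room temperature
g = 9.81               # m/s^2

mass_kg = 68e-6        # assumed robot mass of roughly 68 mg (illustrative)
n_legs = 4             # assumed number of legs in contact with the water
wetted_length = 0.02   # assumed wetted length per leg, in meters

weight = mass_kg * g
# Roughly two contact lines per leg can pull upward before the surface ruptures.
max_surface_force = 2 * SIGMA * wetted_length * n_legs

print(f"body weight:         {weight * 1e3:.2f} mN")
print(f"surface-tension cap: {max_surface_force * 1e3:.2f} mN")
print(f"ratio:               {max_surface_force / weight:.1f}x body weight")
```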

From left, Seoul National University (SNU) professors Ho-Young Kim, Ph.D., and Kyu Jin Cho, Ph.D., observe the semi-aquatic jumping robotic insects developed by an SNU and Harvard team. Credit: Seoul National University.
“This is due to their natural morphology,” said Cho. “It is a form of embodied or physical intelligence, and we can learn from this kind of physical intelligence to build robots that are similarly capable of performing extreme maneuvers without highly complex controls or artificial intelligence.”
The robotic insect was built using a “torque reversal catapult mechanism” inspired by the way a flea jumps, which allows this kind of extreme locomotion without intelligent control. It was first reported by Cho, Wood and Koh in 2013 at the International Conference on Intelligent Robots and Systems.
For the robotic insect to jump off water, the lightweight catapult mechanism uses a burst of momentum coupled with limited thrust to propel the robot off the water without breaking the water’s surface. An automatic triggering mechanism, built from composite materials and actuators, was employed to activate the catapult.
To produce the body of the robotic insect, “pop-up” manufacturing was used to create folded composite structures that self-assemble much like the foldable components that “pop–up” in 3D books. Devised by engineers at the Harvard Paulson School and the Wyss Institute, this ingenious layering and folding process enables the rapid fabrication of microrobots and a broad range of electromechanical devices.
“The resulting robotic insects can achieve the same momentum and height that could be generated during a rapid jump on firm ground – but instead can do so on water – by spreading out the jumping thrust over a longer amount of time and in sustaining prolonged contact with the water’s surface,” said Wood.
“This international collaboration of biologists and roboticists has not only looked into nature to develop a novel, semi–aquatic bioinspired robot that performs a new extreme form of robotic locomotion, but has also provided us with new insights on the natural mechanics at play in water striders,” said Wyss Institute Founding Director Donald Ingber, M.D., Ph.D.
Additional co–authors of the study include Gwang–Pil Jung, a Ph.D. candidate in SNU’s Biorobotics Laboratory; Sun–Pill Jung, an M.S. candidate in SNU’s Biorobotics Laboratory; Jae Hak Son, who earned his Ph.D. in SNU’s Laboratory of Behavioral Ecology and Evolution; Sang–Im Lee, Ph.D., who is Research Associate Professor at SNU’s Institute of Advanced Machines and Design and Adjunct Research Professor at the SNU’s Laboratory of Behavioral Ecology and Evolution; and Piotr Jablonski, Ph.D., who is Professor in SNU’s Laboratory of Behavioral Ecology and Evolution.
This work was supported by the National Research Foundation of Korea, Bio–Mimetic Robot Research Center funding from the Defense Acquisition Program Administration, and the Wyss Institute for Biologically Inspired Engineering at Harvard University.
IMAGE AND VIDEO AVAILABLE
###
PRESS CONTACTS
Seoul National University College of Engineering
Wyss Institute for Biologically Inspired Engineering at Harvard University
Harvard University John A. Paulson School of Engineering and Applied Sciences
The Seoul National University College of Engineering (SNU CE) (http://eng.snu.ac.kr/english/index.php) aims to foster leaders in global industry and society. In CE, professors from all over the world are applying their passion for education and research. Graduates of the college are taking on important roles in society as the CEOs of conglomerates, founders of venture businesses, and prominent engineers, contributing to the country’s industrial development. Globalization is the trend of a new era, and engineering in particular is a field of boundless competition and cooperation. The role of engineers is crucial to our 21st century knowledge and information society, and engineers contribute to the continuous development of Korea toward a central role on the world stage. CE, which provides enhanced curricula in a variety of major fields, has now become the environment in which future global leaders are cultivated.
The Wyss Institute for Biologically Inspired Engineering at Harvard University (http://wyss.harvard.edu) uses Nature’s design principles to develop bioinspired materials and devices that will transform medicine and create a more sustainable world. Wyss researchers are developing innovative new engineering solutions for healthcare, energy, architecture, robotics, and manufacturing that are translated into commercial products and therapies through collaborations with clinical investigators, corporate alliances, and formation of new start–ups. The Wyss Institute creates transformative technological breakthroughs by engaging in high risk research, and crosses disciplinary and institutional barriers, working as an alliance that includes Harvard’s Schools of Medicine, Engineering, Arts & Sciences and Design, and in partnership with Beth Israel Deaconess Medical Center, Brigham and Women’s Hospital, Boston Children’s Hospital, Dana–Farber Cancer Institute, Massachusetts General Hospital, the University of Massachusetts Medical School, Spaulding Rehabilitation Hospital, Boston University, Tufts University, and Charité – Universitätsmedizin Berlin, University of Zurich and Massachusetts Institute of Technology.
The Harvard University John A. Paulson School of Engineering and Applied Sciences (http://seas.harvard.edu) serves as the connector and integrator of Harvard’s teaching and research efforts in engineering, applied sciences, and technology. Through collaboration with researchers from all parts of Harvard, other universities, and corporate and foundational partners, we bring discovery and innovation directly to bear on improving human life and society.
ORIGINAL: Wyss Institute
Jul 30, 2015

Building an organic computing device with multiple interconnected brains. (Duke U.)

By admin,

Abstract
Recently, we proposed that Brainets, i.e. networks formed by multiple animal brains, cooperating and exchanging information in real time through direct brain-to-brain interfaces, could provide the core of a new type of computing device: an organic computer. Here, we describe the first experimental demonstration of such a Brainet, built by interconnecting four adult rat brains. Brainets worked by concurrently recording the extracellular electrical activity generated by populations of cortical neurons distributed across multiple rats chronically implanted with multi-electrode arrays. Cortical neuronal activity was recorded and analyzed in real time, and then delivered to the somatosensory cortices of other animals that participated in the Brainet using intracortical microstimulation (ICMS). Using this approach, different Brainet architectures solved a number of useful computational problems, such as 
  • discrete classification, 
  • image processing, 
  • storage and retrieval of tactile information, and even 
  • weather forecasting. 

Brainets consistently performed at the same or higher levels than single rats in these tasks. Based on these findings, we propose that Brainets could be used to investigate animal social behaviors as well as a test bed for exploring the properties and potential applications of organic computers.

Introduction
After introducing the concept of brain-to-brain interfaces (BtBIs) [1], our laboratory demonstrated experimentally that BtBIs could be utilized to directly transfer tactile or visuomotor information between pairs of rat brains in real time [2]. Since our original report, other studies have highlighted several properties of BtBIs [1,3], such as transmission of hippocampus representations between rodents [4], transmission of visual information between a human and a rodent [5], and transmission of motor information between two humans [6,7]. Our lab has also shown that Brainets could allow monkey pairs or triads to perform cooperative motor tasks mentally by inducing accurate synchronization of neural ensemble activity across individual brains [8].
In addition to the concept of BtBIs, we have also suggested that networks of multiple interconnected animal brains, which we dubbed Brainet [1], could provide the core for a new type of computing device: an organic computer. Here, we tested the hypothesis that such a Brainet could potentially exceed the performance of individual brains, due to a distributed and parallel computing architecture [1,8]. This hypothesis was tested by constructing a Brainet formed by four interconnected rat brains and then investigating how it could solve fundamental computational problems (Fig. 1A–C). In our Brainet, all four rats were chronically implanted with multielectrode arrays, placed bilaterally in the primary somatosensory cortex (S1). These implants were used to both record neural ensemble electrical activity and transmit virtual tactile information via intracortical electrical microstimulation (ICMS). Once animals recovered from the implantation surgery, the resulting 4-rat Brainets (Fig. 1) were tested in a variety of ways. Our central goal was to investigate how well different Brainet architectures could be employed by the four rats to collaborate in order to solve a particular computational task. Different Brainet designs were implemented to address three fundamental computational problems: discrete classification, sequential and parallel computations, and memory storage/retrieval [1]. As predicted, we observed that Brainets consistently outperformed individual rats in each of these tasks.
Figure 1: Experimental apparatus scheme for a Brainet computing device.
A) A Brainet of four interconnected brains is shown. The arrows represent the flow of information through the Brainet. Inputs were delivered as simultaneous ICMS patterns to the S1 cortex of each rat. Neural activity was then recorded and analyzed in real time. Rats were required to synchronize their neural activity with the remaining of the Brainet to receive water. B) Inputs to the Brainet were delivered as ICMS patterns to the left S1, while outputs were calculated using the neural responses recorded from the right S1. C) Brainet architectures were set to mimic hidden layers of an artificial neural network. D) Examples of perievent histograms of neurons after the delivery of ICMS.
Results
All experiments with 4-rat Brainets were pooled from a sample of 16 animals that received cortical implants from which we could simultaneously record the extracellular activity from 15–66 S1 neurons per Brainet (total of 2,738 neurons recorded across 71 sessions).
Brainet for neural synchronization
Rats were water deprived and trained on a task that required them to synchronize their neural activity after an ICMS stimulus. A total of six rats were used in 12 sessions to run this first experiment. As depicted in Fig. 1A–C, the processing chain in these experiments started with the simultaneous delivery of an ICMS pattern to one of the S1 cortices of all subjects, then processing of tactile information with a single-layer Brainet, followed by generation of the system output by the contralateral S1 cortex of each animal. Each trial comprised four epochs: waiting (baseline), ICMS delivery, test, and reward. ICMS patterns (20 pulses at 22–26 Hz) were unilaterally delivered to the S1 of each rat. Neuronal responses to the ICMS were evaluated during the test period, when S1 neuronal ensemble activity was sampled from the hemisphere contralateral to the stimulation site (Figs. 1D and 2A–E). Rats were rewarded if their cortical activity became synchronized during the test period. The correlation coefficient R was used as the measure of global Brainet synchrony. Thus, R measured the linear correlation between the normalized firing rate of all neurons in a given rat and the average normalized firing rate for all neurons recorded in the remaining three rats (see Methods for details). If at least three rats presented R values greater than or equal to 0.2, a trial was considered successful, and all four rats were rewarded. Otherwise no reward was given to any rat. Two conditions served as controls: the pre-session, where no ICMS or water reward was delivered, and the post-session, where no ICMS was delivered but rats were still rewarded if they satisfied the correlation criterion (Fig. 2A).
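A minimal sketch of that synchrony criterion, with simulated firing rates standing in for real recordings (binning and normalization details from the paper's Methods are not modeled): for each rat, correlate its firing-rate vector with the average of the other three, and call the trial successful if at least three rats reach R ≥ 0.2.

```python
# A simplified sketch of the synchrony criterion, with simulated firing rates
# standing in for real recordings.
import numpy as np

def trial_successful(rates, threshold=0.2, min_rats=3):
    """rates: array of shape (n_rats, n_time_bins) of normalized firing rates."""
    r_values = []
    for i in range(rates.shape[0]):
        others = np.delete(rates, i, axis=0).mean(axis=0)     # average of the rest of the Brainet
        r_values.append(np.corrcoef(rates[i], others)[0, 1])  # Pearson correlation R
    success = sum(r >= threshold for r in r_values) >= min_rats
    return success, r_values

rng = np.random.default_rng(2)
common_drive = rng.random(50)                     # shared post-ICMS modulation
rates = common_drive + 0.5 * rng.random((4, 50))  # four rats, 50 time bins each
rewarded, rs = trial_successful(rates)
print("trial rewarded:", rewarded, "| R values:", np.round(rs, 2))
```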
Figure 2: The Brainet can synchronize neural activity.
A) The different colors indicate the different manipulations used to study synchronization across the network. During the pre-session, rats were tested for periods of spurious neural synchronization. No ICMS or rewards were delivered here. During sessions, rats were tested for increased neural synchronization due to detection of the ICMS stimulus (red period). Successful synchronization was rewarded with water. During the post session, rats were tested for periods of neural synchronization due to the effects of reward (e.g. continuous whisking/licking). Successful synchronization was rewarded with water, but no ICMS stimulus was delivered. B) Example of neuronal activity across the Brainet. After the ICMS there was a general tendency for neural activity to increase. Periods of maximum firing rate are represented in red. C) The performance of the Brainet during sessions was above the pre-sessions and post-sessions. Also, delivery of ICMS alone or during anesthetized states also resulted in poor performances. ** and *** indicate P < 0.01 and P < 0.0001 respectively. D) Overall changes in R values in early and late sessions show that improvements in performances were accompanied by specific changes in the periods of synchronized activity. E) Example of a synchronization trial. The lower panels show, in red, the neural activity of each rat and, in blue, the average of neural activity for the remaining of the Brainet. The upper panels depict the R value for the correlation coefficient between each rat and the remaining of the Brainet. There was an overall tendency for the Brainet to correlate in the beginning of the test period.
Behaviorally, rats remained mostly calm or immobile during the baseline period. After the ICMS pattern was delivered simultaneously to all animals, rats typically displayed periods of whisking and licking movements. A sample of S1 neuronal population activity during this period is shown in Fig. 2B (also see Fig. 1D for examples of individual neurons perievent histograms). Typically, after the delivery of ICMS, there was a sharp decrease in the neuronal firing rate of the neurons (~20 ms), followed by a sudden firing rate increase (~100 ms). While the main measure of accuracy for this task was the degree in which cortical neuronal populations fired synchronously, it is important to emphasize that the build up of these ensemble firing patterns depended highly on how single S1 neurons modulated their firing rate as a result of electrical microstimulation. Thus, ICMS served as a reset signal that allowed rats to synchronize their neural activity to the remaining network ( Fig. 2D,E ). Note that, in this task, rats were not exchanging neural information through the BtBI. Instead the timing of the ICMS stimulus, the partial contact allowed through the Plexiglas panels, and the reward were the only sources of information available for rats to succeed in the task.
As the Brainet consistently exhibited the best performance during the first trials, we focused our subsequent analysis on the first 30-trial block of each session. Overall, this 4-rat Brainet was able to synchronize the neural activity of the constituent rats significantly above Pre-Session (Brainet: 57.95 ± 2%; Pre-Sessions: 45.95 ± 2%; F2,24 = 10.99; P = 0.0004; Dunnett’s test: P < 0.001) and Post-Session levels (46.41 ± 2%; Dunnett’s test: P < 0.01; Fig. 2C ).
Over approximately 1.5 weeks (total of 12 sessions), this Brainet gradually improved its performance, from 54.76 ± 3.16% (mean ± standard error; the first 6 days) to 61.67 ± 3.01% correct trials (the last 6 days; F1,2 = 5.770, P = 0.0175 for interaction; Bonferroni post hoc comparisons: pre vs session initial start P > 0.05; pre vs session end P < 0.01; session vs post start P > 0.05; session vs post end P < 0.001). The high fidelity of information transfer in this Brainet configuration was further confirmed by the observation that the performance of individual rats reached 65.28 ± 1.70%. In other words, a 4-rat Brainet was capable of maintaining a level of global neuronal synchrony across multiple brains that was virtually identical to that observed in the cortex of a single rat (Brainet level = 61.67 ± 3.07%; Mann-Whitney U = 58.0; P = 0.4818, n.s.).
A comparison of correlation values between sessions from the first (n = 6) and the last days (n = 6) further demonstrated that daily training on this first task resulted in a statistically significant increase in correlated cortical activity across rats, centered between 700 ms and 1000 ms of the testing period (F = 1.622; df = 1.49; P = 0.0043, Fig. 2D ). The lower panel of Fig. 2E shows the normalized firing rate for each rat (in red) and for the remaining Brainet (in blue) in one trial. The upper panels show R value changes for the correlation between neuronal activity in each rat and the remaining Brainet. Notice the overall tendency for most rats to increase the R values soon after the delivery of the ICMS pattern (T = 0 seconds).
To determine if reward was mandatory for the correlation to emerge in the Brainet, we performed three control sessions with awake animals receiving ICMS (but no reward). The performances dropped to levels below chance (performance: 30.67 ± 3.0%; see Fig. 2C ). Further, in another three sessions where ICMS was applied to anesthetized animals, the Brainet performed close to chance levels again (performance: 38.89 ± 4.8%; see Fig. 2C ). These results demonstrated that the Brainet could only operate above chance in awake behaving rats in which there was an expectation for reward.
After determining that the Brainet could learn to respond to an ICMS input by synchronizing its output across multiple brains, we tested whether such a collective neuronal response could be utilized for multiple computational purposes. These included discrete stimulus classification, storage of a tactile memory, and, by combining the two former tasks, processing of multiple tactile stimuli.
Brainet for stimulus classification
Initially, we trained our 4-rat Brainet to discriminate between two ICMS patterns ( Fig. 3A,B , 8 sessions in 4 rats). The first pattern (Stimulus 1) was the same as in the previous experiment (20 pulses at 22–26 Hz), while the second (Stimulus 2) consisted of two separate bursts of four pulses (22–26 Hz). The Brainet was required to report either the presence of Stimulus 1 with an increase in neuronal synchrony across the four rat brains (i.e. R ≥ 0.2 in at least three rats), or Stimulus 2 by a decrease in synchrony (i.e., R < 0.2 in at least three rats). By requiring that the delivery of Stimulus 2 be indicated through a reduction in neuronal synchronization, we further ensured that the Brainet performance was not based on a simple neural response to the ICMS pattern. As in the previous experiment, Stimulus 1 served as a reset signal that allowed rats to synchronize their neural activity to the remaining network. Meanwhile, because Stimulus 2 was much shorter than Stimulus 1, it still induced neural responses in several S1 neurons ( Fig. 3B ), but its effects were less pronounced and not as likely to induce an overall neural synchronization across the Brainet (see Supplementary Figure 1 ).
Figure 3: The Brainet can both synchronize and desynchronize neural activity.
A) Architecture of a Brainet that can synchronize and desynchronize its neural activity to perform virtual tactile stimuli classification. Different patterns of ICMS were simultaneously delivered to each rat in the Brainet. Neural signals from all neurons from each brain were analyzed and compared to the remaining rats in the Brainet. The Brainet was required to synchronize its neural activity to indicate the delivery of a Stimulus 1 and to desynchronize its neural activity to indicate the delivery of a Stimulus 2. B) Example of perievent histograms of neurons for ICMS Stimulus 1 and 2. C) The Brainet performance was above No-ICMS sessions, and above individual rats’ performances. * indicates P < 0.05; ** indicates P < 0.01; n.s. indicates non significant.
Following training, the Brainet reached an average performance of 61.24 ± 0.5% correct discrimination between Stimuli 1 and 2, which was significantly above No-ICMS sessions (52.97 ± 1.1%, n = 8 sessions; Brainet vs No-ICMS: Dunn’s test: P < 0.01). Moreover, using this more complex task design, the Brainet outperformed individual rats (55.86 ± 1.2%) (Kruskal-Wallis statistic = 10.87, P = 0.0044; Brainet vs Individual Rats; Dunn’s test: P < 0.05; also see Fig. 3C ).
To improve the overall performance of this 4-rat Brainet, we implemented an adaptive decoding algorithm that analyzed the activity of each neuron in each specific bin separately, and then readjusted the neuronal weights following each trial (see Methods for details). Figure 4A depicts this Brainet architecture. Notice the different weights for each of the individual neurons (represented by different shades of grey), reflecting the individual accuracy in decoding the ICMS pattern. Figure 4B illustrates a session in which all four rats contributed to the overall decoding of the ICMS stimuli (the red color indicates periods of maximum decoding). Using this approach, we increased both the overall Brainet performance (74.18 ± 2.2% correct trials; n = 7 rats in 12 sessions) and the number of trials performed (64.17 ± 6.2 trials) in each session. The neuronal ensembles of this Brainet included an average of 50 ± 43 neurons (mean ± standard error). Figure 4C depicts the improved performance of the Brainet compared to that of the No-ICMS sessions (54.34 ± 2.2% correct trials, n = 11 sessions) and the performance of individual rats (61.28 ± 1.1% correct trials, F = 26.34; df = 2, 56; P < 0.0001; Bonferroni post hoc comparisons; Brainet vs No-ICMS: P < 0.0001; Brainet vs Individual rats P < 0.0001).
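The sketch below illustrates the general idea of such an adaptive, weighted decoder with simulated data: each neuron votes for Stimulus 1 or 2, votes are weighted by how reliable that neuron has proved so far, and the weights are nudged after every trial. It is a simplified stand-in under assumed neuron reliabilities, not the paper's actual per-bin algorithm.

```python
# A simplified, illustrative weighted-vote decoder with simulated neurons;
# not the paper's per-bin method.
import numpy as np

class WeightedVoteDecoder:
    def __init__(self, n_neurons):
        self.weights = np.ones(n_neurons)  # start by trusting every neuron equally

    def predict(self, votes):
        """votes: +1 for Stimulus 1, -1 for Stimulus 2, one entry per neuron."""
        return 1 if np.dot(self.weights, votes) >= 0 else 2

    def update(self, votes, true_stimulus, lr=0.1):
        """Boost neurons that voted correctly on this trial, shrink the rest."""
        target = 1 if true_stimulus == 1 else -1
        self.weights = np.clip(self.weights + lr * np.where(votes == target, 1.0, -1.0), 0.0, None)

rng = np.random.default_rng(3)
n_neurons = 50
reliability = np.linspace(0.55, 0.9, n_neurons)  # simulated neurons of varying quality
decoder = WeightedVoteDecoder(n_neurons)

correct = 0
for trial in range(200):
    stimulus = int(rng.integers(1, 3))           # true stimulus: 1 or 2
    target = 1 if stimulus == 1 else -1
    votes = np.where(rng.random(n_neurons) < reliability, target, -target)
    correct += decoder.predict(votes) == stimulus
    decoder.update(votes, stimulus)

print(f"simulated ensemble accuracy: {correct / 200:.0%}")
```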
Figure 4: Brainet for discrete classification.
A) Architecture of a Brainet for stimulus classification. Two different patterns of ICMS were simultaneously delivered to each rat in the Brainet. Neural signals from each individual neuron were analyzed separately and used to determine an overall classification vote for the Brainet. B) Example of a session where a total of 62 neurons were recorded from four different animals. Deep blue indicates poor encoding, while dark red indicates good encoding. Although Rat 3 presented the best encoding neurons, all rats contributed to the network’s final classification. C) Performance of Brainet during sessions was significantly higher when compared to the No-ICMS sessions. Additionally, because the neural activity is redundant across multiple brains, the overall performance of the Brainet was also higher than in individual brains. *** indicates P < 0.0001. D) Neuron dropping curve of Brainet for discrete classification. The effect of redundancy in encoding can be observed in the Brainet as the best encoding cells from each session are removed. E) The panels depict the dynamics of the stimulus presented (X axis: 1 or 2) and the Brainet classifications (Y axis: 1 to 2) during sessions and No-ICMS sessions. During regular sessions, the Brainet classifications mostly matched the stimulus presented (lower left and upper right quadrants). Meanwhile, during No ICMS sessions the Brainet classifications were evenly distributed across all four quadrants. The percentages indicate the fraction of trials in each quadrant (Stimulus 1, vote 1 not shown). F) Example of an image processed by the Brainet for discrete classification. An original image was pixilated and each blue or white pixel was delivered as a different ICMS pattern to the Brainet during a series of trials (Stimulus 1 – white; Stimulus 2 – blue). The left panel shows the original input image and the right panel shows the output of the Brainet.
When rats were anesthetized (2 sessions in five rats) or trial duration was reduced to 10 s (i.e. almost only comprising the ICMS and the test period – 2 sessions in four rats), the Brainet’s performance dropped sharply (anesthetized: 60.61 ± 2.8% correct; short time trials: 62.57 ± 3.14%). Once again, this control experiment indicated that the Brainet operation was not solely dependent on an automatic response to the delivery of an ICMS.
Next, we investigated the dependence of the Brainet’s performance on the number of S1 neurons recorded simultaneously. Figure 4D depicts a neuron dropping curve illustrating this effect. According to this analysis, Brainets formed by larger cortical neuronal ensembles performed better than those containing just a few neurons [9].
The difference between the Brainet classification of the two stimuli during regular sessions and during those in which no-ICMS was delivered is shown in Fig. 4E . During the regular sessions stimulus classification remained mostly in the quadrants corresponding to the stimuli delivered (lower left and upper right quadrants), while during the No-ICMS sessions the 4-rat Brainet trial classification was evenly distributed across all quadrants.
As different rats were introduced to the Brainet, we also compared how neuronal ensemble encoding in each animal changed during initial and late sessions (the first three versus the remaining days). Overall, there was a significant increase in ICMS encoding (initial: 59.67 ± 1.4%, late: 65.08 ± 1.2%, Mann-Whitney U = 281.0, P = 0.0344) and, to a smaller extent, in the correlation coefficients between neural activity of the different animals (initial: 0.1831 ± 0.007, late: 0.2028 ± 0.005, Mann-Whitney U = 275.0, P = 0.0153) suggesting that improvements in Brainet performances were accompanied by cortical plasticity in the S1 of each animal.
To demonstrate a potential application for this stimulus discrimination task, we tested whether our Brainet could read out a pixilated image (N = 4 rats in n = 4 sessions) using the same principles demonstrated in the previous two experiments. Blue and white pixels were converted into binary codes (white – Stimulus 1 or blue – Stimulus 2) and then delivered to the Brainet over a series of trials. The right panel of Fig. 4F shows that a 4-rat Brainet was able to capture the original image with good accuracy (overall 87% correct trials) across a period of four sessions.
Brainet for storage and retrieval of tactile memories
To test whether a 3-rat Brainet could store and retrieve a tactile memory, we sent an ICMS stimulus to the S1 of one rat and then successively transferred the information decoded from that rat’s brain to other animals, via a BtBI, over a block of four trials. To retrieve the tactile memory, the information traveling across different rat brains was delivered, at the end of the chain, back to the S1 cortex of the first rat for decoding (Fig. 5A). Opaque panels were placed between the animals, and cortical neural activity was analyzed for each rat separately. The architecture of inputs and outputs of the 3-rat Brainet is shown in Fig. 5A, starting from the bottom shelf and progressing to the top one. The experiment started by delivering one of two different ICMS stimuli to the S1 of the input rat (from now on referred to as Rat 1) during the first trial (Trial 1). Neuronal ensemble activity sampled from Rat 1 was then used to decode the identity of the stimulus (either Stimulus 1 or 2). Once the stimulus identity was determined, a new trial started and a BtBI was employed to deliver a corresponding ICMS pattern to Rat 2, defining Trial 2 of the task. In this arrangement, the BtB link between Rat 1 and Rat 2 served to store the pattern (Pattern Storage I). Next, neuronal ensemble activity was recorded from the S1 of Rat 2. In the third trial, it was Rat 3’s turn to receive the tactile message (Pattern Storage II) decoded from the neural ensemble activity of Rat 2, via an ICMS-mediated BtB link. During the fourth and final trial, Rat 1 received the message decoded from the neural activity of Rat 3.
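A schematic sketch of why this chain is demanding: the memory survives only if every decode-and-redeliver hop succeeds, so under an independence assumption the overall success rate is roughly the per-hop accuracy raised to the number of hops. The 80% per-hop accuracy below is an assumed, illustrative figure, not a value from the paper.

```python
# A schematic sketch: the memory survives only if every hop in the chain succeeds.
import random

def hop_decodes_correctly(accuracy=0.8):
    """Hypothetical stand-in for one rat correctly decoding the delivered ICMS pattern."""
    return random.random() < accuracy

def memory_block(hops=4, accuracy=0.8):
    """A four-trial block succeeds only if every hop in the chain is decoded correctly."""
    return all(hop_decodes_correctly(accuracy) for _ in range(hops))

random.seed(0)
blocks = 10_000
recovered = sum(memory_block() for _ in range(blocks))
print(f"tactile memory recovered intact in {100 * recovered / blocks:.1f}% of simulated blocks")
# With four independent hops at 80% each, about 0.8**4, i.e. ~41%, of blocks survive;
# guessing at every step would give 0.5**4 = 6.25%, the chance level quoted above.
```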
Figure 5: A Brainet for storage and retrieval of tactile memories.
A) Tactile memories encoded as two different ICMS stimuli were stored in the Brainet by keeping information flowing between different nodes (i.e. rats). Tactile information sent to the first rat in Trial 1 (‘Stimulus Decoding’), was successively decoded and transferred between Rats 2 and 3, and again transferred to Rat 1, across a period of four trials (memory trace in red). The use of the brain-to-brain interface between the nodes of the network allowed accurate transfer of information. B) The overall performance of the Brainet was significantly better than the performance in the No-ICMS sessions and better than individual rats performing 4 consecutive correct trials. In this panel, * indicates P < 0.05 and *** indicates P < 0.001. C) Neuron dropping curve of Brainet for storage and retrieval of memories. D) Example of session with multiple memories (each column) processed in blocks of four trials (each row). Information flows from the bottom (Stimulus delivered) towards the top (Trials 1–4). Blue and red indicate Stimulus 1 or 2 respectively. Correct tactile memory traces are columns which have a full sequence of trials with the same color (see blocks: 3, 5, 7 and 9). In this panel, * indicates an incorrect trial.
Using this Brainet architecture, the memory of a tactile stimulus could only be recovered if the individual BtB communication links worked correctly in all four consecutive trials. The chance level for this operation was 6.25%. Under these conditions, this Brainet was able to retrieve a total of 35.37 ± 2.2% (9 sessions in 9 rats) of the tactile stimuli presented to it (Kruskal-Wallis statistic = 14.89; P = 0.0006, Fig. 5B), contrasting with 7.91 ± 6.5% in No-ICMS sessions (n = 5 sessions; Dunn’s test: P < 0.001). For comparison purposes, individual rats performed the same four-trial task correctly in only 15.63 ± 2.1% of the trials. This outcome was significantly lower than that of a 3-rat Brainet (Dunn’s test: P < 0.001). As in the previous experiments, larger neuronal ensembles yielded better encoding (Fig. 5C).
As an additional control, rats that were not processing memory related information in a specific trial (e.g. Rats 2 and 3 during the Stimulus Decoding Stage in Rat 1) received Stimulus 1 or Stimulus 2, randomly chosen. Thus, in every single trial all rats received some form of ICMS, but only the information gathered from a specific rat was used for the overall tactile trace.
The colored matrix in Fig. 5D illustrates a session in which a tactile trace developed along the 3-rat Brainet. A successful example of information transfer and recovery is shown in the third block of trials (blue column on the left). The figure shows that the original stimulus (Stimulus 1 – bottom blue square) was delivered to the S1 of Rat 1 in the first trial. This stimulus was successfully decoded from Rat 1’s neural activity, as shown by the presence of the blue square immediately above it (Trial 1 – Stimulus Decoding). In Trial 2 (Pattern Storage I), Stimulus 2 was delivered, via ICMS to the S1 of Rat 2, and again successfully decoded (as shown by the blue square in the center). Then, in Trial 3 (Pattern Storage II), the ICMS pattern delivered to Rat 3 corresponded to Stimulus 1, and the decoding of S1 neural activity obtained from this animal still corresponded to Stimulus 1, as shown by the blue square. Lastly, in Trial 4 (Stimulus Recovery), Rat 1 received an ICMS pattern corresponding to Stimulus 1 and its S1 neural activity still encoded Stimulus 1 (blue square). Thus, in this specific block of trials, the original tactile stimulus was fully recovered since all rats were able to accurately encode and decode the ICMS pattern received. Similarly, columns 5, 7, and 9 also show blocks of trials where the original tactile stimulus (in these cases Stimulus 2, red square) was accurately encoded and decoded by the Brainet. Alternatively, columns with an asterisk on top (e.g. 1 and 8) indicate incorrect blocks of trials. In these incorrect blocks, the stimulus delivered was not accurately encoded in the brain of at least one rat belonging to the Brainet (e.g. rat 3 in block 1).
Brainet for sequential and parallel processing
Lastly, we combined all the processing abilities demonstrated in the previous experiments (discrete tactile stimulus classification, the BtB interface, and tactile memory storage) to investigate whether Brainets could use sequential and parallel processing to perform a tactile discrimination task (N = 5 rats in N = 10 sessions). For this we used blocks of two trials in which tactile stimuli were processed according to Boolean logic 10 ( Fig. 6A–B ). In each trial there was a binary decision (i.e. two options, encoded as Stimulus 1 or 2). In the first trial, two different tactile inputs were independently sent to two dyads of rats (Dyad 1: Rat 1-Rat 2; Dyad 2: Rat 3-Rat 4; bottom of Fig. 6A ). In the next trial, the tactile stimuli decoded from the two dyads were combined and transmitted, as a new tactile input, to the 4-rat Brainet. Upon receiving this new stimulus, the Brainet was in charge of encoding a final solution (i.e. identifying Stimulus 3 or 4, see Supplementary Figure 2 ).
Figure 6: A Brainet for parallel and sequential processing.
A) Architecture of a network for parallel and sequential processing. Information flows from the bottom to the top during the course of two trials. In the first trial (odd trial, parallel processing), Dyad 1 (Rat 1-Rat 2) received one of two ICMS patterns, and Dyad 2 (Rat 3-Rat 4) independently received one of two ICMS patterns. During Trial 2 (even trial, sequential processing), the whole Brainet again received one of two ICMS patterns. However, the pattern delivered in the even trial depended on the results of the first trial and was calculated according to the colored matrix presented. As depicted by the different encasing of the matrix (blue or red), if both dyads encoded the same stimulus in the odd trial (Stimulus 1-Stimulus 1 or Stimulus 2-Stimulus 2), then the stimulus delivered in the even trial corresponded to Stimulus 3. Otherwise, if each dyad encoded a different stimulus in the odd trial (Stimulus 1-Stimulus 2 or Stimulus 2-Stimulus 1), then the stimulus delivered in the even trial was Stimulus 4. Each correct block of information required three accurate estimates of the stimulus delivered (i.e. encoding by both dyads in the odd trial, as well as by the whole Brainet in the even trial). B) Example of a session with sequential and parallel processing. The bottom and center panels show the dyads processing the stimuli during the odd trials (parallel processing), while the top panel shows the performance of the whole Brainet during the even trials. In this panel, * indicates an incorrect classification. C) The performance of the Brainet was significantly better than during the No-ICMS sessions and above the performance of individual rats performing blocks of 3 correct trials. In this panel, * indicates P < 0.05.
As shown at the bottom of Fig. 6A , odd trials were used for parallel processing: each of the two rat dyads independently received ICMS patterns while its neural activity was analyzed and the original tactile stimulus decoded (i.e. Stimulus 1 or 2). Then, during even trials ( Fig. 6A , top), ICMS was used to encode a second layer of patterns, defined as Stimulus 3 and Stimulus 4. Note that ICMS Stimuli 3 and 4 were physically identical to Stimuli 1 and 2, respectively; however, because the stimuli delivered in the even trials were contingent on the results of the odd trials, we employed a different nomenclature to identify them. The decision tree (i.e. truth table) used to calculate the stimuli for the even trials is shown in the colored matrix at the center of Fig. 6A . The matrix shows that, if both dyads encoded the same tactile stimulus in the odd trial (Stimulus 1-Stimulus 1, or Stimulus 2-Stimulus 2; combinations with blue encasing), the ICMS delivered to the entire Brainet in the even trial corresponded to Stimulus 3. Otherwise, if the tactile stimuli decoded from the two rat dyads in the odd trial were different (Stimulus 1-Stimulus 2, or Stimulus 2-Stimulus 1; combinations with red encasing), the ICMS delivered to the entire Brainet in the even trial corresponded to Stimulus 4. As such, the ICMS pattern delivered in even trials was the same for the whole Brainet (i.e. all four rats).
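Since the contingency is just an equality test on the two one-bit dyad outputs (an XOR, in Boolean terms), it can be captured in a few lines. The sketch below is a hypothetical illustration in Python, not the authors' code; it assumes the dyad decodings are available as the integers 1 or 2.

```python
# Hypothetical sketch of the Fig. 6A truth table (not the authors' code):
# identical dyad decodings map to Stimulus 3, different decodings to Stimulus 4.

def even_trial_stimulus(dyad1_decoded: int, dyad2_decoded: int) -> int:
    """dyad*_decoded: 1 or 2, as decoded from each dyad in the odd trial."""
    return 3 if dyad1_decoded == dyad2_decoded else 4

# Examples: (1, 1) -> 3; (1, 2) -> 4, mirroring the blue/red encasings in Fig. 6A.
assert even_trial_stimulus(1, 1) == 3
assert even_trial_stimulus(1, 2) == 4
```

In Boolean terms the even-trial stimulus is simply the XOR of the two one-bit dyad outputs, which is why each block carries more than two bits of input while still producing a single binary answer.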
At the end of each even trial, the stimulus decoded from the combined neuronal activity of the four-brain ensemble (top of Fig. 6A ) defined the final output of the Brainet. Chance level was set at 12.5% (0.5^3, i.e. three correct binary estimates per block). Overall, the Brainet performance (45.22 ± 3.4%, n = 10 sessions) was well above chance level and significantly higher than in No-ICMS sessions (22.79 ± 5.4%, n = 5 sessions; Kruskal-Wallis statistic = 7.565, P = 0.0228; Dunn’s test: P < 0.05; Fig. 6C ). Additionally, the Brainet also outperformed individual rats (groups of three consecutive trials: 30.25 ± 3.0%; Dunn’s test: P < 0.05).
As our last experiment, we tested whether a 3-rat Brainet could be used to classify meteorological data (see Methods for details). Again, the decision tree included two independent variables in the odd trials and a dependent variable in the even trials (see Supplementary Figure 3 ). Figure 7A illustrates how Boolean logic was applied to convert data from an original weather forecast model. In the bottom panel, the yellow line depicts continuous changes in temperature over a period of 10 hours. Periods in which the temperature increased were transferred to the Brainet as Stimulus 1 (see arrows between 0 and 4 hours), whereas periods in which the temperature decreased were transferred as Stimulus 2 (see arrows between 6 and 10 hours). The middle panel of Fig. 7A illustrates changes in barometric pressure (green line). Again, periods in which the barometric pressure increased were translated as Stimulus 1 (e.g. between 1–2 hours), while periods in which it decreased were translated as Stimulus 2 (e.g. 3–5 hours).
Figure 7: Parallel and sequential processing for weather forecast
A) Each panel shows examples of the original data, reflecting changes in temperature (lower panel), barometric pressure (center panel), and probability of precipitation (upper panel). The arrows represent general changes in each variable, indicating an increase or a decrease. The ICMS pattern resulting from each arrow is shown at the top of each panel. B) The lower and center panels show trials where different rats of the Brainet (Rat 1, lower panel; Rats 2-3, center panel) processed the original data in parallel. Specifically, Rat 1 processed temperature changes and Rats 2-3 processed barometric pressure changes. The upper panel shows the Brainet processing changes in the probability of precipitation (Rats 1–3) during the even trials. * indicates trials where processing was incorrect.
Both Stimulus 1 and 2 were delivered to the Brainet during odd trials: changes in temperature were delivered to Rat 1 alone, while changes in barometric pressure were delivered to Rats 2 and 3. As in the previous experiment, Stimuli 3 and 4 were physically identical to Stimuli 1 and 2. In even trials, increases and decreases in the probability of precipitation (top panel of Fig. 7A ) were calculated as follows: an increase in temperature (Stimulus 1; Rat 1) combined with a decrease in barometric pressure (Stimulus 2; Rats 2 and 3) was transferred to the even trial as an increase in the probability of precipitation (i.e. Stimulus 4), whereas any other combination was transferred as Stimulus 3 and associated with a decrease in precipitation probability. This specific combination of inputs was used because it reflects a common set of conditions associated with early evening spring thunderstorms in North Carolina.
Overall, our 3-rat Brainet predicted changes in the probability of precipitation with 41.02 ± 5.1% accuracy, significantly higher than in No-ICMS control sessions (16.67 ± 8.82%; n = 3 sessions; t = 2.388, df = 4; P = 0.0377; see also Fig. 7B ).
Discussion
In this study we described different Brainet architectures capable of extracting information from multiple (3–4) rat brains. Our Brainets employed ICMS-based BtBIs combined with neuronal ensemble recordings to simultaneously deliver information to, and retrieve it from, multiple brains, and several of our Brainet designs were built from multiple such interfaces. Our experiments demonstrated that several Brainet architectures can be employed to solve basic computational problems. Moreover, in all cases analyzed the Brainet performance was equal or superior to that of an individual brain. These results provide a proof of concept for the possibility of creating computational engines composed of multiple interconnected animal brains.
Previously, Brainets have incorporated only up to two subjects exchanging motor or sensory information 2 , or up to three monkeys that collectively controlled the 3D movements of a virtual arm 8 . These studies provided two major building blocks for Brainet design: (1) information transfer between individual brains, and (2) collaborative performance among multiple animal brains. Here, we took advantage of these building blocks to demonstrate more advanced Brainet processing by solving multiple computational problems, which included discrete classification, image processing, storage and retrieval of memories, and a simplified form of weather forecasting 1 , 2 , 8 . All these computations were dependent on the collective work of cortical neuronal ensembles recorded simultaneously from multiple animal brains working towards a common goal.
One could argue that the Brainet operations demonstrated here could result from local responses of S1 neurons to ICMS. Several lines of evidence suggest that this was not the case. First, we have demonstrated that animals needed several sessions of training before they learned to synchronize their S1 activity with other rats. Second, the decoding for individual neurons in untrained rats was close to chance levels. Third, attempts to make the Brainet work in anesthetized animals resulted in poor performance. Fourth, network synchronization and individual neuron decoding failed when animals did not attend to the task requirements and engaged in grooming instead. Fifth, removing the reward contingency drastically reduced the Brainet performance. Sixth, after we reduced trial duration, the decoding from individual neurons dropped to levels close to chance.
Altogether, these findings indicate that optimal Brainet processing was only attainable in fully awake, actively engaged animals that expected to be rewarded for correct performance. These features are of utmost importance since they allowed Brainets to retain the computational aptitudes of the awake brain 11 and, in addition, to benefit from emergent properties resulting from the interactions between multiple individuals 2 . It is also worth noting that the Brainets implemented here allowed only partial social interaction between subjects (through the Plexiglas panels). As such, it is not clear from the current study to what extent social interactions played a role in Brainet performance. It will therefore be interesting to repeat and expand these experiments while allowing full social contact between multiple animals engaged in a Brainet operation. In this context, Brainets may become a very useful tool to investigate the neurophysiological basis of animal social interactions and group behavior.
We have previously proposed that the accuracy of the BtBI could be improved by increasing the number of nodes in the network and the size of the neuronal ensembles used to process and transfer information 2 . The novel Brainet architectures tested in the present study support these suggestions, as we have demonstrated an overall improvement in BtBI performance compared to our previous study (a maximum of 72% correct previously versus 87% correct here) 2 . Since the neuron dropping curves did not reach a plateau, it is likely that the performance of our Brainet architectures can be significantly improved by using larger cortical neuronal samples. In addition, switching between sequential and parallel processing modes, as was done in the last experiment, allowed the same Brainet to process more than two bits of information. It is important to emphasize, however, that the computational tasks examined in this study were implemented through Boolean logic 10 , 12 . In future studies we propose to address a new range of computational problems by using simultaneous analog and digital processing. By doing so, we intend to identify the computational problems best suited to Brainets. Our hypothesis is that, rather than the typical problems addressed by digital machines, Brainets will be much more amenable to solving the kinds of problems faced by animals in their natural environments.
The present study has also shown that the use of multiple interconnected brains improved Brainet performance by introducing redundancy in the overall processing of the inputs and allowing groups of animals to share the attentional load during the task, as previously reported for monkey Brainets 8 . Therefore, our findings extended the concept of BtBIs by showing that these interfaces can allow networks of brains to alternate between sequential and parallel processing 13 and to store information.
In conclusion, we propose that animal Brainets have significant potential both as a new experimental tool for investigating the systems-level neurophysiology of social interactions and group behavior, and as a test bed for building organic computing devices that take advantage of a hybrid digital-analog architecture.
Methods
All animal procedures were performed in accordance with the National Research Council’s Guide for the Care and Use of Laboratory Animals and were approved by the Duke University Institutional Animal Care and Use Committee. Long Evans rats weighing between 250–350 g were used in all experiments.
Tasks of synchronization and desynchronization
Groups of four rats, divided in two pairs (dyads), were placed in two behavioral chambers (one dyad in each chamber). Rats belonging to the same dyad (i.e. inside the same chamber) could see each other through a Plexiglas panel, but not the animals in the other dyad. Each trial in a session consisted of four different periods: baseline (0–9 seconds), ICMS (9–11 seconds), test (11–12 seconds), and reward (13–25 seconds). During the baseline period no action was required from the rats. During the ICMS period a pattern of ICMS (20 pulses at 22–26 Hz, 10–100 μA) was delivered to all rats simultaneously. During the test period, the neural activity of all neurons recorded in each rat was analyzed and compared to the neural activity of all other animals as a population. Spikes from individual channels were summed to generate a population vector representing the overall activity, which generally constitutes a good indicator of whisking and/or licking activity 14 . The population vectors for each of the four rats were then normalized. Lastly, we calculated the Pearson correlation between the normalized population vector of each rat and the general population of rats (the average of the neural population vectors from the three remaining rats). During Pre-Sessions neural activity was analyzed in each trial, but no ICMS or water reward was delivered. During Sessions, neural activity was analyzed after the delivery of an ICMS stimulus and, if the threshold for a correct trial was reached (at least three rats with R ≥ 0.2), a water reward was delivered. During Post-Sessions, neural activity was recorded and a water reward was delivered if animals reached the threshold for a correct trial; however, no ICMS stimuli were delivered.
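To make the correctness criterion concrete, here is a minimal Python sketch (the original analysis used custom Matlab code); the array shapes, the z-score normalization step, and the function name are assumptions, while the Pearson correlation of each rat against the average of the remaining rats and the "at least three rats with R ≥ 0.2" rule follow the description above.

```python
# Minimal sketch (not the original Matlab code) of the synchronization test:
# each rat's summed population vector is normalized and correlated with the
# average population vector of the remaining rats.
import numpy as np

def trial_is_correct(pop_vectors: np.ndarray, r_threshold: float = 0.2,
                     min_rats: int = 3) -> bool:
    """pop_vectors: (n_rats, n_bins) summed spike counts per time bin for the
    test period, one row per rat. Returns True if at least `min_rats` rats have
    Pearson r >= r_threshold against the mean vector of the remaining rats."""
    z = (pop_vectors - pop_vectors.mean(axis=1, keepdims=True)) / \
        (pop_vectors.std(axis=1, keepdims=True) + 1e-12)   # normalize each rat
    n_synced = 0
    for i in range(z.shape[0]):
        others = np.delete(z, i, axis=0).mean(axis=0)       # average of the other rats
        r = np.corrcoef(z[i], others)[0, 1]                 # Pearson correlation
        if r >= r_threshold:
            n_synced += 1
    return n_synced >= min_rats
```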
We also tested the effect of ICMS alone and in anesthetized animals (ketamine/xylazine, 100 mg/kg). During the synchronization/desynchronization task two different ICMS patterns were delivered: Stimulus 1 consisted of the same pattern used for the synchronization task, and the threshold for a correct trial remained the same. Stimulus 2 consisted of two short bursts of ICMS (2 × 4 pulses at 22–26 Hz, separated by a 250-ms interval), and the threshold for a correct response was fewer than three rats reaching an R value of 0.2 during the test period.
Adaptive decoding algorithm
During the experiments where the adaptive decoding algorithm was used (discrete classification, tactile memory storage, sequential and parallel processing), the ICMS patterns remained as described above. Neural activity was analyzed separately for each neuron in each rat: firing-rate distributions were built in 25-ms bins and filtered with a 250-ms moving average. Each session began with an initial period of 16–30 trials in which Stimuli 1 and 2 were delivered to the rats in order to build the distributions for each stimulus. The overall firing rate for each bin in the test period was then analyzed and, according to the probability distributions, a vote for Stimulus 1 or Stimulus 2 was calculated. Bins with similar spike distributions for both stimuli were not analyzed. A final vote for each cell was then calculated, using the votes from all the bins that presented differences in firing rate between the two stimuli. Lastly, the final votes for each cell in the population were filtered with a sigmoid curve. This filtering allowed the best encoding cells in the ensembles to contribute significantly more than other cells to the overall decision made by the Brainet in each trial. Additionally, the weights of the cell population could be automatically adjusted at different intervals (e.g. every 10 or 15 trials).
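The sketch below illustrates the flavor of this adaptive decoder under stated assumptions; it is not the authors' Matlab implementation. Mean-rate templates stand in for the calibration distributions, a separation threshold stands in for the exclusion of bins with similar spike distributions, and tanh stands in for the sigmoid weighting; all parameter values are placeholders.

```python
# Illustrative sketch of the adaptive decoder described above (assumptions noted).
import numpy as np

def decode_trial(rates_s1, rates_s2, trial_rates, min_sep=0.5, k=4.0):
    """rates_s1, rates_s2: (n_cells, n_bins) mean binned rates per stimulus from
    the calibration trials. trial_rates: (n_cells, n_bins) rates in the current
    trial. Returns 1 or 2, the decoded stimulus for the whole ensemble."""
    cell_votes = []
    for c in range(trial_rates.shape[0]):
        sep = np.abs(rates_s1[c] - rates_s2[c])
        informative = sep > min_sep                 # skip bins with similar distributions
        if not informative.any():
            continue
        d1 = np.abs(trial_rates[c, informative] - rates_s1[c, informative])
        d2 = np.abs(trial_rates[c, informative] - rates_s2[c, informative])
        bin_votes = np.where(d1 < d2, 1.0, -1.0)    # +1 -> Stimulus 1, -1 -> Stimulus 2
        cell_votes.append(bin_votes.mean())
    # sigmoid-like weighting: cells with consistent bin votes dominate the ensemble vote
    weighted = np.tanh(k * np.array(cell_votes))
    return 1 if weighted.sum() >= 0 else 2
```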
For the image processing experiment, groups of four rats were tested. An original image was pixelated and converted into multiple trials, each corresponding to a white (Stimulus 1) or blue (Stimulus 2) pixel in the original image. In each trial one of the two ICMS stimuli was delivered to the Brainet. After the neural activity from the Brainet was decoded, a new image corresponding to the overall processing by the Brainet was reconstructed.
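A minimal sketch of that pixel-by-pixel pipeline, with a hypothetical `brainet_decode` callable standing in for one ICMS-delivery-plus-decoding trial:

```python
# Sketch of the image-processing protocol described above (not the authors' code):
# a pixelated binary image is unrolled into trials (white -> Stimulus 1, blue ->
# Stimulus 2), each pixel is decoded by the Brainet, and the output is reassembled.
import numpy as np

def process_image(binary_image: np.ndarray, brainet_decode) -> np.ndarray:
    """binary_image: 2D array of 0 (white) / 1 (blue). Returns the reconstructed image."""
    out = np.zeros_like(binary_image)
    for idx, pixel in enumerate(binary_image.ravel()):
        stimulus = 1 if pixel == 0 else 2          # one trial per pixel
        decoded = brainet_decode(stimulus)          # hypothetical: returns 1 or 2
        out.flat[idx] = 0 if decoded == 1 else 1
    return out
```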
Memory storage experiment
For this specific experiment only three rats were used in each session and ICMS frequency patterns varied between 20–100 Hz. The number of pulses remained the same as in the previous experiments. Each memory was processed across a period of four trials, which represented four different stages of a memory being processed: Stimulus Delivery (Trial 1), Pattern Storage I (Trial 2), Pattern Storage II (Trial 3), and lastly, Stimulus Recovery (Trial 4). Information was initially delivered to the S1 cortex of the first rat (Rat 1) in the first trial (Stimulus Delivery). In Trial 2, information decoded from the cortex of Rat 1 was delivered as an ICMS pattern to the second rat (Rat 2) (Pattern Storage I). In Trial 3, information decoded from the S1 of Rat 2 was delivered to Rat 3 (Pattern Storage II). In Trial 4, neural activity from the cortex of Rat 3 was decoded and the result delivered to the cortex of Rat 1 as a pattern of ICMS (Stimulus Recovery). Lastly, if the stimulus encoding and decoding were correct across all four trials (chance level of 6.25%), a memory was considered to be recovered. The overall number of memories decoded, the percentage of stimuli decoded, and the accuracy of the brain-to-brain information transfer were measured. As a control measure, the Plexiglas panels separating the dyads were made opaque for this experiment. Additionally, while the tactile pattern was delivered to the rat involved in the current memory stage (delivery, storage or recovery), a randomly chosen Stimulus 1 or 2 was delivered to the remaining rats. This random stimulation ensured that, in each trial, rats could not identify whether or not they were participating in the tactile trace.
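The relay reduces to a short loop, sketched below with a hypothetical `decode_for_rat` standing in for one ICMS-delivery-plus-decoding step; it also makes explicit why the chance level is 6.25%: all four hops must decode correctly (0.5^4).

```python
# Hypothetical sketch of the four-trial memory relay (not the authors' code).
# `decode_for_rat(rat, stimulus)` stands in for delivering the ICMS pattern to
# that rat and decoding its S1 activity; it returns the decoded stimulus (1 or 2).

def relay_memory(stimulus: int, decode_for_rat) -> bool:
    """Relay order: Rat 1 -> Rat 2 -> Rat 3 -> Rat 1 (Delivery, Storage I,
    Storage II, Recovery). Returns True only if every hop decodes correctly."""
    current = stimulus
    for rat in (1, 2, 3, 1):
        decoded = decode_for_rat(rat, current)
        if decoded != current:
            return False          # one wrong hop breaks the memory trace
        current = decoded
    return True

# With binary stimuli, four independent chance-level hops give 0.5 ** 4 = 6.25%,
# the chance level quoted in the Results.
```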
Sequential and parallel processing experiment
Each block of information processing consisted of two trials: the first trial corresponded to parallel processing and the second to sequential processing. Two dyads of rats were formed: Dyad 1 (Rat 1-Rat 2) and Dyad 2 (Rat 3-Rat 4). During the first trial each dyad processed one of two ICMS stimuli independently of the other dyad. After the delivery of the ICMS stimuli to each dyad, neural activity was decoded and the stimulus for Trial 2 was computed from the results. If both dyads encoded the same stimulus (Stimulus 1 – Stimulus 1, or Stimulus 2 – Stimulus 2), then the ICMS stimulus in Trial 2 was Stimulus 3. Otherwise, if the dyads encoded different ICMS stimuli (Stimulus 1 – Stimulus 2, or Stimulus 2 – Stimulus 1), then the ICMS stimulus in Trial 2 was Stimulus 4. Stimuli 1 and 3, and Stimuli 2 and 4, had the exact same physical characteristics (number of pulses). During the second trial the same stimulus was delivered simultaneously to all four rats, and the Brainet encoded an overall response. A block of information was considered correct only if the stimuli in both Trials 1 and 2 were accurately encoded, by both dyads and by the whole Brainet.
For the weather forecasting experiment groups of three animals were tested. Sessions were run as described above for sequential and parallel processing, except that Trial 1 (parallel processing) was handled by one rat (temperature) and one dyad of rats (barometric pressure), while Trial 2 (sequential processing: probability of precipitation) was handled by the whole Brainet (three rats).
To establish a simple weather forecast model we used original data from Raleigh/Durham Airport (KRDU), at WWW.Wunderground.com. Estimates were collected on August 2, 2014. We used periods characterized by increases and decreases in temperature and barometric pressure as independent variables, and increases in the probability of precipitation as the dependent variable. A total of 13 periods were collected. These included 26 independent inputs for the odd trials (13 variations in temperature and 13 variations in barometric pressure), as well as 13 changes in the probability of precipitation, to be compared with the Brainet outputs (i.e. the actual forecast). Specifically, an increase in temperature (Stimulus 1 for the first rat) combined with a decrease in barometric pressure (Stimulus 2 for Rats 2-3) in the odd trial was computed as an increase in the probability of precipitation (Stimulus 4 to the Brainet in the even trial). Any other combination of temperature and barometric pressure changes (e.g. an increase in barometric pressure) was computed as a decrease in the probability of precipitation (Stimulus 3 for the Brainet) in the even trial. Stimuli 1 and 3, and Stimuli 2 and 4, had the exact same physical characteristics (number of pulses).
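As a minimal sketch of this mapping (not the authors' code), with the simplifying assumption that any positive change counts as an increase and with hypothetical function names:

```python
# Hypothetical sketch of the weather-to-stimulus mapping described above.

def odd_trial_stimuli(temp_change: float, pressure_change: float):
    temp_stim = 1 if temp_change > 0 else 2          # temperature: rise -> Stimulus 1 (Rat 1)
    pressure_stim = 1 if pressure_change > 0 else 2  # pressure: rise -> Stimulus 1 (Rats 2-3)
    return temp_stim, pressure_stim

def precipitation_stimulus(temp_stim: int, pressure_stim: int) -> int:
    """Rising temperature with falling pressure -> Stimulus 4 (rain more likely);
    any other combination -> Stimulus 3 (rain less likely)."""
    return 4 if (temp_stim == 1 and pressure_stim == 2) else 3

# Example: temperature up, pressure down -> Stimulus 4 delivered in the even trial
assert precipitation_stimulus(*odd_trial_stimuli(+1.5, -2.0)) == 4
```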
Surgery for microelectrode array implantation
Fixed or movable microelectrode bundles or arrays were implanted bilaterally in the S1 of rats. Craniotomies were made and the arrays lowered at the following stereotaxic coordinates: AP −3.5 mm, ML ±5.5 mm, DV −1.5 mm.
Electrophysiological recordings
A Multineuronal Acquisition Processor (64 channels; Plexon Inc., Dallas, TX) was used to record neuronal spikes, as previously described 15 . Briefly, differentiated neural signals were amplified (20,000–32,000×) and digitized at 40 kHz. Up to four single neurons per recording channel were sorted online (Sort Client 2002, Plexon Inc., Dallas, TX).
Intracortical electrical microstimulation
Intracortical electrical microstimulation cues were generated by an electrical microstimulator (Master-8, AMPI, Jerusalem, Israel) controlled by custom Matlab scripts (MathWorks, Natick, USA) receiving information from the Plexon system over the internet. Patterns of 8–20 pulses (bipolar, biphasic, charge-balanced; 200 μs per phase) at 20–120 Hz were delivered to S1. Current intensity varied from 10–100 μA.
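For illustration only, the pulse-train timing implied by these parameters can be sketched as follows; this is not the Master-8/Plexon control code, and both function names are hypothetical.

```python
# Illustrative sketch of an ICMS pulse-train parameterization as described above
# (8-20 biphasic, charge-balanced 200-us pulses at 20-120 Hz, 10-100 uA).
import numpy as np

def icms_pulse_times(n_pulses: int = 20, freq_hz: float = 25.0) -> np.ndarray:
    """Onset times (in seconds) of each pulse in the train."""
    return np.arange(n_pulses) / freq_hz

def biphasic_pulse(amplitude_ua: float = 50.0, phase_us: float = 200.0):
    """Charge-balanced pulse: cathodic phase followed by an equal anodic phase."""
    return [(-amplitude_ua, phase_us), (+amplitude_ua, phase_us)]  # (uA, duration)

# Example: a 20-pulse train at 25 Hz spans ~0.76 s from first to last pulse onset
train = icms_pulse_times(20, 25.0)
assert abs(train[-1] - 19 / 25.0) < 1e-9
```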
Additional Information
How to cite this article: Pais-Vieira, M. et al. Building an organic computing device with multiple interconnected brains. Sci. Rep. 5, 11869; doi: 10.1038/srep11869 (2015).
References
1. Nicolelis, M. Beyond boundaries: the new neuroscience of connecting brains with machines–and how it will change our lives. 1st edn (Times Books/Henry Holt and Co., 2011).
2. Pais-Vieira, M., Lebedev, M., Kunicki, C., Wang, J. & Nicolelis, M. A. A brain-to-brain interface for real-time sharing of sensorimotor information. Sci Rep 3, 1319; doi: 10.1038/srep01319 (2013).
3. West, B. J., Turalska, M. & Grigolini, P. Networks of Echoes: Imitation, Innovation and Invisible Leaders (Springer, 2014).
4. Deadwyler, S. A. et al. Donor/recipient enhancement of memory in rat hippocampus. Front Syst Neurosci 7, 120; doi: 10.3389/fnsys.2013.00120 (2013).
5. Yoo, S. S., Kim, H., Filandrianos, E., Taghados, S. J. & Park, S. Non-invasive brain-to-brain interface (BBI): establishing functional links between two brains. PLoS One 8, e60410; doi: 10.1371/journal.pone.0060410 (2013).
6. Rao, R. P. et al. A direct brain-to-brain interface in humans. PLoS One 9, e111332; doi: 10.1371/journal.pone.0111332 (2014).
7. Grau, C. et al. Conscious brain-to-brain communication in humans using non-invasive technologies. PLoS One 9, e105225; doi: 10.1371/journal.pone.0105225 (2014).
8. Ramakrishnan, A. et al. Computing arm movements with a monkey brainet. Sci Rep, in press (2015).
9. Carmena, J. M. et al. Learning to control a brain-machine interface for reaching and grasping by primates. PLoS Biol 1, E42; doi: 10.1371/journal.pbio.0000042 (2003).
10. Boole, G. The Mathematical Analysis of Logic, Being an Essay Towards a Calculus of Deductive Reasoning (MacMillan, Barclay & MacMillan, 1847).
11. Krupa, D. J., Wiest, M. C., Shuler, M. G., Laubach, M. & Nicolelis, M. A. Layer-specific somatosensory cortical activation during active tactile discrimination. Science 304, 1989–1992; doi: 10.1126/science.1093318 (2004).
12. Harris, J. M., Hirst, J. L. & Mossinghoff, M. J. Combinatorics and Graph Theory. 2nd edn (Springer, 2008).
13. Grama, A. Introduction to Parallel Computing. 2nd edn (Addison-Wesley, 2003).
14. Pais-Vieira, M., Lebedev, M. A., Wiest, M. C. & Nicolelis, M. A. Simultaneous top-down modulation of the primary somatosensory cortex and thalamic nuclei during active tactile discrimination. J Neurosci 33, 4076–4093; doi: 10.1523/JNEUROSCI.1659-12.2013 (2013).
15. Nicolelis, M. A. L. Methods for Neural Ensemble Recordings. 2nd edn (CRC Press, 2008).
Acknowledgements
The authors would like to thank James Meloy for microelectrode array manufacturing and setup development, Po-He Tseng and Eric Thomson for comments on the manuscript, Laura Oliveira, Susan Halkiotis, and Terry Jones for miscellaneous assistance. This work was supported by NIH R01DE011451, R01NS073125, RC1HD063390, National Institute of Mental Health award DP1MH099903, and by Fundacao BIAL 199/12 to MALN. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Author information
Affiliations
Department of Neurobiology, Duke University, Durham, North Carolina 27710
Miguel Pais-Vieira, 
Gabriela Chiuffa, 
Mikhail Lebedev & 
Miguel A. L. Nicolelis
Department of Biomedical Engineering, Duke University, Durham, North Carolina 27710
Amol Yadav & 
Miguel A. L. Nicolelis
Department of Psychology and Neuroscience, Duke University, Durham, North Carolina 27710
Miguel A. L. Nicolelis
Duke Center for Neuroengineering, Duke University, Durham, North Carolina 27710
Mikhail Lebedev & 
Miguel A. L. Nicolelis
Edmond and Lily Safra International Institute for Neuroscience of Natal, Natal, Brazil
Miguel A. L. Nicolelis
Contributions
M.P.V. and G.S. performed the experiments; M.P.V. and M.A.N. conceptualized the experiments; M.P.V., A.Y., M.L. and M.A.N. analyzed the data. M.P.V., M.L. and M.A.N. wrote the manuscript. M.P.V. prepared Figures 1–7 and SF1–3. G.S. also prepared Figure 4 . All authors reviewed the manuscript.
Competing financial interests
The authors declare no competing financial interests.
This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/
ORIGINAL: Nature
Scientific Reports 5, Article number: 11869; doi: 10.1038/srep11869. Received 03 March 2015; Accepted 09 June 2015; Published 09 July 2015.


IBM Announces Computer Chips More Powerful Than Any in Existence

By admin,

A wafer made up of seven-nanometer chips. IBM said it made the advance by using silicon-germanium instead of pure silicon. Credit: Darryl Bautista/IBM
IBM said on Thursday that it had made working versions of ultradense computer chips, with roughly four times the capacity of today’s most powerful chips.
The announcement, made on behalf of an international consortium led by IBM, the giant computer company, is part of an effort to manufacture the most advanced computer chips in New York’s Hudson Valley, where IBM is investing $3 billion in a private-public partnership with New York State, GlobalFoundries, Samsung and equipment vendors.
The development lifts a bit of the cloud that has fallen over the semiconductor industry, which has struggled to maintain its legendary pace of doubling transistor density every two years.
Intel, which for decades has been the industry leader, has faced technical challenges in recent years. Moreover, technologists have begun to question whether the longstanding pace of chip improvement, known as Moore’s Law, would continue past the current 14-nanometer generation of chips.
Each generation of chip technology is defined by the minimum size of fundamental components that switch current at nanosecond intervals. Today the industry is making the commercial transition from what the industry generally describes as 14-nanometer manufacturing to 10-nanometer manufacturing.
Michael Liehr of the SUNY College of Nanoscale Science and Engineering, left, and Bala Haranand of IBM examine a wafer composed of the new chips. They are not yet ready for commercial manufacturing. Credit: Darryl Bautista/IBM
Each generation brings roughly a 50 percent reduction in the area required by a given amount of circuitry. IBM’s new chips, though still in a research phase, suggest that semiconductor technology will continue to shrink at least through 2018.
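As a rough back-of-the-envelope check (this arithmetic is ours, not the article's): circuit area scales roughly with the square of the linear feature size, so the jump from 14 nanometers to 7 nanometers implies about a fourfold density gain, consistent with the "roughly four times the capacity" figure above, while a single generation step lands near the 50 percent area reduction described here.

```python
# Rough scaling arithmetic (not from the article): relative transistor density
# if a given circuit shrinks linearly from old_nm to new_nm feature sizes.
def density_gain(old_nm: float, new_nm: float) -> float:
    return (old_nm / new_nm) ** 2

assert density_gain(14, 7) == 4.0                  # ~4x capacity, as quoted for the new chips
assert abs(density_gain(14, 10) - 1.96) < 1e-9     # one generation: ~2x density (~50% area cut)
```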
The company said on Thursday that it had working samples of chips with seven-nanometer transistors. It made the research advance by using silicon-germanium instead of pure silicon in key regions of the molecular-size switches.
The new material makes possible faster transistor switching and lower power requirements. The tiny size of these transistors suggests that further advances will require new materials and new manufacturing techniques.
As points of comparison to the size of the seven-nanometer transistors, a strand of DNA is about 2.5 nanometers in diameter and a red blood cell is roughly 7,500 nanometers in diameter. IBM said that would make it possible to build microprocessors with more than 20 billion transistors.
“I’m not surprised, because this is exactly what the road map predicted, but this is fantastic,” said Subhashish Mitra, director of the Robust Systems Group in the Electrical Engineering Department at Stanford University.
Even though IBM has shed much of its computer and semiconductor manufacturing capacity, the announcement indicates that the company remains interested in supporting the nation’s high technology manufacturing base.
“This puts IBM in the position of being a gentleman gambler as opposed to being a horse owner,” said Richard Doherty, president of Envisioneering, a Seaford, N.Y., consulting firm, referring to the fact that IBM’s chip manufacturing facility was acquired by GlobalFoundries effective last week.
IBM’s seven-nanometer node transistors. A strand of DNA is about 2.5 nanometers in diameter and a red blood cell is roughly 7,500 nanometers in diameter. Credit: IBM Research
“They still want to be in the race,” he added.
IBM now licenses the technology it is developing to a number of manufacturers; GlobalFoundries, which is owned by the Emirate of Abu Dhabi, will use it to make chips for companies including Broadcom, Qualcomm and Advanced Micro Devices.
The semiconductor industry must now decide if IBM’s bet on silicon-germanium is the best way forward.
It must also grapple with the shift to using extreme ultraviolet, or EUV, light to etch patterns on chips at a resolution that approaches the diameter of individual atoms. In the past, Intel said it could see its way toward seven-nanometer manufacturing. But it has not said when that generation of chip making might arrive.
IBM also declined to speculate on when it might begin commercial manufacturing of this technology generation. This year, Taiwan Semiconductor Manufacturing Company said that it planned to begin pilot production of seven-nanometer chips in 2017. Unlike IBM, however, it has not demonstrated working chips to meet that goal.
It is uncertain whether the longer exposure times required by the new generation of EUV photolithographic stepper machines would make high-speed manufacturing operations impossible. Even the slightest vibration can undermine the precision of the optics necessary to etch lines of molecular thicknesses, and the semiconductor industry has been forced to build specialized stabilized buildings to try to isolate equipment from vibration.
An IBM official said that the consortium now sees a way to use EUV light in commercial manufacturing operations.
“EUV is another game changer,” said Mukesh Khare, vice president for semiconductor research at IBM. To date, he noted, the demonstration has taken place in a research lab, not in a manufacturing plant. Ultimately the goal is to create circuits that have been reduced in area by another 50 percent over the industry’s 10-nanometer technology generation scheduled to be introduced next year.
ORIGINAL: NYTimes
JULY 9, 2015

Silicon Valley Then and Now: To Invent the Future, You Must Understand the Past

By admin,

William Shockley’s employees toast him for his Nobel Prize, 1956. Photo courtesy Computer History Museum.
“You can’t really understand what is going on now without understanding what came before.”
Steve Jobs is explaining why, as a young man, he spent so much time with the Silicon Valley entrepreneurs a generation older, men like Robert Noyce, Andy Grove, and Regis McKenna.
It’s a beautiful Saturday morning in May, 2003, and I’m sitting next to Jobs on his living room sofa, interviewing him for a book I’m writing. I ask him to tell me more about why he wanted, as he put it, “to smell that second wonderful era of the valley, the semiconductor companies leading into the computer.” Why, I want to know, is it not enough to stand on the shoulders of giants? Why does he want to pick their brains?
“It’s like that Schopenhauer quote about the conjurer,” he says. When I look blank, he tells me to wait and then dashes upstairs. He comes down a minute later holding a book and reading aloud:
Steve Jobs and Robert Noyce.
Courtesy Leslie Berlin.
He who lives to see two or three generations is like a man who sits some time in the conjurer’s booth at a fair, and witnesses the performance twice or thrice in succession. The tricks were meant to be seen only once, and when they are no longer a novelty and cease to deceive, their effect is gone.
History, Jobs understood, gave him a chance to see — and see through — the conjurer’s tricks before they happened to him, so he would know how to handle them.
Flash forward eleven years. It’s 2014, and I am going to see Robert W. Taylor. In 1966, Taylor convinced the Department of Defense to build the ARPANET that eventually formed the core of the Internet. He went on to run the famous Xerox PARC Computer Science Lab that developed the first modern personal computer. For a finishing touch, he led one of the teams at DEC behind the world’s first blazingly fast search engine — three years before Google was founded.
Visiting Taylor is like driving into a Silicon Valley time machine. You zip past the venture capital firms on Sand Hill Road, over the 280 freeway, and down a twisty two-lane street that is nearly impassable on weekends, thanks to the packs of lycra-clad cyclists on multi-thousand-dollar bikes raising their cardio thresholds along the steep climbs. A sharp turn and you enter what seems to be another world, wooded and cool, the coastal redwoods dense along the hills. Cell phone signals fade in and out in this part of Woodside, far above Buck’s Restaurant where power deals are negotiated over early-morning cups of coffee. GPS tries valiantly to ascertain a location — and then gives up.
When I get to Taylor’s home on a hill overlooking the Valley, he tells me about another visitor who recently took that drive, apparently driven by the same curiosity that Steve Jobs had: Mark Zuckerberg, along with some colleagues at the company he founded, Facebook.
“Zuckerberg must have heard about me in some historical sense,” Taylor recalls in his Texas drawl. “He wanted to see what I was all about, I guess.”
 
To invent the future, you must understand the past.

I am a historian, and my subject matter is Silicon Valley. So I’m not surprised that Jobs and Zuckerberg both understood that the Valley’s past matters today and that the lessons of history can take innovation further. When I talk to other founders and participants in the area, they also want to hear what happened before. Their questions usually boil down to two:

  1. Why did Silicon Valley happen in the first place, and 
  2. why has it remained at the epicenter of the global tech economy for so long?
I think I can answer those questions.

First, a definition of terms. When I use the term “Silicon Valley,” I am referring quite specifically to the narrow stretch of the San Francisco Peninsula that is sandwiched between the bay to the east and the Coastal Range to the west. (Yes, Silicon Valley is a physical valley — there are hills on the far side of the bay.) Silicon Valley has traditionally comprised Santa Clara County and the southern tip of San Mateo County. In the past few years, parts of Alameda County and the city of San Francisco can also legitimately be considered satellites of Silicon Valley, or perhaps part of “Greater Silicon Valley.”

The name “Silicon Valley,” incidentally, was popularized in 1971 by a hard-drinking, story-chasing, gossip-mongering journalist named Don Hoefler, who wrote for a trade rag called Electronic News. Before that, the region was known as the “Valley of the Heart’s Delight,” renowned for its apricot, plum, cherry and almond orchards.
“This was down-home farming, three generations of tranquility, beauty, health, and productivity based on family farms of small acreage but bountiful production,” reminisced Wallace Stegner, the famed Western writer. To see what the Valley looked like then, watch the first few minutes of this wonderful 1948 promotional video for the “Valley of the Heart’s Delight.”

Three historical forces — technical, cultural, and financial — created Silicon Valley.
 
Technology
On the technical side, in some sense the Valley got lucky. In 1955, one of the inventors of the transistor, William Shockley, moved back to Palo Alto, where he had spent some of his childhood. Shockley was also a brilliant physicist — he would share the Nobel Prize in 1956 — an outstanding teacher, and a terrible entrepreneur and boss. Because he was a brilliant scientist and inventor, Shockley was able to recruit some of the brightest young researchers in the country — Shockley called them “hot minds” — to come work for him 3,000 miles from the research-intensive businesses and laboratories that lined the Eastern Seaboard from Boston to Bell Labs in New Jersey. Because Shockley was an outstanding teacher, he got these young scientists, all but one of whom had never built transistors, to the point that they not only understood the tiny devices but began innovating in the field of semiconductor electronics on their own.
And because Shockley was a terrible boss — the sort of boss who posted salaries and subjected his employees to lie-detector tests — many who came to work for him could not wait to get away and work for someone else. That someone else, it turned out, would be themselves. The move by eight of Shockley’s employees to launch their own semiconductor operation called Fairchild Semiconductor in 1957 marked the first significant modern startup company in Silicon Valley. After Fairchild Semiconductor blew apart in the late-1960s, employees launched dozens of new companies (including Intel, National and AMD) that are collectively called the Fairchildren.
The Fairchild 8: Gordon Moore, Sheldon Roberts, Eugene Kleiner, Robert Noyce, Victor Grinich, Julius Blank, Jean Hoerni, and Jay Last. Photo courtesy Wayne Miller/Magnum Photos.
Equally important for the Valley’s future was the technology that Shockley taught his employees to build: the transistor. Nearly everything that we associate with the modern technology revolution and Silicon Valley can be traced back to the tiny, tiny transistor.
 
Think of the transistor as the grain of sand at the core of the Silicon Valley pearl. The next layer of the pearl appeared when people strung together transistors, along with other discrete electronic components like resistors and capacitors, to make an entire electronic circuit on a single slice of silicon. This new device was called a microchip. Then someone came up with a specialized microchip that could be programmed: the microprocessor. The first pocket calculators were built around these microprocessors. Then someone figured out that it was possible to combine a microprocessor with other components and a screen — that was a computer. People wrote code for those computers to serve as operating systems and software on top of those systems. At some point people began connecting these computers to each other: networking. Then people realized it should be possible to “virtualize” these computers and store their contents off-site in a “cloud,” and it was also possible to search across the information stored in multiple computers. Then the networked computer was shrunk — keeping the key components of screen, keyboard, and pointing device (today a finger) — to build tablets and palm-sized machines called smart phones. Then people began writing apps for those mobile devices … .
You get the picture. These changes all kept pace to the metronomic tick-tock of Moore’s Law.
The skills learned through building and commercializing one layer of the pearl underpinned and supported the development of the next layer or developments in related industries. Apple, for instance, is a company that people often speak of as sui generis, but Apple Computer’s early key employees had worked at Intel, Atari, or Hewlett-Packard. Apple’s venture capital backers had either backed Fairchild or Intel or worked there. The famous Macintosh, with its user-friendly aspect, graphical user interface, overlapping windows, and mouse, was inspired by a 1979 visit Steve Jobs and a group of engineers paid to XEROX PARC, located in the Stanford Research Park. In other words, Apple was the product of its Silicon Valley environment and technological roots.
Culture
This brings us to the second force behind the birth of Silicon Valley: culture. When Shockley, his transistor and his recruits arrived in 1955, the valley was still largely agricultural, and the small local industry had a distinctly high-tech (or as they would have said then, “space age”) focus. The largest employer was defense contractor Lockheed. IBM was about to open a small research facility. Hewlett-Packard, one of the few homegrown tech companies in Silicon Valley before the 1950s, was more than a decade old.
Stanford, meanwhile, was actively trying to build up its physics and engineering departments. Professor (and Provost from 1955 to 1965) Frederick Terman worried about a “brain drain” of Stanford graduates to the East Coast, where jobs were plentiful. So he worked with President J.E. Wallace Sterling to create what Terman called “a community of technical scholars” in which the links between industry and academia were fluid. This meant that as the new transistor-cum-microchip companies began to grow, technically knowledgeable engineers were already there.
Woz and Jobs.
Photo courtesy Computer History Museum.
These trends only accelerated as the population exploded. Between 1950 and 1970, the population of Santa Clara County tripled, from roughly 300,000 residents to more than 1 million. It was as if a new person moved into Santa Clara County every 15 minutes for 20 years. The newcomers were, overall, younger and better educated than the people already in the area. The Valley changed from a community of aging farmers with high school diplomas to one filled with 20-something PhDs.
All these new people pouring into what had been an agricultural region meant that it was possible to create a business environment around the needs of new companies coming up, rather than adapting an existing business culture to accommodate the new industries. In what would become a self-perpetuating cycle, everything from specialized law firms, recruiting operations and prototyping facilities; to liberal stock option plans; to zoning laws; to community college course offerings developed to support a tech-based business infrastructure.
Historian Richard White says that the modern American West was “born modern” because the population followed, rather than preceded, connections to national and international markets. Silicon Valley was born post-modern, with those connections not only in place but so taken for granted that people were comfortable experimenting with new types of business structures and approaches strikingly different from the traditional East Coast business practices with roots nearly two centuries old.
From the beginning, Silicon Valley entrepreneurs saw themselves in direct opposition to their East Coast counterparts. The westerners saw themselves as cowboys and pioneers, working on a “new frontier” where people dared greatly and failure was not shameful but just the quickest way to learn a hard lesson. In the 1970s, with the influence of the counterculture’s epicenter at the corner of Haight and Ashbury, only an easy drive up the freeway, Silicon Valley companies also became famous for their laid-back, dressed-down culture, and for their products, such as video games and personal computers, that brought advanced technology to “the rest of us.”
 
Money

The third key component driving the birth of Silicon Valley, along with the right technology seed falling into a particularly rich and receptive cultural soil, was money. Again, timing was crucial. Silicon Valley was kick-started by federal dollars. Whether it was

  • the Department of Defense buying 100% of the earliest microchips, 
  • Hewlett-Packard and Lockheed selling products to military customers, or 
  • federal research money pouring into Stanford, 

Silicon Valley was the beneficiary of Cold War fears that made the Department of Defense willing to spend almost anything on advanced electronics and electronic systems. The government, in effect, served as the Valley’s first venture capitalist.

The first significant wave of venture capital firms hit Silicon Valley in the 1970s. Both Sequoia Capital and Kleiner Perkins Caufield and Byers were founded by Fairchild alumni in 1972. Between them, these venture firms would go on to fund Amazon, Apple, Cisco, Dropbox, Electronic Arts, Facebook, Genentech, Google, Instagram, Intuit, and LinkedIn — and that is just the first half of the alphabet.
This model of one generation succeeding and then turning around to offer the next generation of entrepreneurs financial support and managerial expertise is one of the most important and under-recognized secrets to Silicon Valley’s ongoing success. Robert Noyce called it “re-stocking the stream I fished from.” Steve Jobs, in his remarkable 2005 commencement address at Stanford, used the analogy of a baton being passed from one runner to another in an ongoing relay across time.
 
So that’s how Silicon Valley emerged. Why has it endured?

After all, if modern Silicon Valley was born in the 1950s, the region is now in its seventh decade. For roughly two-thirds of that time, Valley watchers have predicted its imminent demise, usually with an allusion to Detroit.

  • the oil shocks and energy crises of the 1970s that were going to shut down the fabs (the specialized factories that build microchips), 
  • the Japanese competition of the 1980s, 
  • the bursting of the dot-com bubble, 
  • the rise of formidable tech regions in other parts of the world, and 
  • the Internet and mobile technologies that make it possible to work from anywhere: 

all have been heard as Silicon Valley’s death knell.

The Valley of Heart’s Delight, pre-technology. OSU Special Collections.
The Valley economy is notorious for its cyclicity, but it has indeed endured. Here we are in 2015, a year in which more patents, more IPOs, and a larger share of venture capital and angel investments have come from the Valley than ever before. As a recent report from Joint Venture Silicon Valley (***) put it, “We’ve extended a four-year streak of job growth, we are among the highest income regions in the country, and we have the biggest share of the nation’s high-growth, high-wage sectors.” Would-be entrepreneurs continue to move to the Valley from all over the world. Even companies that are not started in Silicon Valley move there (witness Facebook).
Why? What is behind Silicon Valley’s staying power? The answer is that many of the factors that launched Silicon Valley in the 1950s continue to underpin its strength today even as the Valley economy has proven quite adaptable.
Technology
The Valley still glides in the long wake of the transistor, both in terms of technology and in terms of the infrastructure to support companies that rely on semiconductor technology. Remember the pearl. At the same time, when new industries not related directly to semiconductors have sprung up in the Valley — industries like biotechnology — they have taken advantage of the infrastructure and support structure already in place.
Money
Venture capital has remained the dominant source of funding for young companies in Silicon Valley. In 2014, some $14.5 billion in venture capital was invested in the Valley, accounting for 43 percent of all venture capital investments in the country. More than half of Silicon Valley venture capital went to software investments, and the rise of software, too, helps to explain the recent migration of many tech companies to San Francisco. (San Francisco, it should be noted, accounted for nearly half of the $14.5 billion figure.) Building microchips or computers or specialized production equipment — things that used to happen in Silicon Valley — requires many people, huge fabrication operations and access to specialized chemicals and treatment facilities, often on large swaths of land. Building software requires none of these things; in fact, software engineers need little more than a computer and some server space in the cloud to do their jobs. It is thus easy for software companies to locate in cities like San Francisco, where many young techies want to live.
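To put those figures in rough perspective (a back-of-the-envelope calculation from the numbers above, not a figure from any report): if the Valley’s $14.5 billion was 43 percent of national venture investment, the implied U.S. total was on the order of $34 billion, with San Francisco alone accounting for roughly $7 billion.

$14.5B ÷ 0.43 ≈ $33.7B nationally; $14.5B × ~0.5 ≈ $7B for San Francisco.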
Culture
The Valley continues to be a magnet for young, educated people. The flood of intranational immigrants to Silicon Valley from other parts of the country in the second half of the twentieth century has become, in the twenty-first century, a flood of international immigrants from all over the world. It is impossible to overstate the importance of immigrants to the region and to the modern tech industry. Nearly 37 percent of the people in Silicon Valley today were born outside of the United States — of these, more than 60 percent were born in Asia and 20 percent in Mexico. Half of Silicon Valley households speak a language other than English in the home. Sixty-five percent of the people with bachelor’s degrees working in science and engineering in the Valley were born in another country. Let me say that again: nearly two-thirds of the college-educated people working in the Valley’s science and engineering industries are foreign-born. (Nearly half the college graduates working in all industries in the Valley are foreign-born.)
Here’s another way to look at it: From 1995 to 2005, more than half of all Silicon Valley startups had at least one founder who was born outside the United States.[13] Their businesses — companies like Google and eBay — have created American jobs and billions of dollars in American market capitalization.
Silicon Valley, now, as in the past, is built and sustained by immigrants.
Gordon Moore and Robert Noyce at Intel in 1970. Photo courtesy Intel.
Stanford also remains at the center of the action. By one estimate, from 2012, companies formed by Stanford entrepreneurs generate world revenues of $2.7 trillion annually and have created 5.4 million jobs since the 1930s. This figure includes companies whose primary business is not tech: companies like Nike, Gap, and Trader Joe’s. But even if you just look at Silicon Valley companies that came out of Stanford, the list is impressive, including Cisco, Google, HP, IDEO, Instagram, MIPS, Netscape, NVIDIA, Silicon Graphics, Snapchat, Sun, Varian, VMware, and Yahoo. Indeed, some critics have complained that Stanford has become overly focused on student entrepreneurship in recent years — an allegation that I disagree with but that is neatly encapsulated in a 2012 New Yorker article that called the university “Get Rich U.”
 
Change
The above represent important continuities, but change has also been vital to the region’s longevity. Silicon Valley has been re-inventing itself for decades, a trend that is evident with a quick look at the emerging or leading technologies in the area:
• 1940s: instrumentation
• 1950s/60s: microchips
• 1970s: biotech, consumer electronics using chips (PC, video game, etc)
• 1980s: software, networking
• 1990s: web, search
• 2000s: cloud, mobile, social networking
The overriding sense of what it means to be in Silicon Valley — the lionization of risk-taking, the David-versus-Goliath stories, the persistent belief that failure teaches important business lessons even when the data show otherwise — has not changed, but over the past few years, a new trope has appeared alongside the Western metaphors of Gold Rushes and Wild Wests: Disruption.
“Disruption” is the notion, roughly based on ideas first proposed by Joseph Schumpeter in 1942, that a little company can come in and — usually with technology — completely remake an industry that seemed established and largely impervious to change. So: Uber is disrupting the taxi industry. Airbnb is disrupting the hotel industry. The disruption story is, in its essentials, the same as the Western tale: a new approach comes out of nowhere to change the establishment world for the better. You can hear the same themes of adventure, anti-establishment thinking, opportunity and risk-taking. It’s the same song, with different lyrics.
The shift to the new language may reflect the key role that immigrants play in today’s Silicon Valley. Many educated, working adults in the region arrived with no cultural background that promoted cowboys or pioneers. These immigrants did not even travel west to get to Silicon Valley. They came east, or north. It will be interesting to see how long the Western metaphor survives this cultural shift. I’m betting that it’s on its way out.
Something else new has been happening in Silicon Valley culture in the past decade. The anti-establishment little guys have become the establishment big guys. Apple settled an anti-trust case. You are hearing about Silicon Valley companies like Facebook or Google collecting massive amounts of data on American citizens, some of which has ended up in the hands of the NSA. What happens when Silicon Valley companies start looking like the Big Brother from the famous 1984 Apple Macintosh commercial?
A Brief Feint at the Future
I opened these musings by defining Silicon Valley as a physical location. I’m often asked how or whether place will continue to matter in the age of mobile technologies, the Internet and connections that will only get faster. In other words, is region an outdated concept?
I believe that physical location will continue to be relevant when it comes to technological innovation. Proximity matters. Creativity cannot be scheduled for the particular half-hour block of time that everyone has free to teleconference. Important work can be done remotely, but the kinds of conversations that lead to real breakthroughs often happen serendipitously. People run into each other down the hall, or in a coffee shop, or at a religious service, or at the gym, or on the sidelines of a kid’s soccer game.
It is precisely because place will continue to matter that the biggest threats to Silicon Valley’s future have local and national parameters. Silicon Valley’s innovation economy depends on its being able to attract the brightest minds in the world; they act as a constant innovation “refresh” button. If Silicon Valley loses its allure for those people —

  • if the quality of public schools declines so that their children cannot receive good educations, 
  • if housing prices remain so astronomical that fewer than half of first-time buyers can afford the median-priced home, or 
  • if immigration policy makes it difficult for high-skilled immigrants who want to stay here to do so — 

the Valley’s status, and that of the United States economy, will be threatened. Also worrisome: ever-expanding gaps between the highest and lowest earners in Silicon Valley; stagnant wages for low- and middle-skilled workers; and the persistent reality that as a group, men in Silicon Valley earn more than women at the same level of educational attainment. Moreover, today in Silicon Valley, the lowest-earning racial/ethnic group earns 70 percent less than the highest earning group, according to the Joint Venture report. The stark reality, with apologies to George Orwell, is that even in the Valley’s vaunted egalitarian culture, some people are more equal than others.

Another threat is the continuing decline in federal support for basic research. Venture capital is important for developing products into companies, but the federal government still funds the great majority of basic research in this country. Silicon Valley is highly dependent on that basic research — “No Basic Research, No iPhone” is my favorite title from a recently released report on research and development in the United States. Today, the US occupies tenth place among OECD nations in overall R&D investment, measured as a percentage of GDP — somewhere between 2.5 and 3 percent. This represents a 13 percent drop from where we were ten years ago (again as a percentage of GDP). China is projected to outspend the United States in R&D within the next ten years, both in absolute terms and as a share of its economy.
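Note that the 13 percent figure is a relative decline in the GDP share, not a fall of 13 percentage points. A rough reconstruction from the numbers above (my arithmetic, not the report’s): if R&D spending is about 2.7 percent of GDP today, then a decade ago it was

2.7% ÷ (1 − 0.13) ≈ 3.1% of GDP.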
People around the world have tried to reproduce Silicon Valley. No one has succeeded.
And no one will succeed because no place else — including Silicon Valley itself in its 2015 incarnation — could ever reproduce the unique concoction of academic research, technology, countercultural ideals and a California-specific type of Gold Rush reputation that attracts people with a high tolerance for risk and very little to lose. Partially through the passage of time, partially through deliberate effort by some entrepreneurs who tried to “give back” and others who tried to make a buck, this culture has become self-perpetuating.
The drive to build another Silicon Valley may be doomed to fail, but that is not necessarily bad news for regional planners elsewhere. The high-tech economy is not a zero-sum game. The twenty-first century global technology economy is large and complex enough for multiple regions to thrive for decades to come — including Silicon Valley, if the threats it faces are taken seriously.

‘Highly creative’ professionals won’t lose their jobs to robots, study finds

By admin,

ORIGINAL: Fortune
APRIL 22, 2015
A University of Oxford study finds that there are some things that a robot won’t be able to do. Unfortunately, these gigs don’t pay all that well.
Many people are in “robot overlord denial,” according to a recent online poll run by jobs board Monster.com. They think computers could not replace them at work. Sadly, most are probably wrong.
University of Oxford researchers Carl Benedikt Frey and Michael Osborne estimated in 2013 that 47% of total U.S. jobs could be automated by 2033. The combination of robotics, automation, artificial intelligence, and machine learning is so powerful that some white collar workers are already being replaced — and we’re talking journalists, lawyers, doctors, and financial analysts, not the person who used to file all the incoming faxes.
But there’s hope, at least for some. According to an advance copy of a new report that U.K. non-profit Nesta sent to Fortune, 21% of US employment requires people to be “highly creative.” Of them, 86% (18% of the total workforce) are at low or no risk from automation. In the U.K., 87% of those in creative fields are similarly at low or no risk.
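As a quick check on how those percentages fit together (a simple consistency calculation, assuming the 18% figure is derived directly from the other two):

0.86 × 0.21 ≈ 0.18, i.e., about 18% of the total U.S. workforce.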
“Artists, musicians, computer programmers, architects, advertising specialists … there’s a very wide range of creative occupations,” report co-author Hasan Bakhshi, director of creative economy at Nesta, told Fortune. Some other types would be financial managers, judges, management consultants, and IT managers. “Those jobs have a very high degree of resistance to automation.”
The study is based on the work of Frey and Osborne, who are also co-authors of this new report. The three researchers fed 120 job descriptions from the US Department of Labor into a computer and analyzed them to see which were most likely to require extensive creativity, or the use of imagination or ideas to make something new.
Creativity is one of the three classic bottlenecks to automating work, according to Bakhshi. “Tasks which involve a high degree of human manipulation and human perception — subtle tasks — other things being equal will be more difficult to automate,” he said. For instance, although goods can be manufactured in a robotic factory, real craft work still “requires the human touch.”
So will jobs that need social intelligence, such as those of your therapist or life insurance agent.
Of course, the degree of creativity matters. Financial journalists who rewrite financial statements are already beginning to be supplanted by software. The more repetitive and dependent on data the work is, the more easily a human can be pushed aside.
In addition, just because certain types of creative occupations can’t easily be replaced doesn’t mean that their industries won’t see disruption. Packing and shipping crafts can be automated, as can some aspects of the film industry other than directing, acting, and design. “These industries are going to be disrupted and are vulnerable,” Bakhshi said.
Also, not all of these jobs will necessarily provide a financial windfall. The study found an “inverse U-shape” relationship between the probability of an occupation being highly creative and the average income it might deliver. Musicians, actors, dancers, and artists might make relatively little, while people in technical, financial, and legal creative occupations can do quite well. So keeping that creative job may not be much of a financial blessing in many cases.
Are you in a “creative” role that will be safe from automation? You can find out what these Oxford researchers think by taking their online quiz.

A Brain-Computer Interface That Lasts for Weeks

By admin,

Photo: John Rogers/University of Illinois
Brain signals can be read using soft, flexible, wearable electrodes that stick onto and near the ear like a temporary tattoo and can stay on for more than two weeks even during highly demanding activities such as exercise and swimming, researchers say.
The invention could be used for a persistent brain-computer interface (BCI) to help people operate prosthetics, computers, and other machines using only their minds, scientists add.
For more than 80 years, scientists have analyzed human brain activity non-invasively by recording electroencephalograms (EEGs). Conventionally, this involves electrodes stuck onto the head with conductive gel. The electrodes typically cannot stay mounted to the skin for more than a few days, which limits widespread use of EEGs for applications such as BCIs.
Now materials scientist John Rogers at the University of Illinois at Urbana-Champaign and his colleagues have developed a wearable device that can help record EEGs uninterrupted for more than 14 days. Moreover, their invention survived showering, bathing, and sleeping, and it did so without irritating the skin. The two weeks might be “a rough upper limit, defined by the timescale for natural exfoliation of skin cells,” Rogers says.
The device consists of a soft, foldable collection of gold electrodes only 300 nanometers thick and 30 micrometers wide mounted on a soft plastic film. This assemblage stays stuck to the body using electric forces known as van der Waals interactions—the same forces that help geckos cling to walls.
The electrodes are flexible enough to mold onto the ear and the mastoid process behind the ear. The researchers mounted the device onto three volunteers using tweezers. Spray-on bandage was applied once or twice a day to help the electrodes survive normal daily activities.
The electrodes on the mastoid process recorded brain activity while those on the ear were used as a ground wire. The electrodes were connected to a stretchable wire that could plug into monitoring devices. “Most of the experiments used devices mounted on just one side, but dual sides is certainly possible,” Rogers says.
The device helped record brain signals well enough for the volunteers to operate a text-speller by thought, albeit at a slow rate of 2.3 to 2.5 letters per minute.
According to Rogers, this research: 
…could enable a persistent BCI that one could imagine might help disabled people, for whom mind control is an attractive option for operating prosthetics… It could also be useful for monitoring cognitive states—for instance, 

  • to see if people are paying attention while they’re driving a truck, 
  • flying an airplane, or 
  • operating complex machinery. 

It could also help monitor patterns of sleep to better understand sleep disorders such as sleep apnea, or for monitoring brain function during learning.

The scientists hope to improve the speed at which people can use this device to communicate mentally, which could expand its use into commercial wearable electronics. They also plan to explore devices that can operate wirelessly, Rogers says. The researchers detailed their findings online March 16 in the journal Proceedings of the National Academy of Sciences.
ORIGINAL: IEEE Spectrum
By Charles Q. Choi
16 Mar 2015 

A Bendable Implant Taps the Nervous System without Damaging It

By admin,

Swiss researchers allow rats to walk again with a rubbery electronic implant.

Why It Matters

Neuroscientists need new materials to restore movement to paralyzed people.

An implant made of silicone and gold wires is as stretchy as human tissue.


Medicine these days entertains all kinds of ambitious plans for reading off brain signals to control wheelchairs, or using electronics to bypass spinal injuries.
But most of these ideas for implants that can interface with the nervous system run up against a basic materials problem: wires are stiff and bodies are soft.

That motivated some researchers at the École Polytechnique Fédérale, in Lausanne, Switzerland, to design a soft, flexible electronic implant, which they say has the same ability to bend and stretch as dura mater, the membrane that surrounds the brain and spinal cord.

The scientists, including Gregoire Courtine, have previously shown that implants can allow rats with spinal injuries to walk again. They did this by sending patterns of electrical shocks to the spinal cord via electrodes placed inside the spine (see “Paralyzed Rats Take 1,000 Steps, Orchestrated by Computer”). But the rigid wires ended up damaging the animals’ nervous systems.

So Courtine joined electrical engineer Stéphanie Lacour (see “Innovators Under 35, 2006: Stéphanie Lacour”) to come up with a new implant they call “e-dura.” It’s made from 

  • soft silicone, 
  • stretchy gold wires, and 
  • rubbery electrodes flecked with platinum, 
  • as well as a microchannel through which the researchers were able to pump drugs.

The work builds on ongoing advances in flexible electronics. Other scientists have built patches that match the properties of the skin and include circuits, sensors, or even radios (see “Stick-On Electronic Tattoos”).

What’s new is how stretchable electronics are merging with a widening effort to invent new ways to send and receive signals from nerves (see “Neuroscience’s New Toolbox”). “People are pushing the limits because everyone wants to precisely interact with the brain and nervous system,” says Polina Anikeeva, a materials scientist at MIT who develops ultrathin fiber-optic threads as a different way of interfacing with neural tissue.

The reason metal or plastic electrodes eventually stop working, or cause harm, is that they compress and damage the surrounding tissue. A stiff implant, even if it’s very thin, will still not stretch as the spinal cord does. “It slides against the tissue and causes a lot of inflammation,” says Lacour. “When you bend over to tie your shoelaces, the spinal cord stretches by several percent.”

The implant mimics a property of human tissue called viscoelasticity—somewhere between rubber and a very thick fluid. Pinch the skin on your hand with force and it will deform, but then flow back into place.

Using the flexible implant, the Swiss scientists reported today in the journal Science that they could overcome spinal injury in rats by wrapping it around the spinal cord and sending electrical signals to make the rodent’s hind legs move. They also pumped in chemicals to enhance the process. After two months, they saw few signs of tissue damage compared to conventional electrodes, which ended up causing an immune reaction and impairing the animal’s ability to move.

The ultimate aim of this kind of research is an implant that could restore a paralyzed person’s ability to walk. Lacour says that is still far off, but believes it will probably involve soft electronics. “If you want a therapy for patients, you want to ensure it can last in the body,” she says. “If we can match the properties of the neural tissue we should have a better interface.”

ORIGINAL: Tech Review
By Antonio Regalado 

January 8, 2015

Joi Ito: Want to innovate? Become a “now-ist”

By admin,

“Remember before the internet?” asks Joi Ito. “Remember when people used to try to predict the future?” In this engaging talk, the head of the MIT Media Lab skips the future predictions and instead shares a new approach to creating in the moment: building quickly and improving constantly, without waiting for permission or for proof that you have the right idea. This kind of bottom-up innovation is seen in the most fascinating, futuristic projects emerging today, and it starts, he says, with being open and alert to what’s going on around you right now.
Don’t be a futurist, he suggests: be a now-ist.

Preparing Your Students for the Challenges of Tomorrow

By admin,

ORIGINAL: Edutopia
August 20, 2014

Right now, you have students. Eventually, those students will become the citizens of the 21st century — employers, employees, professionals, educators, and caretakers of our planet. Beyond mastery of standards, what can you do to help prepare them? What can you promote to be sure they are equipped with the skill sets they will need to take on challenges and opportunities that we can’t yet even imagine?

Following are six tips to guide you in preparing your students for what they’re likely to face in the years and decades to come.

1. Teach Collaboration as a Value and Skill Set
Students of today need new skills for the coming century that will make them ready to collaborate with others on a global level. Whatever they do, we can expect their work to include finding creative solutions to emerging challenges.
2. Evaluate Information Accuracy 
New information is being discovered and disseminated at a phenomenal rate. It is predicted that 50 percent of the facts students are memorizing today will no longer be accurate or complete in the near future. Students need to know
  • how to find accurate information, and
  • how to use critical analysis to assess the veracity or bias of new information and its current or potential uses.
These are the executive functions that they need to develop and practice in the home and at school today, because without them, students will be unprepared to find, analyze, and use the information of tomorrow.
3. Teach Tolerance 
In order for collaboration to happen within a global community, job applicants of the future will be evaluated by their capacity for communication with, openness to, and tolerance for unfamiliar cultures and ideas. To foster these critical skills, today’s students will need open discussions and experiences that can help them learn about and feel comfortable communicating with people of other cultures.
4. Help Students Learn Through Their Strengths 
Children are born with brains that want to learn. They’re also born with different strengths — and they grow best through those strengths. One size does not fit all in assessment and instruction. The current testing system and the curriculum that it has spawned leave behind the majority of students who might not be doing their best with the linear, sequential instruction required for this kind of testing. Look ahead on the curriculum map and help promote each student’s interest in the topic beforehand. Use clever “front-loading” techniques that will pique their curiosity.
5. Use Learning Beyond the Classroom
New “learning” does not become permanent memory unless there is repeated stimulation of the new memory circuits in the brain pathways. This is the “practice makes permanent” aspect of neuroplasticity where neural networks that are the most stimulated develop more dendrites, synapses, and thicker myelin for more efficient information transmission. These stronger networks are less susceptible to pruning, and they become long-term memory holders. Students need to use what they learn repeatedly and in different, personally meaningful ways for short-term memory to become permanent knowledge that can be retrieved and used in the future. Help your students make memories permanent by providing opportunities for them to “transfer” school learning to real-life situations.
6. Teach Students to Use Their Brain Owner’s Manual
The most important manual that you can share with your students is the owner’s manual to their own brains. When they understand how their brains take in and store information (PDF, 139KB), they hold the keys to successfully operating the most powerful tool they’ll ever own. When your students understand that, through neuroplasticity, they can change their own brains and intelligence, together you can build their resilience and willingness to persevere through the challenges that they will undoubtedly face in the future.

How are you preparing your students to thrive in the world they’ll inhabit as adults?