Category: Neuromorphic Hardware


The future of AI is neuromorphic. Meet the scientists building digital ‘brains’ for your phone

By Hugo Angel,

Neuromorphic chips are being designed to specifically mimic the human brain – and they could soon replace CPUs
Image: Brain activity map (Neuroscape Lab)
AI services like Apple’s Siri and others operate by sending your queries to faraway data centers, which send back responses. The reason they rely on cloud-based computing is that today’s electronics don’t come with enough computing power to run the processing-heavy algorithms needed for machine learning. The typical CPUs most smartphones use could never handle a system like Siri on the device. But Dr. Chris Eliasmith, a theoretical neuroscientist and co-CEO of Canadian AI startup Applied Brain Research, is confident that a new type of chip is about to change that.
“Many have suggested Moore’s law is ending, and that means we won’t get ‘more compute’ cheaper using the same methods,” Eliasmith says. He’s betting on the proliferation of ‘neuromorphics’ — a type of computer chip that is not yet widely known but is already being developed by several major chip makers.
Traditional CPUs process instructions based on “clocked time” – information is transmitted at regular intervals, as if managed by a metronome. By packing in digital equivalents of neurons, neuromorphics communicate in parallel (and without the rigidity of clocked time) using “spikes” – bursts of electric current that can be sent whenever needed. Just like our own brains, the chip’s neurons communicate by processing incoming flows of electricity – each neuron able to determine from the incoming spike whether to send current out to the next neuron.
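To make the spiking idea concrete, here is a minimal leaky integrate-and-fire neuron in Python. It is the textbook abstraction, not the circuit used by any particular neuromorphic chip: the neuron stays silent until its accumulated input crosses a threshold, then emits a spike and resets, rather than doing work on every clock tick.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, threshold=1.0, reset=0.0):
    """Leaky integrate-and-fire: integrate input, spike on threshold, then reset.

    A textbook abstraction of spike-based signaling, not the implementation
    used by any specific chip.
    """
    v = reset
    spike_times = []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating incoming current.
        v += dt * (-v / tau + i_in)
        if v >= threshold:
            spike_times.append(t * dt)  # emit an event only when needed...
            v = reset                   # ...then reset, instead of ticking every cycle
    return spike_times

# A brief current pulse produces a burst of spikes; silence produces none.
stimulus = np.concatenate([np.zeros(100), 80.0 * np.ones(200), np.zeros(100)])
print(lif_neuron(stimulus))
```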
What makes this a big deal is that these chips require far less power to process AI algorithms. For example, one neuromorphic chip made by IBM contains five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of power. An Intel processor would use anywhere from 35 to 140 watts, or up to 2000 times more power.
Eliasmith points out that neuromorphics aren’t new and that their designs have been around since the 80s. Back then, however, the designs required specific algorithms be baked directly into the chip. That meant you’d need one chip for detecting motion, and a different one for detecting sound. None of the chips acted as a general processor in the way that our own cortex does.
This was partly because there wasn’t any way for programmers to design algorithms that could do much with a general-purpose chip. So even as these brain-like chips were being developed, building algorithms for them remained a challenge.
 
Eliasmith and his team are keenly focused on building tools that would allow a community of programmers to deploy AI algorithms on these new cortical chips.
Central to these efforts is Nengo, a compiler that developers can use to build their own algorithms for AI applications that will operate on general purpose neuromorphic hardware. A compiler is a software tool that translates the code programmers write into the low-level instructions that get hardware to actually do something. What makes Nengo useful is its use of the familiar Python programming language – known for its intuitive syntax – and its ability to put the algorithms on many different hardware platforms, including neuromorphic chips. Pretty soon, anyone with an understanding of Python could be building sophisticated neural nets made for neuromorphic hardware.
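For a flavor of what that looks like in practice, here is a minimal Nengo model in Python. The network itself is an illustrative toy (it just squares a sine wave), not an Applied Brain Research example, and it runs on Nengo’s default CPU simulator rather than a neuromorphic chip.

```python
# pip install nengo  (assumed; this uses Nengo's default CPU backend)
import numpy as np
import nengo

with nengo.Network(label="toy model") as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # a 1 Hz input signal
    a = nengo.Ensemble(n_neurons=100, dimensions=1)      # population of spiking neurons
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)    # ask the network to compute x^2
    probe = nengo.Probe(b, synapse=0.01)                 # record b's decoded output

with nengo.Simulator(model) as sim:
    sim.run(1.0)

print(sim.data[probe][-5:])   # last few decoded values, roughly sin(2*pi*t)^2
```

Backends for particular hardware expose their own Simulator class, so in principle the model definition above can stay the same when retargeting; whether a given chip is supported depends on the specific backend.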
“Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo,” Peter Suma, a trained computer scientist and the other co-CEO of Applied Brain Research, tells me.
Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise for being the most complex brain model ever simulated on a computer. Spaun demonstrated that computers could be made to interact fluidly with the environment and perform human-like cognitive tasks like recognizing images and controlling a robot arm that writes down what it sees. The machine wasn’t perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9,000 times faster, using less energy than it would on conventional CPUs – and by the end of 2017, all of Spaun will be running on neuromorphic hardware.
Eliasmith won NSERC’s John C. Polanyi Award for that project – Canada’s highest recognition for a breakthrough scientific achievement – and once Suma came across the research, the pair joined forces to commercialize these tools.
“While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context-aware AIs,” says Suma. He points out that while today’s AIs like Siri remain offline until explicitly called into action, we’ll soon have artificial agents that are ‘always on’ and ever-present in our lives.
“Imagine a Siri that listens to and sees all of your conversations and interactions. You’ll be able to ask it things like, ‘Who did I have that conversation with about doing the launch for our new product in Tokyo?’ or ‘What was that idea for my wife’s birthday gift that Melissa suggested?’” he says.
When I raised concerns that some company might then have an uninterrupted window into even the most intimate parts of my life, I was reminded that because the AI would be processed locally on the device, there’s no need for that information to touch a server owned by a big company. And for Eliasmith, this ‘always on’ component is a necessary step towards true machine cognition. “The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to is the fact that the latter always operate in real time. Bodies and brains are built to work with the physics of the world,” he says.
Already, major efforts across the IT industry are heating up to get their AI services into the hands of users. Companies like Apple, Facebook, Amazon, and even Samsung, are developing conversational assistants they hope will one day become digital helpers.
ORIGINAL: Wired
Monday 6 March 2017

Former NASA chief unveils $100 million neural chip maker KnuEdge

By Hugo Angel,

Daniel Goldin
It’s not all that easy to call KnuEdge a startup. Created a decade ago by Daniel Goldin, the former head of the National Aeronautics and Space Administration, KnuEdge is only now coming out of stealth mode. It has already raised $100 million in funding to build a “neural chip” that Goldin says will make data centers more efficient in a hyperscale age.
Goldin, who founded the San Diego, California-based company with the former chief technology officer of NASA, said he believes the company’s brain-like chip will be far more cost and power efficient than current chips based on the computer design popularized by computer architect John von Neumann. In von Neumann machines, memory and processor are separated and linked via a data pathway known as a bus. Over the years, von Neumann machines have gotten faster by sending more and more data at higher speeds across the bus as processor and memory interact. But the speed of a computer is often limited by the capacity of that bus, leading to what some computer scientists call the “von Neumann bottleneck.” IBM has seen the same problem, and it has a research team working on brain-like data center chips. Both efforts are part of an attempt to deal with the explosion of data driven by artificial intelligence and machine learning.
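A back-of-the-envelope sketch of that bottleneck (the bandwidth figure below is an assumption chosen for illustration, not a KnuEdge or IBM number): for a simple streaming operation, the bus, not the processor, caps throughput.

```python
# For a streaming vector add c[i] = a[i] + b[i], each element moves
# 3 * 8 bytes across the bus (two reads, one write) but costs only one
# floating-point operation, so the bus sets the ceiling.
bus_bandwidth_bytes_per_s = 25.6e9   # assumed DRAM bus bandwidth (illustrative)
bytes_per_element = 3 * 8            # two 8-byte reads + one 8-byte write
flops_per_element = 1

max_flops_from_bus = bus_bandwidth_bytes_per_s / bytes_per_element * flops_per_element
print(f"Bus-limited throughput: {max_flops_from_bus / 1e9:.2f} GFLOP/s")
# About 1 GFLOP/s -- far below what a modern multicore CPU can compute when
# the data is already on-chip, so the processor idles while waiting on the bus.
```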
Goldin’s company is doing something similar to IBM, but only on the surface. Its approach is much different, and it has been secretly funded by unknown angel investors. And Goldin said in an interview with VentureBeat that the company has already generated $20 million in revenue and is actively engaged with hyperscale computing companies and Fortune 500 companies in the aerospace, banking, health care, hospitality, and insurance industries. The mission is a fundamental transformation of the computing world, Goldin said.
“It all started over a mission to Mars,” Goldin said.

Above: KnuEdge’s first chip has 256 cores. Image Credit: KnuEdge
Back in the year 2000, Goldin saw that the time delay for controlling a space vehicle would be too long, so the vehicle would have to operate itself. He calculated that a mission to Mars would take software that would push technology to the limit, with more than tens of millions of lines of code.
Above: Daniel Goldin, CEO of KnuEdge.
Image Credit: KnuEdge
“I thought, holy smokes,” he said. “It’s going to be too expensive. It’s not propulsion. It’s not environmental control. It’s not power. This software business is a very big problem, and the nation couldn’t afford it.”
So Goldin looked further into the brains of the robots, and that’s when he started thinking about the computing it would take.
Asked if it was easier to run NASA or a startup, Goldin let out a guffaw.
“I love them both, but they’re both very different,” Goldin said. “At NASA, I spent a lot of time on non-technical issues. I had a project every quarter, and I didn’t want to become dull technically. I tried to always take on a technical job doing architecture, working with a design team, and always doing something leading edge. I grew up at a time when you graduated from a university and went to work for someone else. If I ever come back to this earth, I would graduate and become an entrepreneur. This is so wonderful.”
Back in 1992, Goldin was planning on starting a wireless company as an entrepreneur. But then he got the call to “go serve the country,” and he did that work for a decade. He started KnuEdge (previously called Intellisis) in 2005, and he got very patient capital.
“When I went out to find investors, I knew I couldn’t use the conventional Silicon Valley approach (impatient capital),” he said. “It is a fabulous approach that has generated incredible wealth. But I wanted to undertake revolutionary technology development. To build the future tools for next-generation machine learning, improving the natural interface between humans and machines. So I got patient capital that wanted to see lightning strike. Between all of us, we have a board of directors that can contact almost anyone in the world. They’re fabulous business people and technologists. We knew we had a ten-year run-up.”
But he’s not saying who those people are yet.
KnuEdge’s chips are part of a larger platform. KnuEdge is also unveiling KnuVerse, a military-grade voice recognition and authentication technology that unlocks the potential of voice interfaces to power next-generation computing, Goldin said.
While the voice technology market has exploded over the past five years due to the introductions of Siri, Cortana, Google Home, Echo, and Viv, the aspirations of most commercial voice technology teams are still on hold because of security and noise issues. KnuVerse solutions are based on patented authentication techniques that use the human voice — even in extremely noisy environments — as one of the most secure forms of biometrics. Secure voice recognition has applications in industries such as banking, entertainment, and hospitality.
KnuEdge says it is now possible to authenticate to computers, web and mobile apps, and Internet of Things devices (or everyday objects that are smart and connected) with only a few words spoken into a microphone — in any language, no matter how loud the background environment or how many other people are talking nearby. In addition to KnuVerse, KnuEdge offers Knurld.io for application developers, a software development kit, and a cloud-based voice recognition and authentication service that can be integrated into an app typically within two hours.
And KnuEdge is announcing KnuPath with LambdaFabric computing. KnuEdge’s first chip, built with an older manufacturing technology, has 256 cores, or neuron-like brain cells, on a single chip. Each core is a tiny digital signal processor. The LambdaFabric makes it possible to instantly connect those cores to each other — a trick that helps overcome one of the major problems of multicore chips, Goldin said. The LambdaFabric is designed to connect up to 512,000 devices, enabling the system to be used in the most demanding computing environments. From rack to rack, the fabric has a latency (or interaction delay) of only 400 nanoseconds. And the whole system is designed to use a low amount of power.
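To put the 400-nanosecond figure in perspective, here is a rough calculation based only on the number quoted above, ignoring bandwidth, switching, and software overhead.

```python
rack_to_rack_latency_s = 400e-9   # the figure quoted above

# One-way messages per second that latency alone would permit on a single path.
print(f"{1 / rack_to_rack_latency_s:,.0f} latency-bound hops per second")   # 2,500,000

# A synchronous request/response exchange needs two such hops.
print(f"{1 / (2 * rack_to_rack_latency_s):,.0f} round trips per second")    # 1,250,000
```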
All of the company’s designs are built on biological principles about how the brain gets a lot of computing work done with a small amount of power. The chip is based on what Goldin calls “sparse matrix heterogeneous machine learning algorithms.” And it will run C++ software, something that is already very popular. Programmers can program each one of the cores with a different algorithm to run simultaneously, for the “ultimate in heterogeneity.” It’s multiple input, multiple data, and “that gives us some of our power,” Goldin said.

Above: KnuEdge’s KnuPath chip.
Image Credit: KnuEdge
“KnuEdge is emerging out of stealth mode to aim its new Voice and Machine Learning technologies at key challenges in IoT, cloud based machine learning and pattern recognition,” said Paul Teich, principal analyst at Tirias Research, in a statement. “Dan Goldin used his experience in transforming technology to charter KnuEdge with a bold idea, with the patience of longer development timelines and away from typical startup hype and practices. The result is a new and cutting-edge path for neural computing acceleration. There is also a refreshing surprise element to KnuEdge announcing a relevant new architecture that is ready to ship… not just a concept or early prototype.”
Today, Goldin said the company is ready to show off its designs. The first chip was ready last December, and KnuEdge is sharing it with potential customers. That chip was built with a 32-nanometer manufacturing process, and even though that’s an older technology, it is a powerful chip, Goldin said. Even at 32 nanometers, the chip has something like a two-times to six-times performance advantage over similar chips, KnuEdge said.
“The human brain has a couple of hundred billion neurons, and each neuron is connected to at least 10,000 to 100,000 neurons,” Goldin said. “And the brain is the most energy efficient and powerful computer in the world. That is the metaphor we are using.”
KnuEdge has a new version of its chip under design. And the company has already generated revenue from sales of the prototype systems. Each board has about four chips.
As for the competition from IBM, Goldin said, “I believe we made the right decision and are going in the right direction. IBM’s approach is very different from what we have. We are not aiming at anyone. We are aiming at the future.”
In his NASA days, Goldin had a lot of successes. There, he redesigned and delivered the International Space Station, tripled the number of space flights, and put a record number of people into space, all while reducing the agency’s planned budget by 25 percent. He also spent 25 years at TRW, where he led the development of satellite television services.
KnuEdge has 100 employees, but Goldin said the company outsources almost everything. Goldin said he is planning to raise a round of funding late this year or early next year. The company collaborated with the University of California at San Diego and UCSD’s California Institute for Telecommunications and Information Technology.
With computers that can handle natural language systems, many people in the world who can’t read or write will be able to fend for themselves more easily, Goldin said.
“I want to be able to take machine learning and help people communicate and make a living,” he said. “This is just the beginning. This is the Wild West. We are talking to very large companies about this, and they are getting very excited.”
A sample application is a home that has much greater self-awareness. If there’s something wrong in the house, the KnuEdge system could analyze it and figure out if it needs to alert the homeowner.
Goldin said it was hard to keep the company secret.
“I’ve been biting my lip for ten years,” he said.
As for whether KnuEdge’s technology could be used to send people to Mars, Goldin said, “This is available to whoever is going to Mars. I tried twice. I would love it if they use it to get there.”
ORIGINAL: Venture Beat


A Scale-up Synaptic Supercomputer (NS16e): Four Perspectives

By Hugo Angel,

Today, Lawrence Livermore National Lab (LLNL) and IBM announce the development of a new Scale-up Synaptic Supercomputer (NS16e) that highly integrates 16 TrueNorth chips in a 4×4 array to deliver 16 million neurons and 4 billion synapses. LLNL will also receive an end-to-end software ecosystem that consists of a simulator; a programming language; an integrated programming environment; a library of algorithms as well as applications; firmware; tools for composing neural networks for deep learning; a teaching curriculum; and cloud enablement. Also, don’t miss the story in The Wall Street Journal (sign-in required) and the perspective and a video by LLNL’s Brian Van Essen.
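Those totals follow from the per-chip TrueNorth figures IBM has reported elsewhere (roughly 1 million neurons and 256 million synapses per chip); a quick check:

```python
chips = 16                       # 4 x 4 array of TrueNorth chips
neurons_per_chip = 1_000_000     # reported per-chip TrueNorth figure
synapses_per_chip = 256_000_000  # reported per-chip TrueNorth figure

print(f"{chips * neurons_per_chip:,} neurons")    # 16,000,000
print(f"{chips * synapses_per_chip:,} synapses")  # 4,096,000,000 (about 4 billion)
```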
To provide insights into what it took to achieve this significant milestone in the history of our project, following are four intertwined perspectives from my colleagues:

  • Filipp Akopyan — First Steps to an Efficient Scalable NeuroSynaptic Supercomputer.
  • Bill Risk and Ben Shaw — Creating an Iconic Enclosure for the NS16e.
  • Jun Sawada — NS16e System as a Neural Network Development Workstation.
  • Brian Taba — How to Program a Synaptic Supercomputer.
The following timeline provides context for today’s milestone in terms of the continued evolution of our project.
Illustration Credit: William Risk

Memory capacity of brain is 10 times more than previously thought

By Hugo Angel,

Data from the Salk Institute shows brain’s memory capacity is in the petabyte range, as much as entire Web

LA JOLLA—Salk researchers and collaborators have achieved critical insight into the size of neural connections, putting the memory capacity of the brain far higher than common estimates. The new work also answers a longstanding question as to how the brain is so energy efficient and could help engineers build computers that are incredibly powerful but also conserve energy.
“This is a real bombshell in the field of neuroscience,” said Terry Sejnowski from the Salk Institute for Biological Studies. “Our new measurements of the brain’s memory capacity increase conservative estimates by a factor of 10 to at least a petabyte (10^15 bytes, or 1,000 terabytes), in the same ballpark as the World Wide Web.”
Our memories and thoughts are the result of patterns of electrical and chemical activity in the brain. A key part of the activity happens when branches of neurons, much like electrical wire, interact at certain junctions, known as synapses. An output ‘wire’ (an axon) from one neuron connects to an input ‘wire’ (a dendrite) of a second neuron. Signals travel across the synapse as chemicals called neurotransmitters to tell the receiving neuron whether to convey an electrical signal to other neurons. Each neuron can have thousands of these synapses with thousands of other neurons.
“When we first reconstructed every dendrite, axon, glial process, and synapse from a volume of hippocampus the size of a single red blood cell, we were somewhat bewildered by the complexity and diversity amongst the synapses,” says Kristen Harris, co-senior author of the work and professor of neuroscience at the University of Texas, Austin. “While I had hoped to learn fundamental principles about how the brain is organized from these detailed reconstructions, I have been truly amazed at the precision obtained in the analyses of this report.”
Synapses are still a mystery, though their dysfunction can cause a range of neurological diseases. Larger synapses—with more surface area and vesicles of neurotransmitters—are stronger, making them more likely to activate their surrounding neurons than medium or small synapses.
The Salk team, while building a 3D reconstruction of rat hippocampus tissue (the memory center of the brain), noticed something unusual. In some cases, a single axon from one neuron formed two synapses reaching out to a single dendrite of a second neuron, signifying that the first neuron seemed to be sending a duplicate message to the receiving neuron.
At first, the researchers didn’t think much of this duplicity, which occurs about 10 percent of the time in the hippocampus. But Tom Bartol, a Salk staff scientist, had an idea: if they could measure the difference between two very similar synapses such as these, they might glean insight into synaptic sizes, which so far had only been classified in the field as small, medium and large.
In a computational reconstruction of brain tissue in the hippocampus, Salk and UT-Austin scientists found the unusual occurrence of two synapses from the axon of one neuron (translucent black strip) forming onto two spines on the same dendrite of a second neuron (yellow). Separate terminals from one neuron’s axon are shown in synaptic contact with two spines (arrows) on the same dendrite of a second neuron in the hippocampus. The spine head volumes, synaptic contact areas (red), neck diameters (gray) and numbers of presynaptic vesicles (white spheres) of these two synapses are almost identical. Credit: Salk Institute
To do this, researchers used advanced microscopy and computational algorithms they had developed to image rat brains and reconstruct the connectivity, shapes, volumes and surface area of the brain tissue down to a nanomolecular level.
The scientists expected the synapses would be roughly similar in size, but were surprised to discover the synapses were nearly identical.
“We were amazed to find that the difference in the sizes of the pairs of synapses was very small: on average, only about 8 percent different in size,” said Tom Bartol, one of the scientists. “No one thought it would be such a small difference. This was a curveball from nature.”
Because the memory capacity of neurons is dependent upon synapse size, this eight percent difference turned out to be a key number the team could then plug into their algorithmic models of the brain to measure how much information could potentially be stored in synaptic connections.
It was known before that the range in sizes between the smallest and largest synapses was a factor of 60 and that most are small.
But armed with the knowledge that synapses of all sizes could vary in increments as little as eight percent between sizes within a factor of 60, the team determined there could be about 26 categories of sizes of synapses, rather than just a few.
“Our data suggests there are 10 times more discrete sizes of synapses than previously thought,” says Bartol. In computer terms, 26 sizes of synapses correspond to about 4.7 “bits” of information. Previously, it was thought that the brain was capable of just one to two bits for short- and long-term memory storage in the hippocampus.
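The bits figure is just the base-2 logarithm of the number of distinguishable sizes (the signal-detection analysis that yields the 26 categories themselves is more involved and is described in the paper):

```python
import math

distinguishable_sizes = 26
bits_per_synapse = math.log2(distinguishable_sizes)
print(f"{bits_per_synapse:.1f} bits")   # about 4.7 bits, versus the 1-2 bits assumed previously
```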
“This is roughly an order of magnitude of precision more than anyone has ever imagined,” said Sejnowski.
What makes this precision puzzling is that hippocampal synapses are notoriously unreliable. When a signal travels from one neuron to another, it typically activates that second neuron only 10 to 20 percent of the time.
“We had often wondered how the remarkable precision of the brain can come out of such unreliable synapses,” says Bartol. One answer, it seems, is in the constant adjustment of synapses, averaging out their success and failure rates over time. The team used their new data and a statistical model to find out how many signals it would take a pair of synapses to get to that eight percent difference.
The researchers calculated that
  • for the smallest synapses, about 1,500 events cause a change in their size/ability (20 minutes) and
  • for the largest synapses, only a couple hundred signaling events (1 to 2 minutes) cause a change.
“This means that every 2 or 20 minutes, your synapses are going up or down to the next size,” said Bartol. “The synapses are adjusting themselves according to the signals they receive.”
From left: Terry Sejnowski, Cailey Bromer and Tom Bartol. Credit: Salk Institute
“Our prior work had hinted at the possibility that spines and axons that synapse together would be similar in size, but the reality of the precision is truly remarkable and lays the foundation for whole new ways to think about brains and computers,” says Harris. “The work resulting from this collaboration has opened a new chapter in the search for learning and memory mechanisms.” Harris adds that the findings suggest more questions to explore, for example, whether similar rules apply for synapses in other regions of the brain and how those rules differ during development and as synapses change during the initial stages of learning.
“The implications of what we found are far-reaching,” adds Sejnowski. “Hidden under the apparent chaos and messiness of the brain is an underlying precision to the size and shapes of synapses that was hidden from us.”
The findings also offer a valuable explanation for the brain’s surprising efficiency. The waking adult brain generates only about 20 watts of continuous power—as much as a very dim light bulb. The Salk discovery could help computer scientists build ultra-precise but energy-efficient computers, particularly ones that employ deep learning and neural nets techniques capable of sophisticated learning and analysis, such as speech, object recognition and translation.
“This trick of the brain absolutely points to a way to design better computers,” said Sejnowski. “Using probabilistic transmission turns out to be as accurate and require much less energy for both computers and brains.”
Other authors on the paper were Cailey Bromer of the Salk Institute; Justin Kinney of the McGovern Institute for Brain Research; and Michael A. Chirillo and Jennifer N. Bourne of the University of Texas, Austin.
The work was supported by the NIH and the Howard Hughes Medical Institute.
ORIGINAL: Salk.edu
January 20, 2016

IBM’S ‘Rodent Brain’ Chip Could Make Our Phones Hyper-Smart

By admin,

At a lab near San Jose, IBM has built the digital equivalent of a rodent brain—roughly speaking. It spans 48 of the company’s experimental TrueNorth chips, a new breed of processor that mimics the brain’s biological building blocks. Image Credit: IBM
DHARMENDRA MODHA WALKS me to the front of the room so I can see it up close. About the size of a bathroom medicine cabinet, it rests on a table against the wall, and thanks to the translucent plastic on the outside, I can see the computer chips and the circuit boards and the multi-colored lights on the inside. It looks like a prop from a ’70s sci-fi movie, but Modha describes it differently. “You’re looking at a small rodent,” he says.
He means the brain of a small rodent—or, at least, the digital equivalent. The chips on the inside are designed to behave like neurons—the basic building blocks of biological brains. Modha says the system in front of us spans 48 million of these artificial nerve cells, roughly the number of neurons packed into the head of a rodent.
Modha oversees the cognitive computing group at IBM, the company that created these “neuromorphic” chips. For the first time, he and his team are sharing their unusual creations with the outside world, running a three-week “boot camp” for academics and government researchers at an IBM R&D lab on the far side of Silicon Valley. Plugging their laptops into the digital rodent brain at the front of the room, this eclectic group of computer scientists is exploring the particulars of IBM’s architecture and beginning to build software for the chip dubbed TrueNorth.
Some researchers who got their hands on the chip at an engineering workshop in Colorado the previous month have already fashioned software that can identify images, recognize spoken words, and understand natural language. Basically, they’re using the chip to run “deep learning” algorithms, the same algorithms that drive the internet’s latest AI services, including the face recognition on Facebook and the instant language translation on Microsoft’s Skype. But the promise is that IBM’s chip can run these algorithms in smaller spaces with considerably less electrical power, letting us shoehorn more AI onto phones and other tiny devices, including hearing aids and, well, wristwatches.
“What does a neuro-synaptic architecture give us? It lets us do things like image classification at a very, very low power consumption,” says Brian Van Essen, a computer scientist at the Lawrence Livermore National Laboratory who’s exploring how deep learning could be applied to national security. “It lets us tackle new problems in new environments.”
The TrueNorth is part of a widespread movement to refine the hardware that drives deep learning and other AI services. Companies like Google and Facebook and Microsoft are now running their algorithms on machines backed with GPUs (chips originally built to render computer graphics), and they’re moving towards FPGAs (chips you can program for particular tasks). For Peter Diehl, a PhD student in the cortical computation group at ETH Zurich and the University of Zurich, TrueNorth outperforms GPUs and FPGAs in certain situations because it consumes so little power.
The main difference, says Jason Mars, a professor of computer science at the University of Michigan, is that the TrueNorth dovetails so well with deep-learning algorithms. These algorithms mimic neural networks in much the same way IBM’s chips do, recreating the neurons and synapses in the brain. One maps well onto the other. “The chip gives you a highly efficient way of executing neural networks,” says Mars, who declined an invitation to this month’s boot camp but has closely followed the progress of the chip.
That said, the TrueNorth suits only part of the deep learning process—at least as the chip exists today—and some question how big an impact it will have. Though IBM is now sharing the chips with outside researchers, it’s years away from the market. For Modha, however, this is as it should be. As he puts it: “We’re trying to lay the foundation for significant change.”
The Brain on a Phone
Peter Diehl recently took a trip to China, where his smartphone didn’t have access to the ’net, an experience that cast the limitations of today’s AI in sharp relief. Without the internet, he couldn’t use a service like Google Now, which applies deep learning to speech recognition and natural language processing, because most of the computing takes place not on the phone but on Google’s distant servers. “The whole system breaks down,” he says.
Deep learning, you see, requires enormous amounts of processing power—processing power that’s typically provided by the massive data centers that your phone connects to over the ’net rather than locally on an individual device. The idea behind TrueNorth is that it can help move at least some of this processing power onto the phone and other personal devices, something that can significantly expand the AI available to everyday people.
To understand this, you have to understand how deep learning works. It operates in two stages. 
  • First, companies like Google and Facebook must train a neural network to perform a particular task. If they want to automatically identify cat photos, for instance, they must feed the neural net lots and lots of cat photos. 
  • Then, once the model is trained, another neural network must actually execute the task. You provide a photo and the system tells you whether it includes a cat. The TrueNorth, as it exists today, aims to facilitate that second stage.
Once a model is trained in a massive computer data center, the chip helps you execute the model. And because it’s small and uses so little power, it can fit onto a handheld device. This lets you do more at a faster speed, since you don’t have to send data over a network. If it becomes widely used, it could take much of the burden off data centers. “This is the future,” Mars says. “We’re going to see more of the processing on the devices.”
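Here is a toy illustration of that two-stage split in plain Python and NumPy. Nothing in it is TrueNorth-specific; it simply shows the pattern the chip targets: train once where compute is plentiful, then ship a small, frozen model whose inference step is cheap enough to run on a device.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: training, the heavy part done in a data center ---
# Toy task: classify 2-D points by whether x + y > 1.
X = rng.random((1000, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

w, b = np.zeros(2), 0.0
for _ in range(2000):                        # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# --- Stage 2: inference, the part a low-power chip would run on-device ---
def classify(sample, w=w, b=b):
    """Apply the frozen model; no training data or gradients needed."""
    return 1.0 / (1.0 + np.exp(-(sample @ w + b))) > 0.5

print(classify(np.array([0.9, 0.8])))   # True  (0.9 + 0.8 > 1)
print(classify(np.array([0.1, 0.2])))   # False
```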
Neurons, Axons, Synapses, Spikes
Google recently discussed its efforts to run neural networks on phones, but for Diehl, the TrueNorth could take this concept several steps further. The difference, he explains, is that the chip dovetails so well with deep learning algorithms. Each chip mimics about a million neurons, and these can communicate with each other via something similar to a synapse, the connections between neurons in the brain.
The setup is quite different than what you find in chips on the market today, including GPUs and FPGAs. Whereas these chips are wired to execute particular “instructions,” the TrueNorth juggles “spikes,” much simpler pieces of information analogous to the pulses of electricity in the brain. Spikes, for instance, can show the changes in someone’s voice as they speak—or changes in color from pixel to pixel in a photo. “You can think of it as a one-bit message sent from one neuron to another,” says Rodrigo Alvarez-Icaza, one of the chip’s chief designers.
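As a rough illustration of turning an ordinary signal into one-bit events, here is a simple delta/threshold encoder in Python. This is a generic spike-coding idea offered for intuition, not IBM’s actual encoding scheme.

```python
import numpy as np

def delta_encode(signal, threshold=0.1):
    """Emit a one-bit event (+1 or -1) only when the signal has moved by more
    than `threshold` since the last event; stay silent otherwise."""
    events = []            # list of (time index, +1 or -1)
    last = signal[0]
    for t, x in enumerate(signal):
        while x - last > threshold:
            events.append((t, +1))
            last += threshold
        while last - x > threshold:
            events.append((t, -1))
            last -= threshold
    return events

t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 2 * t)   # stand-in for a voice or pixel trace
print(len(signal), "samples ->", len(delta_encode(signal)), "spike events")
```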
The upshot is a much simpler architecture that consumes less power. Though the chip contains 5.4 billion transistors, it draws about 70 milliwatts of power. A standard Intel computer processor, by comparison, includes 1.4 billion transistors and consumes about 35 to 140 watts. Even the ARM chips that drive smartphones consume several times more power than the TrueNorth.
Of course, using such a chip also requires a new breed of software. That’s what researchers like Diehl are exploring at the TrueNorth boot camp, which began in early August and runs for another week at IBM’s research lab in San Jose, California. In some cases, researchers are translating existing code into the “spikes” that the chip can read (and back again). But they’re also working to build native code for the chip.
Parting Gift
Like these researchers, Modha discusses the TrueNorth mainly in biological terms. Neurons. Axons. Synapses. Spikes. And certainly, the chip mirrors such wetware in some ways. But the analogy has its limits. “That kind of talk always puts up warning flags,” says Chris Nicholson, the co-founder of deep learning startup Skymind. “Silicon operates in a very different way than the stuff our brains are made of.”
Modha admits as much. When he started the project in 2008, backed by $53.5M in funding from Darpa, the research arm for the Department of Defense, the aim was to mimic the brain in a more complete way using an entirely different breed of chip material. But at one point, he realized this wasn’t going to happen anytime soon. “Ambitions must be balanced with reality,” he says.
In 2010, while laid up in bed with the swine flu, he realized that the best way forward was a chip architecture that loosely mimicked the brain—an architecture that could eventually recreate the brain in more complete ways as new hardware materials were developed. “You don’t need to model the fundamental physics and chemistry and biology of the neurons to elicit useful computation,” he says. “We want to get as close to the brain as possible while maintaining flexibility.”
This is TrueNorth. It’s not a digital brain. But it is a step toward a digital brain. And with IBM’s boot camp, the project is accelerating. The machine at the front of the room is really 48 separate machines, each built around its own TrueNorth processor. Next week, as the boot camp comes to a close, Modha and his team will separate them and let all those academics and researchers carry them back to their own labs, which span over 30 institutions on five continents. “Humans use technology to transform society,” Modha says, pointing to the room of researchers. “These are the humans.”
ORIGINAL: Wired
08.17.15

Scientists have built artificial neurons that fully mimic human brain cells

By admin,

They could supplement our brain function.

Researchers have built the world’s first artificial neuron that’s capable of mimicking the function of an organic brain cell – including the ability to translate chemical signals into electrical impulses, and communicate with other human cells.
These artificial neurons are the size of a fingertip and contain no ‘living’ parts, but the team is working on shrinking them down so they can be implanted into humans. This could allow us to effectively replace damaged nerve cells and develop new treatments for neurological disorders, such as spinal cord injuries and Parkinson’s disease.
“Our artificial neuron is made of conductive polymers and it functions like a human neuron,” lead researcher Agneta Richter-Dahlfors from the Karolinska Institutet in Sweden said in a press release.

Agneta Richter-Dahlfors

Until now, scientists have only been able to stimulate brain cells using electrical impulses, which is how they transmit information within the cells. But in our bodies they’re stimulated by chemical signals, and this is how they communicate with other neurons.
By connecting enzyme-based biosensors to organic electronic ion pumps, Richter-Dahlfors and her team have now managed to create an artificial neuron that can mimic this function, and they’ve shown that it can communicate chemically with organic brain cells even over large distances.
“The sensing component of the artificial neuron senses a change in chemical signals in one dish, and translates this into an electrical signal,” said Richter-Dahlfors. “This electrical signal is next translated into the release of the neurotransmitter acetylcholine in a second dish, whose effect on living human cells can be monitored.”
This means that artificial neurons could theoretically be integrated into complex biological systems, such as our bodies, and could allow scientists to replace or bypass damaged nerve cells. So imagine being able to use the device to restore function to paralysed patients, or heal brain damage.
“Next, we would like to miniaturise this device to enable implantation into the human body,” said Richter-Dahlfors. “We foresee that in the future, by adding the concept of wireless communication, the biosensor could be placed in one part of the body, and trigger release of neurotransmitters at distant locations.”
“Using such auto-regulated sensing and delivery, or possibly a remote control, new and exciting opportunities for future research and treatment of neurological disorders can be envisaged,” she added.
The results of lab trials have been published in the journal Biosensors and Bioelectronics.
We’re really looking forward to seeing where this research goes. While the potential for treating neurological disorders is incredibly exciting, the artificial neurons could one day also help us to supplement our mental abilities and add extra memory storage or offer faster processing, and that opens up some pretty awesome possibilities.
ORIGINAL: Science Alert
By FIONA MACDONALD
29 JUN 2015

BrainCard, pattern recognition for ALL

By admin,

ORIGINAL: IndieGogo
Embedded recognition for images, speech, sound, biosensors or any signal with zero programming. Petaluma, California, United States.


Pattern & image recognition module with neuromorphic learning for all your maker projects.

Robotics fans, drone pilots, hackers & data-miners – rejoice!

The BrainCard is an open source hardware platform featuring the world’s only fully functional and field-tested neuromorphic chip containing 1,024 silicon neurons. It is able to learn and recognize patterns within any dataset generated by any source, from the physical (sensors) to the virtual (data).

Offered here, for the first time, to makers in a format compatible with nearly all other popular electronics platforms — from Raspberry Pi to Arduino and Intel Edison — we aim to help you add cognitive perception to any electronics project.

Add a brain to robots, toys or an old GoPro, and give them the ability to recognize and recall almost anything. You can also add a brain to any digital camera, including dash cams. Vision not your thing? The same technology can recognize patterns in data, like that packet of code you’re looking for in a sea of C++, a phrase in an eBook (regardless of the book’s length), even real-time data: build your own biosensors! Make any appliance you like “smart,” like a coffee pot that recognizes you and starts making your coffee the way you like best.

Simply put: make it think.


Cannot wait for technical details?

Before we carry on, for those of you that are quick studies and/or already know everything, we thought you might like to skip straight to the specs so here you go:

BrainCard Specifications (Hardware and API)

For everyone else – please read on…

Unfamiliar with Neural Networks or Neuromorphic Chips? The campaign page includes a short introductory video and further background information.

Now back to your project…

The BrainCard™ is a small electronics board with a NeuroMem® CM1K device plus an FPGA (Field Programmable Gate Array) chip to connect to platform buses and sensor inputs. There is even an optional image sensor featured on the BrainCard 1KIS (Image Sensor) version. It can be connected to almost any popular electronics platform, including Arduino, Raspberry Pi and Intel Edison, and enables users to massively boost any device’s capability by creating a brain-like system architecture — hence the name.

The CM1K chip(s) on the BrainCard essentially act as a right-brain hemisphere ready to learn, recognize and recall patterns/images/sounds/inputs from any incoming data stream. This allows the accompanying MPU to concentrate on what it’s good at — left-brain functions such as logic and procedural computing, and serving as a communications and I/O interface.


The key to success is teaching the BrainCard as you would a child: teach it too conservatively and it will not generalize enough; teach it too loosely and it could get confused. It is not like traditional programming, and we have found that part of the fun in building projects with the BrainCard is in this new learning parameter.

It’s really quite simple: Show the BrainCard what it must recognize and assign the example a category. So: This face is John, that voice is Emma, this vibration is made by your cat purring and so on.
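As a rough software analogy of that teach-by-example flow (this models the idea only; the class, the distance rule and the “influence” cutoff below are illustrative assumptions, not the CM1K’s actual on-chip mechanics):

```python
import numpy as np

class ExampleMemory:
    """Tiny teach-by-example classifier: store labeled prototypes and answer
    with the label of the closest one, if it is close enough."""
    def __init__(self, max_influence=50.0):
        self.prototypes, self.labels = [], []
        self.max_influence = max_influence   # how far a stored example "reaches"

    def learn(self, vector, category):
        # "This face is John, that voice is Emma": store the example with its label.
        self.prototypes.append(np.asarray(vector, dtype=float))
        self.labels.append(category)

    def classify(self, vector):
        if not self.prototypes:
            return None
        dists = [np.abs(p - vector).sum() for p in self.prototypes]   # L1 distance
        best = int(np.argmin(dists))
        return self.labels[best] if dists[best] <= self.max_influence else None

mem = ExampleMemory()
mem.learn([200, 10, 10], "red-ish")
mem.learn([10, 10, 200], "blue-ish")
print(mem.classify([190, 20, 15]))    # "red-ish"
print(mem.classify([500, 500, 500]))  # None -- nothing close enough, so "unknown"
```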

Getting started:

The BrainCard is delivered with a default configuration which can communicate with any one of the proposed controllers (Arduino, Raspberry Pi or Edison) through the same communication protocol over their SPI lines. Access to the generic pattern learning and recognition functions of the CM1K chip is provided through a simple API delivered for the different IDEs (Arduino and Eclipse). More specific function libraries will be released shortly after, and we hope to start a repository of your libraries too!

  1. Install and connect the BrainCard to the MPU/Device of your choice. View the hardware datasheet
  2. Install the API in the IDE of your choice (Arduino, Eclipse). View the BrainCard API preliminary datasheet
  3. Now, you can program to teach the BrainCard using examples previously collected and saved to disk (waveforms, images, movies). Or you can program some GPIOs to trigger teaching (push buttons, keyboard inputs and even voice control!). Teaching amounts to selecting examples and sending one or more signatures of each example to the neurons of the BrainCard. The neurons will decide if an example is worth learning based on what they already know. If applicable, some neurons will autonomously correct themselves if they contradict the teacher and never repeat that mistake again.
  4. Recognition is the same as learning except that this time, your program monitors the response of the neurons to the incoming signatures instead of sending them learning commands. Your program can then act based on what is recognized, using the wealth of GPIOs available through Arduino shields, as well as device-to-device or device-to-cloud communications, and more.

 

So what can it do?

This is a great question, as even we have not fully explored the range of the BrainCard/CM1K’s capabilities. Almost every day we come up with new applications for the technology, which is one of our quandaries, and is where YOU come in. It’s also why we are choosing to announce ourselves to the world via Indiegogo.

A simple list of known capabilities

Object recognition
Use the KIS version or an off-the-shelf image sensor of your own and teach your BrainCard to recognize shapes, colors, objects, signs, people and animals.
Stereoscopic vision
With two image sensors attached, along with a CPU, your project can work in stereoscopic vision! The processor can triangulate distance and the CM1K can recognize what it’s looking at. Add some motors to the image sensors and it can track things too.

 

Audio recognition
Attach a microphone and teach the BrainCard to recognize a noise, a voice, YOUR voice, or other audio signals like a bird song or a dog’s bark.

Vibration and motion
Attach a MEMS (Micro-Electro-Mechanical Systems) device and teach the BrainCard to recognize vibrations or physical motion.

Bio signals
BrainCard can recognize data from any Bio-signal source – such as:

Electroencephalogram (EEG), Electrocardiogram (ECG), Electromyogram (EMG), Mechanomyogram (MMG), Electrooculography (EOG), Galvanic skin response (GSR), Magnetoencephalogram (MEG).

 

 

Text and Numbers

You can run your data through the BrainCard in any form — from text to binary to DNA sequences — and teach it to recognize patterns, which will allow it to detect anomalies, identify clusters and make predictions. There are MANY MORE applications we just haven’t tried yet…

Flexibility
If you go crazy while teaching and fill all 1,024 neurons on a chip, don’t panic. BrainCard provides an expansion bus to stack more CM1K chips on boards of two, thereby increasing the number of neurons you can teach in increments of 2,048 (each CM1K contains 1,024 neurons; expansion boards are subject to availability). This expansion can be done at any time, up to a maximum of 8,192 additional neurons (plus the original 1,024 on the BrainCard), and will not affect what you have already taught, allowing you to experiment to your heart’s content.

Maturity
The NeuroMem CM1K technology has already found many applications in industry and has been working in the real world since 2007 – so we know everything we’re claiming above is 100% true, because most of these applications have been built somewhere.

What we need, and what you get
This Indiegogo campaign has been launched with one aim: To generate the volume and revenue we need to manufacture the maker version of the CM1K technology — the BrainCard.

By supporting this Indiegogo project you will be a part of the first chapter of a much bigger story: We aim to change the way the world computes with neural network technology. We’re looking to raise at least $200k to start manufacturing in volume, which will make the BrainCard as cheap as possible.

We’re beginning with 1000 chips that we already have in inventory which were originally ordered by an industrial client. After that, we will aim to start manufacturing on a mass production line, and this will take approximately six months. So, those first 1000 purchasers will be the only ones able to experience the unique capabilities of the BrainCard until mid-2015.

The first 1000 BrainCards will cost $199 and are what we call IWIN (I Want It Now), or $219 for a version including an image sensor (the IS version) – so 500 of each version.

If we don’t reach the goal, all the money raised will be aimed at manufacturing as many BrainCards as we can, so that it can be more affordable for the masses.

This is why we’re turning to the maker community — we’d like to crowdsource our research and development through YOU!

The impact
Neural networks should be everywhere by now, in your phone, in wearable technology. The NeuroMem technology is mature and the market needs exist. This project has the ability to propel neuromorphic technology into the mainstream consciousness by showing electronics manufacturers what can be done with it.

Risks and challenges

The core of the NeuroMem/NeuromorThings team has been in place for 16 years and has plenty of research and industrial customers already using the CM1K chip, so this is not a typical “prototype” project.

We have a full supply chain already in place for both the board and for mounting the chips. We also have a wealth of knowledge in developing board-level and semiconductor technologies — all of which makes the risks to you a bare minimum.

We just need your support to complete prototyping/testing and to begin volume manufacturing. Owners of the first 1000 IWIN BrainCards will have exclusive access to the technology for the three months it takes us to make the new batch of chips.

Once we begin mass manufacturing the BrainCard, we will begin our long development roadmap on its successors and other neuromorthings.

After the first run of IWIN devices, the rest of the time will be dedicated to mounting the chips to the boards and testing them. With enough support we can get production runs up to very large numbers per month very quickly.

Shipping
Shipping a technology product is fraught with issues like export restrictions. We’ve tried to make it as simple as possible and built shipping as a perk.

In the US, Mexico and Canada? included

Rest of World? $30 Shipping & Packing

Due to the technical nature of the BrainCard it can be liable to Export Restrictions in certain countries under United States Law. If you are unsure whether you are affected, please contact us at: [email protected] and put “Export” in the subject line and we’ll do everything we can to help.

Other Ways You Can Help
Can’t buy a BrainCard? How about giving us a High $5? High 5’ers will all feature on the website and be written into NeuromorThings lore… it’s a program for those interested in the technology and who want to help but who can’t spring for their own BrainCard.

Got no cash at all? No problem – simply SPREAD THE WORD! Tell everyone you know about us and help us that way instead, on Facebook, on Twitter – wherever.

Every little bit helps!

Export regulations:
It might occur, in certain rare cases, that your country is under an export embargo and we cannot ship because of the nature of the technology included in the BrainCard. If this exceptional situation occurs, your money will be fully refunded.

A Worm’s Mind In A Lego Body

By admin,

ORIGINAL: i-Programmer
Written by Lucy Black
16 November 2014
Take the connectome of a worm and transplant it as software in a Lego Mindstorms EV3 robot – what happens next?
It is a deep and long-standing philosophical question: are we just the sum of our neural networks? Of course, if you work in AI you take the answer mostly for granted, but until someone builds a human brain and switches it on we really don’t have a concrete example of the principle in action.
KDS444, modified by Nnemo
The nematode worm Caenorhabditis elegans (C. elegans) is tiny and only has 302 neurons. These have been completely mapped and the OpenWorm project is working to build a complete simulation of the worm in software. One of the founders of the OpenWorm project, Timothy Busbice, has taken the connectome and implemented an object oriented neuron program.
The model is accurate in its connections and makes use of UDP packets to fire neurons. If two neurons have three synaptic connections, then when the first neuron fires a UDP packet is sent to the second neuron with the payload “3”. The neurons are addressed by IP and port number. The system uses an integrate-and-fire algorithm: each neuron sums the incoming weights and fires if the total exceeds a threshold. The accumulator is zeroed if no message arrives in a 200ms window or if the neuron fires. This is similar to what happens in the real neural network, but not exact.
The software works with sensors and effectors provided by a simple LEGO robot. The sensors are sampled every 100ms. For example, the sonar sensor on the robot is wired as the worm’s nose. If anything comes within 20cm of the “nose” then UDP packets are sent to the sensory neurons in the network.
The same idea is applied to the 95 motor neurons but these are mapped from the two rows of muscles on the left and right to the left and right motors on the robot. The motor signals are accumulated and applied to control the speed of each motor. The motor neurons can be excitatory or inhibitory and positive and negative weights are used.
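
To make the mechanism concrete, here is a rough Python sketch of one such neuron process, based only on the description above. The addresses, ports, weights and threshold are made-up placeholder values, and this is not the actual OpenWorm/Busbice code.

# Minimal sketch of the UDP integrate-and-fire scheme described above.
# Addresses, ports, weights and the threshold are placeholders only.
import socket

LISTEN_ADDR = ("127.0.0.1", 9001)          # this neuron's IP:port
DOWNSTREAM = [("127.0.0.1", 9002, 3),      # (ip, port, synaptic weight)
              ("127.0.0.1", 9003, 1)]
THRESHOLD = 30                              # fire when accumulator exceeds this
WINDOW_S = 0.2                              # 200 ms integration window

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(LISTEN_ADDR)
sock.settimeout(WINDOW_S)

accumulator = 0
while True:
    try:
        payload, _ = sock.recvfrom(64)
        accumulator += int(payload)          # payload carries the connection weight
    except socket.timeout:
        accumulator = 0                      # no input for 200 ms: reset
        continue

    if accumulator > THRESHOLD:              # integrate-and-fire
        for ip, port, weight in DOWNSTREAM:  # notify each downstream neuron
            sock.sendto(str(weight).encode(), (ip, port))
        accumulator = 0                      # reset after firing

In a scheme like this, every neuron in the connectome would run its own small loop of this shape, so the network becomes a swarm of UDP processes rather than a single program.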
And the result?
It is claimed that the robot behaved in ways that are similar to observed C. elegans. Stimulation of the nose stopped forward motion. Touching the anterior and posterior touch sensors made the robot move forward and back accordingly. Stimulating the food sensor made the robot move forward.
Watch the video to see it in action. 
The key point is that there was no programming or learning involved to create the behaviors. The connectome of the worm was mapped and implemented as a software system and the behaviors emerge.
The connectome may only consist of 302 neurons, but it is self-stimulating and it is difficult to understand how it works – but it does.
Currently the connectome model is being transferred to a Raspberry Pi and a self-contained Pi robot is being constructed. It is suggested that it might have practical application as some sort of mobile sensor – exploring its environment and reporting back results. Given its limited range of behaviors, it seems unlikely to be of practical value, but given more neurons this might change.
  • Is the robot a C. elegans in a different body or is it something quite new? 
  • Is it alive?
These are questions for philosophers, but it does suggest that the ghost in the machine is just the machine.
As AI researchers, we still need to know whether the principle of implementing a connectome scales.

“Brain” In A Dish Acts As Autopilot Living Computer

By admin,

ORIGINAL: U of Florida
by Jennifer Viegas
Nov 27, 2012
A glass dish contains a “brain” — a living network of 25,000 rat brain cells connected to an array of 60 electrodes. (Image: University of Florida/Ray Carson)


A University of Florida scientist has grown a living “brain” that can fly a simulated plane, giving scientists a novel way to observe how brain cells function as a network. The “brain” — a collection of 25,000 living neurons, or nerve cells, taken from a rat’s brain and cultured inside a glass dish — gives scientists a unique real-time window into the brain at the cellular level. By watching the brain cells interact, scientists hope to understand what causes neural disorders such as epilepsy and to determine noninvasive ways to intervene.


2012 U of Florida - Brain Test

Thomas DeMarse holds a glass dish containing a living network of 25,000 rat brain cells connected to an array of 60 electrodes that can interact with a computer to fly a simulated F-22 fighter plane.

As living computers, they may someday be used to fly small unmanned airplanes or handle tasks that are dangerous for humans, such as search-and-rescue missions or bomb damage assessments.”

We’re interested in studying how brains compute,” said Thomas DeMarse, the UF assistant professor of biomedical engineering who designed the study. “If you think about your brain, and learning and the memory process, I can ask you questions about when you were 5 years old and you can retrieve information. That’s a tremendous capacity for memory. In fact, you perform fairly simple tasks that you would think a computer would easily be able to accomplish, but in fact it can’t.

IBM’s Brain-Inspired Computer Chip Comes from the Future

By admin,

ORIGINAL: IEEE Spectrum
By Jeremy Hsu
7 Aug 2014
Illustration: IBM
Brain-inspired computers have tickled the public imagination ever since Arnold Schwarzenegger’s character in “Terminator 2: Judgment Day” uttered: “My CPU is a neural net processor; a learning computer.” Today, IBM researchers backed by U.S. military funding unveiled a new computer chip that they say could revolutionize everything from smartphones to smart cars—and perhaps pave the way for neural networks to someday approach the computing capabilities of the human brain.
The IBM neurosynaptic computer chip consists of one million programmable neurons and 256 million programmable synapses conveying signals between the digital neurons. Each of the chip’s 4,096 neurosynaptic cores includes the entire computing package—memory, computation, and communication. They all operate in parallel based on “event-driven” computing, similar to the signal spikes and cascades of activity when human brain cells work in concert. Such architecture helps to bypass the bottleneck in traditional computing where program instructions and operation data cannot pass through the same route simultaneously.
We have not built a brain,” says Dharmendra Modha, chief scientist and founder of IBM’s Cognitive Computing group at IBM Research-Almaden. “But we have come the closest to creating learning function and capturing it in silicon in a scalable way to provide new computing capability that was not possible before.”
Such capability could enable new mobile device applications that emulate the human brain’s capability to swiftly process information about new events or other changes in real-world environments, whether that involves recognizing familiar sounds or a certain face in a moving crowd. IBM envisions its new chips working together with traditional computing devices as hybrid machines—providing an added dose of brain-like intelligence for smart car sensors, cloud computing applications or mobile devices such as smartphones. The chip’s architecture was detailed in a new paper published in the 7 August online issue of the journal Science.
With a total of 5.4 billion transistors the computer chip, named TrueNorth, is one of the largest CMOS chips ever built. Yet the chip uses just 70 milliwatts while running and has a power density of 20 milliwatts per square centimeter— almost 1/10,000th the power of most modern microprocessors. That brings the new chip’s efficiency much closer to the human brain’s astounding power consumption of just 20 watts, or less than the average incandescent light bulb.
This is literally a supercomputer the size of a postage stamp, light like a feather, and low power like a hearing aid,” Modha says.
One reason IBM was able to minimize power usage is that its chip’s computation only triggers when needed. Traditional computer chips have a clock that uses power to trigger and coordinate all the computational processes. But the IBM chip’s digital neurons can work together asynchronously when triggered by the signal spikes. IBM also designed its chip to have low power consumption by creating an on-chip network to interconnect all the neurosynaptic cores and building the chip with a low-power process technology used for making mobile devices.
It’s also a supercomputer that can easily scale up in size. IBM designed its computer chip architecture so that it could simply add new neurosynaptic cores within the chip. The chips themselves can be arranged in a repeatable 2-D tile pattern to create bigger machines—IBM has already tested that idea with a 16-chip configuration. That’s the “blueprint of a scalable supercomputer,” Modha says.
Past brain-inspired neural networks have used a combination of both analog and digital to represent the individual neurons. IBM chose to represent the neurons in digital form, which provided several advantages. (At least one other project, SpiNNaker, also depends on digital.)
First, the choice allowed IBM engineers to avoid the physical problems of dealing with differences in the manufacturing process or temperature fluctuations. Second, it provided a “one to one equivalence with software and hardware” that allowed the IBM software team to build applications on a simulator even before the physical chip had been designed and tested—applications that ran without problems on the finished chip. Third, the lack of analog circuitry allowed the IBM team to dramatically shrink the size of its circuits. (IBM fabricated its chip using Samsung’s 28-nm process technology—typical for manufacturing chips for mobile devices.)
IBM’s new chip represents the culmination of a decade of Modha’s personal research and almost six years of funding from the U.S. Defense Advanced Research Projects Agency (DARPA). Modha currently heads DARPA’s SyNAPSE project, a global effort that has committed US $53 million to making learning computers since 2008.
Now IBM has built an entire ecosystem around its new chip hardware and software, including a new programming language and a curriculum to teach coders everything they need to know. And the company is reaching out to potential customers, universities, government agencies, and IBM employees to fully explore the commercial applications of its chip technology.
Our long-term end goal is to build a ‘brain in a box’ with 100 billion synapses consuming 1 kilowatt of power,” Modha says. “In the near future, we’ll be looking at multiple things for empowering smartphones, mobile devices and cloud services with this technology.

IBM Chip Processes Data Similar to the Way Your Brain Does

By admin,

ORIGINAL: Tech Review
August 7, 2014
A chip that uses a million digital neurons and 256 million synapses may signal the beginning of a new era of more intelligent computers.
WHY IT MATTERS

Computers that can comprehend messy data such as images could revolutionize what technology can do for us.

New thinking: IBM has built a processor designed using principles at work in your brain.
A new kind of computer chip, unveiled by IBM today, takes design cues from the wrinkled outer layer of the human brain. Though it is no match for a conventional microprocessor at crunching numbers, the chip consumes significantly less power, and is vastly better suited to processing images, sound, and other sensory data.
IBM’s SyNapse chip, as it is called, processes information using a network of just over one million “neurons,” which communicate with one another using electrical spikes—as actual neurons do. The chip uses the same basic components as today’s commercial chips—silicon transistors. But its transistors are configured to mimic the behavior of both neurons and the connections—synapses—between them.
The SyNapse chip breaks with a design known as the von Neumann architecture that has underpinned computer chips for decades. Although researchers have been experimenting with chips modeled on brains—known as neuromorphic chips—since the late 1980s, until now all have been many times less complex, and not powerful enough to be practical (see “Thinking in Silicon”). Details of the chip were published today in the journal Science.
The new chip is not yet a product, but it is powerful enough to work on real-world problems. In a demonstration at IBM’s Almaden research center, MIT Technology Review saw one recognize cars, people, and bicycles in video of a road intersection. A nearby laptop that had been programmed to do the same task processed the footage 100 times slower than real time, and it consumed 100,000 times as much power as the IBM chip. IBM researchers are now experimenting with connecting multiple SyNapse chips together, and they hope to build a supercomputer using thousands.
When data is fed into a SyNapse chip it causes a stream of spikes, and its neurons react with a storm of further spikes. The just over one million neurons on the chip are organized into 4,096 identical blocks of 250, an arrangement inspired by the structure of mammalian brains, which appear to be built out of repeating circuits of 100 to 250 neurons, says Dharmendra Modha, chief scientist for brain-inspired computing at IBM. Programming the chip involves choosing which neurons are connected, and how strongly they influence one another. To recognize cars in video, for example, a programmer would work out the necessary settings on a simulated version of the chip, which would then be transferred over to the real thing.
In recent years, major breakthroughs in image analysis and speech recognition have come from using large, simulated neural networks to work on data (see “Deep Learning”). But those networks require giant clusters of conventional computers. As an example, Google’s famous neural network capable of recognizing cat and human faces required 1,000 computers with 16 processors apiece (see “Self-Taught Software”).
Although the new SyNapse chip has more transistors than most desktop processors, or any chip IBM has ever made, with over five billion, it consumes strikingly little power. When running the traffic video recognition demo, it consumed just 63 milliwatts of power. Server chips with similar numbers of transistors consume tens of watts of power—around 10,000 times more.
The efficiency of conventional computers is limited because they store data and program instructions in a block of memory that’s separate from the processor that carries out instructions. As the processor works through its instructions in a linear sequence, it has to constantly shuttle information back and forth from the memory store—a bottleneck that slows things down and wastes energy.
IBM’s new chip doesn’t have separate memory and processing blocks, because its neurons and synapses intertwine the two functions. And it doesn’t work on data in a linear sequence of operations; individual neurons simply fire when the spikes they receive from other neurons cause them to.
Horst Simon, the deputy director of Lawrence Berkeley National Lab and an expert in supercomputing, says that until now the industry has focused on tinkering with the von Neumann approach rather than replacing it, for example by using multiple processors in parallel, or using graphics processors to speed up certain types of calculations. The new chip “may be a historic development,” he says. “The very low power consumption and scalability of this architecture are really unique.”
One downside is that IBM’s chip requires an entirely new approach to programming. Although the company announced a suite of tools geared toward writing code for its forthcoming chip last year (see “IBM Scientists Show Blueprints for Brainlike Computing”), even the best programmers find learning to work with the chip bruising, says Modha: “It’s almost always a frustrating experience.” His team is working to create a library of ready-made blocks of code to make the process easier.
Asking the industry to adopt an entirely new kind of chip and way of coding may seem audacious. But IBM may find a receptive audience because it is becoming clear that current computers won’t be able to deliver much more in the way of performance gains. “This chip is coming at the right time,” says Simon.

Palm’s Jeff Hawkins is building a brain-like AI. He told us why he thinks his life’s work is right

By admin,

ORIGINAL: The Register
By Jack Clark,
29 Mar 2014

 

Inside a big bet on future machine intelligence
Feature Jeff Hawkins has bet his reputation, fortune, and entire intellectual life on one idea: that he understands the brain well enough to create machines with an intelligence we recognize as our own.

If his bet is correct, the Palm Pilot inventor will father a new technology, one that becomes the crucible in which a general artificial intelligence is one day forged. If his bet is wrong, then Hawkins will have wasted his life. At 56 years old that might sting a little.

I want to bring about intelligent machines, machine intelligence, accelerated greatly from where it was going to happen and I don’t want to be consumed – I want to come out at the other end as a normal person with my sanity,” Hawkins told The Register. “My mission, the mission of Numenta, is to be a catalyst for machine intelligence.
A catalyst, he says, staring intently at your correspondent, “is something which accelerates a reaction by a thousand or ten thousand or a million-fold, and doesn’t get consumed in the process.

His goal is ambitious, to put it mildly.

Before we dig deep into Hawkins’ idiosyncratic approach to artificial intelligence, it’s worth outlining the state of current AI research, why his critics have a right to be skeptical of his grandiose claims, and how his approach is different to the one being touted by consumer web giants such as Google.

Jeff Hawkins
AI researcher Jeff Hawkins
The road to a successful, widely deployable framework for an artificial mind is littered with failed schemes, dead ends, and traps. No one has come to the end of it, yet. But while major firms like Google and Facebook, and small companies like Vicarious, are striding over well-worn paths, Hawkins believes he is taking a new approach that could take him and his colleagues at his company, Numenta, all the way.
For over a decade, Hawkins has poured his energy into amassing enough knowledge about the brain and about how to program it in software. Now, he believes he is on the cusp of a great period of invention that may yield some very powerful technology.
Some people believe in him, others doubt him, and some academics El Reg has spoken with are suspicious of his ideas.
One thing we have established is that the work to which Hawkins has dedicated his life has become an influential touchstone within the red-hot modern artificial intelligence industry. His 2004 book, On Intelligence, appears to have been read by and inspired many of the most prominent figures in AI, and the tech Numenta is creating may trounce other commercial efforts by much larger companies such as Google, Facebook, and Microsoft.
I think Jeff is largely right in what he wrote in On Intelligence,” explains Hawkins’ former colleague Dileep George (now running his own AI startup, Vicarious, which recently received $40m in funding from Mark Zuckerberg, space pioneer Elon Musk, and actor-turned-VC Ashton Kutcher). “Hierarchical systems, associative memory, time and attention – I think all those ideas are correct.
One of Google’s most prominent AI experts agrees: “Jeff Hawkins … has served as inspiration to countless AI researchers, for which I give him a lot of credit,” explains former Google brain king and current Stanford Professor Andrew Ng.
Some organizations have taken Hawkins’ ideas and stealthily run with them, with schemes already underway at companies like IBM and federal organizations like DARPA to implement his ideas in silicon, paving the way for neuromorphic processors that process information in near–real time, develop representations of patterns, and make predictions. If successful, these chips will make Qualcomm‘s “neuromorphic” Zeroth processors look like toys.
He has also inspired software adaptations of his work, such as CEPT, which has built an intriguing natural language processing engine partly out of Hawkins’ ideas.
How we think: time and hierarchy

Hawkins’ idea is that to build systems that behave like the brain, you have to be able to 

  • take in a stream of changing information, 
  • recognize patterns in it without knowing anything about the input source, 
  • make predictions, and 
  • react accordingly.

The only context you have for this analysis is an ability to observe how the stream of data changes over time.

Though this sounds similar to some of the data processing systems being worked on by researchers at Google, Microsoft, and Facebook, it has some subtle differences.
Part of it is heritage – Hawkins traces his ideas back to his own understanding of how our neocortex works based on a synthesis of thousands of academic papers, chats with researchers, and his own work at two of his prior tech companies, Palm and Handspring, whereas the inspiration for most other approaches is neural networks based on technology from the 80s, which itself was refined out of a 1940s paper [PDF], “A Logical Calculus of the Ideas Immanent in Nervous Activity“.
That may be the right thing to do, but it’s not the way brains work and it’s not the principles of intelligence and it’s not going to lead to a system that can explore the world or systems that can have behavior,” Hawkins tells us.
So far he has outlined the ideas for this approach in his influential On Intelligence, plus a white paper published in 2011, a set of open source algorithms called NuPIC based on his Hierarchical Temporal Memory approach, and hundreds of talks given at universities and at companies ranging from Google to small startups.
Six easy pieces and the one true algorithm
Hawkins’ work has “popularized the hypothesis that much of intelligence might be due to one learning algorithm,” explains Ng.
Part of why Hawkins’ approach is so controversial is that rather than assembling a set of advanced software components for specific computing functions and lashing them together via ever more complex collections of software, Hawkins has dedicated his research to figuring out an implementation of a single, basic approach.
This approach stems from an observation that our brain doesn’t appear to come preloaded with any specific instructions or routines, but rather is an architecture that is able to take in, process, and store an endless stream of information and develop higher-order understandings out of that.
The manifestation of Hawkins’ approach is the Cortical Learning Algorithm, or CLA.
People used to think the neocortex was divided into sensory regions and motor regions,” he explains. “We know now that is not true – the whole neocortex is sensory and motor.”
Ultimately, the CLA will be a single system that involves both sensory processing and motor control – brain functions that Hawkins believes must be fused together to create the possibility of consciousness. For now, most work has been done on the sensory layer, though he has recently made some breakthroughs on the motor integration as well.

To build his Cortical Learning Algorithm system, Hawkins says, he has developed six principles that define a cortical-like processor. These traits are

  • “on-line learning from streaming data”, 
  • “hierarchy of memory regions”, 
  • “sequence memory”, 
  • “sparse distributed representations”,
  •  “all regions are sensory and motor”, and 
  • “attention”.
These principles are based on his own study of the work being done by neuroscientists around the world.

Now, Hawkins says, Numenta is on the verge of a breakthrough that could see the small company birth a framework for building intelligent machines. And unlike the hysteria that greeted AI in the 70s and 80s as the defense industry pumped money into AI, this time may not be a false dawn.

I am thrilled at the progress we’re making,” he told El Reg one sunny afternoon at Numenta’s whiteboard-crammed offices in Redwood City, California. “It’s accelerating. These things are compounding, and it feels like these things are all coming together very rapidly.”
The approach Numenta has been developing is producing better and better results, he says, and the CLA is gaining broader capabilities. In the past months, Hawkins has gone through a period of fecund creativity, and has solved one of the main problems that have bedeviled his system (temporal pooling), he says. He sees 2014 as a critical year for the company.
He is confident that he has bet correctly – but it’s been a hard road to get here.
That long, hard road
Hawkins’ interest in the brain dates back to his childhood, as does his frustration with how it is studied.
Growing up, Hawkins spent time with his father in an old shipyard on the north shore of Long Island, inventing all manner of boats; his father was an inventor with the enthusiasm for creativity of a Dr. Seuss character. In high school, the young Hawkins developed an interest in biophysics and, as he recounts in his book On Intelligence, tried to find out more about the brain at a local library.
My search for a satisfying brain book turned up empty. I came to realize that no one had any idea how the brain actually worked. There weren’t even any bad or unproven theories; there simply were none,” he wrote.
This realization sparked a lifelong passion to try to understand the grand, intricate system that makes people who they are, and to eventually model the brain and create machines built in the same manner.
Hawkins graduated from Cornell in 1979 with a Bachelor of Science in Electronic Engineering. After a stint at Intel, he applied to MIT to study artificial intelligence, but had his application rejected because he wanted to understand how brains work, rather than build artificial intelligence. After this he worked at laptop start-up GRiD Systems, but during this time “could not get my curiosity about the brain and intelligent machines out of my head,” so he did a correspondence course in physiology and ultimately applied to and was accepted in the biophysics program at the University of California, Berkeley.
When Hawkins started at Berkeley in 1986, his ambition to study a theory of the brain collided with the university administration, which disagreed with his course of study. Though Berkeley was not able to give him a course of study, Hawkins spent almost two years ensconced in the school’s many libraries reading as much of the literature available on neuroscience as possible.
This deep immersion in neuroscience became the lens through which Hawkins viewed the world, with his later business accomplishments – Palm, Handspring – all leading to valuable insights on how the brain works and why the brain behaves as it does.
The way Hawkins recounts his past makes it seem as if the creation of a billion-dollar business in Palm, and arguably the prototype of the modern smartphone in Handspring, was a footnote along his journey to understand the brain.
This makes more sense when viewed against what he did in 2002, when he founded the Redwood Neuroscience Institute (now a part of the University of California at Berkeley and an epicenter of cutting-edge neuroscience research in its own right), and in 2005 founded Numenta with Palm/Handspring collaborator Donna Dubinsky and cofounder Dileep George.
These decades gave Hawkins the business acumen, money, and perspective needed to make a go at crafting his foundation for machine intelligence.
Controversy
His media-savvy, confident approach appears to have stirred up some ill feeling among other academics who point out, correctly, that Hawkins hasn’t published widely, nor has he invented many ideas on his own.
Numenta has also had troubles, partly due to Hawkins’ idiosyncratic view on how the brain works.
In 2010, for example, Numenta cofounder Dileep George left to found his own company, Vicarious, to pick some of the more low-hanging fruit in the promising field of AI. From what we understand, this amicable separation stemmed from a difference of opinion between George and Hawkins, as George tended towards a more mathematical approach, and Hawkins to a more biological one.
Hawkins has also come in for a bit of a drubbing from the intelligentsia, with NYU psychology professor Gary Marcus dismissing Numenta’s approach in a New Yorker article titled “Steamrolled by Big Data“.
Other academics El Reg interviewed for this article did not want to be quoted, as they felt Hawkins’ lack of peer reviewed papers combined with his entrepreneurial persona reduced the credibility of his entire approach.
Hawkins brushes off these criticisms and believes they come down to a difference of opinion between him and the AI intelligentsia.
These are complex biological systems that were not designed by mathematical principles [that are] very difficult to formalize completely,” he told us.
This reminds me a bit of the beginning of the computer era,” he said. “If you go back to the 1930s and early 40s, when people first started thinking about computers they were really interested in whether an algorithm would complete, and they were looking for mathematical completeness, a mathematical proof that if you implemented something like an algorithm it would complete. Today, when we build a computer, no one sits around saying ‘Let’s look at the mathematical formalism of this computer.’ It reminds me a little about that. We still have people saying ‘You don’t have enough math here!’ There’s some people that just don’t like that.”
Hawkins’ confidence stems from the way Numenta has built its technology, which far from merely taking inspiration from the brain – as many other startups claim to do – is actively built as a digital implementation of everything Hawkins has learned about how the dense, napkin-sized sheet of cells that is our neocortex works.

I know of no other cortical theories/models that incorporate any of the following: 

  • active dendrites, 
  • differences between proximal and distal dendrites, 
  • synapse growth and decay, 
  • potential synapses, 
  • dendrite growth, 
  • depolarization as a mode of prediction, 
  • mini-columns, 
  • multiple types of inhibition and their corresponding inhibitory neurons, 
  • etcetera. 

The new temporal pooling mechanism we are working on requires metabotropic receptors in the locations they are, and are not, found. Again, I don’t know of any theories that have been reduced to practice that incorporate any, let alone all of these concepts,” he wrote in a post to the discussion mailing list for NuPic, an open source implementation of Numenta’s CLA, in February.

Deep learning is the new shallow learning
But for all the apparent rigorousness of Hawkins’ approach, during the years he has worked on the technology there has been a fundamental change in the landscape of AI development: the rise of the consumer internet giants, and with them the appearance of various cavernous stores of user data on which to train learning algorithms.
Google, for instance, was said in January of 2014 to be assembling the team required for the “Manhattan Project for AI“, according to a source who spoke anonymously to online publication Re/code. But Hawkins thinks that for all its grand aims, Google’s approach may be based on a flawed presumption.
The collective term for the approach pioneered by companies like Google, Microsoft, and Facebook is “Deep Learning“, but Hawkins fears it may be another blind path.
Deep learning could be the greatest thing in the world, but it’s not a brain theory,” he says.
Deep learning approaches, Hawkins says, encourage the industry to go about refining methods based on old technology, itself based on an oversimplified version of the neurons in a brain.
Because of the vast stores of user data available, the companies are all compelled to approach the quest of creating artificial intelligence through building machines that compute over certain types of data.
In many cases, much of the development at places like Google, Microsoft, and Facebook has revolved around vision – a dead end, according to Hawkins.
Where the whole community got tripped up – and I’m talking fifty years tripped up – is vision,” Hawkins explains. “They said, ‘Your eyes are moving all the time, your head is moving, the world is moving – let us focus on a simpler problem: spatial inference in vision’. This turns out to be a very small subset of what vision is. Vision turns out to be an inference problem. What that did is they threw out the most important part of vision – you must learn first how to do time-based vision.”
The acquisitions these companies have made speak to this apparent flaw.
Google, for instance, hired AI luminary and University of Toronto professor Geoff Hinton and his startup DNNresearch last year to have him apply his “Deep Belief Networks” approach to Google’s AI efforts.
In a talk given at the University of Toronto last year, Hinton said he believed more advanced AI should be based on existing approaches, rather than a rethought understanding of the brain.
The kind of neural inspiration I like is when making it more like the brain works better,” Hinton said. “There’s lots of people who say you ought to make it more like the brain – like Henry Markram [of the European Union’s brain simulation project], for example. He says, ‘Give me a billion dollars and I’ll make something like the brain,’ but he doesn’t actually know how to make it work – he just knows how to make something more and more like the brain. That seems to me not the right approach. What we should do is stick with things that actually work and make them more like the brain, and notice when making them more like the brain is actually helpful. There’s not much point in making things work worse.”
Hawkins vehemently disagrees with this point, and believes that basing approaches on existing methods means Hinton and other AI researchers are not going to be able to imbue their systems with the generality needed for true machine intelligence.
Another influential Googler agrees.
We have neuroscientists in our team so we can be biologically inspired but are not slavish to it,” Google Fellow Jeff Dean (creator of MapReduce, the Google File System, and now a figure in Google’s own “Brain Project” team, also known as its AI division) told us this year.
I’m surprised by how few people believe they need to understand how the brain works to build intelligent machines,” Hawkins says. “I’m disappointed by this.”
Hinton’s foundational technologies, for example, are Boltzmann machines – advanced “stochastic recurrent neural network” tools that try to mimic some of the characteristics of the brain – which sit at the heart of Hinton’s “Deep Belief Networks” (2006).
The neurons in a restricted Boltzmann machine are not even close [to the brain] – it’s not even an approximation,” Hawkins explains.
Even Google is not sure about which way to bet on how to build a mind, as illustrated by its buy of UK company “DeepMind Technologies” earlier this year.
That company’s founder, Demis Hassabis, has done detailed work on fundamental neuroscience, and has built technology out of this understanding. In 2010, it was reported that he mentioned both Hawkins’ Hierarchical Temporal Memory and Hinton’s Deep Belief Nets when giving a talk on viable general artificial intelligence approaches.
Facebook has gone down similar paths by hiring the influential artificial intelligence academic Yann LeCun to help it “predict what a user is going to do next,” among other things.
Microsoft has developed significant capabilities as well, with systems like the Siri-beater “Cortana” and various endeavors by the company’s research division, MSR.
Though the techniques these various researchers employ differ, they all depend on training a dataset over a large amount of information, and then selectively retraining it as information changes.

These AI efforts are built around dealing with problems backed up by large and relatively predictable datasets. This has yielded some incredible inventions, such as

  • reasonable natural language processing,
  • image detection, and
  • video tagging.

It has not and cannot, however, yield a framework for a general intelligence, as it doesn’t have the necessary architecture for data 

  • apprehension, 
  • analysis, 
  • retention, and 
  • recognition

that our own brains do, Hawkins claims.

Hawkins’ focus on time is why he believes his approach will win – something that the consumer internet giants are slowly waking up to.
It’s all about time
I would say that Hawkins is focusing more on how things unfold over time, which I think is very important,” Google’s research director Peter Norvig told El Reg via email, “while most of the current deep learning work assumes a static representation, unchanging over time. I suspect that as we scale up the applications (i.e., from still images to video sequences, and from extracting noun-phrase entities in text to dealing with whole sentences denoting actions), that there will be more emphasis on the unfolding of dynamic processes over time.
Another former Googler concurs, with Andrew Ng telling us via email, “Hawkins’ work places a huge emphasis on learning from sequences. While most deep learning researchers also think that learning from sequences is important, we just haven’t figured out ways to do so that we’re happy with yet.
Geoff Hinton echoes this praise. “He has great insights about the types of computation the brain must be doing,” he tells us – but argues that Jeff Hawkins’ actual algorithmic contributions have been “disappointing” so far.
An absolutely crucial ingredient to AI
Time “is one hundred per cent crucial” to the creation of true artificial intelligence, Hawkins tells us. “If you accept the fact intelligent machines are going to work on the principles of the neocortex, it is the entire thing, basically. The only way.

The brain does two things: 

  • it does inference, which is recognizing patterns, and 
  • it does behavior, which is generating patterns or generating motor behavior,

Hawkins explains. “Ninety-nine percent of inference is time-based – language, audition, touch – it’s all time-based. You can’t understand touch without moving your hand. The order in which patterns occur is very important.

Numenta’s approach relies on time. Its Cortical Learning Algorithm (white paper) amounts to an engine for the following (a simplified sketch appears after this list):

  • processing streams of information,
  • classifying them,
  • learning to spot differences, and
  • using time-based patterns to make predictions about the future.
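
As a rough illustration of that loop only, and emphatically not of the CLA/HTM algorithm itself, the toy Python snippet below keeps a first-order transition table over a stream of symbols, predicts the next item, and flags a surprise when the prediction fails:

# Toy illustration of the stream -> learn sequences -> predict loop described
# above. This is a simple first-order transition table, NOT Numenta's CLA/HTM;
# it only shows the shape of the processing loop.
from collections import defaultdict, Counter

transitions = defaultdict(Counter)   # previous value -> counts of next values
prev = None

def step(value):
    """Feed one item of the stream: check the prediction made at the previous
    step, then learn the observed transition and predict the next item."""
    global prev
    surprise = None
    if prev is not None:
        predicted = transitions[prev].most_common(1)
        surprise = not predicted or predicted[0][0] != value
        transitions[prev][value] += 1        # learn from what actually happened
    prev = value
    best = transitions[value].most_common(1)
    prediction = best[0][0] if best else None
    return prediction, surprise

stream = ["A", "B", "C", "A", "B", "C", "A", "B", "X"]
for item in stream:
    print(item, step(item))   # the "X" at the end shows up as a surprise

The real CLA replaces this single lookup table with a hierarchy of sparse, time-based sequence memories, but the shape of the loop (observe, compare against the prediction, learn, predict again) is what the list above describes.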
As mentioned above, there are several efforts underway at companies like IBM and federal research agencies like DARPA to implement Hawkins’ systems in custom processors, and these schemes all recognize the importance of Hawkins’ reliance on time.
What I found intriguing about [his approach] – time is not an afterthought. In all of these [other] things, time has been an afterthought,” one source currently working on implementing Hawkins’ ideas tells us.

So far, Hawkins has used his system to make predictions of diverse phenomena such as 

  • hourly energy use and 
  • stock trading volumes, and 
  • to detect anomalies in data streams.

Numenta’s commercial product, Grok, detects anomalies in computer servers running on Amazon’s cloud service.

Hawkins described to us one way to understand the power of this type of pattern recognition. “Imagine you are listening to a musician,” he suggested. “After hearing her play for several days, you learn the kind of music she plays, how talented she is, how much she improvises, and how many mistakes she makes. Your brain learns her style, and then has expectations about what she will play and what it will sound like. As you continue to listen to her play, you will detect if her style changes, if the type of music she plays changes, or if she starts making more errors. The same kind of patterns exist in machine-generated data, and Grok will detect changes.
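
Grok’s actual machinery is the HTM/CLA technology described in this article, but the behaviour Hawkins describes (learn what a stream normally looks like, then flag when its character changes) can be illustrated with a much simpler stand-in. The snippet below is a plain rolling z-score detector over a hypothetical server metric; it is only meant to show the kind of question Grok answers, not how it answers it.

# Generic stand-in for the behaviour described above: learn what a metric
# stream normally looks like and flag when it changes. This is a plain
# rolling z-score detector, not Numenta's HTM-based Grok algorithm.
from collections import deque
import math

class RollingAnomalyDetector:
    def __init__(self, window=60, threshold=3.0):
        self.history = deque(maxlen=window)   # recent "normal" behaviour
        self.threshold = threshold            # how many std-devs counts as anomalous

    def observe(self, value):
        """Return True if the new value deviates strongly from recent history."""
        anomalous = False
        if len(self.history) >= 10:           # need some history first
            mean = sum(self.history) / len(self.history)
            var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
            std = math.sqrt(var) or 1e-9
            anomalous = abs(value - mean) / std > self.threshold
        self.history.append(value)
        return anomalous

# e.g. CPU load samples: steady, then a sudden spike
detector = RollingAnomalyDetector()
samples = [0.30, 0.32, 0.29, 0.31, 0.30, 0.33, 0.28, 0.31, 0.30, 0.32,
           0.31, 0.29, 2.50]
for i, s in enumerate(samples):
    if detector.observe(s):
        print(f"anomaly at sample {i}: {s}")
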
Here again the wider AI community appears to be dovetailing with Hawkins’ ideas, with one of Andrew Ng‘s former Stanford students, Honglak Lee, having published a paper called “A classification-based polyphonic piano transcription approach using learned feature representations” in 2011. However, the method of implementation is different.

Obscurity through biology

Part of the reason why Hawkins’ technology is not more widely known is because for current uses it is hard for it to demonstrate a vast lead over rival approaches. For all of Hawkins’ belief in the tech, it is hard to demonstrate a convincing killer application for it that other approaches cannot match. The point, Hawkins says, is that the CLA’s internal structure gets rid of some of the stumbling blocks that exist in the future of other approaches.
Hawkins believes the CLA’s implicit dependence on time means that eventually it will become the dominant approach.
At the bottom of the [neocortex’s] hierarchy are fast-changing patterns and they form sequences – some of them are predictable and some of them are not – and what the neocortex is doing is trying to understand the set of patterns here and give it a constant representation – a name for the sequence, if you will – and it forms that as the next level of the hierarchy so the next level up is more stable,” Hawkins explains.
Changing patterns lead to changing representations in the hierarchy that are more stable, and then it learns the changes in those patterns, and as you go up the hierarchy it forms more and more stable representations of the world and they also tend to be independent of your body position and your senses.
Illustration: a comparison between Hawkins’ Hierarchical Temporal Memory cells (right), a neural network neuron (center), and the brain’s own neuron (left).

He believes his technology is more effective than the approaches taken by his rivals due to its use of sparse distributed representations as an input device to a storage system he terms “sequence memory“.

Sequence memory refers to how information makes its way into the brain as a stream of information that comes in from both external stimuli and internal stimuli, such as signals from the broader body.
Sparse Distributed Representations (SDRs) are partially based on the work of mathematician Pentti Kanerva on “Sparse Distributed Memory” [PDF].
They refer to how the brain represents and stores information. They are designed to mimic the way our brain is believed to encode memories, which is through neuron firings across a very large area in response to inputs. To achieve this, SDRs are written, roughly, as a 2000-bit string of which perhaps two percent are active. This means that you don’t need to read all active bits in an SDR to say that it is similar to another, because it merely needs to share a few of the activated bits to be considered similar, due to the sparsity.
Hawkins believes SDRs give input data inherent meaning through this representation approach.

This means that if two vectors have 1s in the same position, they are semantically similar. Vectors can therefore be expressed in degrees of similarity rather than simply being identical or different. These large vectors can be stored accurately even using a subsampled index of, say, 10 of 2,000 bits. This makes SDR memory fault tolerant to gaps in data. SDRs also exhibit properties that reliably allow the neocortex to determine if a new input is unexpected,” the company’s commercial website for Grok says.
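
The numbers in that description (a roughly 2,000-bit vector with about two per cent of bits active, compared by counting shared active bits, and identifiable from a subsample of around 10 bits) are easy to play with directly. The following small Python sketch illustrates those properties only; it is not Numenta’s SDR implementation.

# Small illustration of the SDR properties quoted above: 2,000-bit vectors
# with ~2% active bits, compared by counting shared active bits.
import random

BITS = 2000
ACTIVE = 40          # roughly 2% of 2000

def random_sdr(rng):
    """An SDR represented as the set of indices of its active bits."""
    return set(rng.sample(range(BITS), ACTIVE))

def overlap(a, b):
    """Semantic similarity = number of shared active bits."""
    return len(a & b)

rng = random.Random(42)
sdr_a = random_sdr(rng)

# A noisy version of sdr_a: drop 5 active bits, add 5 new ones.
noisy = set(rng.sample(sorted(sdr_a), ACTIVE - 5))
noisy |= set(rng.sample(sorted(set(range(BITS)) - sdr_a), 5))

unrelated = random_sdr(rng)

print(overlap(sdr_a, noisy))      # high overlap: ~35 of 40 bits shared
print(overlap(sdr_a, unrelated))  # near zero: two random sparse patterns

# Even a small random subsample of the active bits is enough to identify the
# pattern, which is the fault tolerance the Grok description mentions.
subsample = set(rng.sample(sorted(sdr_a), 10))
print(overlap(subsample, sdr_a), overlap(subsample, unrelated))  # 10 vs ~0

Because two unrelated sparse patterns share almost no active bits by chance, even a partial match is strong evidence of semantic similarity, which is the fault tolerance the quote refers to.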

But what are the drawbacks?
So if Hawkins thinks he has the theory and is on the way to building the technology, and other companies are implementing it, then why are we even calling what he is doing a “bet“? The answer comes down to credibility.
Hawkins’ idiosyncratic nature and decision to synthesize insights from two different fields – neuroscience and computer science – are his strengths, but also his drawbacks.
No one knows how the cortex works, so there is no way to know if Jeff is on the right track or not,” Dr. Terry Sejnowski, the laboratory head of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies, tells us. “To the extent that [Hawkins] incorporates new data into his models he may have a shot, and there will be a flood of data coming from the BRAIN Initiative that was announced by Obama last April.”
Hawkins says that this response is typical of the academic community, and that there is enough data available to learn about the brain. You just have to look for it.
We’re not going to replicate the neocortex, we’re not going to simulate the neocortex, we just need to understand how it works in sufficient detail so we can say ‘A-ha!’ and build things like it,” Hawkins says. “There is an incredible amount of unassimilated data that exists. Fifty years of papers. Thousands of papers a year. It’s unbelievable, and it’s always the next set of papers that people think is going to do it. … it’s not true that you have to wait for that stuff.”
The root of the problems Hawkins faces may be his approach, which stems more from biology than from mathematics. His old colleague and cofounder of Numenta, Dileep George, confirms this.
I think Jeff is largely right in what he wrote in On Intelligence,” George told us. “There are different approaches on how to bring those ideas. Jeff has an angle on it; we have a different angle on it; the rest of the community have another perspective on it.
These ideas are echoed by Google’s Norvig. “Hawkins, at least in his general-public-facing-persona, seems to be more driven by duplicating what the brain does, while the deep learning researchers take some concepts from the brain, but then mostly are trying to optimize mathematical equations,” he told us via email.
I live in the middle,” Hawkins explains. “Where I know the neuroscience details very very well, and I have a theoretical framework, and I bounce back and forth between these over and over again.

The future

Hawkins reckons that what he is doing today “is maybe 5 per cent of how humans learn.”
He believes that during the coming year he will begin work on the next major area of development for his technology: action.
For Hawkins’ machines to gain independence – the ability, say, to not only recognize and classify patterns, but actively tune themselves to hunt for specific bits of information – the motor component needs to be integrated, he explains.
What we’ve proven so far – I say built and tested and put into a product – is pure sensor. It’s like an ear listening to sounds that doesn’t have a chance to move,” he tells us.
If you can add in the motor component, “an entire world opens up,” he says.
For example, I could have something like a web bot – an internet crawler. Today’s web crawlers are really stupid; they’re like wall-following rats. They just go up and down the length,” he says.
If I wanted to look and understand the web, I could have a virtual system that is basically moving through cyberspace thinking about ‘What is the structure here? How do I model this?’ And so that’s an example of a behavioral system that has no physical presence. It basically says, ‘OK, I’m looking at this data, now where do I go next to look? Oh, I’m going to follow this link and do that in an intelligent way’.
By creating this technology, Hawkins hopes to dramatically accelerate the speed with which generally applicable artificial intelligence is developed and integrated into our world.
It’s taken a lot to get here, and the older Hawkins gets and the more rival companies spend, the bigger the stakes get. As of 2014, he is still betting his life on the fact that he is right and they are wrong. ®