Category: Senses


The future of AI is neuromorphic. Meet the scientists building digital ‘brains’ for your phone

By Hugo Angel,

Neuromorphic chips are being designed to specifically mimic the human brain – and they could soon replace CPUs
Image: brain activity map (Neuroscape Lab)
AI services like Apple’s Siri and others operate by sending your queries to faraway data centers, which send back responses. The reason they rely on cloud-based computing is that today’s electronics don’t come with enough computing power to run the processing-heavy algorithms needed for machine learning. The typical CPUs most smartphones use could never handle a system like Siri on the device. But Dr. Chris Eliasmith, a theoretical neuroscientist and co-CEO of Canadian AI startup Applied Brain Research, is confident that a new type of chip is about to change that.
“Many have suggested Moore’s law is ending and that means we won’t get ‘more compute’ cheaper using the same methods,” Eliasmith says. He’s betting on the proliferation of ‘neuromorphics’ — a type of computer chip that is not yet widely known but is already being developed by several major chip makers.
Traditional CPUs process instructions based on “clocked time” – information is transmitted at regular intervals, as if managed by a metronome. By packing in digital equivalents of neurons, neuromorphics communicate in parallel (and without the rigidity of clocked time) using “spikes” – bursts of electric current that can be sent whenever needed. Just like our own brains, the chip’s neurons communicate by processing incoming flows of electricity – each neuron able to determine from the incoming spike whether to send current out to the next neuron.
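The spike-based scheme described above can be sketched in a few lines. What follows is an illustrative leaky integrate-and-fire model — a standard textbook abstraction of a spiking neuron — not the circuit of any particular neuromorphic chip; the threshold and leak values are arbitrary choices for the example.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: incoming current accumulates
# until a threshold is crossed, at which point the neuron emits a spike and
# resets. No clock forces activity; the neuron is silent unless driven.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Integrate incoming current each timestep; emit a spike (1) when the
    membrane potential crosses threshold, then reset to zero."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # spike: send current downstream
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)   # stay silent between spikes
    return spikes

# A sustained burst of input eventually drives the neuron over threshold:
print(simulate_lif([0.4, 0.4, 0.4, 0.0, 0.0]))  # → [0, 0, 1, 0, 0]
```

The key contrast with clocked logic is visible in the output: information is carried by *when* the spike happens, not by values sampled at every tick.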
What makes this a big deal is that these chips require far less power to process AI algorithms. For example, one neuromorphic chip made by IBM contains five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of power. An Intel processor would use anywhere from 35 to 140 watts, or up to 2000 times more power.
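The arithmetic behind the “up to 2000 times” figure is easy to check against the numbers quoted above:

```python
# Checking the power comparison: 70 mW for the neuromorphic chip versus a
# typical 35-140 W for a conventional desktop processor.
neuromorphic_watts = 0.070       # 70 milliwatts
cpu_low_watts, cpu_high_watts = 35, 140

print(round(cpu_high_watts / neuromorphic_watts))  # → 2000
print(round(cpu_low_watts / neuromorphic_watts))   # → 500
```

So even at the low end of the CPU range, the quoted neuromorphic chip draws several hundred times less power.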
Eliasmith points out that neuromorphics aren’t new and that their designs have been around since the 80s. Back then, however, the designs required specific algorithms be baked directly into the chip. That meant you’d need one chip for detecting motion, and a different one for detecting sound. None of the chips acted as a general processor in the way that our own cortex does.
This was partly because there was no way for programmers to design algorithms that could do much with a general purpose chip. So even as these brain-like chips were being developed, building algorithms for them remained a challenge.
 
Eliasmith and his team are keenly focused on building tools that would allow a community of programmers to deploy AI algorithms on these new cortical chips.
Central to these efforts is Nengo, a compiler that developers can use to build their own algorithms for AI applications that will operate on general purpose neuromorphic hardware. A compiler is a software tool that translates the code programmers write into the low-level instructions that get hardware to actually do something. What makes Nengo useful is its use of the familiar Python programming language – known for its intuitive syntax – and its ability to deploy algorithms on many different hardware platforms, including neuromorphic chips. Pretty soon, anyone with an understanding of Python could be building sophisticated neural nets made for neuromorphic hardware.
“Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo,” Peter Suma, a trained computer scientist and the other co-CEO of Applied Brain Research, tells me.
Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise for being the most complex brain model ever simulated on a computer. Spaun demonstrated that computers could be made to interact fluidly with the environment and perform human-like cognitive tasks like recognizing images and controlling a robot arm that writes down what it sees. The machine wasn’t perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9,000 times faster, using less energy than it would on conventional CPUs – and by the end of 2017, all of Spaun will be running on neuromorphic hardware.
Eliasmith won NSERC’s John C. Polanyi Award for that project — Canada’s highest recognition for a breakthrough scientific achievement — and once Suma came across the research, the pair joined forces to commercialize these tools.
“While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context-aware AIs,” says Suma. He points out that while today’s AIs like Siri remain offline until explicitly called into action, we’ll soon have artificial agents that are ‘always on’ and ever-present in our lives.
“Imagine a Siri that listens to and sees all of your conversations and interactions. You’ll be able to ask it things like: ‘Who did I have that conversation with about doing the launch for our new product in Tokyo?’ or ‘What was that idea for my wife’s birthday gift that Melissa suggested?’” he says.
When I raised concerns that some company might then have an uninterrupted window into even the most intimate parts of my life, I was reminded that because the AI would run locally on the device, there’s no need for that information to touch a server owned by a big company. And for Eliasmith, this ‘always on’ component is a necessary step towards true machine cognition. “The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to is the fact that the latter always operate in real-time. Bodies and brains are built to work with the physics of the world,” he says.
Already, major efforts across the IT industry are heating up to get AI services into the hands of users. Companies like Apple, Facebook, Amazon, and even Samsung are developing conversational assistants they hope will one day become digital helpers.
ORIGINAL: Wired
Monday 6 March 2017

First Human Tests of Memory Boosting Brain Implant—a Big Leap Forward

By Hugo Angel,

“You have to begin to lose your memory, if only bits and pieces, to realize that memory is what makes our lives. Life without memory is no life at all.” — Luis Buñuel Portolés, Filmmaker
Image Credit: Shutterstock.com
Every year, hundreds of millions of people experience the pain of a failing memory.
The reasons are many:

  • traumatic brain injury, which haunts a disturbingly high number of veterans and football players; 
  • stroke or Alzheimer’s disease, which often plagues the elderly; or 
  • even normal brain aging, which inevitably touches us all.
Memory loss seems to be inescapable. But one maverick neuroscientist is working hard on an electronic cure. Funded by DARPA, Dr. Theodore Berger, a biomedical engineer at the University of Southern California, is testing a memory-boosting implant that mimics the kind of signal processing that occurs when neurons are laying down new long-term memories.
The revolutionary implant, already shown to help memory encoding in rats and monkeys, is now being tested in human patients with epilepsy — an exciting first that may blow the field of memory prosthetics wide open.
To get here, however, the team first had to crack the memory code.

Deciphering Memory
From the very onset, Berger knew he was facing a behemoth of a problem.
“We weren’t looking to match everything the brain does when it processes memory, but to at least come up with a decent mimic,” said Berger.
“Of course people asked: can you model it and put it into a device? Can you get that device to work in any brain? It’s those things that lead people to think I’m crazy. They think it’s too hard,” he said.
But the team had a solid place to start.
The hippocampus, a region buried deep within the folds and grooves of the brain, is the critical gatekeeper that transforms memories from short-lived to long-term. In dogged pursuit, Berger spent most of the last 35 years trying to understand how neurons in the hippocampus accomplish this complicated feat.
“At its heart, a memory is a series of electrical pulses that occur over time, generated by a given number of neurons,” said Berger. “This is important — it suggests that we can reduce it to mathematical equations and put it into a computational framework,” he said.
Berger hasn’t been alone in his quest.
By listening to the chatter of neurons as an animal learns, teams of neuroscientists have begun to decipher the flow of information within the hippocampus that supports memory encoding. Key to this process is a strong electrical signal that travels from CA3, the “input” part of the hippocampus, to CA1, the “output” node.
“This signal is impaired in people with memory disabilities,” said Berger, “so of course we thought that if we could recreate it using silicon, we might be able to restore — or even boost — memory.”

Bridging the Gap
Yet the brain’s memory code proved to be extremely tough to crack.
The problem lies in the non-linear nature of neural networks: signals are often noisy and constantly overlap in time, which leads to some inputs being suppressed and others accentuated. In a network of hundreds of thousands of neurons, any small change can be greatly amplified, leading to vastly different outputs.
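This sensitivity can be made concrete with a toy nonlinear system. The logistic map below is a stand-in chosen purely for illustration — it is not a model of hippocampal dynamics — but it shows the behavior in question: a one-part-in-a-million change to the input opens up an order-one gap between trajectories.

```python
# Sensitivity to tiny input changes in a nonlinear system. The logistic map
# is only a toy stand-in for the kind of amplification described above.

def max_divergence(x0, eps=1e-6, r=3.9, steps=60):
    """Run two trajectories from inputs differing by eps; return the largest
    gap that opens up between them over the run."""
    a, b = x0, x0 + eps
    biggest = 0.0
    for _ in range(steps):
        a = r * a * (1 - a)  # nonlinear update, trajectory 1
        b = r * b * (1 - b)  # nonlinear update, trajectory 2
        biggest = max(biggest, abs(a - b))
    return biggest

# A perturbation of 0.000001 grows into a difference of order 1:
print(max_divergence(0.2) > 0.1)  # → True
```

In a linear system the gap would stay proportional to the perturbation; the repeated nonlinear update is what makes the box "chaotic".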
“It’s a chaotic black box,” laughed Berger.
With the help of modern computing techniques, however, Berger believes he may have a crude solution in hand. His proof?
Use his mathematical theorems to program a chip, and then see if the brain accepts the chip as a replacement — or additional — memory module.
Berger and his team began with a simple task using rats. They trained the animals to push one of two levers to get a tasty treat, and recorded the series of CA3 to CA1 electronic pulses in the hippocampus as the animals learned to pick the correct lever. The team carefully captured the way the signals were transformed as the session was laid down into long-term memory, and used that information — the electrical “essence” of the memory — to program an external memory chip.
They then injected the animals with a drug that temporarily disrupted their ability to form and access long-term memories, causing the animals to forget the reward-associated lever. Next, implanting microelectrodes into the hippocampus, the team pulsed CA1, the output region, with their memory code.
The results were striking — powered by an external memory module, the animals regained their ability to pick the right lever.
Encouraged by the results, Berger next tried his memory implant in monkeys, this time focusing on a brain region called the prefrontal cortex, which receives and modulates memories encoded by the hippocampus.
Placing electrodes into the monkeys’ brains, the team showed the animals a series of semi-repeated images and captured the prefrontal cortex’s activity when the animals recognized an image they had seen earlier. Then, with a hefty dose of cocaine, the team inhibited that particular brain region, disrupting the animals’ recall.
Next, using electrodes programmed with the “memory code,” the researchers guided the brain’s signal processing back on track — and the animal’s performance improved significantly.
A year later, the team further validated their memory implant by showing it could also rescue memory deficits due to hippocampal malfunction in the monkey brain.

A Human Memory Implant
Last year, the team cautiously began testing their memory implant prototype in human volunteers.
Because of the risks associated with brain surgery, the team recruited 12 patients with epilepsy who already had electrodes implanted in their brains to track down the source of their seizures.
“Repeated seizures steadily destroy critical parts of the hippocampus needed for long-term memory formation,” explained Berger. So if the implant works, it could benefit these patients as well.
The team asked the volunteers to look through a series of pictures, and then recall which ones they had seen 90 seconds later. As the participants learned, the team recorded the firing patterns in both CA1 and CA3 — that is, the input and output nodes.
Using these data, the team extracted an algorithm — a specific human “memory code” — that could predict the pattern of activity in CA1 cells based on CA3 input. Compared to the brain’s actual firing patterns, the algorithm generated correct predictions roughly 80% of the time.
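To make the 80% figure concrete, here is a deliberately simplified sketch of the evaluation idea: posit a mapping from input patterns to output patterns, then score its predictions against noisy recordings. Berger’s actual model is a nonlinear multi-input, multi-output model fit to recorded spike trains; the data, the majority-vote rule, and the 20% noise level below are all invented for illustration.

```python
# Toy illustration of scoring a "memory code" model: how often does the
# predicted output pattern match what was actually recorded? Everything here
# (the majority rule, the noise level) is made up for the example.
import random

random.seed(0)

def true_transform(pattern):
    # Hypothetical "ground truth" mapping: output fires if most inputs fire.
    return 1 if sum(pattern) > len(pattern) / 2 else 0

# Simulated "recordings": input patterns paired with outputs, where 20% label
# noise stands in for biological variability the model cannot capture.
data = []
for _ in range(1000):
    p = [random.randint(0, 1) for _ in range(5)]
    label = true_transform(p)
    if random.random() < 0.2:
        label = 1 - label  # noise flips the recorded output
    data.append((p, label))

# Score the noiseless rule against the noisy recordings.
correct = sum(1 for p, label in data if true_transform(p) == label)
accuracy = correct / len(data)
print(f"prediction accuracy: {accuracy:.0%}")  # close to 80% by construction
```

The point of the sketch is only that an imperfect but well-above-chance score like 80% is exactly what you expect when a deterministic model confronts noisy biology.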
“It’s not perfect,” said Berger, “but it’s a good start.”
Using this algorithm, the researchers have begun to stimulate the output cells with an approximation of the transformed input signal.
“We have already used the pattern to zap the brain of one woman with epilepsy,” said Dr. Dong Song, an associate professor working with Berger. But he remained coy about the result, saying only that although it is promising, it’s still too early to tell.
Song’s caution is warranted. Unlike the motor cortex, with its clear structured representation of different body parts, the hippocampus is not organized in any obvious way.
“It’s hard to understand why stimulating input locations can lead to predictable results,” said Dr. Thomas McHugh, a neuroscientist at the RIKEN Brain Science Institute. It’s also difficult to tell whether such an implant could save the memory of those who suffer from damage to the output node of the hippocampus.
“That said, the data is convincing,” McHugh acknowledged.
Berger, on the other hand, is ecstatic. “I never thought I’d see this go into humans,” he said.
But the work is far from done. Within the next few years, Berger wants to see whether the chip can help build long-term memories in a variety of different situations. After all, the algorithm was based on the team’s recordings of one specific task — what if the so-called memory code is not generalizable, instead varying based on the type of input that it receives?
Berger acknowledges that it’s a possibility, but he remains hopeful.
“I do think that we will find a model that’s a pretty good fit for most conditions,” he said. “After all, the brain is restricted by its own biophysics — there are only so many ways that electrical signals in the hippocampus can be processed.”
“The goal is to improve the quality of life for somebody who has a severe memory deficit,” said Berger. “If I can give them the ability to form new long-term memories for half the conditions that most people live in, I’ll be happy as hell, and so will most patients.”
ORIGINAL: Singularity Hub

Apple co-founder on artificial intelligence: ‘The future is scary and very bad for people’

By admin,

Steve Wozniak speaks at the Worldwebforum in Zurich on March 10. (Steffen Schmidt/European Pressphoto Agency)

The Super Rich Technologists Making Dire Predictions About Artificial Intelligence club gained another fear-mongering member this week: Apple co-founder Steve Wozniak. In an interview with the Australian Financial Review, Wozniak joined original club members Bill Gates, Stephen Hawking and Elon Musk by making his own casually apocalyptic warning about machines superseding the human race.

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people,” Wozniak said. “If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.”

[Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’]

Doling out paralyzing chunks of fear like gumdrops to sweet-toothed children on Halloween, Woz continued: “Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that … But when I got that thinking in my head about if I’m going to be treated in the future as a pet to these smart machines … well I’m going to treat my own pet dog really nice.”

Seriously? Should we even get up tomorrow morning, or just order pizza, log onto Netflix and wait until we find ourselves looking through the bars of a dog crate? Help me out here, man!

Wozniak’s warning seemed to follow the exact same story arc as Season 1 Episode 2 of Adult Swim‘s “Rick and Morty Show.” Not accusing him of apocalyptic plagiarism or anything; just noting.

For what it’s worth, Wozniak did outline a scenario by which super-machines will be stopped in their human-enslaving tracks. Citing Moore’s Law — “the pattern whereby computer processing speeds double every two years” — Wozniak pointed out that at some point the silicon transistors that allow processing speeds to increase as they shrink will eventually reach the size of an atom, according to the Financial Review.
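A rough version of Wozniak’s endpoint can be worked out. Treating each two-year doubling as one halving of feature size is a simplification (it is transistor density, not feature size, that doubles), and the 14 nm starting node and 0.2 nm silicon atom are round-number assumptions for illustration.

```python
# Back-of-the-envelope: how many two-year halvings until transistor features
# reach atomic scale? The starting node and atom size are rough assumptions.
feature_nm = 14.0     # roughly the leading process node around this time
atom_nm = 0.2         # approximate diameter of a silicon atom
years = 0
while feature_nm > atom_nm:
    feature_nm /= 2   # treat one Moore's-law step as halving feature size
    years += 2
print(years)  # → 14
```

Under these assumptions the wall arrives within a couple of decades, which is the shape of the argument Wozniak is making.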

Any smaller than that, and scientists will need to figure out how to manipulate subatomic particles — a field commonly referred to as quantum computing — which has not yet been cracked, Quartz notes.

Wozniak’s predictions represent a bit of a turnaround, the Financial Review pointed out. While he previously rejected the predictions of futurists such as the pill-popping Ray Kurzweil, who argued that super machines will outpace human intelligence within several decades, Wozniak told the Financial Review that he came around after he realized the prognostication was coming true.

“Computers are going to take over from humans, no question,” Wozniak said, nearly prompting me to tender my resignation and start watching this cute puppies compilation video until forever.

“I hope it does come, and we should pursue it because it is about scientific exploring,” he added. “But in the end we just may have created the species that is above us.”

In January, during a Reddit AMA, Gates wrote: “I am in the camp that is concerned about super intelligence.” His comment came a month after Hawking said artificial intelligence “could spell the end of the human race.”

British inventor Clive Sinclair has also said he thinks artificial intelligence will doom humankind. “Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive,” he told the BBC. “It’s just an inevitability.”

Musk was among the earliest members of this club. Speaking at the MIT aeronautics and astronautics department’s Centennial Symposium in October, the Tesla founder said: “With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like, yeah, he’s sure he can control the demon. Didn’t work out.”



ORIGINAL: Washington Post

March 24, 2015

The networked beauty of forests (TED) & Mother Tree – Suzanne Simard

By admin,

Learn about the sophisticated underground fungal network trees use to communicate and even share nutrients. UBC professor Suzanne Simard leads us through the forest to investigate this underground community.


Deforestation causes more greenhouse gas emissions than all trains, planes and automobiles combined. What can we do to change this contributor to global warming? Suzanne Simard examines how the complex, symbiotic networks of our forests mimic our own neural and social networks — and how those connections might make all the difference.

ORIGINAL: TED Lessons

“Plants have our five senses and fifteen more”: Stefano Mancuso, plant neurobiologist

By admin,

Photo: Xavier Gómez
Plant intelligence
Plants represent 98.7% of life on the planet, yet only 3% of scientists study them. Just 3% to study nearly the totality of life! Absurd. Mancuso is one of them, with more than 250 scientific papers on the subject, and he has just published, with the journalist Alessandra Viola, Sensibilidad e inteligencia en el mundo vegetal (Galaxia Gutenberg), recounting the most recent studies and findings, his own and others’, which show that plants communicate with one another and with animals, sleep, memorize, learn, care for their offspring, make decisions, and are even capable of manipulating other species. A world waiting to be discovered.

Do plants feel?
Much more than we animals feel. And that is not my opinion or impression; it is scientific evidence.

So you are not some visionary.
No. We know that they perceive electrical changes, magnetic fields, chemical gradients, the presence of pathogens…

Do they hear, see…?
Plants have our five senses and fifteen more. They do not have eyes and ears like ours, but they perceive all the gradations of light and sound vibrations.

And do they like music?
Certain frequencies, especially low ones (between 100 Hz and 500 Hz), promote seed germination and plant growth toward the source of the sound, which corresponds to natural frequencies such as running water; but talking or singing to plants is a waste of time.

Are there sounds underground?
It has been discovered that roots produce sound and are able to perceive it. That suggests the existence of an underground channel of communication.

Nor do they have a nose.
Their smell and taste are very sensitive. They perceive chemical molecules; that is their mode of communication, and every odor is a message. And they have touch: just watch in time-lapse how a climbing plant feels its way.

And you say they communicate?
They communicate with other plants of the same species through volatile chemical molecules, sending danger messages, for example. If an insect is eating its leaves, the plant instantly produces certain molecules that spread for kilometers and warn that an attack is under way.

And how do they defend themselves?
In many ways. They can increase their poisonous molecules or produce proteins the insect cannot digest. Many plants, when eaten by an insect, emit certain substances to attract other insects that prey on the attacker.

That is communication between species.
Plants produce many chemical molecules whose sole purpose is to manipulate the brains of animals; drugs fall within that context.

An example…
Recent studies show that an orange or lemon tree in flower acts differently depending on the amount of pollen the insect is carrying. If it carries a lot of pollen, the tree increases the caffeine in its nectar to activate the insect’s brain, so that it remembers the plant and returns. If it carries little pollen, it cuts off the caffeine.

Plant intelligence?
If intelligence is the ability to solve problems, plants are capable of responding appropriately to external and internal stimuli; that is, they are aware of what they are and of what surrounds them.

That is a lot!
We have ignored how 99.7% of life on the planet works, and we cannot afford to, because our dependence on the plant kingdom includes, besides air, food and medicines, energy (fossil fuels are organic deposits).

We know nothing about 90 percent of plants.

Over the course of their evolution, plants have produced millions of solutions very different from those produced by animals. Until now, man has based his technology on how we ourselves are made: a command center and a hierarchy of organs, and that is how we organize our societies, governments, machines…

There is another world to draw inspiration from.
Studying plants will give us an enormous number of technological possibilities. Take networks, for example: an internet network and a root system are very similar. But plants are living networks; imagine what we could come to learn from them.

Are they altruistic?
They compete with other species and cooperate if they are of the same clan. But there are some extraordinary examples in which we can speak of a high degree of altruism. There is a very beautiful study that was carried out four years ago in Canada.

Tell me.
A large fir was cut off from access to water, and the surrounding firs passed it their nutrients for years so that it would not die. Plants are social organisms as sophisticated and evolved as we are.

Do they care for their offspring?
In plants we observe the parental care that we observe in the most evolved animals. In a dense forest, for a newborn tree to reach the height needed to photosynthesize and become self-sufficient, at least ten or fifteen years must pass, during which it is fed and cared for by its family.

Where is their brain?
In animals, neurons are the only cells that produce and transmit electrical signals. In plants, most of the cells of the body do so, and the root tips contain a great many of them. We could say that the whole plant is a brain.
ORIGINAL: Vanguardia.es
Victor-M Amela, Ima Sanchís, Lluís Amiguet
31/03/2015