Artificial Intelligence in The Food Industry: Empowering Farmers’ Decision Making

By Hugo Angel,

Can artificial intelligence save our food system? From precision farming to personalized nutrition, there are many potential technological applications in 

  • farming, 
  • food production, and 
  • food consumption. 

However, technological performance, user acceptance, and practical applications of the technology continue to pose challenges. In this three-part series, Chiara Cecchini investigates the main challenges and opportunities of this niche, exploring how we might leverage artificial brains to ensure healthy lives and promote well-being.

 
According to The One Hundred Year Study on Artificial Intelligence, led by Stanford University, artificial neural networks can now be trained with huge data sets and large-scale computing (deep learning), boosting data-driven solutions for improving decision making. Artificial neural networks are computing systems inspired by the biological neural networks of the brain. As previously written in part one and part two of this article series, human beings base their choices on limited knowledge, increasing risks and inefficiencies. Through these artificial neural networks, AI offers the opportunity to emulate human cognitive capabilities for sophisticated tasks, with the potential to reduce risks and enhance positive outcomes.
Agriculture, health, and nutrition have long occupied separate domains at both the political and social level. Now, it is widely recognized that one of the most important global tasks is to provide food of sufficient quantity and quality to sustainably feed and nourish the growing world population. To do that, according to the World Economic Forum, there is an immediate need to promote “smarter agricultural growth.”
Data generated by sensors on farms, in the field, or during transportation offer an unprecedented wealth of information. Consequently, artificial intelligence applied to agriculture can potentially increase yields, improve farm planning, optimize resources, and considerably reduce waste. It is estimated that by 2020 more than 75 million connected agricultural devices will be in use, and that the average farm will generate 4.1 million data points per day by 2050.
There are several examples across the farming industry: from

  • precision weeding and
  • picking to
  • disease recognition,

artificial intelligence has the potential to carve out new scenarios for the farming system.

A group of researchers at Cornell University recently published research explaining how they built and trained a neural network able to identify brown leaf spot disease on cassava leaves with 98-percent accuracy. CAMP3 deploys and manages wireless sensor networks that collect field images and automatically spot plant diseases and pests early on.
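At its core, the Cornell result is supervised image classification. As a hedged illustration of how such a leaf-disease classifier is typically built today (this is not the researchers' actual pipeline; the folder path, class names, and training settings below are placeholders), one would fine-tune a network pretrained on generic images:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical folder of labeled leaf photos, e.g. data/cassava_leaves/healthy, .../brown_leaf_spot
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/cassava_leaves", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a network pretrained on ImageNet and retrain only the final layer.
model = models.resnet18(pretrained=True)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The appeal of this transfer-learning pattern for farming applications is that a usable classifier can be trained from a few thousand labeled field photos rather than millions.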
For precision weeding and picking, Abundant Robotics recently raised US$10 million for building a robot able to pick the right apples. Another example is Vision Robotics, a San Diego company working on a pair of robots that would trundle through orchards plucking oranges. These types of solutions have the potential to save farmers millions of dollars in labor costs and spoiled fruit, tackling the 1.3 billion tons of food lost (US$750 billion) each year.
AI also has the potential to positively impact soil health. Each soil tablespoon contains millions of microbes that form an ecosystem for the plant, and companies such as Trace Genomics are able to

  • extract the DNA from soil,
  • analyze its microbial community, and
  • provide AI-based recommendations for maximizing soil health and crop yield.
Global food security is one of the most pressing issues for humanity, and agricultural production is critical for achieving this.

Now those working with artificial intelligence and machine learning are hoping to shape a new Green Revolution: the sooner we start looking at these tools, the greater the value we will all derive from them.

ORIGINAL: Foodtank
2017/11/27

The Chinese Artificial Intelligence Revolution

By Hugo Angel,

The Bund, by Shizhao. This file is licensed under the Creative Commons Attribution 1.0 Generic license.
The artificial intelligence (AI) world summit took place in Singapore on 3 and 4 October 2017 (The AI Summit Singapore). If we follow this global trend of heavy emphasis on AI, we can note the convergence between artificial intelligence and the emergence of “smart cities” in Asia, especially in China (Imran Khan, “Asia is leading the ‘smart city’ charge, but we’re not there yet”, TechinAsia, January 19, 2016). The development of artificial intelligence indeed combines with the current urbanization of the Chinese population.

This “intelligentization” of smart cities in China is driven by the need to manage urban growth while adapting urban areas to emerging energy, water, food, and health challenges, through the processing of big data by artificial intelligence (Jean-Michel Valantin, “China: Towards the digital ecological revolution?”, The Red (Team) Analysis Society, October 22, 2017). Reciprocally, smart urban development is a powerful driver, among others, of the development of artificial intelligence (Linda Poon, “What artificial intelligence reveals about urban change?”, City Lab, July 13, 2017).

In this article, we shall thus focus upon the combination of artificial intelligence and cities that creates the so-called “smart cities” in China. After presenting what this combination looks like through Chinese examples, we shall explain how this trend is being implemented. Finally, we shall see how the development of artificial intelligence within the latest generations of smart cities is disrupting geopolitics through the combination of industry and intelligentization.

Artificial intelligence and smart cities
In China, the urban revolution induced by the acceleration of the rural exodus is entwined with the digital and artificial intelligence revolution. This can be seen through the national program of urban development that is transforming “small” (3 million people) and middle-size cities (5 million people) into smart cities. The 95 new Chinese smart cities are meant to shelter the 250 million people expected to relocate to towns between the end of 2017 and 2026 (Chris Weller, “Here’s China’s genius plan to move 250 million people from farms to cities”, Business Insider, 5 August 2015). However, these 95 cities are part of the 500 smart cities that are expected to be developed before the end of 2017 (“Chinese ‘smart cities’ to number 500 before end of 2017”, China Daily, 21-04-2017).

In order to manage the mammoth challenges of these huge cities, artificial intelligence is on the rise. Deep learning is notably the type of AI used to make these cities smart. Deep learning is able to process the massive flows of data generated by cities, and it is itself made possible by those exponentially growing flows of big data, since the data allow the AI to learn on its own, developing, among other things, the representations needed to handle new kinds of data and problems (Michael Copeland, “What’s the difference between AI, machine learning and deep learning?”, NVIDIA Blog, July 29, 2016).

For example, since 2016 the Hangzhou municipal government has integrated artificial intelligence, notably with “city brain”, which helps improve traffic efficiency through the use of the big data streams generated by a myriad of sensors and cameras. The “city brain” project is led by the giant technology company Alibaba. This “intelligentization” of traffic management helps reduce traffic jams and air pollution and improves street surveillance for the 9 million residents of Hangzhou. However, it is only the first step before turning the city into an intelligent and sustainable smart city (Du Yifei, “Hangzhou growing ‘smarter’ thanks to AI technology”, People’s Daily, October 20, 2017).

“Intelligentizing cities”

Through the developing internet of things (IoT), the convergence of “intelligent” infrastructures, big data management, and urban artificial intelligence is going to be increasingly important for improving traffic, and thus energy efficiency, air pollution, and economic development (Sarah Hsu, “China is investing heavily into Artificial intelligence, and could soon catch up with US”, Forbes, July 3, 2017). The Hangzhou experiment is being duplicated in Suzhou, Quzhou and Macao.

Meanwhile, Baidu Inc., China’s largest search engine, has developed a partnership with the Shanxi province in order to implement “city brain”, which is dedicated to creating smart cities in the northern province while improving coal mining management and chemical treatment (“Baidu partners with Shanxi province to integrate AI with city management”, China Money Network, July 13). As a result, AI is going to be used to alleviate the use of this energy source, which is also responsible for the Chinese “airpocalypse” (Jean-Michel Valantin, “The Arctic, Russia and China’s energy transition”, The Red (Team) Analysis Society, February 2, 2017).

In the meantime, Tencent, another mammoth Chinese technology company, is multiplying partnerships with 14 Chinese provinces and 50 cities to develop and integrate urban artificial intelligences. At the same time, the Hong Kong government is getting ready to implement an artificial intelligence program to tackle 21st-century urban challenges, chief among them urban development management and climate change impacts.

When looking closely at this development of artificial intelligence to support the management of Chinese cities and at the multiplication of smart cities, we notice that both coincide with the political will to rein in the growth of already clogged Chinese megacities of more than ten million people – such as

  • Beijing (21.5 million people), 
  • Shanghai (25 million), and 
  • the urban areas around them – as well as of the network of very large cities where 5 to 10 million people live. 

Indeed, the problem is that these very large cities and megalopolises have reached highly dangerous levels of water and air pollution, hence the “airpocalypse” created by the noxious mix of car fumes and coal plant exhaust.

From the intelligentization of Chinese cities to the “smart cars revolution”
This Chinese AI-centred urban development strategy also drives a gigantic urban, technological and industrial revolution, that turns China into a possible world leader

  • in clean energy, 
  • in electric and smart cars and 
  • in urban development. 

The development of the new generations of smart cars is thus going to be coupled with the latest advances in artificial intelligence. As a result, China can position itself in the “middle” of the major trends of globalization. Indeed, smart electric cars are the “new frontier” of the car industry that supports the economies of great economic powers such as the U.S., Japan, and Germany (Michael Klare, Blood and Oil, 2005), while artificial intelligence is the new frontier of industry and the building of the future. The emergence of China as an “electric and smart cars” provider could have massive implications for the industrial and economic development of these countries.


In 2015, in the case of Shanghai, the number of cars grew by more than 13%, reaching the staggering total of 2.5 million cars in a megacity of 25 million people. In order to mitigate the impact of the car flow on the atmosphere, the municipal authorities use new “smart street” technologies. For example, the Ningbo-Hangzhou-Shanghai highway, used daily by more than 40,000 cars, is being equipped with a cyber network allowing drivers to pay tolls in advance with their smartphones. This application allows a significant decrease in pollution, because the lines of thousands of cars stopping in front of toll booths are reduced (“Chinese ‘smart cities’ to number 500 before end of 2017”, China Daily, 21 April 2017).

In the meantime, the tech giant Tencent, the creator of WeChat, the enormous Chinese social network with more than 889 million monthly users (“2017 WeChat Users Behavior Report”, China Channel, April 25, 2017), is developing a partnership with the Guangzhou Automobile Group to develop smart cars. Baidu is doing the same with the Chinese BYD, Chery and BAIC, while launching Apollo, its open-source platform for AI-powered smart cars. Alibaba, the giant of e-commerce, with more than 454 million users during the first quarter of 2017 (“Number of active buyers across Alibaba’s online shopping properties from 2nd quarter 2012 to 1st quarter 2017 (in millions)”, Statista, The Statistical Portal, 2017), is developing a partnership with the Chinese brand SAIC Motors and has already launched the YunOS system, which connects cars to the cloud and internet services (Charles Clover and Sherry Fei Ju, “Tencent and Guangzhou team up to produce smart cars”, Financial Times, 19 September 2017).

It must be kept in mind that these three giant Chinese tech companies are thus connecting the development of their own services with the development of artificial intelligence, notably through smart cars, in the context of the urban, digital and ecological transformation of China. In other words, “city brains” and “smart cars” are going to become an immense “digital ecosystem” that artificial intelligences are going to manage, thus giving China an imposing technological edge.

This means that artificial intelligence is becoming the common support of the social and urban transformation of China, as well as the ways and means of the transformation of the Chinese urban network into smart cities. It is also a scientific, technological and industrial revolution.

This revolution is going to be based on the new international distribution of power between artificial intelligence-centred countries, and the others.

Indeed, in China, artificial intelligence is creating new social, economic and political conditions. This means that China is using artificial intelligence in order to manage its own social evolution, while becoming a mammoth artificial intelligence great power.

It now remains to be seen how the latest generations of smart cities, powered by developing artificial intelligence, accompany the way some countries are getting ready for the economic, industrial and ecological, as well as security and military, challenges of the 21st century, and how this combination of urbanization and artificial intelligence is preparing an immense geopolitical revolution.

About the author: Jean-Michel Valantin (PhD Paris) leads the Environment and Geopolitics Department of The Red (Team) Analysis Society. He is specialised in strategic studies and defence sociology with a focus on environmental geostrategy.

ORIGINAL: RedAnalysis

Stunning AI Breakthrough Takes Us One Step Closer to the Singularity

By Hugo Angel,

As a new Nature paper points out, “There are an astonishing 10^170 possible board configurations in Go—more than the number of atoms in the known universe.” (Image: DeepMind)
Remember AlphaGo, the first artificial intelligence to defeat a grandmaster at Go?
Well, the program just got a major upgrade, and it can now teach itself how to dominate the game without any human intervention. But get this: in a tournament that pitted AI against AI, this juiced-up version, called AlphaGo Zero, defeated the regular AlphaGo by a whopping 100 games to 0, signifying a major advance in the field. Hear that? It’s the technological singularity inching ever closer. A new paper published in Nature today describes how the artificially intelligent system that defeated Go grandmaster Lee Sedol in 2016 got its digital ass kicked by a new-and-improved version of itself. And it didn’t just lose by a little—it couldn’t even muster a single win after playing a hundred games. Incredibly, it took AlphaGo Zero (AGZ) just three days to train itself from scratch and acquire literally thousands of years of human Go knowledge simply by playing itself. The only input it had was the positions of the black and white pieces on the board.

In addition to devising completely new strategies, the new system is also considerably leaner and meaner than the original AlphaGo.

Lee Sedol getting crushed by AlphaGo in 2016. (Image: AP)

Now, every once in a while the field of AI experiences a “holy shit” moment, and this would appear to be one of them. Looking back, the field has had other such moments, too.

This latest achievement qualifies as a “holy shit” moment for a number of reasons.

First of all, the original AlphaGo had the benefit of learning from literally thousands of previously played Go games, including those played by human amateurs and professionals. AGZ, on the other hand, received no help from its human handlers, and had access to absolutely nothing aside from the rules of the game. Using “reinforcement learning,” AGZ played itself over and over again, “starting from random play, and without any supervision or use of human data,” according to the Google-owned DeepMind researchers in their study. This allowed the system to improve and refine its digital brain, known as a neural network, as it continually learned from experience. This basically means that AlphaGo Zero was its own teacher.

“This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge,” notes the DeepMind team in a release. “Instead, it is able to learn tabula rasa [from a clean slate] from the strongest player in the world: AlphaGo itself.”

When playing Go, the system considers the most probable next moves (a “policy network”), and then estimates the probability of winning based on those moves (its “value network”). AGZ requires about 0.4 seconds to make these two assessments. The original AlphaGo was equipped with a pair of neural networks to make similar evaluations, but for AGZ, the DeepMind developers merged the policy and value networks into one, allowing the system to learn more efficiently. What’s more, the new system is powered by four tensor processing units (TPUs)—specialized chips for neural network training. The old AlphaGo needed 48 TPUs.
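The "two heads on one trunk" idea can be sketched in a few lines of PyTorch. This is only an illustration of the structure described above: the real system used a much deeper residual network over 19x19 board feature planes, and the layer sizes and input encoding here are invented for brevity.

```python
import torch
import torch.nn as nn

class PolicyValueNet(nn.Module):
    """A single network with a shared trunk and two heads, in the spirit of AlphaGo Zero."""
    def __init__(self, board_size=19, channels=64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Policy head: a score for every board point plus "pass".
        self.policy_head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels * board_size * board_size, board_size * board_size + 1),
        )
        # Value head: a single number in [-1, 1] estimating who will win.
        self.value_head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels * board_size * board_size, 1),
            nn.Tanh(),
        )

    def forward(self, board):
        features = self.trunk(board)
        return self.policy_head(features), self.value_head(features)

# One batch of (invented) encoded positions: 3 input planes for own stones, opponent stones, turn.
net = PolicyValueNet()
policy_logits, value = net(torch.zeros(8, 3, 19, 19))
```

Merging the two heads means both predictions share the same learned board features, which is part of why the single network can be trained more efficiently than two separate ones.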

After just three days of self-play training and a total of 4.9 million games played against itself, AGZ acquired the expertise needed to trounce AlphaGo (by comparison, the original AlphaGo had 30 million games for inspiration). After 40 days of self-training, AGZ defeated another, more sophisticated version called AlphaGo “Master”, the version that had beaten the world’s top-ranked Go player, Ke Jie. Earlier this year, both the original AlphaGo and AlphaGo Master won a combined 60 games against top professionals. The rise of AGZ, it would now appear, has made these previous versions obsolete.

The time when humans can have a meaningful conversation with an AI has always seemed far off and the stuff of science fiction. But for Go players, that day is here.

This is a major achievement for AI, and the subfield of reinforcement learning in particular. By teaching itself, the system matched and exceeded human knowledge by an order of magnitude in just a few days, while also developing 

  • unconventional strategies and
  • creative new moves.

For Go players, the breakthrough is as sobering as it is exciting; they’re learning things from AI that they could have never learned on their own, or would have needed an inordinate amount of time to figure out.
“[AlphaGo Zero’s] games against AlphaGo Master will surely contain gems, especially because its victories seem effortless,” wrote Andy Okun and Andrew Jackson, members of the American Go Association, in a Nature News and Views article. “At each stage of the game, it seems to gain a bit here and lose a bit there, but somehow it ends up slightly ahead, as if by magic… The time when humans can have a meaningful conversation with an AI has always seemed far off and the stuff of science fiction. But for Go players, that day is here.”

No doubt, AGZ represents a disruptive advance in the world of Go, but what about its potential impact on the rest of the world? According to Nick Hynes, a grad student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), it’ll be a while before a specialized tool like this has an impact on our daily lives. “So far, the algorithm described only works for problems where there are a countable number of actions you can take, so it would need modification before it could be used for continuous control problems like locomotion [for instance],” Hynes told Gizmodo. “Also, it requires that you have a really good model of the environment. In this case, it literally knows all of the rules. That would be as if you had a robot for which you could exactly predict the outcomes of actions—which is impossible for real, imperfect physical systems.”

The nice part, he says, is that there are several other lines of AI research that address both of these issues (e.g. machine learning, evolutionary algorithms, etc.), so it’s really just a matter of integration. “The real key here is the technique,” says Hynes.

It’s like an alien civilization inventing its own mathematics which allows it to do things like time travel…Although we’re still far from ‘The Singularity,’ we’re definitely heading in that direction.
“As expected—and desired—we’re moving farther away from the classic pattern of getting a bunch of human-labeled data and training a model to imitate it,” he said. “What we’re seeing here is a model free from human bias and presuppositions: it can learn whatever it determines is optimal, which may indeed be more nuanced than our own conceptions of the same. It’s like an alien civilization inventing its own mathematics which allows it to do things like time travel,” to which he added: “Although we’re still far from ‘The Singularity,’ we’re definitely heading in that direction.”


Noam Brown, a Carnegie Mellon University computer scientist who helped to develop the first AI to defeat top humans in no-limit poker, says the DeepMind researchers have achieved an impressive result, and that it could lead to bigger, better things in AI.

“While the original AlphaGo managed to defeat top humans, it did so partly by relying on expert human knowledge of the game and human training data,” Brown told Gizmodo. “That led to questions of whether the techniques could extend beyond Go. AlphaGo Zero achieves even better performance without using any expert human knowledge. It seems likely that the same approach could extend to all perfect-information games [such as chess and checkers]. This is a major step toward developing general-purpose AIs.”

As both Hynes and Brown admit, this latest breakthrough doesn’t mean the technological singularity—that hypothesized time in the future when greater-than-human machine intelligence achieves explosive growth—is imminent. But it should give us pause for thought. Once 

  • we teach a system the rules of a game or 
  • the constraints of a real-world problem, 

the power of reinforcement learning makes it possible to simply press the start button and let the system do the rest. It will then figure out the best ways to succeed at the task, devising solutions and strategies that are beyond human capacities, and possibly even human comprehension.
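In miniature, that "press start and let the system do the rest" loop is what tabular Q-learning does. The toy environment below (a tiny walk toward a goal state) is invented for illustration and has nothing to do with DeepMind's code; it only shows how an agent improves from the rules' feedback alone.

```python
import random

N_STATES, GOAL = 6, 5          # states 0..5, reward only for reaching state 5
q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q-values for actions: 0 = step left, 1 = step right
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore occasionally, otherwise exploit the current best action.
        action = random.randrange(2) if random.random() < epsilon else int(q[state][1] >= q[state][0])
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == GOAL else 0.0
        # Standard Q-learning update: nudge the estimate toward reward plus discounted future value.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

print([round(max(v), 2) for v in q])  # learned values rise toward the goal state
```

The same pattern, scaled up with neural networks and search, is what lets a system discover strategies its designers never specified.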

As noted, AGZ and the game of Go represent an oversimplified, constrained, and highly predictable picture of the world, but in the future, AI will be tasked with more complex challenges. Eventually, self-teaching systems could be used to solve more pressing problems, such as protein folding to conjure up new medicines and biotechnologies, figuring out ways to reduce energy consumption, or designing new materials. A highly generalized self-learning system could also be tasked with improving itself, leading to artificial general intelligence (i.e. a very human-like intelligence) and even artificial superintelligence.

As the DeepMind researchers conclude in their study, “Our results comprehensively demonstrate that a pure reinforcement learning approach is fully feasible, even in the most challenging of domains: it is possible to train to superhuman level, without human examples or guidance, given no knowledge of the domain beyond basic rules.”

And indeed, now that human players are no longer dominant in games like chess and Go, it can be said that we’ve already entered into the era of superintelligence. This latest breakthrough is the tiniest hint of what’s still to come.

[Nature]

ORIGINAL: Gizmodo

By George Dvorsky
2017/10/18

Artificial intelligence pioneer says we need to start over

By Hugo Angel,


Geoffrey Hinton harbors doubts about AI’s current workhorse. (Johnny Guatto / University of Toronto)

Steve LeVine Sep 15

In 1986, Geoffrey Hinton co-authored a paper that, three decades later, is central to the explosion of artificial intelligence. But Hinton says his breakthrough method should be dispensed with, and a new path to AI found.

Speaking with Axios on the sidelines of an AI conference in Toronto on Wednesday, Hinton, a professor emeritus at the University of Toronto and a Google researcher, said he is now “deeply suspicious” of back-propagation, the workhorse method that underlies most of the advances we are seeing in the AI field today, including the capacity to sort through photos and talk to Siri. “My view is throw it all away and start again,” he said.

The bottom line: Other scientists at the conference said back-propagation still has a core role in AI’s future. But Hinton said that, to push materially ahead, entirely new methods will probably have to be invented. “Max Planck said, ‘Science progresses one funeral at a time.’ The future depends on some graduate student who is deeply suspicious of everything I have said.”

How it works: In back-propagation, the network's output, its guess about a photo or a spoken phrase, is compared with the right answer, and the error is sent backward through the brain-like neural layers. The “weights” on the connections in each layer are then adjusted and readjusted, layer by layer, until the network can perform an intelligent function with the fewest possible errors.
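A minimal sketch of the method Hinton is describing: a two-layer network in NumPy where the output error is propagated backward and the weights are nudged to reduce it. The XOR task, layer sizes, and learning rate are illustrative choices, not anything from Hinton's work.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass through two layers.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # Backward pass: propagate the output error toward the input, layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Adjust the weights a little in the direction that reduces the error.
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.round(2))   # should approach [[0], [1], [1], [0]]
```

Hinton's complaint is precisely that this loop needs the correct answers (the labels in y) at every step, which is not how brains appear to learn.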

But Hinton suggested that, to get to where neural networks are able to become intelligent on their own, what is known as “unsupervised learning,” something else will be needed: “I suspect that means getting rid of back-propagation.”

“I don’t think it’s how the brain works,” he said. “We clearly don’t need all the labeled data.”

“Now is the time to make our children smarter than artificial intelligence”

By Hugo Angel,

Danqing Wang Computer ABC

INTERVIEW | Noriko Arai, director of the Todai Robot Project

Noriko Arai wants to revolutionize the education system so that humans do not lose the battle for jobs against robots.


Noriko Arai during her TED talk in Vancouver. Bret Hartman / TED


Once a year, half a million Japanese students take the university entrance exam, eight multiple-choice tests. Fewer than 3% will do well enough to move on to the second stage, a written exam designed specifically for admission to the University of Tokyo (Todai), the most prestigious in Japan. Noriko Arai, 54, director of the Research Center for Community Knowledge at the National Institute of Informatics and of the Todai Robot Project, is working on a robot that can pass all of these exams, in order to learn the possibilities and limitations of artificial intelligence.

In 2013, after two years of the project, the Todai Robot scored well enough to be admitted to 472 of 581 private universities. In 2016, its score was in the top 20% on the multiple-choice exams and in the top 1% on one of the two mathematics exams. It was also able to write an essay on seventeenth-century maritime trade better than most students. “It took information from the textbook and from Wikipedia and combined it without understanding a thing,” Arai explained during her recent TED talk in Vancouver. “Neither Watson, nor Siri, nor the Todai Robot can read. Artificial intelligence cannot understand; it only pretends to understand.”

Rather than pleased with her robot, Arai was alarmed by the results. “How is it possible that this unintelligent machine did better than our children?” she asked herself. Worried about the job prospects of the new generations, she ran an experiment with students and found that a third of them failed simple questions because they do not read well, a problem she believes exists all over the world. “We humans can understand the meaning of things, something artificial intelligence cannot do. But most students receive knowledge without understanding its meaning, and that is not knowledge, it is memorization, and artificial intelligence can do the same. We must create a new education system.”

Question: Why did a mathematician like you decide to get into the world of robots?
Answer: Artificial intelligence consists of trying to write thought in mathematical language. There is no other way for artificial intelligence to be intelligent. As a mathematician, I believe that thought cannot be written in mathematical language. Descartes said the same. My first impression was that artificial intelligence is impossible. It uses probability and statistics on top of logic. In the twentieth century only logic was used, and of course not everything can be written with logic, such as feelings, for example. Now they are using statistics, imitating the past to decide how to act when we encounter new things.

Q. You don’t like it when people say artificial intelligence could take over the world…
A. I am fed up with that image, which is why I decided to create a very intelligent robot, using the latest research, in order to see its limitations. IBM’s Watson and the Google Car, for example, tend to show only the good things. We want to show everything, including what it is not capable of doing.

Q. In trying to improve artificial intelligence, you saw that education had to be improved.

A. I knew my robot was unintelligent, loaded with knowledge it does not know how to use correctly because it does not understand meaning. I was stunned to see that this unintelligent robot wrote a better essay than most students. So I decided to investigate what was happening in the human world. I would have been happier to discover that artificial intelligence overtook the students because it is better at memorizing and computing, but that was not the case. The robot does not understand meaning, but neither do most students.

Q. Do you think the problem is that we depend so much on Siri and Google to answer our questions that we no longer process information well?
A. We are analyzing the reasons. One thing we can see is that everyone used to read the newspaper, even poor people. But now most young couples do not read the paper because they have it on their phone. They do not buy books because most stories are on blogs. They do not have a calendar or even a clock at home because it is on the phone. Children grow up without numbers or letters in their environment. They also tend to have conversations in very short text messages. They have fewer opportunities to read, I think.

Q. Part of the Todai project is to see what kinds of jobs artificial intelligence could take away from humans.
A. In Japan, in the past, everyone was middle class; there were no very rich people and no very poor people. But when artificial intelligence arrives in a society it takes away many jobs, including positions such as bankers or analysts. Those who lose their job to artificial intelligence may not find another for a long time. Perhaps there will be jobs like correcting the errors made by artificial intelligence, very hard jobs, more menial than ever, as in Chaplin’s Modern Times. Someone talented, creative, intelligent, determined, good at reading and writing, will have more opportunities than ever, because even if they were born in a village, as long as they have internet access they will have plenty of information to learn from for free and could even become a millionaire. It is much easier to start a business than it was in the twentieth century. But someone who does not have that kind of intelligence will probably remain trapped in the crowd. The thing is, everyone has the right to vote, and in that sense we are all equal. If more and more people feel trapped, and only intelligent people make money and use it to make more money, they will think badly of society, they will hate society, and we will all suffer the consequences, all over the world.

Q. What do you think the solution is?
A. Now is the time to make our children smarter than artificial intelligence. I opened the Research Institute of Science for Education this month to investigate how many students have poor reading and writing habits, and why, and to see how we can help them change those habits so that they can overtake the robot using their human strengths. I would like us to be as Japan was in the seventies, when everyone was middle class, we all helped one another, and we did not need more money than we could spend in a lifetime. Everyone should be well educated and know how to read and write, and not just the literal meaning. We should all learn deeply and read deeply so that we can keep our jobs.

ORIGINAL: El País
By Isaac Hernández. Vancouver
6 JUN 2017

IBM Makes Breakthrough in Race to Commercialize Quantum Computers

By Hugo Angel,

Photographer: David Paul Morris

Researchers at International Business Machines Corp. have developed a new approach for simulating molecules on a quantum computer.

The breakthrough, outlined in a research paper to be published in the scientific journal Nature on Thursday, uses a technique that could eventually allow quantum computers to solve difficult problems in chemistry and electromagnetism that cannot be solved by even the most powerful supercomputers today.

In the experiments described in the paper, IBM researchers used a quantum computer to derive the lowest energy state of a molecule of beryllium hydride. Knowing the energy state of a molecule is a key to understanding chemical reactions.

In the case of beryllium hydride, a supercomputer can solve this problem, but the standard techniques for doing so cannot be used for large molecules because the number of variables exceeds the computational power of even these machines.

The IBM researchers created a new algorithm specifically designed to take advantage of the capabilities of a quantum computer that has the potential to run similar calculations for much larger molecules, the company said.
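Algorithms of this kind are variational: a parameterized trial state is prepared, its energy is measured, and a classical optimizer adjusts the parameters to push that energy down, which by the variational principle can only approach the true ground-state energy from above. A hedged classical toy of that loop (a single made-up two-level Hamiltonian and ordinary linear algebra, not beryllium hydride and not a quantum device) looks like this:

```python
import numpy as np
from scipy.optimize import minimize

# Toy Hamiltonian for a single two-level system (illustrative numbers only).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta):
    # Parameterized trial state |psi(theta)> = cos(theta)|0> + sin(theta)|1>.
    psi = np.array([np.cos(theta[0]), np.sin(theta[0])])
    return psi @ H @ psi          # expectation value <psi|H|psi>

result = minimize(energy, x0=[0.1])
print(result.fun, np.linalg.eigvalsh(H)[0])   # variational minimum vs. exact ground energy
```

On real hardware the trial state is prepared by a short quantum circuit and the energy is estimated from repeated measurements, which is where the error rates discussed below come in.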

The problem with existing quantum computers – including the one IBM used for this research — is that they produce errors and as the size of the molecule being analyzed grows, the calculation strays further and further from chemical accuracy. The inaccuracy in IBM’s experiments varied between 2 and 4 percent, Jerry Chow, the manager of experimental quantum computing for IBM, said in an interview.

Alan Aspuru-Guzik, a professor of chemistry at Harvard University who was not part of the IBM research, said that the Nature paper is an important step. “The IBM team carried out an impressive series of experiments that holds the record as the largest molecule ever simulated on a quantum computer,” he said.

But Aspuru-Guzik said that quantum computers would be of limited value until their calculation errors can be corrected. “When quantum computers are able to carry out chemical simulations in a numerically exact way, most likely when we have error correction in place and a large number of logical qubits, the field will be disrupted,” he said in a statement. He said applying quantum computers in this way could lead to the discovery of new pharmaceuticals or organic materials.

IBM has been pushing to commercialize quantum computers and recently began allowing anyone to experiment with running calculations on a 16-qubit quantum computer it has built to demonstrate the technology.

In a classical computer, information is stored using binary units, or bits. A bit is either a 0 or a 1. A quantum computer instead takes advantage of quantum mechanical properties to process information using quantum bits, or qubits. A qubit can be in a superposition of 0 and 1 at the same time, with a continuous range of weightings between the two. Also, in a classical computer each logic gate functions independently, whereas in a quantum computer the qubits affect one another through entanglement. This allows a quantum computer, in theory, to process certain kinds of information far more efficiently than a classical computer.
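To make the qubit picture concrete: a single qubit can be written as two complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1, and an answer is read out as a statistic over many repeated runs. The NumPy snippet below is purely illustrative (it simulates the arithmetic, not any hardware), and the 0.7/0.3 split is an arbitrary example.

```python
import numpy as np

# A qubit in superposition: |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
psi = np.array([np.sqrt(0.7), np.sqrt(0.3) * 1j])
probs = np.abs(psi) ** 2                     # measurement probabilities, here [0.7, 0.3]

# Reading out an answer: repeat the measurement many times and look at the statistics.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=10_000, p=probs)
print(probs, samples.mean())                 # fraction of 1s approaches 0.3
```

This repeated-sampling readout is why, as described below, the IBM experiment runs each calculation hundreds of times and averages the results.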

The machine IBM used for the Nature paper consisted of seven qubits created from supercooled superconducting materials. In the experiment, six of these qubits were used to map the energy states of the six electrons in the beryllium hydride molecule. Rather than providing a single, precise and accurate answer, as a classical computer does, a quantum computer must run a calculation hundreds of times, with an average used to arrive at a final answer.

Chow said his team is currently working to improve the speed of its quantum computer with the aim of reducing the time it takes to run each calculation from seconds to microseconds. He said they were also working on ways to reduce its error rate.

IBM is not the only company working on quantum computing. Alphabet Inc.’s Google is working toward creating a 50-qubit quantum computer. The company has pledged to use this machine to solve a previously unsolvable calculation from chemistry or electromagnetism by the end of the year. Also competing to commercialize quantum computing are Rigetti Computing, a startup in Berkeley, California, which is building its own machine, and Microsoft Corp., which is working with an unproven quantum computing architecture that is, in theory, inherently error-free. D-Wave Systems Inc., a Canadian company, is currently the only company selling quantum computers commercially, although its machines use a different, more specialized approach known as quantum annealing.

ORIGINAL: Bloomberg
By Jeremy Kahn September 13, 2017

Researchers take major step forward in Artificial Intelligence

By Hugo Angel,

The long-standing dream of using Artificial Intelligence (AI) to build an artificial brain has taken a significant step forward, as a team led by Professor Newton Howard from the University of Oxford has successfully prototyped a nanoscale, AI-powered, artificial brain in the form factor of a high-bandwidth neural implant.
Professor Newton Howard (pictured above and below) holding parts of the implant device
In collaboration with INTENT LTD, Qualcomm Corporation, Intel Corporation, Georgetown University and the Brain Sciences Foundation, Professor Howard’s Oxford Computational Neuroscience Lab in the Nuffield Department of Surgical Sciences has developed the proprietary algorithms and the optoelectronics required for the device. Testing on rodents is on target to begin very soon.
This achievement caps over a decade of research by Professor Howard at MIT’s Synthetic Intelligence Lab and the University of Oxford, work that resulted in several issued US patents on the technologies and algorithms that power the device:
  • the Fundamental Code Unit of the Brain (FCU),
  • the Brain Code (BC), and
  • the Biological Co-Processor (BCP).

These are the latest foundations for any eventual merger between biological and machine intelligence. Ni2o (pronounced “Nitoo”) is the entity that Professor Howard licensed to further develop, market and promote these technologies.

The Biological Co-Processor is unique in that it uses advanced nanotechnology, optogenetics and deep machine learning to intelligently map internal events, such as neural spiking activity, to external physiological, linguistic and behavioral expression. The implant contains over a million carbon nanotubes, each of which is 10,000 times smaller than the width of a human hair. Carbon nanotubes provide a natural, high-bandwidth interface as they conduct heat, light and electricity, instantaneously updating the neural laces. They adhere to neuronal constructs and even promote neural growth. Qualcomm team leader Rudy Beraha commented, ‘Although the prototype unit shown today is tethered to external power, a commercial Brain Co-Processor unit will be wireless and inductively powered, enabling it to be administered with a minimally invasive procedure.’
The device uses a combination of methods to write to the brain, including
  • pulsed electricity,
  • light, and
  • various molecules that stimulate or inhibit the activation of specific neuronal groups.
These can be targeted to trigger a desired response, such as releasing chemicals in patients suffering from a neurological disorder or imbalance. The BCP is designed as a fully integrated system that uses the brain’s own internal systems and chemistries to pattern and mimic healthy brain behavior, an approach that stands in stark contrast to the current state of the art, which is to simply apply mild electrocution to problematic regions of the brain.
Therapeutic uses
The Biological Co-Processor promises to provide relief for millions of patients suffering from neurological, psychiatric and psychological disorders as well as degenerative diseases. Initial therapeutic uses will likely be for patients with traumatic brain injuries and neurodegenerative disorders, such as Alzheimer’s, as the BCP will strengthen the weakened and shortened connections responsible for lost memories and skills. Once implanted, the device provides a closed-loop, self-learning platform able to both determine and administer the perfect balance of pharmaceutical, electroceutical, genomeceutical and optoceutical therapies.
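"Closed-loop" here means a sense, decide, stimulate cycle. The following is a purely illustrative simulation of such a loop with random numbers standing in for neural signals; it is not the BCP's actual control logic, and every function, threshold and constant in it is invented.

```python
import random

def read_neural_activity():
    """Stand-in for the implant's sensors: returns a made-up activity level."""
    return random.gauss(0.5, 0.2)

def stimulate(strength):
    """Stand-in for a stimulation command (light, pulsed electricity, or chemistry)."""
    print(f"stimulate with strength {strength:.2f}")

target, gain = 0.5, 0.8
for step in range(10):
    activity = read_neural_activity()
    error = target - activity
    # Closed loop: only intervene when activity drifts away from the healthy target.
    if abs(error) > 0.1:
        stimulate(gain * error)
```

The contrast the article draws is that an open-loop device stimulates on a fixed schedule, whereas a closed-loop one adjusts its output in response to what it measures.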
Dr Richard Wirt, a Senior Fellow at Intel Corporation and Co-Founder of INTENT, the company partnering with Ni2o to bring the BCP to market, commented on the device, saying, ‘In the immediate timeframe, this device will have many benefits for researchers, as it could be used to replicate an entire brain image, synchronously mapping internal and external expressions of human response. Over the long term, the potential therapeutic benefits are unlimited.’
The brain controls all organs and systems in the body, so the cure to nearly every disease resides there. – Professor Newton Howard
Rather than simply disrupting neural circuits, the machine learning systems within the BCP are designed to interpret these signals and intelligently read from and write to the surrounding neurons. These capabilities could be used to repair degenerative or trauma-induced damage and perhaps write these memories and skills to other, healthier areas of the brain. 
One day, these capabilities could also be used in healthy patients to radically augment human ability and proactively improve health. As Professor Howard points out: ‘The brain controls all organs and systems in the body, so the cure to nearly every disease resides there.’ Speaking more broadly, Professor Howard sees the merging of man with machine as our inevitable destiny, calling it ‘the next step on the blueprint that the author of it all built into our natural architecture.’
With the resurgence of neuroscience and AI-enhanced machine learning, there has been renewed interest in brain implants. This past March, Elon Musk and Bryan Johnson independently announced that they are focusing on and investing in the brain/computer interface domain. 
When asked about these new competitors, Professor Howard said he is happy to see all these new startups and established names getting into the field – he only wonders what took them so long, stating: ‘I would like to see us all working together, as we have already established a mathematical foundation and software framework to solve so many of the challenges they will be facing. We could all get there faster if we could work together – after all, the patient is the priority.’
© 2017 Nuffield Department of Surgical Sciences, John Radcliffe Hospital, Headington, Oxford, OX3 9DU
ORIGINAL: NDS Oxford
2 June 2017 

Spectacular Visualizations of Brain Scans Enhanced with 1,750 Pieces of Gold Leaf

By Hugo Angel,

Self Reflected, 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The entire Self Reflected microetching under violet and white light. (photo by Greg Dunn and Will Drinker)
Anyone who thinks that scientists can’t be artists need look no further than Dr. Greg Dunn and Dr. Brian Edwards. The neuroscientist and applied physicist have paired up to create an artistic series of images that they describe as “the most fundamental self-portrait ever created.” Literally going inside the head, the pair has blown up a thin slice of the brain 22 times in a series called Self Reflected.
Traveling across 500,000 neurons, the images took two years to complete, as Dunn and Edwards developed special technology for the project. Using a technique they’ve called reflective microetching, they microscopically manipulated the reflectivity of the brain’s surface. Different regions of the brain were hand painted and digitized, later using a computer program created by Edwards to show the complex choreography our mind undergoes as it processes information.
After printing the designs onto transparencies, the duo added 1,750 gold leaf sheets to increase the art’s reflectivity. The astounding results are images that demonstrate the delicate flow and balance of our brain’s activity. “Self Reflected was created to remind us that the most marvelous machine in the known universe is at the core of our being and is the root of our shared humanity,” the artists share.
Self Reflected fine art prints and microetchings are available for purchase via Dunn’s website.
Self Reflected is an unprecedented look inside the brain.
Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The parietal gyrus where movement and vision are integrated. (photo by Greg Dunn and Will Drinker)

 

Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The brainstem and cerebellum, regions that control basic body and motor functions. (photo by Greg Dunn and Will Drinker)

 

An astounding achievement in scientific art, the artists applied 1,750 leaves of gold to the final microetchings.
Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The laminar structure of the cerebellum, a region involved in movement and proprioception (calculating where your body is in space).

 

Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The pons, a region involved in movement and implicated in consciousness. (photo by Greg Dunn and Will Drinker)

 

Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. Raw colorized microetching data from the reticular formation.

 

Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The visual cortex, the region located at the back of the brain that processes visual information.

 

Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The thalamus and basal ganglia, sorting senses, initiating movement, and making decisions. (photo by Greg Dunn and Will Drinker)

 

Self Reflected, 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The entire Self Reflected microetching under white light. (photo by Greg Dunn and Will Drinker)
Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The midbrain, an area that carries out diverse functions in reward, eye movement, hearing, attention, and movement. (photo by Greg Dunn and Will Drinker)
This video shows how the etched neurons twinkle as a light source is moved.

Interested in learning more? Watch Dr. Greg Dunn present the project at The Franklin Institute.
Dr. Greg Dunn: Website | Facebook | Instagram
My Modern Met granted permission to use photos by Dr. Greg Dunn.

ORIGINAL: My MET
By Jessica Stewart 
April 12, 2017


Scientists Have Created an Artificial Synapse That Can Learn Autonomously

By Hugo Angel,

Sergey Tarasov/Shutterstock
Developments and advances in artificial intelligence (AI) have been due in large part to technologies that mimic how the human brain works. In the world of information technology, such AI systems are called neural networks.
These contain algorithms that can be trained, among other things, to imitate how the brain recognises speech and images. However, running an Artificial Neural Network consumes a lot of time and energy.
Now, researchers from the French National Centre for Scientific Research (CNRS), Thales, and the Universities of Bordeaux, Paris-Sud, and Evry have developed an artificial synapse, called a memristor, directly on a chip.
It paves the way for intelligent systems that require less time and energy to learn, and that can learn autonomously.
In the human brain, synapses work as connections between neurons. The connections are reinforced and learning is improved the more these synapses are stimulated.
The memristor works in a similar fashion. It’s made up of a thin ferroelectric layer (which can be spontaneously polarised) that is enclosed between two electrodes.
Using voltage pulses, its resistance can be adjusted, much as biological synapses are strengthened or weakened: the synaptic connection is strong when resistance is low, and vice versa.
Figure 1. (a) Sketch of pre- and post-neurons connected by a synapse; the synaptic transmission is modulated by the causality (Δt) of neuron spikes. (b) Sketch of the ferroelectric memristor, where a ferroelectric tunnel barrier of BiFeO3 (BFO) is sandwiched between a bottom electrode of (Ca,Ce)MnO3 (CCMO) and a top submicron pillar of Pt/Co; YAO stands for YAlO3. (c) Single-pulse hysteresis loop of the ferroelectric memristor displaying clear voltage thresholds. (d) Measurements of STDP in the ferroelectric memristor: modulation of the device conductance (ΔG) as a function of the delay (Δt) between pre- and post-synaptic spikes. Seven data sets were collected on the same device, showing the reproducibility of the effect. The total length of each pre- and post-synaptic spike is 600 ns. Source: Nature Communications
The memristor’s capacity for learning is based on this adjustable resistance.
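Panel (d) of the figure describes spike-timing-dependent plasticity: the conductance change depends on the delay Δt between pre- and post-synaptic spikes. A hedged software analogue of that rule is sketched below; the exponential windows and constants are generic textbook choices, not values fitted to the paper's measurements.

```python
import numpy as np

def stdp_update(delta_t_ns, a_plus=0.02, a_minus=0.02, tau_ns=200.0):
    """Conductance change as a function of spike timing, delta_t = t_post - t_pre (ns).

    Positive delta_t (the pre-synaptic neuron fires before the post-synaptic one)
    strengthens the connection; negative delta_t weakens it. Constants are illustrative.
    """
    if delta_t_ns > 0:
        return a_plus * np.exp(-delta_t_ns / tau_ns)
    return -a_minus * np.exp(delta_t_ns / tau_ns)

conductance = 1.0
for dt in [50.0, 120.0, -80.0, -300.0]:     # a few example spike delays in nanoseconds
    conductance += stdp_update(dt)
print(round(conductance, 4))
```

In the device, this same dependence on spike timing emerges from the physics of the ferroelectric layer rather than from an explicit formula, which is what makes it attractive as low-energy learning hardware.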
AI systems have developed considerably in the past couple of years. Neural networks built with learning algorithms are now capable of performing tasks which synthetic systems previously could not do.
For instance, intelligent systems can now compose music, play games and beat human players, or do your taxes. Some can even identify suicidal behaviour, or differentiate between what is lawful and what isn’t.
This is all thanks to AI’s capacity to learn, the only limitation of which is the amount of time and effort it takes to consume the data that serve as its springboard.
With the memristor, this learning process can be greatly improved. Work continues on the memristor, particularly on exploring ways to optimise its function.
For starters, the researchers have successfully built a physical model to help predict how it functions.
Their work is published in the journal Nature Communications.
ORIGINAL: ScienceAlert
DOM GALEON, FUTURISM
7 APR 2017

Google DeepMind has built an AI machine that could learn as quickly as humans before long

By Hugo Angel,

Neural Episodic Control. Architecture of episodic memory module for a single action

Emerging Technology from the arXiv

Intelligent machines have humans in their sights.

Deep-learning machines already have superhuman skills when it comes to tasks such as

  • face recognition,
  • video-game playing, and
  • even the ancient Chinese game of Go.

So it’s easy to think that humans are already outgunned.

But not so fast. Intelligent machines still lag behind humans in one crucial area of performance: the speed at which they learn. When it comes to mastering classic video games, for example, the best deep-learning machines take some 200 hours of play to reach the same skill levels that humans achieve in just two hours.

So computer scientists would dearly love to have some way to speed up the rate at which machines learn.

Today, Alexander Pritzel and pals at Google’s DeepMind subsidiary in London claim to have done just that. These guys have built a deep-learning machine that is capable of rapidly assimilating new experiences and then acting on them. The result is a machine that learns significantly faster than others and has the potential to match humans in the not too distant future.

First, some background.

Deep learning uses layers of neural networks to look for patterns in data. When a single layer spots a pattern it recognizes, it sends this information to the next layer, which looks for patterns in this signal, and so on.

So in face recognition,

  • one layer might look for edges in an image,
  • the next layer for circular patterns of edges (the kind that eyes and mouths make), and
  • the next for triangular patterns such as those made by two eyes and a mouth.
  • When all this happens, the final output is an indication that a face has been spotted.

Of course, the devil is in the details. There are various systems of feedback to allow the system to learn by adjusting various internal parameters such as the strength of connections between layers. These parameters must change slowly, since a big change in one layer can catastrophically affect learning in the subsequent layers. That’s why deep neural networks need so much training and why it takes so long.

Pritzel and co have tackled this problem with a technique they call Neural Episodic Control. “Neural episodic control demonstrates dramatic improvements on the speed of learning for a wide range of environments,” they say. “Critically, our agent is able to rapidly latch onto highly successful strategies as soon as they are experienced, instead of waiting for many steps of optimisation.”

The basic idea behind DeepMind’s approach is to copy the way humans and animals learn quickly. The general consensus is that humans can tackle situations in two different ways.

  • If the situation is familiar, our brains have already formed a model of it, which they use to work out how best to behave. This uses a part of the brain called the prefrontal cortex.
  • But when the situation is not familiar, our brains have to fall back on another strategy. This is thought to involve a much simpler test-and-remember approach involving the hippocampus. So we try something and remember the outcome of this episode. If it is successful, we try it again, and so on. But if it is not a successful episode, we try to avoid it in future.

This episodic approach suffices in the short term while our prefrontal brain learns. But it is soon outperformed by the prefrontal cortex and its model-based approach.

Pritzel and co have used this approach as their inspiration. Their new system has two approaches.

  • The first is a conventional deep-learning system that mimics the behaviour of the prefrontal cortex.
  • The second is more like the hippocampus. When the system tries something new, it remembers the outcome.

But crucially, it doesn’t try to learn what to remember. Instead, it remembers everything. “Our architecture does not try to learn when to write to memory, as this can be slow to learn and take a significant amount of time,” say Pritzel and co. “Instead, we elect to write all experiences to the memory, and allow it to grow very large compared to existing memory architectures.”

They then use a set of strategies to read from this large memory quickly. The result is that the system can latch onto successful strategies much more quickly than conventional deep-learning systems.
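A sketch of that "write everything, read quickly" memory: an append-only store of (state embedding, value) pairs queried by nearest-neighbour lookup. This follows the spirit of the paper's differentiable neural dictionary, but it is a simplification for illustration, not the authors' implementation, and the toy data at the end is invented.

```python
import numpy as np

class EpisodicMemory:
    """Append-only memory: store every experience, estimate values by k-NN lookup."""

    def __init__(self, k=5):
        self.keys, self.values, self.k = [], [], k

    def write(self, embedding, value):
        # Never decide what to keep: remember everything.
        self.keys.append(np.asarray(embedding, dtype=float))
        self.values.append(float(value))

    def read(self, embedding):
        keys = np.stack(self.keys)
        dists = np.linalg.norm(keys - embedding, axis=1)
        nearest = np.argsort(dists)[: self.k]
        # Weight neighbours by inverse distance so close matches dominate the estimate.
        w = 1.0 / (dists[nearest] + 1e-3)
        return float(np.dot(w, np.array(self.values)[nearest]) / w.sum())

memory = EpisodicMemory(k=3)
rng = np.random.default_rng(0)
for _ in range(100):
    state = rng.normal(size=4)
    memory.write(state, value=state.sum())        # toy stand-in for an observed return
print(memory.read(np.zeros(4)))                   # value estimate for a new state
```

Because a lookup only needs the nearest stored keys, the memory can grow very large while reads stay fast, which is what lets the agent reuse a good outcome the very next time a similar situation appears.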

They go on to demonstrate how well all this works by training their machine to play classic Atari video games, such as Breakout, Pong, and Space Invaders. (This is a playground that DeepMind has used to train many deep-learning machines.)

The team, which includes DeepMind cofounder Demis Hassabis, shows that neural episodic control vastly outperforms other deep-learning approaches in the speed at which it learns. “Our experiments show that neural episodic control requires an order of magnitude fewer interactions with the environment,” they say.

That’s impressive work with significant potential. The researchers say that an obvious extension of this work is to test their new approach on more complex 3-D environments.

It’ll be interesting to see what environments the team chooses and the impact this will have on the real world. We’ll look forward to seeing how that works out.

Ref: Neural Episodic Control : arxiv.org/abs/1703.01988

ORIGINAL: MIT Technology Review

The future of AI is neuromorphic. Meet the scientists building digital ‘brains’ for your phone

By Hugo Angel,

Neuromorphic chips are being designed to specifically mimic the human brain – and they could soon replace CPUs
Image: brain activity map (Neuroscape Lab)
AI services like Apple’s Siri and others operate by sending your queries to faraway data centers, which send back responses. The reason they rely on cloud-based computing is that today’s electronics don’t come with enough computing power to run the processing-heavy algorithms needed for machine learning. The typical CPUs most smartphones use could never handle a system like Siri on the device. But Dr. Chris Eliasmith, a theoretical neuroscientist and co-CEO of Canadian AI startup Applied Brain Research, is confident that a new type of chip is about to change that.
“Many have suggested Moore’s law is ending and that means we won’t get ‘more compute’ cheaper using the same methods,” Eliasmith says. He’s betting on the proliferation of ‘neuromorphics’, a type of computer chip that is not yet widely known but is already being developed by several major chip makers.
Traditional CPUs process instructions based on “clocked time” – information is transmitted at regular intervals, as if managed by a metronome. By packing in digital equivalents of neurons, neuromorphics communicate in parallel (and without the rigidity of clocked time) using “spikes” – bursts of electric current that can be sent whenever needed. Just like our own brains, the chip’s neurons communicate by processing incoming flows of electricity – each neuron able to determine from the incoming spike whether to send current out to the next neuron.
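A toy leaky integrate-and-fire model captures the spiking idea in a few lines (the threshold, leak, and input current below are illustrative constants, not parameters of any real neuromorphic chip): the neuron integrates incoming current, leaks a little each step, and emits a spike only when its voltage crosses a threshold.

```python
def lif_spikes(input_current, threshold=1.0, leak=0.95, steps=100):
    """Toy leaky integrate-and-fire neuron: integrate incoming current,
    leak a little each step, and spike only when the threshold is crossed."""
    voltage, spike_times = 0.0, []
    for step in range(steps):
        voltage = leak * voltage + input_current(step)
        if voltage >= threshold:
            spike_times.append(step)  # a burst is sent only when needed
            voltage = 0.0             # reset after spiking
    return spike_times

print(lif_spikes(lambda step: 0.12))  # constant drive -> a regular spike train
```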
What makes this a big deal is that these chips require far less power to process AI algorithms. For example, one neuromorphic chip made by IBM contains five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of power. An Intel processor would use anywhere from 35 to 140 watts, or up to 2000 times more power.
Eliasmith points out that neuromorphics aren’t new and that their designs have been around since the 80s. Back then, however, the designs required specific algorithms be baked directly into the chip. That meant you’d need one chip for detecting motion, and a different one for detecting sound. None of the chips acted as a general processor in the way that our own cortex does.
This was partly because there was no way for programmers to design algorithms that could do much with a general-purpose chip. So even as these brain-like chips were being developed, building algorithms for them remained a challenge.
 
Eliasmith and his team are keenly focused on building tools that would allow a community of programmers to deploy AI algorithms on these new cortical chips.
Central to these efforts is Nengo, a compiler that developers can use to build their own algorithms for AI applications that will operate on general-purpose neuromorphic hardware. A compiler is a software tool that translates the code programmers write into the low-level instructions that actually get hardware to do something. What makes Nengo useful is its use of the familiar Python programming language, known for its intuitive syntax, and its ability to put the algorithms on many different hardware platforms, including neuromorphic chips. Pretty soon, anyone with an understanding of Python could be building sophisticated neural nets made for neuromorphic hardware.
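To give a flavour of that workflow, here is a minimal sketch using the open-source Nengo Python package (written against the API as I understand it; exact arguments and defaults may differ between versions, and this runs on the reference CPU backend rather than neuromorphic hardware): a sine-wave input drives an ensemble of spiking neurons, and a probe records the decoded output.

```python
import numpy as np
import nengo

with nengo.Network() as model:
    stimulus = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # a 1 Hz input signal
    neurons = nengo.Ensemble(n_neurons=100, dimensions=1)    # population of spiking neurons
    nengo.Connection(stimulus, neurons)                      # feed the signal into the population
    readout = nengo.Probe(neurons, synapse=0.01)             # record the decoded output

with nengo.Simulator(model) as sim:   # reference backend; other backends target other hardware
    sim.run(1.0)

print(sim.data[readout][:5])
```

The point of the compiler is that the same model description can, in principle, be handed to different Nengo backends, which is the hardware portability described above.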
“Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo,” Peter Suma, a trained computer scientist and the other co-CEO of Applied Brain Research, tells me.
Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise for being the most complex brain model ever simulated on a computer. Spaun demonstrated that computers could be made to interact fluidly with the environment and perform human-like cognitive tasks, like recognizing images and controlling a robot arm that writes down what it sees. The machine wasn’t perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9,000 times faster, using less energy than it would on conventional CPUs, and by the end of 2017, all of Spaun will be running on neuromorphic hardware.
Eliasmith won NSERC’s John C. Polanyi Award, Canada’s highest recognition for a breakthrough scientific achievement, for that project, and once Suma came across the research, the pair joined forces to commercialize these tools.
“While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context-aware AIs,” says Suma. He points out that while today’s AIs like Siri remain offline until explicitly called into action, we’ll soon have artificial agents that are ‘always on’ and ever-present in our lives.
“Imagine a Siri that listens to and sees all of your conversations and interactions. You’ll be able to ask it things like ‘Who did I have that conversation with about doing the launch for our new product in Tokyo?’ or ‘What was that idea for my wife’s birthday gift that Melissa suggested?’” he says.
When I raised concerns that some company might then have an uninterrupted window into even the most intimate parts of my life, I was reminded that because the AI would be processed locally on the device, there’s no need for that information to touch a server owned by a big company. And for Eliasmith, this ‘always on’ component is a necessary step towards true machine cognition. “The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to is that the latter always operate in real time. Bodies and brains are built to work with the physics of the world,” he says.
Already, major players across the IT industry are racing to get their AI services into the hands of users. Companies like Apple, Facebook, Amazon, and even Samsung are developing conversational assistants they hope will one day become digital helpers.
ORIGINAL: Wired
Monday 6 March 2017

A Giant Neuron Has Been Found Wrapped Around the Entire Circumference of the Brain

By Hugo Angel,

Image credit: Allen Institute for Brain Science

This could be where consciousness forms. For the first time, scientists have detected a giant neuron wrapped around the entire circumference of a mouse’s brain, and it’s so densely connected across both hemispheres that it could finally explain the origins of consciousness.

Using a new imaging technique, the team detected the giant neuron emanating from one of the best-connected regions in the brain, and say it could be coordinating signals from different areas to create conscious thought.

This recently discovered neuron is one of three that have been detected for the first time in a mammal’s brain, and the new imaging technique could help us figure out if similar structures have gone undetected in our own brains for centuries.

At a recent meeting of the Brain Research through Advancing Innovative Neurotechnologies initiative in Maryland, a team from the Allen Institute for Brain Science described how all three neurons stretch across both hemispheres of the brain, but the largest one wraps around the organ’s circumference like a “crown of thorns”.
You can see them highlighted in the image at the top of the page.

Lead researcher Christof Koch told Sara Reardon at Nature that they’ve never seen neurons extend so far across both regions of the brain before.
Oddly enough, all three giant neurons happen to emanate from a part of the brain that’s shown intriguing connections to human consciousness in the past – the claustrum, a thin sheet of grey matter that could be the most connected structure in the entire brain, based on volume.

This relatively small region is hidden between the inner surface of the neocortex in the centre of the brain, and communicates with almost all regions of the cortex to achieve many higher cognitive functions such as

  • language,
  • long-term planning, and
  • advanced sensory tasks such as seeing and hearing.

“Advanced brain-imaging techniques that look at the white matter fibres coursing to and from the claustrum reveal that it is a neural Grand Central Station,” Koch wrote for Scientific American back in 2014. “Almost every region of the cortex sends fibres to the claustrum.”

The claustrum is so densely connected to several crucial areas in the brain that Francis Crick, of DNA double helix fame, referred to it as a “conductor of consciousness” in a 2005 paper co-written with Koch.

They suggested that it connects all of our external and internal perceptions together into a single unifying experience, like a conductor synchronises an orchestra, and strange medical cases in the past few years have only made their case stronger.

Back in 2014, a 54-year-old woman checked into the George Washington University Medical Faculty Associates in Washington, DC, for epilepsy treatment.

This involved gently probing various regions of her brain with electrodes to narrow down the potential source of her epileptic seizures, but when the team started stimulating the woman’s claustrum, they found they could effectively ‘switch’ her consciousness off and on again.

Helen Thomson reported for New Scientist at the time:
“When the team zapped the area with high-frequency electrical impulses, the woman lost consciousness. She stopped reading and stared blankly into space, she didn’t respond to auditory or visual commands and her breathing slowed.

As soon as the stimulation stopped, she immediately regained consciousness with no memory of the event. The same thing happened every time the area was stimulated during two days of experiments.”

According to Koch, who was not involved in the study, this kind of abrupt and specific ‘stopping and starting’ of consciousness had never been seen before.

Another experiment in 2015 examined the effects of claustrum lesions on the consciousness of 171 combat veterans with traumatic brain injuries.

They found that claustrum damage was associated with the duration, but not frequency, of loss of consciousness, suggesting that it could play an important role in the switching on and off of conscious thought, but another region could be involved in maintaining it.

And now Koch and his team have discovered extensive neurons in mouse brains emanating from this mysterious region.

In order to map neurons, researchers usually have to inject individual nerve cells with a dye, cut the brain into thin sections, and then trace the neuron’s path by hand.

It’s a surprisingly rudimentary technique for a neuroscientist to have to perform, and given that they have to destroy the brain in the process, it’s not one that can be done regularly on human organs.

Koch and his team wanted to come up with a technique that was less invasive, and engineered mice that could have specific genes in their claustrum neurons activated by a specific drug.

“When the researchers fed the mice a small amount of the drug, only a handful of neurons received enough of it to switch on these genes,” Reardon reports for Nature.

That resulted in production of a green fluorescent protein that spread throughout the entire neuron. The team then took 10,000 cross-sectional images of the mouse brain, and used a computer program to create a 3D reconstruction of just three glowing cells.

We should keep in mind that just because these new giant neurons are connected to the claustrum doesn’t mean that Koch’s hypothesis about consciousness is correct – we’re a long way from proving that yet.

It’s also important to note that these neurons have only been detected in mice so far, and the research has yet to be published in a peer-reviewed journal, so we need to wait for further confirmation before we can really delve into what this discovery could mean for humans.

But the discovery is an intriguing piece of the puzzle that could help us make sense of this crucial but enigmatic region of the brain, and how it could relate to the human experience of conscious thought.

The research was presented at the 15 February meeting of the Brain Research through Advancing Innovative Neurotechnologies initiative in Bethesda, Maryland.

ORIGINAL: ScienceAlert

BEC CREW
28 FEB 2017