

The Chinese Artificial Intelligence Revolution

By Hugo Angel,

Image: The Bund, by Shizhao. This file is licensed under the Creative Commons Attribution 1.0 Generic license.
The artificial intelligence (AI) world summit took place in Singapore on 3 and 4 October 2017 (The AI Summit Singapore). If we follow this global trend of heavy emphasis on AI, we can note the convergence between artificial intelligence and the emergence of “smart cities” in Asia, especially in China (Imran Khan, “Asia is leading the ‘smart city’ charge, but we’re not there yet”, TechinAsia, January 19, 2016). The development of artificial intelligence is indeed intertwined with the ongoing urbanization of the Chinese population.

This “intelligentization” of smart cities in China is driven by the need to manage urban growth while adapting urban areas to emerging energy, water, food and health challenges, through the processing of big data by artificial intelligence (Jean-Michel Valantin, “China: Towards the digital ecological revolution?”, The Red (Team) Analysis Society, October 22, 2017). Reciprocally, smart urban development is itself a powerful driver, among others, of the development of artificial intelligence (Linda Poon, “What artificial intelligence reveals about urban change?”, City Lab, July 13, 2017).

In this article, we shall thus focus upon the combination of artificial intelligence and cities that creates the so-called “smart cities” in China. After presenting what this combination looks like through Chinese examples, we shall explain how this trend is being implemented. Finally, we shall see how the development of artificial intelligence within the latest generations of smart cities is disrupting geopolitics through the combination of industry and intelligentization.

Artificial intelligence and smart cities
In China, the urban revolution induced by the acceleration of the rural exodus is entwined with the digital and artificial intelligence revolution. This can be seen through the national program of urban development that is transforming “small” (3 million people) and middle-size (5 million people) cities into smart cities. The 95 new Chinese smart cities are meant to house the 250 million people expected to relocate to cities between the end of 2017 and 2026 (Chris Weller, “Here’s China’s genius plan to move 250 million people from farms to cities”, Business Insider, 5 August 2015). These 95 cities are themselves part of the 500 smart cities that are expected to be developed before the end of 2017 (“Chinese ‘smart cities’ to number 500 before end of 2017”, China Daily, April 21, 2017).

In order to manage the mammoth challenges of these huge cities, artificial intelligence is on the rise. Deep learning is notably the type of AI used to make these cities smart. It is both able to process the massive flows of data generated by cities and made possible by those exponentially growing flows of big data, since these very data allow the AI to learn by itself, creating, among other things, the internal representations needed to apprehend new kinds of data and issues (Michael Copeland, “What’s the difference between AI, machine learning and deep learning?”, NVIDIA Blog, July 29, 2016).
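
To make this more concrete, here is a minimal sketch of what such data-driven learning can look like in practice. The sensor features, the synthetic data and the small neural network below are invented for illustration; they are not Alibaba's or Hangzhou's actual systems:

    # Minimal illustration: learn to predict a congestion score from city sensor data.
    # All data and features here are synthetic placeholders, not a real "city brain".
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    # Hypothetical features: hour of day, vehicle count, average speed, air-quality index
    X = np.column_stack([
        rng.integers(0, 24, n),          # hour of day
        rng.poisson(300, n),             # vehicles counted at an intersection
        rng.normal(35, 10, n),           # average speed (km/h)
        rng.normal(80, 25, n),           # air-quality index
    ])
    # Synthetic target: a congestion score the model must learn to reproduce from the sensors
    y = 0.4 * X[:, 1] / 300 - 0.5 * X[:, 2] / 35 + rng.normal(0, 0.05, n)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
    )
    model.fit(X_train, y_train)
    print("R^2 on held-out sensor readings:", round(model.score(X_test, y_test), 3))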

For example, since 2016, the Hangzhou municipal government has integrated artificial intelligence, notably with “city brain”, which helps improve traffic efficiency through the use of the big data streams generated by a myriad of sensors and cameras. The “city brain” project is led by the giant technology company Alibaba. This “intelligentization” of traffic management helps reduce traffic jams and air pollution, and improves street surveillance, for the 9 million residents of Hangzhou. However, it is only a first step towards turning the city into an intelligent and sustainable smart city (Du Yifei, “Hangzhou growing “smarter” thanks to AI technology”, People’s Daily, October 20, 2017).

“Intelligentizing cities”

Through the developing Internet of Things (IoT), the convergence of “intelligent” infrastructures, big data management, and urban artificial intelligence is going to be increasingly important for improving traffic, and thus energy efficiency, air quality and economic development (Sarah Hsu, “China is investing heavily into Artificial intelligence, and could soon catch up with US”, Forbes, July 3, 2017). The Hangzhou experiment is being duplicated in Suzhou, Quzhou and Macao.

Meanwhile, Baidu Inc., China’s largest search engine, is developing a partnership with Shanxi province in order to implement “city brain”, dedicated to creating smart cities in the northern province while improving coal mining management and chemical treatment (“Baidu partners with Shanxi province to integrate AI with city management”, China Money Network, July 13). As a result, AI is going to be used to reduce reliance on this energy source, which is also responsible for the Chinese “airpocalypse” (Jean-Michel Valantin, “The Arctic, Russia and China’s energy transition”, The Red (Team) Analysis Society, February 2, 2017).

In the meantime, Tencent, another mammoth Chinese technology company, is multiplying partnerships with 14 Chinese provinces and 50 cities to develop and integrate urban artificial intelligences. At the same time, the Hong Kong government is getting ready to implement an artificial intelligence program to tackle 21st century urban challenges, chief among them urban development management and climate change impacts.

When looking closely at this development of artificial intelligence to support the management of Chinese cities, and at the multiplication of smart cities, we notice that both also coincide with the political will to slow the growth of the already clogged Chinese megacities of more than ten million people – such as

  • Beijing (21.5 million people), 
  • Shanghai (25 million), and 
  • the urban areas around them – as well as of the network of very large cities of 5 to 10 million people. 

Indeed, the problem is that these very large cities and megalopolises have reached highly dangerous levels of water and air pollution – hence the “airpocalypse” created by the noxious mix of car fumes and coal plant exhaust.

From the intelligentization of Chinese cities to the “smart cars revolution”
This Chinese AI-centred urban development strategy also drives a gigantic urban, technological and industrial revolution that could turn China into a world leader

  • in clean energy, 
  • in electric and smart cars and 
  • in urban development. 

The development of the new generations of smart cars is thus going to be coupled with the latest advances in artificial intelligence. As a result, China can position itself in the “middle” of the major trends of globalization. Indeed, smart electric cars are the “new frontier” of the car industry that supports the economies of great economic powers such as the U.S., Japan, and Germany (Michael Klare, Blood and Oil, 2005), while artificial intelligence is the new frontier of industry and of the building of the future. The emergence of China as an “electric and smart cars” provider could have massive implications for the industrial and economic development of these countries.


In 2015, the number of cars in Shanghai grew by more than 13%, reaching a staggering total of 2.5 million cars in a megacity of 25 million people. In order to mitigate the impact of this car flow on the atmosphere, the municipal authorities are using new “smart street” technologies. For example, the Ningbo-Hangzhou-Shanghai highway, used daily by more than 40,000 cars, is being equipped with a cyber network allowing drivers to pay tolls in advance with their smartphones. This application allows a significant decrease in pollution, because the lines of thousands of cars stopping in front of toll booths are reduced (“Chinese ‘smart cities’ to number 500 before end of 2017”, China Daily, April 21, 2017).

In the meantime, the tech giant Tencent, creator of WeChat, the enormous Chinese social network with more than 889 million monthly users (“2017 WeChat Users Behavior Report”, China Channel, April 25, 2017), is developing a partnership with Guangzhou Automobile Group to develop smart cars. Baidu is doing the same with the Chinese carmakers BYD, Chery and BAIC, while launching Apollo, its open-source platform for AI-powered smart cars. Alibaba, the giant of e-commerce, with more than 454 million active buyers during the first quarter of 2017 (“Number of active buyers across Alibaba’s online shopping properties from 2nd quarter 2012 to 1st quarter 2017 (in millions)”, Statista, The Statistical Portal, 2017), is developing a partnership with the Chinese brand SAIC Motors and has already launched the YunOS system, which connects cars to the cloud and internet services (Charles Clover and Sherry Fei Ju, “Tencent and Guangzhou team up to produce smart cars”, Financial Times, September 19, 2017).

It must be kept in mind that these three Chinese tech giants are thus connecting the development of their own services with the development of artificial intelligence, notably through smart cars, in the context of the urban, digital and ecological transformation of China. In other words, “city brains” and “smart cars” are going to become an immense “digital ecosystem” that artificial intelligences are going to manage, thus giving China an imposing technological edge.

This means that artificial intelligence is becoming the common support of the social and urban transformation of China, as well as the ways and means of the transformation of the Chinese urban network into smart cities. It is also a scientific, technological and industrial revolution.

This revolution is going to be based on the new international distribution of power between artificial intelligence-centred countries, and the others.

Indeed, in China, artificial intelligence is creating new social, economic and political conditions. This means that China is using artificial intelligence in order to manage its own social evolution, while becoming a mammoth artificial intelligence great power.

It now remains to be seen how the latest generations of smart cities, powered by developing artificial intelligence, accompany the way some countries are getting ready for the economic, industrial and ecological, as well as security and military, challenges of the 21st century, and how this combination of urban and artificial intelligence development is preparing an immense geopolitical revolution.

About the author: Jean-Michel Valantin (PhD Paris) leads the Environment and Geopolitics Department of The Red (Team) Analysis Society. He is specialised in strategic studies and defence sociology with a focus on environmental geostrategy.

ORIGINAL: RedAnalysis

Artificial intelligence pioneer says we need to start over

By Hugo Angel,

Geoffrey Hinton harbors doubts about AI’s current workhorse. (Johnny Guatto / University of Toronto)

Steve LeVine Sep 15

In 1986, Geoffrey Hinton co-authored a paper that, three decades later, is central to the explosion of artificial intelligence. But Hinton says his breakthrough method should be dispensed with, and a new path to AI found.

Speaking with Axios on the sidelines of an AI conference in Toronto on Wednesday, Hinton, a professor emeritus at the University of Toronto and a Google researcher, said he is now “deeply suspicious” of back-propagation, the workhorse method that underlies most of the advances we are seeing in the AI field today, including the capacity to sort through photos and talk to Siri. “My view is throw it all away and start again,” he said.

The bottom line: Other scientists at the conference said back-propagation still has a core role in AI’s future. But Hinton said that, to push materially ahead, entirely new methods will probably have to be invented. “Max Planck said, ‘Science progresses one funeral at a time.’ The future depends on some graduate student who is deeply suspicious of everything I have said.”

How it works: In back-propagation, labels or “weights” are used to represent a photo or voice within a brain-like neural layer. The weights are then adjusted and readjusted, layer by layer, until the network can perform an intelligent function with the fewest possible errors.
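
As a rough illustration of the method Hinton is describing, here is a toy two-layer network trained with back-propagation on the classic XOR problem. This is a generic textbook sketch, not Hinton's own code or any production system:

    # Toy back-propagation on a tiny two-layer network (learning XOR).
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden weights
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output weights
    sigmoid = lambda z: 1 / (1 + np.exp(-z))
    lr = 1.0

    for step in range(10000):
        # Forward pass: each layer transforms the previous layer's output.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: propagate the error back through the layers
        # and nudge every weight a little in the direction that reduces it.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

    print(out.round(2))  # should approach the XOR targets [0, 1, 1, 0]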

But Hinton suggested that, to get to where neural networks are able to become intelligent on their own, what is known as “unsupervised learning,” a different approach will be needed. “I suspect that means getting rid of back-propagation,” he said. “I don’t think it’s how the brain works. We clearly don’t need all the labeled data.”

Researchers take major step forward in Artificial Intelligence

By Hugo Angel,

The long-standing dream of using Artificial Intelligence (AI) to build an artificial brain has taken a significant step forward, as a team led by Professor Newton Howard from the University of Oxford has successfully prototyped a nanoscale, AI-powered, artificial brain in the form factor of a high-bandwidth neural implant.
Professor Newton Howard (pictured above and below) holding parts of the implant device
In collaboration with INTENT LTD, Qualcomm Corporation, Intel Corporation, Georgetown University and the Brain Sciences Foundation, Professor Howard’s Oxford Computational Neuroscience Lab in the Nuffield Department of Surgical Sciences has developed the proprietary algorithms and the optoelectronics required for the device. Testing in rodents is on target to begin very soon.
This achievement caps over a decade of research by Professor Howard at MIT’s Synthetic Intelligence Lab and the University of Oxford, work that resulted in several issued US patents on the technologies and algorithms that power the device: 
  • the Fundamental Code Unit of the Brain (FCU), 
  • the Brain Code (BC), and 
  • the Biological Co-Processor (BCP). 

These are the latest advanced foundations for any eventual merger between biological and machine intelligence. Ni2o (pronounced “Nitoo”) is the entity that Professor Howard licensed to further develop, market and promote these technologies.

The Biological Co-Processor is unique in that it uses advanced nanotechnology, optogenetics and deep machine learning to intelligently map internal events, such as neural spiking activity, to external physiological, linguistic and behavioral expression. The implant contains over a million carbon nanotubes, each of which is 10,000 times smaller than the width of a human hair. Carbon nanotubes provide a natural, high-bandwidth interface, as they conduct heat, light and electricity instantaneously, updating the neural laces. They adhere to neuronal constructs and even promote neural growth. Qualcomm team leader Rudy Beraha commented, ‘Although the prototype unit shown today is tethered to external power, a commercial Brain Co-Processor unit will be wireless and inductively powered, enabling it to be administered with a minimally invasive procedure.’
The device uses a combination of methods to write to the brain, including 
  • pulsed electricity, 
  • light, and 
  • various molecules that stimulate or inhibit the activation of specific neuronal groups. 
These can be targeted to elicit a desired response, such as releasing chemicals in patients suffering from a neurological disorder or imbalance. The BCP is designed as a fully integrated system that uses the brain’s own internal systems and chemistries to pattern and mimic healthy brain behavior, an approach that stands in stark contrast to the current state of the art, which is simply to apply mild electrocution to problematic regions of the brain. 
Therapeutic uses
The Biological Co-Processor promises to provide relief for millions of patients suffering from neurological, psychiatric and psychological disorders as well as degenerative diseases. Initial therapeutic uses will likely be for patients with traumatic brain injuries and neurodegenerative disorders, such as Alzheimer’s, as the BCP will strengthen the weak, shortening connections responsible for lost memories and skills. Once implanted, the device provides a closed-loop, self-learning platform able to both determine and administer the perfect balance of pharmaceutical, electroceutical, genomeceutical and optoceutical therapies.
Dr Richard Wirt, a Senior Fellow at Intel Corporation and Co-Founder of INTENT, Ni2o’s partner in bringing the BCP to market, commented on the device, saying, ‘In the immediate timeframe, this device will have many benefits for researchers, as it could be used to replicate an entire brain image, synchronously mapping internal and external expressions of human response. Over the long term, the potential therapeutic benefits are unlimited.’
The brain controls all organs and systems in the body, so the cure to nearly every disease resides there. – Professor Newton Howard
Rather than simply disrupting neural circuits, the machine learning systems within the BCP are designed to interpret these signals and intelligently read and write to the surrounding neurons. These capabilities could be used to repair degenerative or trauma-induced damage and perhaps write the affected memories and skills to other, healthier areas of the brain. 
One day, these capabilities could also be used in healthy patients to radically augment human ability and proactively improve health. As Professor Howard points out: ‘The brain controls all organs and systems in the body, so the cure to nearly every disease resides there.’ Speaking more broadly, Professor Howard sees the merging of man with machine as our inevitable destiny, claiming it to be ‘the next step on the blueprint that the author of it all built into our natural architecture.’
With the resurgence of neuroscience and of AI-enhanced machine learning, there has been renewed interest in brain implants. This past March, Elon Musk and Bryan Johnson independently announced that they are focusing on, and investing in, the brain-computer interface domain. 
When asked about these new competitors, Professor Howard said he is happy to see all these new startups and established names getting into the field – he only wonders what took them so long, stating: ‘I would like to see us all working together, as we have already established a mathematical foundation and software framework to solve so many of the challenges they will be facing. We could all get there faster if we could work together – after all, the patient is the priority.’
ORIGINAL: NDS Oxford
2 June 2017 

Scientists Have Created an Artificial Synapse That Can Learn Autonomously

By Hugo Angel,

Sergey Tarasov/Shutterstock
Developments and advances in artificial intelligence (AI) have been due in large part to technologies that mimic how the human brain works. In the world of information technology, such AI systems are called neural networks.
These contain algorithms that can be trained, among other things, to imitate how the brain recognises speech and images. However, running an Artificial Neural Network consumes a lot of time and energy.
Now, researchers from the French National Centre for Scientific Research (CNRS), Thales, and the Universities of Bordeaux, Paris-Sud and Evry have developed an artificial synapse, called a memristor, directly on a chip.
It paves the way for intelligent systems that require less time and energy to learn, and that can learn autonomously.
In the human brain, synapses work as connections between neurons. The connections are reinforced and learning is improved the more these synapses are stimulated.
The memristor works in a similar fashion. It’s made up of a thin ferroelectric layer (which can be spontaneously polarised) enclosed between two electrodes.
Using voltage pulses, its resistance can be adjusted, analogous to the strengthening and weakening of biological synapses. The synaptic connection is strong when resistance is low, and vice versa.
Figure 1
(a) Sketch of pre- and post-neurons connected by a synapse. The synaptic transmission is modulated by the causality (Δt) of neuron spikes. (b) Sketch of the ferroelectric memristor where a ferroelectric tunnel barrier of BiFeO3 (BFO) is sandwiched between a bottom electrode of (Ca,Ce)MnO3 (CCMO) and a top submicron pillar of Pt/Co. YAO stands for YAlO3. (c) Single-pulse hysteresis loop of the ferroelectric memristor displaying clear voltage thresholds. (d) Measurements of STDP in the ferroelectric memristor. Modulation of the device conductance (ΔG) as a function of the delay (Δt) between pre- and post-synaptic spikes. Seven data sets were collected on the same device showing the reproducibility of the effect. The total length of each pre- and post-synaptic spike is 600 ns.
Source: Nature Communications
The memristor’s capacity for learning is based on this adjustable resistance.
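
A highly simplified sketch of that learning rule: the change in conductance depends on the timing difference between pre- and post-synaptic spikes, strengthening the connection when the pre-synaptic spike comes first and weakening it otherwise. The constants below are arbitrary illustration values, not the measured parameters of the BiFeO3 device:

    # Toy spike-timing-dependent plasticity (STDP) rule for an artificial synapse:
    # conductance G increases when the pre-synaptic spike precedes the post-synaptic
    # spike (delta_t > 0) and decreases otherwise, with an exponential fall-off.
    import math

    def stdp_update(delta_t_ns, g, a_plus=0.05, a_minus=0.05, tau_ns=300.0):
        """Return the new conductance after one pre/post spike pair."""
        if delta_t_ns > 0:      # pre before post: potentiate (strengthen)
            dg = a_plus * math.exp(-delta_t_ns / tau_ns)
        else:                   # post before pre: depress (weaken)
            dg = -a_minus * math.exp(delta_t_ns / tau_ns)
        return min(max(g + dg, 0.0), 1.0)   # keep conductance in a bounded range

    g = 0.5
    for dt in (50, 200, -100, -400):        # spike delays in nanoseconds
        g = stdp_update(dt, g)
        print(f"delta_t = {dt:>5} ns -> G = {g:.3f}")
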
AI systems have developed considerably in the past couple of years. Neural networks built with learning algorithms are now capable of performing tasks which synthetic systems previously could not do.
For instance, intelligent systems can now compose music, play games and beat human players, or do your taxes. Some can even identify suicidal behaviour, or differentiate between what is lawful and what isn’t.
This is all thanks to AI’s capacity to learn, the only limitation of which is the amount of time and effort it takes to consume the data that serve as its springboard.
With the memristor, this learning process can be greatly improved. Work continues on the memristor, particularly on exploring ways to optimise its function.
For starters, the researchers have successfully built a physical model to help predict how it functions.
Their work is published in the journal Nature Communications.
ORIGINAL: ScienceAlert
DOM GALEON, FUTURISM
7 APR 2017

Google DeepMind has built an AI machine that could learn as quickly as humans before long

By Hugo Angel,

Neural Episodic Control. Architecture of episodic memory module for a single action

Emerging Technology from the arXiv

Intelligent machines have humans in their sights.

Deep-learning machines already have superhuman skills when it comes to tasks such as

  • face recognition,
  • video-game playing, and
  • even the ancient Chinese game of Go.

So it’s easy to think that humans are already outgunned.

But not so fast. Intelligent machines still lag behind humans in one crucial area of performance: the speed at which they learn. When it comes to mastering classic video games, for example, the best deep-learning machines take some 200 hours of play to reach the same skill levels that humans achieve in just two hours.

So computer scientists would dearly love to have some way to speed up the rate at which machines learn.

Today, Alexander Pritzel and pals at Google’s DeepMind subsidiary in London claim to have done just that. These guys have built a deep-learning machine that is capable of rapidly assimilating new experiences and then acting on them. The result is a machine that learns significantly faster than others and has the potential to match humans in the not too distant future.

First, some background.

Deep learning uses layers of neural networks to look for patterns in data. When a single layer spots a pattern it recognizes, it sends this information to the next layer, which looks for patterns in this signal, and so on.

So in face recognition,

  • one layer might look for edges in an image,
  • the next layer for circular patterns of edges (the kind that eyes and mouths make), and
  • the next for triangular patterns such as those made by two eyes and a mouth.
  • When all this happens, the final output is an indication that a face has been spotted.

Of course, the devil is in the details. There are various systems of feedback to allow the system to learn by adjusting various internal parameters such as the strength of connections between layers. These parameters must change slowly, since a big change in one layer can catastrophically affect learning in the subsequent layers. That’s why deep neural networks need so much training and why it takes so long.

Pritzel and co have tackled this problem with a technique they call Neural Episodic Control. “Neural episodic control demonstrates dramatic improvements on the speed of learning for a wide range of environments,” they say. “Critically, our agent is able to rapidly latch onto highly successful strategies as soon as they are experienced, instead of waiting for many steps of optimisation.”

The basic idea behind DeepMind’s approach is to copy the way humans and animals learn quickly. The general consensus is that humans can tackle situations in two different ways.

  • If the situation is familiar, our brains have already formed a model of it, which they use to work out how best to behave. This uses a part of the brain called the prefrontal cortex.
  • But when the situation is not familiar, our brains have to fall back on another strategy. This is thought to involve a much simpler test-and-remember approach involving the hippocampus. So we try something and remember the outcome of this episode. If it is successful, we try it again, and so on. But if it is not a successful episode, we try to avoid it in future.

This episodic approach suffices in the short term while our prefrontal brain learns. But it is soon outperformed by the prefrontal cortex and its model-based approach.

Pritzel and co have used this approach as their inspiration. Their new system has two approaches.

  • The first is a conventional deep-learning system that mimics the behaviour of the prefrontal cortex.
  • The second is more like the hippocampus. When the system tries something new, it remembers the outcome.

But crucially, it doesn’t try to learn what to remember. Instead, it remembers everything. “Our architecture does not try to learn when to write to memory, as this can be slow to learn and take a significant amount of time,” say Pritzel and co. “Instead, we elect to write all experiences to the memory, and allow it to grow very large compared to existing memory architectures.”

They then use a set of strategies to read from this large memory quickly. The result is that the system can latch onto successful strategies much more quickly than conventional deep-learning systems.
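
A minimal sketch of this episodic-memory idea: store every experience as a key (a state embedding) paired with a value (the return that followed), and estimate the value of a new state by looking up its nearest stored neighbours. This is a simplified illustration of the principle, not DeepMind's actual Neural Episodic Control implementation:

    # Simplified episodic memory in the spirit of Neural Episodic Control:
    # write every experience, read by nearest-neighbour lookup over stored keys.
    import numpy as np

    class EpisodicMemory:
        def __init__(self):
            self.keys, self.values = [], []

        def write(self, key, value):
            # NEC-style: never decide what to remember, just append everything.
            self.keys.append(np.asarray(key, dtype=float))
            self.values.append(float(value))

        def read(self, query, k=5):
            # Estimate the value of a new state from its k closest stored states,
            # weighting each neighbour by inverse distance.
            keys = np.stack(self.keys)
            dists = np.linalg.norm(keys - np.asarray(query, dtype=float), axis=1)
            nearest = np.argsort(dists)[:k]
            weights = 1.0 / (dists[nearest] + 1e-3)
            values = np.array(self.values)[nearest]
            return float(np.sum(weights * values) / np.sum(weights))

    memory = EpisodicMemory()
    rng = np.random.default_rng(0)
    for _ in range(1000):                       # hypothetical past experiences
        state = rng.normal(size=4)              # a state embedding
        memory.write(state, value=state.sum())  # the return observed afterwards
    print(memory.read(rng.normal(size=4)))      # value estimate for a new state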

They go on to demonstrate how well all this works by training their machine to play classic Atari video games, such as Breakout, Pong, and Space Invaders. (This is a playground that DeepMind has used to train many deep-learning machines.)

The team, which includes DeepMind cofounder Demis Hassabis, shows that neural episodic control vastly outperforms other deep-learning approaches in the speed at which it learns. “Our experiments show that neural episodic control requires an order of magnitude fewer interactions with the environment,” they say.

That’s impressive work with significant potential. The researchers say that an obvious extension of this work is to test their new approach on more complex 3-D environments.

It’ll be interesting to see what environments the team chooses and the impact this will have on the real world. We’ll look forward to seeing how that works out.

Ref: Neural Episodic Control : arxiv.org/abs/1703.01988

ORIGINAL: MIT Technology Review

The future of AI is neuromorphic. Meet the scientists building digital ‘brains’ for your phone

By Hugo Angel,

Neuromorphic chips are being designed to specifically mimic the human brain – and they could soon replace CPUs
Image: brain activity map (Neuroscape Lab)
AI services like Apple’s Siri and others operate by sending your queries to faraway data centers, which send back responses. The reason they rely on cloud-based computing is that today’s electronics don’t come with enough computing power to run the processing-heavy algorithms needed for machine learning. The typical CPUs most smartphones use could never handle a system like Siri on the device. But Dr. Chris Eliasmith, a theoretical neuroscientist and co-CEO of Canadian AI startup Applied Brain Research, is confident that a new type of chip is about to change that.
“Many have suggested Moore’s law is ending and that means we won’t get ‘more compute’ cheaper using the same methods,” Eliasmith says. He’s betting on the proliferation of ‘neuromorphics’ – a type of computer chip that is not yet widely known but already being developed by several major chip makers.
Traditional CPUs process instructions based on “clocked time” – information is transmitted at regular intervals, as if managed by a metronome. By packing in digital equivalents of neurons, neuromorphics communicate in parallel (and without the rigidity of clocked time) using “spikes” – bursts of electric current that can be sent whenever needed. Just like our own brains, the chip’s neurons communicate by processing incoming flows of electricity – each neuron able to determine from the incoming spike whether to send current out to the next neuron.
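
To illustrate the spiking behaviour described here, a toy leaky integrate-and-fire neuron in plain Python: it accumulates incoming current and emits a spike only when its membrane potential crosses a threshold, rather than producing output on every clock tick. The parameters are illustrative, not those of any particular neuromorphic chip:

    # Leaky integrate-and-fire neuron: spikes are emitted only when needed,
    # i.e. when accumulated input pushes the membrane potential over threshold.
    import numpy as np

    def simulate_lif(input_current, dt=1e-3, tau=0.02, threshold=1.0, v_reset=0.0):
        v, spikes = 0.0, []
        for t, i_in in enumerate(input_current):
            v += dt * (-v + i_in) / tau          # leaky integration of input current
            if v >= threshold:                   # threshold crossed: emit a spike
                spikes.append(t * dt)
                v = v_reset                      # reset after firing
        return spikes

    rng = np.random.default_rng(0)
    current = rng.uniform(0.0, 2.5, size=1000)   # 1 second of noisy input
    print(f"{len(simulate_lif(current))} spikes in 1 s of simulated input")
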
What makes this a big deal is that these chips require far less power to process AI algorithms. For example, one neuromorphic chip made by IBM contains five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of power. An Intel processor would use anywhere from 35 to 140 watts, or up to 2000 times more power.
Eliasmith points out that neuromorphics aren’t new and that their designs have been around since the 80s. Back then, however, the designs required specific algorithms be baked directly into the chip. That meant you’d need one chip for detecting motion, and a different one for detecting sound. None of the chips acted as a general processor in the way that our own cortex does.
This was partly because there hadn’t been any way for programmers to design algorithms that could do much with a general-purpose chip. So even as these brain-like chips were being developed, building algorithms for them remained a challenge.
 
Eliasmith and his team are keenly focused on building tools that would allow a community of programmers to deploy AI algorithms on these new cortical chips.
Central to these efforts is Nengo, a compiler that developers can use to build their own algorithms for AI applications that will operate on general purpose neuromorphic hardware. Compilers are a software tool that programmers use to write code, and that translate that code into the complex instructions that get hardware to actually do something. What makes Nengo useful is its use of the familiar Python programming language – known for its intuitive syntax – and its ability to put the algorithms on many different hardware platforms, including neuromorphic chips. Pretty soon, anyone with an understanding of Python could be building sophisticated neural nets made for neuromorphic hardware.
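
For a sense of what that looks like, here is a small model written against Nengo's documented Python API: an input signal drives a population of spiking neurons, which computes a simple function and passes the result on. This is a generic sketch of how such models are typically defined; the exact code for a given neuromorphic backend may differ:

    # A tiny Nengo model: an input signal is fed into a population of spiking
    # neurons, which computes its square and feeds the result to a second population.
    import numpy as np
    import nengo

    model = nengo.Network()
    with model:
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))       # 1 Hz input signal
        a = nengo.Ensemble(n_neurons=100, dimensions=1)           # spiking population
        b = nengo.Ensemble(n_neurons=100, dimensions=1)
        nengo.Connection(stim, a)
        nengo.Connection(a, b, function=lambda x: x ** 2)         # compute x^2 in spikes
        probe = nengo.Probe(b, synapse=0.01)                      # filtered readout

    with nengo.Simulator(model) as sim:                           # reference CPU backend
        sim.run(1.0)
    print(sim.data[probe][-5:])                                   # last few decoded values
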
“Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo,” Peter Suma, a trained computer scientist and the other CEO of Applied Brain Research, tells me.
Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise for being the most complex brain model ever simulated on a computer. Spaun demonstrated that computers could be made to interact fluidly with the environment, and perform human-like cognitive tasks like recognizing images and controlling a robot arm that writes down what it sees. The machine wasn’t perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9,000x faster, using less energy than it would on conventional CPUs – and by the end of 2017, all of Spaun will be running on neuromorphic hardware.
Eliasmith won NSERC’s John C. Polanyi Award for that project – Canada’s highest recognition for a breakthrough scientific achievement – and once Suma came across the research, the pair joined forces to commercialize these tools.
“While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context-aware AIs,” says Suma. Suma points out that while today’s AIs like Siri remain offline until explicitly called into action, we’ll soon have artificial agents that are ‘always on’ and ever-present in our lives.
“Imagine a SIRI that listens and sees all of your conversations and interactions. You’ll be able to ask it things like ‘Who did I have that conversation with about doing the launch for our new product in Tokyo?’ or ‘What was that idea for my wife’s birthday gift that Melissa suggested?’,” he says.
When I raised concerns that some company might then have an uninterrupted window into even the most intimate parts of my life, I’m reminded that because the AI would be processed locally on the device, there’s no need for that information to touch a server owned by a big company. And for Eliasmith, this ‘always on’ component is a necessary step towards true machine cognition. “The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to, is the fact that the latter always operate in real-time. Bodies and brains are built to work with the physics of the world,” he says.
Already, major efforts across the IT industry are heating up to get their AI services into the hands of users. Companies like Apple, Facebook, Amazon, and even Samsung, are developing conversational assistants they hope will one day become digital helpers.
ORIGINAL: Wired
Monday 6 March 2017

Google Unveils Neural Network with “Superhuman” Ability to Determine the Location of Almost Any Image

By Hugo Angel,

Guessing the location of a randomly chosen Street View image is hard, even for well-traveled humans. But Google’s latest artificial-intelligence machine manages it with relative ease.
Here’s a tricky task. Pick a photograph from the Web at random. Now try to work out where it was taken using only the image itself. If the image shows a famous building or landmark, such as the Eiffel Tower or Niagara Falls, the task is straightforward. But the job becomes significantly harder when the image lacks specific location cues or is taken indoors or shows a pet or food or some other detail.

Nevertheless, humans are surprisingly good at this task. To help, they bring to bear all kinds of knowledge about the world such as the type and language of signs on display, the types of vegetation, architectural styles, the direction of traffic, and so on. Humans spend a lifetime picking up these kinds of geolocation cues.

So it’s easy to think that machines would struggle with this task. And indeed, they have.

Today, that changes thanks to the work of Tobias Weyand, a computer vision specialist at Google, and a couple of pals. These guys have trained a deep-learning machine to work out the location of almost any photo using only the pixels it contains.

Their new machine significantly outperforms humans and can even use a clever trick to determine the location of indoor images and pictures of specific things such as pets, food, and so on that have no location cues.

Their approach is straightforward, at least in the world of machine learning.

  • Weyand and co begin by dividing the world into a grid consisting of over 26,000 squares of varying size that depend on the number of images taken in that location.
    So big cities, which are the subjects of many images, have a more fine-grained grid structure than more remote regions where photographs are less common. Indeed, the Google team ignored areas like oceans and the polar regions, where few photographs have been taken.

 

  • Next, the team created a database of geolocated images from the Web and used the location data to determine the grid square in which each image was taken. This data set is huge, consisting of 126 million images along with their accompanying Exif location data.
  • Weyand and co used 91 million of these images to teach a powerful neural network to work out the grid location using only the image itself. Their idea is to input an image into this neural net and get as the output a particular grid location or a set of likely candidates. 
  • They then validated the neural network using the remaining 34 million images in the data set.
  • Finally they tested the network—which they call PlaNet—in a number of different ways to see how well it works.
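
A crude sketch of the adaptive gridding step described in the first item of the list above: keep subdividing a map cell while it contains more than a fixed number of training photos, so photo-dense cities end up with fine cells and sparse regions with coarse ones. This is a quadtree-style simplification, with invented thresholds and synthetic geotags, not the exact scheme used in the paper itself:

    # Quadtree-style adaptive partitioning: split a cell while it holds too many photos,
    # so photo-dense areas (cities) get fine cells and sparse areas get coarse ones.
    import numpy as np

    def partition(photos, lat_range=(-90, 90), lon_range=(-180, 180), max_photos=500):
        """Return a list of (lat_range, lon_range) cells covering the photos."""
        if len(photos) == 0:
            return []                                  # ignore empty areas (oceans, poles)
        if len(photos) <= max_photos:
            return [(lat_range, lon_range)]
        lat_mid = (lat_range[0] + lat_range[1]) / 2
        lon_mid = (lon_range[0] + lon_range[1]) / 2
        cells = []
        for lats in ((lat_range[0], lat_mid), (lat_mid, lat_range[1])):
            for lons in ((lon_range[0], lon_mid), (lon_mid, lon_range[1])):
                inside = photos[(photos[:, 0] >= lats[0]) & (photos[:, 0] < lats[1]) &
                                (photos[:, 1] >= lons[0]) & (photos[:, 1] < lons[1])]
                cells.extend(partition(inside, lats, lons, max_photos))
        return cells

    rng = np.random.default_rng(0)
    # Synthetic geotags: a dense "city" cluster plus sparse photos elsewhere
    city = rng.normal([48.85, 2.35], 0.5, size=(5000, 2))
    rural = np.column_stack([rng.uniform(-60, 70, 500), rng.uniform(-180, 180, 500)])
    cells = partition(np.vstack([city, rural]))
    print(f"{len(cells)} grid cells; each becomes one class label for the classifier")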

The results make for interesting reading. To measure the accuracy of their machine, they fed it 2.3 million geotagged images from Flickr to see whether it could correctly determine their location. “PlaNet is able to localize 3.6 percent of the images at street-level accuracy and 10.1 percent at city-level accuracy,” say Weyand and co. What’s more, the machine determines the country of origin in a further 28.4 percent of the photos and the continent in 48.0 percent of them.

That’s pretty good. But to show just how good, Weyand and co put PlaNet through its paces in a test against 10 well-traveled humans. For the test, they used an online game that presents a player with a random view taken from Google Street View and asks him or her to pinpoint its location on a map of the world.

Anyone can play at www.geoguessr.com. Give it a try—it’s a lot of fun and more tricky than it sounds.

GeoGuessr screen capture example

Needless to say, PlaNet trounced the humans. “In total, PlaNet won 28 of the 50 rounds with a median localization error of 1131.7 km, while the median human localization error was 2320.75 km,” say Weyand and co. “[This] small-scale experiment shows that PlaNet reaches superhuman performance at the task of geolocating Street View scenes.”

An interesting question is how PlaNet performs so well without being able to use the cues that humans rely on, such as vegetation, architectural style, and so on. But Weyand and co say they know why: “We think PlaNet has an advantage over humans because it has seen many more places than any human can ever visit and has learned subtle cues of different scenes that are even hard for a well-traveled human to distinguish.”

They go further and use the machine to locate images that do not have location cues, such as those taken indoors or of specific items. This is possible when images are part of albums that have all been taken at the same place. The machine simply looks through other images in the album to work out where they were taken and assumes the more specific image was taken in the same place.

That’s impressive work that shows deep neural nets flexing their muscles once again. Perhaps more impressive still is that the model uses a relatively small amount of memory unlike other approaches that use gigabytes of the stuff. “Our model uses only 377 MB, which even fits into the memory of a smartphone,” say Weyand and co.

That’s a tantalizing idea—the power of a superhuman neural network on a smartphone. It surely won’t be long now!

Ref: arxiv.org/abs/1602.05314 : PlaNet—Photo Geolocation with Convolutional Neural Networks

ORIGINAL: MIT Technology Review
by Emerging Technology from the arXiv
February 24, 2016

JPMorgan Software Does in Seconds What Took Lawyers 360,000 Hours

By Hugo Angel,

  • New software does in seconds what took staff 360,000 hours
  • Bank seeking to streamline systems, avoid redundancies

At JPMorgan Chase & Co., a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours.

The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of work each year by lawyers and loan officers. The software reviews documents in seconds, is less error-prone and never asks for vacation.

Attendees discuss software on Feb. 27, the eve of JPMorgan’s Investor Day.
Photographer: Kholood Eid/Bloomberg

While the financial industry has long touted its technological innovations, a new era of automation is now in overdrive as cheap computing power converges with fears of losing customers to startups. Made possible by investments in machine learning and a new private cloud network, COIN is just the start for the biggest U.S. bank. The firm recently set up technology hubs for teams specializing in big data, robotics and cloud infrastructure to find new sources of revenue, while reducing expenses and risks.

The push to automate mundane tasks and create new tools for bankers and clients — a growing part of the firm’s $9.6 billion technology budget — is a core theme as the company hosts its annual investor day on Tuesday.

Behind the strategy, overseen by Chief Operating Officer Matt Zames and Chief Information Officer Dana Deasy, is an undercurrent of anxiety: Though JPMorgan emerged from the financial crisis as one of few big winners, its dominance is at risk unless it aggressively pursues new technologies, according to interviews with a half-dozen bank executives.


Redundant Software

That was the message Zames had for Deasy when he joined the firm from BP Plc in late 2013. The New York-based bank’s internal systems, an amalgam from decades of mergers, had too many redundant software programs that didn’t work together seamlessly. “Matt said, ‘Remember one thing above all else: We absolutely need to be the leaders in technology across financial services,’” Deasy said last week in an interview. “Everything we’ve done from that day forward stems from that meeting.”

After visiting companies including Apple Inc. and Facebook Inc. three years ago to understand how their developers worked, the bank set out to create its own computing cloud called Gaia that went online last year. Machine learning and big-data efforts now reside on the private platform, which effectively has limitless capacity to support their thirst for processing power. The system already is helping the bank automate some coding activities and making its 20,000 developers more productive, saving money, Zames said. When needed, the firm can also tap into outside cloud services from Amazon.com Inc., Microsoft Corp. and International Business Machines Corp.

Tech Spending

JPMorgan will make some of its cloud-backed technology available to institutional clients later this year, allowing firms like BlackRock Inc. to access balances, research and trading tools. The move, which lets clients bypass salespeople and support staff for routine information, is similar to one Goldman Sachs Group Inc. announced in 2015.

JPMorgan’s total technology budget for this year amounts to 9 percent of its projected revenue — double the industry average, according to Morgan Stanley analyst Betsy Graseck. The dollar figure has inched higher as JPMorgan bolsters cyber defenses after a 2014 data breach, which exposed the information of 83 million customers.

“We have invested heavily in technology and marketing — and we are seeing strong returns,” JPMorgan said in a presentation Tuesday ahead of its investor day, noting that technology spending in its consumer bank totaled about $1 billion over the past two years.

Attendees inspect JPMorgan Markets software kiosk for Investors Day.
Photographer: Kholood Eid/Bloomberg

One-third of the company’s budget is for new initiatives, a figure Zames wants to take to 40 percent in a few years. He expects savings from automation and retiring old technology will let him plow even more money into new innovations.

Not all of those bets, which include several projects based on a distributed ledger, like blockchain, will pay off, which JPMorgan says is OK. One example executives are fond of mentioning: The firm built an electronic platform to help trade credit-default swaps that sits unused.

‘Can’t Wait’

“We’re willing to invest to stay ahead of the curve, even if in the final analysis some of that money will go to a product or a service that wasn’t needed,” Marianne Lake, the lender’s finance chief, told a conference audience in June. That’s “because we can’t wait to know what the outcome, the endgame, really looks like, because the environment is moving so fast.”

As for COIN, the program has helped JPMorgan cut down on loan-servicing mistakes, most of which stemmed from human error in interpreting 12,000 new wholesale contracts per year, according to its designers.

JPMorgan is scouring for more ways to deploy the technology, which learns by ingesting data to identify patterns and relationships. The bank plans to use it for other types of complex legal filings like credit-default swaps and custody agreements. Someday, the firm may use it to help interpret regulations and analyze corporate communications.

Another program called X-Connect, which went into use in January, examines e-mails to help employees find colleagues who have the closest relationships with potential prospects and can arrange introductions.

Creating Bots
For simpler tasks, the bank has created bots to perform functions like granting access to software systems and responding to IT requests, such as resetting an employee’s password, Zames said. Bots are expected to handle 1.7 million access requests this year, doing the work of 140 people.

Matt Zames
Photographer: Kholood Eid/Bloomberg

While growing numbers of people in the industry worry such advancements might someday take their jobs, many Wall Street personnel are more focused on benefits. A survey of more than 3,200 financial professionals by recruiting firm Options Group last year found a majority expect new technology will improve their careers, for example by improving workplace performance.

“Anything where you have back-office operations and humans kind of moving information from point A to point B that’s not automated is ripe for that,” Deasy said. “People always talk about this stuff as displacement. I talk about it as freeing people to work on higher-value things, which is why it’s such a terrific opportunity for the firm.”

To help spur internal disruption, the company keeps tabs on 2,000 technology ventures, using about 100 in pilot programs that will eventually join the firm’s growing ecosystem of partners. For instance, the bank’s machine-learning software was built with Cloudera Inc., a software firm that JPMorgan first encountered in 2009.

“We’re starting to see the real fruits of our labor,” Zames said. “This is not pie-in-the-sky stuff.”

ORIGINAL: Bloomberg

by Hugh Son
February 27, 2017

Xnor.ai – Bringing Deep Learning AI to the Devices at the Edge of the Network

By Hugo Angel,

 

Photo  – The Xnor.ai Team

 

Today we announced our funding of Xnor.ai. We are excited to be working with Ali Farhadi, Mohammad Rastegari and their team on this new company. We are also looking forward to working with Paul Allen’s team at the Allen Institute for AI and in particular our good friend and CEO of AI2, Dr. Oren Etzioni, who is joining the board of Xnor.ai. Machine learning and AI have been a key investment theme for us for the past several years, and bringing deep learning capabilities such as image and speech recognition to small devices is a huge challenge.

Mohammad and Ali and their team have developed a platform that enables low-resource devices to perform tasks that usually require large farms of GPUs in cloud environments. This, we believe, has the opportunity to change how we think about certain types of deep learning use cases as they get extended from the core to the edge. Image and voice recognition are great examples. These are broad areas of use cases out in the world – usually with a mobile device, but right now they require the device to be connected to the internet so those large farms of GPUs can process all the information your device is capturing and sending, and then transmit back the answer. If you could do that on your phone (while preserving battery life), it would open up a new world of options.

It is just these kinds of inventions that put the greater Seattle area at the center of the revolution in machine learning and AI that is upon us. Xnor.ai came out of the outstanding work the team was doing at the Allen Institute for Artificial Intelligence (AI2), and Ali is a professor at the University of Washington. Between Microsoft, Amazon, the University of Washington and research institutes such as AI2, our region is leading the way as new types of intelligent applications take shape. Madrona is energized to play our role as company builder and supporter for these amazing inventors and founders.

ORIGINAL: Madrona
By Matt McIlwain

AI acceleration startup Xnor.ai collects $2.6M in funding

I was excited by the promise of Xnor.ai and its technique that drastically reduces the computing power necessary to perform complex operations like computer vision. Seems I wasn’t the only one: the company, just officially spun off from the Allen Institute for AI (AI2), has attracted $2.6 million in seed funding from its parent company and Madrona Venture Group.

The specifics of the product and process you can learn about in detail in my previous post, but the gist is this: machine learning models for things like object and speech recognition are notoriously computation-heavy, making them difficult to implement on smaller, less powerful devices. Xnor.ai’s researchers use a bit of mathematical trickery to reduce that computing load by an order of magnitude or two — something it’s easy to see the benefit of.
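
A rough sketch of the kind of mathematical trickery involved: approximate a layer's real-valued weights by a single scaling factor times a matrix of +1/-1 values, so most of the expensive multiply-accumulate work reduces to sign operations. This mirrors the weight-binarization idea behind the team's published XNOR-Net work in spirit only; the shapes and numbers below are arbitrary:

    # Binary-weight approximation in the spirit of XNOR-Net:
    # replace a real-valued weight matrix W by alpha * sign(W), where alpha = mean(|W|),
    # so the heavy multiply-accumulate work reduces to sign flips and one scaling.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(256, 512))          # a full-precision layer (arbitrary shape)
    x = rng.normal(size=512)                 # an input activation vector

    alpha = np.abs(W).mean()                 # per-layer scaling factor
    B = np.sign(W)                           # weights quantized to +1 / -1

    full = W @ x                             # full-precision result
    binary = alpha * (B @ x)                 # binary-weight approximation

    # How closely the cheap approximation tracks the full-precision output; in a real
    # network the binarization is applied inside the training loop, so the model
    # learns weights that survive the approximation well.
    cos = full @ binary / (np.linalg.norm(full) * np.linalg.norm(binary))
    print(f"cosine similarity between full-precision and binary outputs: {cos:.2f}")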


McIlwain will join AI2 CEO Oren Etzioni on the board of Xnor.ai; Ali Farhadi, who led the original project, will be the company’s CEO, and Mohammad Rastegari is CTO.
The new company aims to facilitate commercial applications of its technology (it isn’t quite plug and play yet), but the research that led up to it is, like other AI2 work, open source.

 

AI2 Repository:  https://github.com/allenai/

ORIGINAL: TechCrunch
by
2017/02/03

Why Apple Joined Rivals Amazon, Google, Microsoft In AI Partnership

By Hugo Angel,

Apple CEO Tim Cook (Photo credit: David Paul Morris/Bloomberg)

Apple is pushing past its famous secrecy for the sake of artificial intelligence.

In December, the Cupertino tech giant quietly published its first AI research paper. Now, it’s joining the Partnership on AI to Benefit People and Society, an industry nonprofit group founded by some of its biggest rivals, including Microsoft, Google and Amazon.

On Friday, the partnership announced that Apple’s head of advanced development for Siri, Tom Gruber, is joining its board. Gruber has been at Apple since 2010, when the iPhone maker bought Siri, the company he cofounded and where he served as CTO.

“We’re glad to see the industry engaging on some of the larger opportunities and concerns created with the advance of machine learning and AI,” wrote Gruber in a statement on the nonprofit’s website. “We believe it’s beneficial to Apple, our customers, and the industry to play an active role in its development and look forward to collaborating with the group to help drive discussion on how to advance AI while protecting the privacy and security of consumers.”

Other members of the board include

  • Greg Corrado from Google’s DeepMind,
  • Ralf Herbrich from Amazon,
  • Eric Horvitz from Microsoft,
  • Yann Lecun from Facebook, and
  • Francesca Rossi from IBM.

Outside of large companies, the group announced it’s also adding members from the

  • American Civil Liberties Union,
  • OpenAI,
  • MacArthur Foundation,
  • Peterson Institute for International Economics,
  • Arizona State University and the
  • University of California, Berkeley.

The group was formally announced in September.

Board member Horvitz, who is director of Microsoft Research, said the members of the group started meeting with each other at various AI conferences. They were already close colleagues in the field and they thought they could start working together to discuss emerging challenges and opportunities in AI.

“We believed there were a lot of things companies could do together on issues and challenges in the realm of AI and society,” Horvitz said in an interview. “We don’t see these as areas for competition but for rich cooperation.”

The organization will work together to develop best practices and educate the public around AI. Horvitz said the group will tackle, for example, critical areas like health care and transportation. The group will look at the potential for biases in AI – after some experiments have shown that the way researchers train AI algorithms can lead to biases around gender and race. The nonprofit will also try to develop standards around human-machine collaboration, for example to deal with questions like when a self-driving car should hand off control to the driver.

“I think there’s a realization that AI will touch society quite deeply in the coming years in powerful and nuanced ways,” Horvitz said. “We think it’s really important to involve the public as well as experts. Some of these directions have no simple answer. It can’t come from a company. We need to have multiple constituents checking in.”

The AI community has been critical of Apple’s secrecy for several years, and that secrecy has hurt the company’s recruiting efforts for AI talent. The company has been falling behind on some of the major advancements in AI, especially as intelligent voice assistants from Amazon and Google have started taking off with consumers.

Horvitz said the group had been in discussions with Apple since before its launch in September. But Apple wasn’t ready to formally join the group until now. “My own sense is that Apple was in the middle of their iOS 10 and iPhone 7 launches” and wasn’t ready to announce, he said. “We’ve always treated Apple as a founding member of the group.”

“I think Apple had a realization that to do the best AI research and to have access to the top minds in the field, the expectation is engaging openly with academic research communities,” Horvitz said. “Other companies like Microsoft have discovered this over the years. We can be quite competitive and be open to sharing ideas when it comes to the core foundational science.”

“It’s my hope that this partnership with Apple shows that the company has a rich engagement with people, society and stakeholders,” he said.

ORIGINAL: Forbes
Aaron Tilley, Forbes Staff
Jan 27, 2017

Top 10 Hot Artificial Intelligence (AI) Technologies

By Hugo Angel,

The market for artificial intelligence (AI) technologies is flourishing. Beyond the hype and the heightened media attention, the numerous startups and the internet giants racing to acquire them, there is a significant increase in investment and adoption by enterprises. A Narrative Science survey found last year that 38% of enterprises are already using AI, growing to 62% by 2018. Forrester Research predicted a greater than 300% increase in investment in artificial intelligence in 2017 compared with 2016. IDC estimated that the AI market will grow from $8 billion in 2016 to more than $47 billion in 2020.

Coined in 1955 to describe a new computer science sub-discipline, “Artificial Intelligence” today includes a variety of technologies and tools, some time-tested, others relatively new. To help make sense of what’s hot and what’s not, Forrester just published a TechRadar report on Artificial Intelligence (for application development professionals), a detailed analysis of 13 technologies enterprises should consider adopting to support human decision-making.

Based on Forrester’s analysis, here’s my list of the 10 hottest AI technologies:

  1. Natural Language Generation: Producing text from computer data. Currently used in customer service, report generation, and summarizing business intelligence insights. Sample vendors:
    • Attivio,
    • Automated Insights,
    • Cambridge Semantics,
    • Digital Reasoning,
    • Lucidworks,
    • Narrative Science,
    • SAS,
    • Yseop.
  2. Speech Recognition: Transcribe and transform human speech into format useful for computer applications. Currently used in interactive voice response systems and mobile applications. Sample vendors:
    • NICE,
    • Nuance Communications,
    • OpenText,
    • Verint Systems.
  3. Virtual Agents: “The current darling of the media,” says Forrester (I believe they refer to my evolving relationships with Alexa), from simple chatbots to advanced systems that can network with humans. Currently used in customer service and support and as a smart home manager. Sample vendors:
    • Amazon,
    • Apple,
    • Artificial Solutions,
    • Assist AI,
    • Creative Virtual,
    • Google,
    • IBM,
    • IPsoft,
    • Microsoft,
    • Satisfi.
  4. Machine Learning Platforms: Providing algorithms, APIs, development and training toolkits, data, as well as computing power to design, train, and deploy models into applications, processes, and other machines. Currently used in a wide range of enterprise applications, mostly involving prediction or classification. Sample vendors:
    • Amazon,
    • Fractal Analytics,
    • Google,
    • H2O.ai,
    • Microsoft,
    • SAS,
    • Skytree.
  5. AI-optimized Hardware: Graphics processing units (GPU) and appliances specifically designed and architected to efficiently run AI-oriented computational jobs. Currently primarily making a difference in deep learning applications. Sample vendors:
    • Alluviate,
    • Cray,
    • Google,
    • IBM,
    • Intel,
    • Nvidia.
  6. Decision Management: Engines that insert rules and logic into AI systems and are used for initial setup/training and ongoing maintenance and tuning. A mature technology, it is used in a wide variety of enterprise applications, assisting in or performing automated decision-making. Sample vendors:
    • Advanced Systems Concepts,
    • Informatica,
    • Maana,
    • Pegasystems,
    • UiPath.
  7. Deep Learning Platforms: A special type of machine learning consisting of artificial neural networks with multiple abstraction layers. Currently primarily used in pattern recognition and classification applications supported by very large data sets (a minimal sketch follows this list). Sample vendors:
    • Deep Instinct,
    • Ersatz Labs,
    • Fluid AI,
    • MathWorks,
    • Peltarion,
    • Saffron Technology,
    • Sentient Technologies.
  8. Biometrics: Enable more natural interactions between humans and machines, including but not limited to image and touch recognition, speech, and body language. Currently used primarily in market research. Sample vendors:
    • 3VR,
    • Affectiva,
    • Agnitio,
    • FaceFirst,
    • Sensory,
    • Synqera,
    • Tahzoo.
  9. Robotic Process Automation: Using scripts and other methods to automate human action to support efficient business processes. Currently used where it’s too expensive or inefficient for humans to execute a task or a process. Sample vendors:
    • Advanced Systems Concepts,
    • Automation Anywhere,
    • Blue Prism,
    • UiPath,
    • WorkFusion.
  10. Text Analytics and NLP: Natural language processing (NLP) uses and supports text analytics by facilitating the understanding of sentence structure and meaning, sentiment, and intent through statistical and machine learning methods. Currently used in fraud detection and security, a wide range of automated assistants, and applications for mining unstructured data (see the sketches after this list). Sample vendors:
    • Basis Technology,
    • Coveo,
    • Expert System,
    • Indico,
    • Knime,
    • Lexalytics,
    • Linguamatics,
    • Mindbreeze,
    • Sinequa,
    • Stratifyd,
    • Synapsify.
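
To make item 1 (Natural Language Generation) concrete, here is a minimal sketch of the input/output shape involved: structured data in, readable prose out. The figures and field names are made up for illustration, and commercial NLG platforms of course do far more (content planning, variation, grammar control).

```python
# Minimal sketch (illustrative only): template-based natural language
# generation, turning a row of structured business data into a sentence.
# The data and field names below are hypothetical.
quarterly = {"region": "EMEA", "revenue_m": 48.2, "growth_pct": 7.5}

def summarize(row: dict) -> str:
    # Pick wording from the data, then fill a sentence template.
    trend = "grew" if row["growth_pct"] >= 0 else "declined"
    return (f"{row['region']} revenue {trend} {abs(row['growth_pct']):.1f}% "
            f"year over year, reaching ${row['revenue_m']:.1f}M.")

print(summarize(quarterly))
```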
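
For item 7 (Deep Learning Platforms), the following sketch shows what “multiple abstraction layers” means in practice: a tiny two-hidden-layer network trained with plain NumPy on the XOR toy problem. It is not tied to any vendor listed above, and the layer sizes and learning rate are arbitrary demo choices.

```python
# Minimal sketch (illustrative only): a feed-forward network with two hidden
# layers, trained by gradient descent on XOR, the classic task a single
# linear layer cannot solve.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two hidden layers = the "multiple abstraction layers" of item 7.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: each layer re-represents the previous layer's output.
    h1 = sigmoid(X @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    out = sigmoid(h2 @ W3 + b3)

    # Backward pass: propagate the squared-error gradient layer by layer.
    d_out = (out - y) * out * (1 - out)
    d_h2 = (d_out @ W3.T) * h2 * (1 - h2)
    d_h1 = (d_h2 @ W2.T) * h1 * (1 - h1)

    W3 -= lr * h2.T @ d_out; b3 -= lr * d_out.sum(axis=0)
    W2 -= lr * h1.T @ d_h2;  b2 -= lr * d_h2.sum(axis=0)
    W1 -= lr * X.T @ d_h1;   b1 -= lr * d_h1.sum(axis=0)

print(out.round(3))  # should approach [0, 1, 1, 0]
```

Deep learning platforms essentially automate and scale this loop, with many more layers, GPU execution, and better optimizers, over very large data sets.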
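
And for item 10 (Text Analytics and NLP), this sketch shows the common statistical pattern behind much sentiment and intent detection: turn documents into token-count features and fit a classifier. The tiny labeled “dataset” and the choice of scikit-learn are assumptions for illustration, not the approach of any vendor listed above.

```python
# Minimal sketch (illustrative only): statistical text classification.
# The labeled examples below are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "great product, works exactly as promised",
    "terrible support and constant crashes",
    "love the new interface, very intuitive",
    "refund please, totally disappointed",
]
labels = ["positive", "negative", "positive", "negative"]

# Bag-of-words features + a linear model: the long-standing baseline behind
# sentiment and intent detection, used before (and alongside) deep learning.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(docs, labels)

print(model.predict(["the interface crashes constantly"]))
```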

There are certainly many business benefits gained from AI technologies today, but according to a survey Forrester conducted last year, there are also obstacles to AI adoption, as reported by companies with no plans to invest in AI:

    • There is no defined business case: 42%
    • Not clear what AI can be used for: 39%
    • Don’t have the required skills: 33%
    • Need first to invest in modernizing the data management platform: 29%
    • Don’t have the budget: 23%
    • Not certain what is needed for implementing an AI system: 19%
    • AI systems are not proven: 14%
    • Do not have the right processes or governance: 13%
    • AI is a lot of hype with little substance: 11%
    • Don’t own or have access to the required data: 8%
    • Not sure what AI means: 3%

Once enterprises overcome these obstacles, Forrester concludes, they stand to gain from AI driving accelerated transformation in customer-facing applications and developing an interconnected web of enterprise intelligence.

ORIGINAL: Forbes
Gil Press

AI Software Learns to Make AI Software

By Hugo Angel,

ORIGINAL: MIT Tech Review

by Tom Simonite

January 18, 2017

Google and others think software that learns to learn could take over some work done by AI experts.

Progress in artificial intelligence causes some people to worry that software will take jobs such as driving trucks away from humans. Now leading researchers are finding that they can make software that can learn to do one of the trickiest parts of their own jobs—the task of designing machine-learning software.

In one experiment, researchers at the Google Brain artificial intelligence research group had software design a machine-learning system to take a test used to benchmark software that processes language. What it came up with surpassed previously published results from software designed by humans.
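
The underlying idea is easier to see in a stripped-down form: treat architecture choices as something to search over, train a candidate model for each choice, and keep the best performer. The sketch below assumes scikit-learn and a random search over hidden-layer sizes; it is a toy stand-in for the far larger, controller-driven searches Google Brain describes, not their actual method.

```python
# Toy sketch of "software designing machine-learning software": randomly
# sample small network architectures, train each one, keep the best.
# Illustrative only; real systems replace the random sampler with a learned
# controller and evaluate vastly more candidates.
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

random.seed(0)
best_score, best_arch = 0.0, None
for _ in range(10):
    # An "architecture" here is just the number and width of hidden layers.
    arch = tuple(random.choice([16, 32, 64]) for _ in range(random.randint(1, 3)))
    model = MLPClassifier(hidden_layer_sizes=arch, max_iter=300, random_state=0)
    model.fit(X_train, y_train)
    score = model.score(X_val, y_val)
    if score > best_score:
        best_score, best_arch = score, arch

print(f"best architecture {best_arch} with validation accuracy {best_score:.3f}")
```

Scaled up to thousands of candidates and much larger networks, this kind of search is also why the compute costs mentioned later in this article are currently so high.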

In recent months several other groups have also reported progress on getting learning software to make learning software. They include researchers at Google’s DeepMind group and the MIT Media Lab, whose work is described below.

If self-starting AI techniques become practical, they could increase the pace at which machine-learning software is implemented across the economy. Companies must currently pay a premium for machine-learning experts, who are in short supply.

Jeff Dean, who leads the Google Brain research group, mused last week that some of the work of such workers could be supplanted by software. He described what he termed “automated machine learning” as one of the most promising research avenues his team was exploring.

“Currently the way you solve problems is you have expertise and data and computation,” said Dean, at the AI Frontiers conference in Santa Clara, California. “Can we eliminate the need for a lot of machine-learning expertise?”

One set of experiments from Google’s DeepMind group suggests that what researchers are terming “learning to learn” could also help lessen the problem of machine-learning software needing to consume vast amounts of data on a specific task in order to perform it well.

The researchers challenged their software to create learning systems for collections of multiple different, but related, problems, such as navigating mazes. It came up with designs that showed an ability to generalize, and pick up new tasks with less additional training than would be usual.

The idea of creating software that learns to learn has been around for a while, but previous experiments didn’t produce results that rivaled what humans could come up with. “It’s exciting,” says Yoshua Bengio, a professor at the University of Montreal, who previously explored the idea in the 1990s.

Bengio says the more potent computing power now available, and the advent of a technique called deep learning, which has sparked recent excitement about AI, are what’s making the approach work. But he notes that so far it requires such extreme computing power that it’s not yet practical to think about lightening the load, or partially replacing, machine-learning experts.

Google Brain’s researchers describe using 800 high-powered graphics processors to power software that came up with designs for image recognition systems that rivaled the best designed by humans.

Otkrist Gupta, a researcher at the MIT Media Lab, believes that will change. He and MIT colleagues plan to open-source the software behind their own experiments, in which learning software designed deep-learning systems that matched human-crafted ones on standard tests for object recognition.

Gupta was inspired to work on the project by frustrating hours spent designing and testing machine-learning models. He thinks companies and researchers are well motivated to find ways to make automated machine learning practical.

“Easing the burden on the data scientist is a big payoff,” he says. “It could make you more productive, make better models, and make you free to explore higher-level ideas.”
