Category: AI


Scientists Have Created an Artificial Synapse That Can Learn Autonomously

By Hugo Angel,

Developments and advances in artificial intelligence (AI) have been due in large part to technologies that mimic how the human brain works. In the world of information technology, such AI systems are called neural networks.
These contain algorithms that can be trained, among other things, to imitate how the brain recognises speech and images. However, running an artificial neural network consumes a lot of time and energy.
Now, researchers from the National Centre for Scientific Research (CNRS), Thales, and the Universities of Bordeaux, Paris-Sud, and Evry have developed an artificial synapse called a memristor directly on a chip.
It paves the way for intelligent systems that require less time and energy to learn, and that can learn autonomously.
In the human brain, synapses work as connections between neurons. The connections are reinforced and learning is improved the more these synapses are stimulated.
The memristor works in a similar fashion. It’s made up of a thin ferroelectric layer (which can be spontaneously polarised) that is enclosed between two electrodes.
By applying voltage pulses, its resistance can be adjusted, much as biological synapses are strengthened or weakened. The synaptic connection is strong when resistance is low, and vice versa.
Figure 1
(a) Sketch of pre- and post-neurons connected by a synapse. The synaptic transmission is modulated by the causality (Δt) of neuron spikes. (b) Sketch of the ferroelectric memristor where a ferroelectric tunnel barrier of BiFeO3 (BFO) is sandwiched between a bottom electrode of (Ca,Ce)MnO3 (CCMO) and a top submicron pillar of Pt/Co. YAO stands for YAlO3. (c) Single-pulse hysteresis loop of the ferroelectric memristor displaying clear voltage thresholds (Vth+ and Vth−). (d) Measurements of STDP in the ferroelectric memristor. Modulation of the device conductance (ΔG) as a function of the delay (Δt) between pre- and post-synaptic spikes. Seven data sets were collected on the same device showing the reproducibility of the effect. The total length of each pre- and post-synaptic spike is 600 ns.
Source: Nature Communications
The memristor’s capacity for learning is based on this adjustable resistance.
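To make the analogy concrete, here is a toy simulation of the spike-timing-dependent plasticity (STDP) rule sketched in Figure 1: the change in conductance depends on the sign and size of the delay between pre- and post-synaptic spikes. This Python sketch is only an illustration of the learning rule; the amplitudes and time constant are invented for the example, not taken from the paper.

    # Toy STDP rule, mirroring how the memristor's conductance G is
    # nudged by the delay between pre- and post-synaptic spikes.
    # Constants are illustrative, not from the Nature Communications paper.
    import math

    A_PLUS, A_MINUS = 0.05, 0.05   # learning amplitudes (assumed)
    TAU = 20.0                     # time constant in ms (assumed)

    def delta_conductance(dt_ms):
        """Conductance change for a pre->post delay of dt_ms.

        dt_ms > 0: pre-synaptic spike came first -> strengthen (dG > 0).
        dt_ms < 0: post fired first -> weaken (dG < 0).
        """
        if dt_ms > 0:
            return A_PLUS * math.exp(-dt_ms / TAU)
        return -A_MINUS * math.exp(dt_ms / TAU)

    for dt in (-40, -10, -1, 1, 10, 40):
        print(f"dt = {dt:+4d} ms -> dG = {delta_conductance(dt):+.4f}")
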
AI systems have developed considerably in the past couple of years. Neural networks built with learning algorithms are now capable of performing tasks which synthetic systems previously could not do.
For instance, intelligent systems can now compose music, play games and beat human players, or do your taxes. Some can even identify suicidal behaviour, or differentiate between what is lawful and what isn’t.
This is all thanks to AI’s capacity to learn, limited only by the time and energy it takes to consume the data that serve as its springboard.
With the memristor, this learning process can be greatly improved. Work continues on the memristor, particularly on exploring ways to optimise its function.
For starters, the researchers have successfully built a physical model to help predict how it functions.
Their work is published in the journal Nature Communications.
ORIGINAL: ScienceAlert
DOM GALEON, FUTURISM
7 APR 2017

Google DeepMind has built an AI machine that could learn as quickly as humans before long

By Hugo Angel,

Neural Episodic Control. Architecture of episodic memory module for a single action

Emerging Technology from the arXiv

Intelligent machines have humans in their sights.

Deep-learning machines already have superhuman skills when it comes to tasks such as

  • face recognition,
  • video-game playing, and
  • even the ancient Chinese game of Go.

So it’s easy to think that humans are already outgunned.

But not so fast. Intelligent machines still lag behind humans in one crucial area of performance: the speed at which they learn. When it comes to mastering classic video games, for example, the best deep-learning machines take some 200 hours of play to reach the same skill levels that humans achieve in just two hours.

So computer scientists would dearly love to have some way to speed up the rate at which machines learn.

Today, Alexander Pritzel and pals at Google’s DeepMind subsidiary in London claim to have done just that. These guys have built a deep-learning machine that is capable of rapidly assimilating new experiences and then acting on them. The result is a machine that learns significantly faster than others and has the potential to match humans in the not too distant future.

First, some background.

Deep learning uses layers of neural networks to look for patterns in data. When a single layer spots a pattern it recognizes, it sends this information to the next layer, which looks for patterns in this signal, and so on.

So in face recognition,

  • one layer might look for edges in an image,
  • the next layer for circular patterns of edges (the kind that eyes and mouths make), and
  • the next for triangular patterns such as those made by two eyes and a mouth.
  • When all this happens, the final output is an indication that a face has been spotted.

Of course, the devil is in the details. There are various systems of feedback to allow the system to learn by adjusting various internal parameters such as the strength of connections between layers. These parameters must change slowly, since a big change in one layer can catastrophically affect learning in the subsequent layers. That’s why deep neural networks need so much training and why it takes so long.
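To see why those parameters must stay nearly fixed from step to step, consider a minimal two-layer network trained by gradient descent. The NumPy sketch below is not any particular production system; it just shows the mechanics the paragraph describes: each layer's weights move by only a tiny fraction (the learning rate) of the gradient on every step, which is why training takes so many iterations.

    # Minimal two-layer network showing why updates are kept small: each
    # layer's weights shift by only a tiny fraction (the learning rate)
    # of the gradient, so earlier layers don't destabilise later ones.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(8, 4))          # 8 samples, 4 features
    y = rng.normal(size=(8, 1))          # toy regression targets
    W1, W2 = rng.normal(size=(4, 16)), rng.normal(size=(16, 1))
    lr = 1e-3                            # small on purpose

    for step in range(100):
        h = np.tanh(x @ W1)              # layer 1: simple patterns
        pred = h @ W2                    # layer 2: combine them
        err = pred - y
        gW2 = h.T @ err                  # backpropagated gradients
        gW1 = x.T @ ((err @ W2.T) * (1 - h**2))
        W1 -= lr * gW1                   # small, slow adjustments
        W2 -= lr * gW2
        if step % 25 == 0:
            print(f"step {step:3d}  loss {float((err**2).mean()):.4f}")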

Pritzel and co have tackled this problem with a technique they call Neural Episodic Control. “Neural episodic control demonstrates dramatic improvements on the speed of learning for a wide range of environments,” they say. “Critically, our agent is able to rapidly latch onto highly successful strategies as soon as they are experienced, instead of waiting for many steps of optimisation.”

The basic idea behind DeepMind’s approach is to copy the way humans and animals learn quickly. The general consensus is that humans can tackle situations in two different ways.

  • If the situation is familiar, our brains have already formed a model of it, which they use to work out how best to behave. This uses a part of the brain called the prefrontal cortex.
  • But when the situation is not familiar, our brains have to fall back on another strategy. This is thought to involve a much simpler test-and-remember approach involving the hippocampus. So we try something and remember the outcome of this episode. If it is successful, we try it again, and so on. But if it is not a successful episode, we try to avoid it in future.

This episodic approach suffices in the short term while our prefrontal brain learns. But it is soon outperformed by the prefrontal cortex and its model-based approach.

Pritzel and co have used this approach as their inspiration. Their new system has two approaches.

  • The first is a conventional deep-learning system that mimics the behaviour of the prefrontal cortex.
  • The second is more like the hippocampus. When the system tries something new, it remembers the outcome.

But crucially, it doesn’t try to learn what to remember. Instead, it remembers everything. “Our architecture does not try to learn when to write to memory, as this can be slow to learn and take a significant amount of time,” say Pritzel and co. “Instead, we elect to write all experiences to the memory, and allow it to grow very large compared to existing memory architectures.”

They then use a set of strategies to read from this large memory quickly. The result is that the system can latch onto successful strategies much more quickly than conventional deep-learning systems.
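A rough sketch of that memory might look like the following. It simplifies DeepMind's design considerably: Neural Episodic Control uses a learned convolutional embedding and a differentiable neural dictionary per action, while this toy Python version assumes state embeddings come from elsewhere and just takes a distance-weighted average over the k nearest stored experiences.

    # Sketch of an episodic memory in the spirit of Neural Episodic
    # Control: every (embedding, outcome) pair is written to memory, and
    # new states are valued from their k nearest stored neighbours.
    # A simplification of the paper's differentiable neural dictionary.
    import numpy as np

    class EpisodicMemory:
        def __init__(self, k=5):
            self.keys, self.values, self.k = [], [], k

        def write(self, embedding, outcome):
            # NEC-style: remember everything, never decide what to store.
            self.keys.append(np.asarray(embedding, dtype=float))
            self.values.append(float(outcome))

        def estimate(self, embedding):
            if not self.keys:
                return 0.0
            keys = np.stack(self.keys)
            d = np.linalg.norm(keys - np.asarray(embedding, float), axis=1)
            nearest = np.argsort(d)[: self.k]
            w = 1.0 / (d[nearest] + 1e-3)     # closer memories count more
            return float(np.dot(w, np.asarray(self.values)[nearest]) / w.sum())

    memory = EpisodicMemory()
    memory.write([0.0, 1.0], outcome=1.0)    # a successful episode
    memory.write([1.0, 0.0], outcome=-1.0)   # an unsuccessful one
    print(memory.estimate([0.1, 0.9]))       # close to the successful one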

They go on to demonstrate how well all this works by training their machine to play classic Atari video games, such as Breakout, Pong, and Space Invaders. (This is a playground that DeepMind has used to train many deep-learning machines.)

The team, which includes DeepMind cofounder Demis Hassabis, shows that neural episodic control vastly outperforms other deep-learning approaches in the speed at which it learns. “Our experiments show that neural episodic control requires an order of magnitude fewer interactions with the environment,” they say.

That’s impressive work with significant potential. The researchers say that an obvious extension of this work is to test their new approach on more complex 3-D environments.

It’ll be interesting to see what environments the team chooses and the impact this will have on the real world. We’ll look forward to seeing how that works out.

Ref: Neural Episodic Control : arxiv.org/abs/1703.01988

ORIGINAL: MIT Technology Review

The future of AI is neuromorphic. Meet the scientists building digital ‘brains’ for your phone

By Hugo Angel,

Neuromorphic chips are being designed to specifically mimic the human brain – and they could soon replace CPUs
Brain activity map (Neuroscape Lab)
AI services like Apple’s Siri and others operate by sending your queries to faraway data centers, which send back responses. The reason they rely on cloud-based computing is that today’s electronics don’t come with enough computing power to run the processing-heavy algorithms needed for machine learning. The typical CPUs most smartphones use could never handle a system like Siri on the device. But Dr. Chris Eliasmith, a theoretical neuroscientist and co-CEO of Canadian AI startup Applied Brain Research, is confident that a new type of chip is about to change that.
“Many have suggested Moore’s law is ending and that means we won’t get ‘more compute’ cheaper using the same methods,” Eliasmith says. He’s betting on the proliferation of ‘neuromorphics’ — a type of computer chip that is not yet widely known but already being developed by several major chip makers.
Traditional CPUs process instructions based on “clocked time” – information is transmitted at regular intervals, as if managed by a metronome. By packing in digital equivalents of neurons, neuromorphics communicate in parallel (and without the rigidity of clocked time) using “spikes” – bursts of electric current that can be sent whenever needed. Just like our own brains, the chip’s neurons communicate by processing incoming flows of electricity – each neuron able to determine from the incoming spike whether to send current out to the next neuron.
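A toy leaky integrate-and-fire neuron captures the contrast with clocked processing: charge accumulates and leaks on a simulated membrane, and a spike is emitted only when a threshold is crossed, not on every tick. The constants in this Python sketch are illustrative and do not describe any particular chip.

    # Toy leaky integrate-and-fire neuron: incoming current accumulates
    # on a membrane, leaks away over time, and a spike fires only when a
    # threshold is crossed -- no global clock forcing communication.
    # Parameters are illustrative, not taken from any real chip.
    THRESHOLD, LEAK, RESET = 1.0, 0.9, 0.0

    def run(input_currents):
        v, spikes = 0.0, []
        for i in input_currents:
            v = v * LEAK + i          # integrate input, leak a little
            if v >= THRESHOLD:        # spike only when needed...
                spikes.append(1)
                v = RESET             # ...then reset the membrane
            else:
                spikes.append(0)
        return spikes

    print(run([0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))  # -> [0, 0, 0, 0, 1, 0]
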
What makes this a big deal is that these chips require far less power to process AI algorithms. For example, one neuromorphic chip made by IBM contains five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of power. An Intel processor would use anywhere from 35 to 140 watts, or up to 2000 times more power.
Eliasmith points out that neuromorphics aren’t new and that their designs have been around since the 80s. Back then, however, the designs required specific algorithms be baked directly into the chip. That meant you’d need one chip for detecting motion, and a different one for detecting sound. None of the chips acted as a general processor in the way that our own cortex does.
This was partly because there was no way for programmers to design algorithms that could do much with a general-purpose chip. So even as these brain-like chips were being developed, building algorithms for them remained a challenge.
Eliasmith and his team are keenly focused on building tools that would allow a community of programmers to deploy AI algorithms on these new cortical chips.
Central to these efforts is Nengo, a compiler that developers can use to build their own algorithms for AI applications that will operate on general-purpose neuromorphic hardware. A compiler is a software tool that translates code into the complex instructions that get hardware to actually do something. What makes Nengo useful is its use of the familiar Python programming language – known for its intuitive syntax – and its ability to put the algorithms on many different hardware platforms, including neuromorphic chips. Pretty soon, anyone with an understanding of Python could be building sophisticated neural nets made for neuromorphic hardware.
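As a rough illustration of what that looks like in practice, here is a minimal model written against Nengo's public Python API: a spiking population represents a sine wave, and a connection computes its square. The choice of signal and function is ours; retargeting the same model to neuromorphic hardware is done through separate backend packages rather than this reference simulator.

    import numpy as np
    import nengo

    model = nengo.Network(label="square a signal")
    with model:
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # input signal
        a = nengo.Ensemble(n_neurons=100, dimensions=1)     # spiking population
        b = nengo.Ensemble(n_neurons=100, dimensions=1)
        nengo.Connection(stim, a)
        nengo.Connection(a, b, function=lambda x: x ** 2)   # computed in neurons
        probe = nengo.Probe(b, synapse=0.01)                # filtered readout

    with nengo.Simulator(model) as sim:                     # reference CPU backend
        sim.run(1.0)
    print(sim.data[probe][-3:])                             # ~sin^2 near t = 1 s
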
“Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo,” Peter Suma, a trained computer scientist and the other co-CEO of Applied Brain Research, tells me.
Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise for being the most complex brain model ever simulated on a computer. Spaun demonstrated that computers could be made to interact fluidly with the environment and perform human-like cognitive tasks like recognizing images and controlling a robot arm that writes down what it sees. The machine wasn’t perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9,000x faster, using less energy than it would on conventional CPUs – and by the end of 2017, all of Spaun will be running on neuromorphic hardware.
Eliasmith won NSERC’s John C. Polanyi Award for that project – Canada’s highest recognition for a breakthrough scientific achievement – and once Suma came across the research, the pair joined forces to commercialize these tools.
“While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context-aware AIs,” says Suma. He points out that while today’s AIs like Siri remain offline until explicitly called into action, we’ll soon have artificial agents that are ‘always on’ and ever-present in our lives.
“Imagine a Siri that listens to and sees all of your conversations and interactions. You’ll be able to ask it things like ‘Who did I have that conversation with about doing the launch for our new product in Tokyo?’ or ‘What was that idea for my wife’s birthday gift that Melissa suggested?’” he says.
When I raised concerns that some company might then have an uninterrupted window into even the most intimate parts of my life, I was reminded that because the AI would be processed locally on the device, there’s no need for that information to touch a server owned by a big company. And for Eliasmith, this ‘always on’ component is a necessary step towards true machine cognition. “The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to is the fact that the latter always operate in real-time. Bodies and brains are built to work with the physics of the world,” he says.
Already, major efforts across the IT industry are heating up to get AI services into the hands of users. Companies like Apple, Facebook, Amazon, and even Samsung are developing conversational assistants they hope will one day become digital helpers.
ORIGINAL: Wired
Monday 6 March 2017

Google Unveils Neural Network with “Superhuman” Ability to Determine the Location of Almost Any Image

By Hugo Angel,

Guessing the location of a randomly chosen Street View image is hard, even for well-traveled humans. But Google’s latest artificial-intelligence machine manages it with relative ease.
Here’s a tricky task. Pick a photograph from the Web at random. Now try to work out where it was taken using only the image itself. If the image shows a famous building or landmark, such as the Eiffel Tower or Niagara Falls, the task is straightforward. But the job becomes significantly harder when the image lacks specific location cues, is taken indoors, or shows a pet or food or some other detail.

Nevertheless, humans are surprisingly good at this task. To help, they bring to bear all kinds of knowledge about the world, such as the type and language of signs on display, the types of vegetation, architectural styles, the direction of traffic, and so on. Humans spend a lifetime picking up these kinds of geolocation cues.

So it’s easy to think that machines would struggle with this task. And indeed, they have.

Today, that changes thanks to the work of Tobias Weyand, a computer vision specialist at Google, and a couple of pals. These guys have trained a deep-learning machine to work out the location of almost any photo using only the pixels it contains.

Their new machine significantly outperforms humans and can even use a clever trick to determine the location of indoor images and pictures of specific things such as pets, food, and so on that have no location cues.

Their approach is straightforward, at least in the world of machine learning.

  • Weyand and co begin by dividing the world into a grid consisting of over 26,000 squares of varying size, depending on how many images were taken in each location (a toy sketch of this adaptive partitioning follows the list below).
    So big cities, which are the subjects of many images, have a more fine-grained grid structure than more remote regions where photographs are less common. Indeed, the Google team ignored areas like oceans and the polar regions, where few photographs have been taken.
  • Next, the team created a database of geolocated images from the Web and used the location data to determine the grid square in which each image was taken. This data set is huge, consisting of 126 million images along with their accompanying Exif location data.
  • Weyand and co used 91 million of these images to teach a powerful neural network to work out the grid location using only the image itself. Their idea is to input an image into this neural net and get as the output a particular grid location or a set of likely candidates. 
  • They then validated the neural network using the remaining 34 million images in the data set.
  • Finally they tested the network—which they call PlaNet—in a number of different ways to see how well it works.
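As promised above, here is a toy sketch of the adaptive partitioning idea: recursively split any cell containing too many photos into four sub-cells, and drop cells with too few (oceans, poles). PlaNet itself reportedly builds its cells with Google's S2 geometry on the sphere; this flat latitude/longitude quadtree, with invented thresholds, only shows why dense cities end up with fine cells while sparse regions stay coarse or disappear.

    # Adaptive quadtree over (lat, lon) photo coordinates. Thresholds are
    # invented for the example; PlaNet uses S2 cells on the sphere.
    def build_grid(photos, cell=(-90.0, 90.0, -180.0, 180.0),
                   max_photos=2, min_photos=1):
        lat0, lat1, lon0, lon1 = cell
        inside = [(la, lo) for la, lo in photos
                  if lat0 <= la < lat1 and lon0 <= lo < lon1]
        if len(inside) < min_photos:
            return []                      # ignore empty regions
        if len(inside) <= max_photos:
            return [cell]                  # a coarse cell is enough here
        mid_lat, mid_lon = (lat0 + lat1) / 2, (lon0 + lon1) / 2
        cells = []
        for sub in [(lat0, mid_lat, lon0, mid_lon),
                    (lat0, mid_lat, mid_lon, lon1),
                    (mid_lat, lat1, lon0, mid_lon),
                    (mid_lat, lat1, mid_lon, lon1)]:
            cells += build_grid(inside, sub, max_photos, min_photos)
        return cells

    # Three photos clustered in Paris force fine cells; one in New York
    # stays in a single coarse cell; empty regions vanish entirely.
    photos = [(48.85, 2.35), (48.86, 2.34), (48.86, 2.36), (40.7, -74.0)]
    print(len(build_grid(photos)))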

The results make for interesting reading. To measure the accuracy of their machine, they fed it 2.3 million geotagged images from Flickr to see whether it could correctly determine their location. “PlaNet is able to localize 3.6 percent of the images at street-level accuracy and 10.1 percent at city-level accuracy,” say Weyand and co. What’s more, the machine determines the country of origin in a further 28.4 percent of the photos and the continent in 48.0 percent of them.

That’s pretty good. But to show just how good, Weyand and co put PlaNet through its paces in a test against 10 well-traveled humans. For the test, they used an online game that presents a player with a random view taken from Google Street View and asks him or her to pinpoint its location on a map of the world.

Anyone can play at www.geoguessr.com. Give it a try—it’s a lot of fun and more tricky than it sounds.

GeoGuessr screen capture example

Needless to say, PlaNet trounced the humans. “In total, PlaNet won 28 of the 50 rounds with a median localization error of 1131.7 km, while the median human localization error was 2320.75 km,” say Weyand and co. “[This] small-scale experiment shows that PlaNet reaches superhuman performance at the task of geolocating Street View scenes.”

An interesting question is how PlaNet performs so well without being able to use the cues that humans rely on, such as vegetation, architectural style, and so on. But Weyand and co say they know why: “We think PlaNet has an advantage over humans because it has seen many more places than any human can ever visit and has learned subtle cues of different scenes that are even hard for a well-traveled human to distinguish.”

They go further and use the machine to locate images that do not have location cues, such as those taken indoors or of specific items. This is possible when images are part of albums that have all been taken at the same place. The machine simply looks through other images in the album to work out where they were taken and assumes the more specific image was taken in the same place.

That’s impressive work that shows deep neural nets flexing their muscles once again. Perhaps more impressive still is that the model uses a relatively small amount of memory unlike other approaches that use gigabytes of the stuff. “Our model uses only 377 MB, which even fits into the memory of a smartphone,” say Weyand and co.

That’s a tantalizing idea—the power of a superhuman neural network on a smartphone. It surely won’t be long now!

Ref: arxiv.org/abs/1602.05314 : PlaNet—Photo Geolocation with Convolutional Neural Networks

ORIGINAL: MIT Technology Review
by Emerging Technology from the arXiv
February 24, 2016

JPMorgan Software Does in Seconds What Took Lawyers 360,000 Hours

By Hugo Angel,

  • New software does in seconds what took staff 360,000 hours
  • Bank seeking to streamline systems, avoid redundancies

At JPMorgan Chase & Co., a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours.

The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of work each year by lawyers and loan officers. The software reviews documents in seconds, is less error-prone and never asks for vacation.

Attendees discuss software on Feb. 27, the eve of JPMorgan’s Investor Day.
Photographer: Kholood Eid/Bloomberg

While the financial industry has long touted its technological innovations, a new era of automation is now in overdrive as cheap computing power converges with fears of losing customers to startups. Made possible by investments in machine learning and a new private cloud network, COIN is just the start for the biggest U.S. bank. The firm recently set up technology hubs for teams specializing in big data, robotics and cloud infrastructure to find new sources of revenue, while reducing expenses and risks.

The push to automate mundane tasks and create new tools for bankers and clients — a growing part of the firm’s $9.6 billion technology budget — is a core theme as the company hosts its annual investor day on Tuesday.

Behind the strategy, overseen by Chief Operating Officer Matt Zames and Chief Information Officer Dana Deasy, is an undercurrent of anxiety: Though JPMorgan emerged from the financial crisis as one of few big winners, its dominance is at risk unless it aggressively pursues new technologies, according to interviews with a half-dozen bank executives.


Redundant Software

That was the message Zames had for Deasy when he joined the firm from BP Plc in late 2013. The New York-based bank’s internal systems, an amalgam from decades of mergers, had too many redundant software programs that didn’t work together seamlessly. “Matt said, ‘Remember one thing above all else: We absolutely need to be the leaders in technology across financial services,’” Deasy said last week in an interview. “Everything we’ve done from that day forward stems from that meeting.”

After visiting companies including Apple Inc. and Facebook Inc. three years ago to understand how their developers worked, the bank set out to create its own computing cloud called Gaia that went online last year. Machine learning and big-data efforts now reside on the private platform, which effectively has limitless capacity to support their thirst for processing power. The system already is helping the bank automate some coding activities and making its 20,000 developers more productive, saving money, Zames said. When needed, the firm can also tap into outside cloud services from Amazon.com Inc., Microsoft Corp. and International Business Machines Corp.

Tech Spending

JPMorgan will make some of its cloud-backed technology available to institutional clients later this year, allowing firms like BlackRock Inc. to access balances, research and trading tools. The move, which lets clients bypass salespeople and support staff for routine information, is similar to one Goldman Sachs Group Inc. announced in 2015.

JPMorgan’s total technology budget for this year amounts to 9 percent of its projected revenue — double the industry average, according to Morgan Stanley analyst Betsy Graseck. The dollar figure has inched higher as JPMorgan bolsters cyber defenses after a 2014 data breach, which exposed the information of 83 million customers.

“We have invested heavily in technology and marketing — and we are seeing strong returns,” JPMorgan said in a presentation Tuesday ahead of its investor day, noting that technology spending in its consumer bank totaled about $1 billion over the past two years.

Attendees inspect a JPMorgan Markets software kiosk for Investor Day.
Photographer: Kholood Eid/Bloomberg

One-third of the company’s budget is for new initiatives, a figure Zames wants to take to 40 percent in a few years. He expects savings from automation and retiring old technology will let him plow even more money into new innovations.

Not all of those bets, which include several projects based on a distributed ledger, like blockchain, will pay off, which JPMorgan says is OK. One example executives are fond of mentioning: The firm built an electronic platform to help trade credit-default swaps that sits unused.

‘Can’t Wait’

“We’re willing to invest to stay ahead of the curve, even if in the final analysis some of that money will go to a product or a service that wasn’t needed,” Marianne Lake, the lender’s finance chief, told a conference audience in June. That’s “because we can’t wait to know what the outcome, the endgame, really looks like, because the environment is moving so fast.”

As for COIN, the program has helped JPMorgan cut down on loan-servicing mistakes, most of which stemmed from human error in interpreting 12,000 new wholesale contracts per year, according to its designers.

JPMorgan is scouring for more ways to deploy the technology, which learns by ingesting data to identify patterns and relationships. The bank plans to use it for other types of complex legal filings like credit-default swaps and custody agreements. Someday, the firm may use it to help interpret regulations and analyze corporate communications.
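COIN's internals are not public, but the general pattern the article describes, software that learns to recognize clause types from labeled examples rather than hand-written rules, can be sketched with off-the-shelf tools. The snippet below is a generic stand-in using scikit-learn, with made-up clauses and labels; it is not JPMorgan's system.

    # Generic sketch of learning patterns in contract language -- not
    # COIN itself. A classifier is trained on labelled clauses, then
    # tags new ones in milliseconds. Clauses and labels are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    clauses = [
        "The borrower shall repay the principal in quarterly installments.",
        "Interest accrues at a rate of LIBOR plus 2.5 percent per annum.",
        "The borrower shall maintain a minimum debt service coverage ratio.",
        "All collateral shall be insured against loss or damage.",
    ]
    labels = ["repayment", "interest", "covenant", "collateral"]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(clauses, labels)

    print(model.predict(["Principal is due in equal quarterly payments."]))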

Another program called X-Connect, which went into use in January, examines e-mails to help employees find colleagues who have the closest relationships with potential prospects and can arrange introductions.

Creating Bots
For simpler tasks, the bank has created bots to perform functions like granting access to software systems and responding to IT requests, such as resetting an employee’s password, Zames said. Bots are expected to handle 1.7 million access requests this year, doing the work of 140 people.

Matt Zames
Photographer: Kholood Eid/Bloomberg

While growing numbers of people in the industry worry such advancements might someday take their jobs, many Wall Street personnel are more focused on benefits. A survey of more than 3,200 financial professionals by recruiting firm Options Group last year found a majority expect new technology will improve their careers, for example by improving workplace performance.

“Anything where you have back-office operations and humans kind of moving information from point A to point B that’s not automated is ripe for that,” Deasy said. “People always talk about this stuff as displacement. I talk about it as freeing people to work on higher-value things, which is why it’s such a terrific opportunity for the firm.”

To help spur internal disruption, the company keeps tabs on 2,000 technology ventures, using about 100 in pilot programs that will eventually join the firm’s growing ecosystem of partners. For instance, the bank’s machine-learning software was built with Cloudera Inc., a software firm that JPMorgan first encountered in 2009.

“We’re starting to see the real fruits of our labor,” Zames said. “This is not pie-in-the-sky stuff.”

ORIGINAL: Bloomberg
by Hugh Son
February 27, 2017

Xnor.ai – Bringing Deep Learning AI to the Devices at the Edge of the Network

By Hugo Angel,

Photo – The Xnor.ai Team

Today we announced our funding of Xnor.ai. We are excited to be working with Ali Farhadi, Mohammad Rastegari and their team on this new company. We are also looking forward to working with Paul Allen’s team at the Allen Institute for AI and in particular our good friend and CEO of AI2, Dr. Oren Etzioni, who is joining the board of Xnor.ai. Machine learning and AI have been a key investment theme for us for the past several years, and bringing deep learning capabilities such as image and speech recognition to small devices is a huge challenge.

Mohammad and Ali and their team have developed a platform that enables low-resource devices to perform tasks that usually require large farms of GPUs in cloud environments. This, we believe, has the opportunity to change how we think about certain types of deep learning use cases as they get extended from the core to the edge. Image and voice recognition are great examples. These are broad areas of use cases out in the world – usually with a mobile device – but right now they require the device to be connected to the internet so that large farms of GPUs can process all the information the device is capturing and sending, with the core transmitting back the answer. If you could do that on your phone (while preserving battery life), it opens up a new world of options.

It is just these kinds of inventions that put the greater Seattle area at the center of the revolution in machine learning and AI that is upon us. Xnor.ai came out of the outstanding work the team was doing at the Allen Institute for Artificial Intelligence (AI2), and Ali is a professor at the University of Washington. Between Microsoft, Amazon, the University of Washington and research institutes such as AI2, our region is leading the way as new types of intelligent applications take shape. Madrona is energized to play our role as company builder and supporter of these amazing inventors and founders.

ORIGINAL: Madrona
By Matt McIlwain

AI acceleration startup Xnor.ai collects $2.6M in funding

I was excited by the promise of Xnor.ai and its technique that drastically reduces the computing power necessary to perform complex operations like computer vision. Seems I wasn’t the only one: the company, just officially spun off from the Allen Institute for AI (AI2), has attracted $2.6 million in seed funding from its parent company and Madrona Venture Group.

The specifics of the product and process you can learn about in detail in my previous post, but the gist is this: machine learning models for things like object and speech recognition are notoriously computation-heavy, making them difficult to implement on smaller, less powerful devices. Xnor.ai’s researchers use a bit of mathematical trickery to reduce that computing load by an order of magnitude or two — something it’s easy to see the benefit of.
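The "mathematical trickery" is described in the team's XNOR-Net research: if weights and activations are binarized to +1/-1, the expensive multiply-accumulate at the heart of a network layer collapses into XNOR plus a bit count. The NumPy sketch below demonstrates just that identity on a single dot product; the real system applies it inside convolutions and adds scaling factors to recover accuracy.

    # Core XNOR-Net trick on one dot product: with +1/-1 values, a dot
    # product equals (matching signs) - (mismatching signs), computable
    # with XNOR and a bit count instead of floating-point multiplies.
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=64)            # activations (toy values)
    w = rng.normal(size=64)            # weights (toy values)

    xb = np.where(x >= 0, 1, -1)       # binarized activations
    wb = np.where(w >= 0, 1, -1)       # binarized weights

    dot = int(xb @ wb)                 # full-precision dot product

    x_bits = xb > 0
    w_bits = wb > 0
    matches = int(np.sum(~(x_bits ^ w_bits)))   # XNOR: count agreeing signs
    assert dot == 2 * matches - len(x)          # dot = matches - mismatches
    print(dot, matches)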


McIlwain will join AI2 CEO Oren Etzioni on the board of Xnor.ai; Ali Farhadi, who led the original project, will be the company’s CEO, and Mohammad Rastegari is CTO.
The new company aims to facilitate commercial applications of its technology (it isn’t quite plug and play yet), but the research that led up to it is, like other AI2 work, open source.

 

AI2 Repository:  https://github.com/allenai/

ORIGINAL: TechCrunch
by
2017/02/03

Why Apple Joined Rivals Amazon, Google, Microsoft In AI Partnership

By Hugo Angel,

Apple CEO Tim Cook (Photo credit: David Paul Morris/Bloomberg)

Apple is pushing past its famous secrecy for the sake of artificial intelligence.

In December, the Cupertino tech giant quietly published its first AI research paper. Now, it’s joining the Partnership on AI to Benefit People and Society, an industry nonprofit group founded by some of its biggest rivals, including Microsoft, Google and Amazon.

On Friday, the partnership announced that Apple’s head of advanced development for Siri, Tom Gruber, is joining its board. Gruber has been at Apple since 2010, when the iPhone maker bought Siri, the company he cofounded and where he served as CTO.

“We’re glad to see the industry engaging on some of the larger opportunities and concerns created with the advance of machine learning and AI,” wrote Gruber in a statement on the nonprofit’s website. “We believe it’s beneficial to Apple, our customers, and the industry to play an active role in its development and look forward to collaborating with the group to help drive discussion on how to advance AI while protecting the privacy and security of consumers.”

Other members of the board include

  • Greg Corrado from Google’s DeepMind,
  • Ralf Herbrich from Amazon,
  • Eric Horvitz from Microsoft,
  • Yann Lecun from Facebook, and
  • Francesca Rossi from IBM.

Outside of large companies, the group announced it’s also adding members from the

  • American Civil Liberties Union,
  • OpenAI,
  • MacArthur Foundation,
  • Peterson Institute for International Economics,
  • Arizona State University and the
  • University of California, Berkeley.

The group was formally announced in September.

Board member Horvitz, who is director of Microsoft Research, said the members of the group started meeting with each other at various AI conferences. They were already close colleagues in the field and they thought they could start working together to discuss emerging challenges and opportunities in AI.

“We believed there were a lot of things companies could do together on issues and challenges in the realm of AI and society,” Horvitz said in an interview. “We don’t see these as areas for competition but for rich cooperation.”

The organization will work together to develop best practices and educate the public around AI. Horvitz said the group will tackle, for example, critical areas like health care and transportation. The group will look at the potential for biases in AI — after some experiments have shown that the way researchers train AI algorithms can lead to biases in gender and race. The nonprofit will also try to develop standards around human-machine collaboration, for example, to deal with questions like when a self-driving car should hand off control to the driver.

“I think there’s a realization that AI will touch society quite deeply in the coming years in powerful and nuanced ways,” Horvitz said. “We think it’s really important to involve the public as well as experts. Some of these directions have no simple answer. It can’t come from a company. We need to have multiple constituents checking in.”

The AI community has been critical of Apple’s secrecy for several years, arguing that the secrecy has hurt the company’s recruiting efforts for AI talent. The company has been falling behind in some of the major advancements in AI, especially as intelligent voice assistants from Amazon and Google have started taking off with consumers.

Horvitz said the group had been in discussions with Apple since before its launch in September, but Apple wasn’t ready to formally join the group until now. “My own sense is that Apple was in the middle of their iOS 10 and iPhone 7 launches” and wasn’t ready to announce, he said. “We’ve always treated Apple as a founding member of the group.”

“I think Apple had a realization that to do the best AI research and to have access to the top minds in the field, the expectation is engaging openly with academic research communities,” Horvitz said. “Other companies like Microsoft have discovered this over the years. We can be quite competitive and be open to sharing ideas when it comes to the core foundational science.”

“It’s my hope that this partnership with Apple shows that the company has a rich engagement with people, society and stakeholders,” he said.

ORIGINAL: Forbes
Aaron Tilley, Forbes Staff
Jan 27, 2017

Top 10 Hot Artificial Intelligence (AI) Technologies

By Hugo Angel,

The market for artificial intelligence (AI) technologies is flourishing. Beyond the hype and the heightened media attention, the numerous startups and the internet giants racing to acquire them, there is a significant increase in investment and adoption by enterprises. A Narrative Science survey found last year that 38% of enterprises are already using AI, growing to 62% by 2018. Forrester Research predicted a greater than 300% increase in investment in artificial intelligence in 2017 compared with 2016. IDC estimated that the AI market will grow from $8 billion in 2016 to more than $47 billion in 2020.

Coined in 1955 to describe a new computer science sub-discipline, “Artificial Intelligence” today includes a variety of technologies and tools, some time-tested, others relatively new. To help make sense of what’s hot and what’s not, Forrester just published a TechRadar report on Artificial Intelligence (for application development professionals), a detailed analysis of 13 technologies enterprises should consider adopting to support human decision-making.

Based on Forrester’s analysis, here’s my list of the 10 hottest AI technologies:

  1. Natural Language Generation: Producing text from computer data. Currently used in customer service, report generation, and summarizing business intelligence insights. Sample vendors:
    • Attivio,
    • Automated Insights,
    • Cambridge Semantics,
    • Digital Reasoning,
    • Lucidworks,
    • Narrative Science,
    • SAS,
    • Yseop.
  2. Speech Recognition: Transcribing and transforming human speech into a format useful for computer applications. Currently used in interactive voice response systems and mobile applications. Sample vendors:
    • NICE,
    • Nuance Communications,
    • OpenText,
    • Verint Systems.
  3. Virtual Agents: “The current darling of the media,” says Forrester (I believe they refer to my evolving relationships with Alexa), from simple chatbots to advanced systems that can network with humans. Currently used in customer service and support and as a smart home manager. Sample vendors:
    • Amazon,
    • Apple,
    • Artificial Solutions,
    • Assist AI,
    • Creative Virtual,
    • Google,
    • IBM,
    • IPsoft,
    • Microsoft,
    • Satisfi.
  4. Machine Learning Platforms: Providing algorithms, APIs, development and training toolkits, data, as well as computing power to design, train, and deploy models into applications, processes, and other machines. Currently used in a wide range of enterprise applications, mostly involving prediction or classification. Sample vendors:
    • Amazon,
    • Fractal Analytics,
    • Google,
    • H2O.ai,
    • Microsoft,
    • SAS,
    • Skytree.
  5. AI-optimized Hardware: Graphics processing units (GPU) and appliances specifically designed and architected to efficiently run AI-oriented computational jobs. Currently primarily making a difference in deep learning applications. Sample vendors:
    • Alluviate,
    • Cray,
    • Google,
    • IBM,
    • Intel,
    • Nvidia.
  6. Decision Management: Engines that insert rules and logic into AI systems, used for initial setup/training and ongoing maintenance and tuning. A mature technology, it is used in a wide variety of enterprise applications, assisting in or performing automated decision-making. Sample vendors:
    • Advanced Systems Concepts,
    • Informatica,
    • Maana,
    • Pegasystems,
    • UiPath.
  7. Deep Learning Platforms: A special type of machine learning consisting of artificial neural networks with multiple abstraction layers. Currently primarily used in pattern recognition and classification applications supported by very large data sets. Sample vendors:
    • Deep Instinct,
    • Ersatz Labs,
    • Fluid AI,
    • MathWorks,
    • Peltarion,
    • Saffron Technology,
    • Sentient Technologies.
  8. Biometrics: Enable more natural interactions between humans and machines, including but not limited to image and touch recognition, speech, and body language. Currently used primarily in market research. Sample vendors:
    • 3VR,
    • Affectiva,
    • Agnitio,
    • FaceFirst,
    • Sensory,
    • Synqera,
    • Tahzoo.
  9. Robotic Process Automation: Using scripts and other methods to automate human action to support efficient business processes. Currently used where it’s too expensive or inefficient for humans to execute a task or a process. Sample vendors:
    • Advanced Systems Concepts,
    • Automation Anywhere,
    • Blue Prism,
    • UiPath,
    • WorkFusion.
  10. Text Analytics and NLP: Natural language processing (NLP) uses and supports text analytics by facilitating the understanding of sentence structure and meaning, sentiment, and intent through statistical and machine learning methods. Currently used in fraud detection and security, a wide range of automated assistants, and applications for mining unstructured data. Sample vendors:
    • Basis Technology,
    • Coveo,
    • Expert System,
    • Indico,
    • Knime,
    • Lexalytics,
    • Linguamatics,
    • Mindbreeze,
    • Sinequa,
    • Stratifyd,
    • Synapsify.

There are certainly many business benefits gained from AI technologies today, but according to a survey Forrester conducted last year, there are also obstacles to AI adoption as expressed by companies with no plans of investing in AI:

  • There is no defined business case: 42%
  • Not clear what AI can be used for: 39%
  • Don’t have the required skills: 33%
  • Need first to invest in modernizing the data management platform: 29%
  • Don’t have the budget: 23%
  • Not certain what is needed for implementing an AI system: 19%
  • AI systems are not proven: 14%
  • Do not have the right processes or governance: 13%
  • AI is a lot of hype with little substance: 11%
  • Don’t own or have access to the required data: 8%
  • Not sure what AI means: 3%
Once enterprises overcome these obstacles, Forrester concludes, they stand to gain from AI driving accelerated transformation in customer-facing applications and developing an interconnected web of enterprise intelligence.


AI Software Learns to Make AI Software

By Hugo Angel,

ORIGINAL: MIT Tech Review

by Tom Simonite

January 18, 2017

Google and others think software that learns to learn could take over some work done by AI experts.

Progress in artificial intelligence causes some people to worry that software will take jobs such as driving trucks away from humans. Now leading researchers are finding that they can make software that can learn to do one of the trickiest parts of their own jobs—the task of designing machine-learning software.

In one experiment, researchers at the Google Brain artificial intelligence research group had software design a machine-learning system to take a test used to benchmark software that processes language. What it came up with surpassed previously published results from software designed by humans.

In recent months several other groups have also reported progress on getting learning software to make learning software. They include researchers at

If self-starting AI techniques become practical, they could increase the pace at which machine-learning software is implemented across the economy. Companies must currently pay a premium for machine-learning experts, who are in short supply.

Jeff Dean, who leads the Google Brain research group, mused last week that some of the work of such workers could be supplanted by software. He described what he termed “automated machine learning” as one of the most promising research avenues his team was exploring.

“Currently the way you solve problems is you have expertise and data and computation,” said Dean at the AI Frontiers conference in Santa Clara, California. “Can we eliminate the need for a lot of machine-learning expertise?”

One set of experiments from Google’s DeepMind group suggests that what researchers are terming “learning to learn” could also help lessen the problem of machine-learning software needing to consume vast amounts of data on a specific task in order to perform it well.

The researchers challenged their software to create learning systems for collections of multiple different, but related, problems, such as navigating mazes. It came up with designs that showed an ability to generalize and pick up new tasks with less additional training than would be usual.

The idea of creating software that learns to learn has been around for a while, but previous experiments didn’t produce results that rivaled what humans could come up with. “It’s exciting,” says Yoshua Bengio, a professor at the University of Montreal, who previously explored the idea in the 1990s.

Bengio says the more potent computing power now available, and the advent of a technique called deep learning, which has sparked recent excitement about AI, are what’s making the approach work. But he notes that so far it requires such extreme computing power that it’s not yet practical to think about lightening the load, or partially replacing, machine-learning experts.

Google Brain’s researchers describe using 800 high-powered graphics processors to power software that came up with designs for image recognition systems that rivaled the best designed by humans.
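Stripped of the scale and the learned controller, the outer loop of such a search is simple to sketch. The toy below randomly samples small architectures, trains each on a standard scikit-learn dataset, and keeps the best; real systems replace the random sampling with a smarter controller (reinforcement learning, in Google Brain's published work) and spend vastly more compute.

    # Toy "learning to learn" outer loop: sample architectures at random,
    # train each, keep the best. Real architecture search uses a learned
    # controller and enormous compute; this only shows the loop itself.
    import random
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    best_score, best_arch = 0.0, None
    for trial in range(5):
        # Sample an architecture: depth and width chosen at random.
        arch = tuple(random.choice([16, 32, 64])
                     for _ in range(random.randint(1, 3)))
        net = MLPClassifier(hidden_layer_sizes=arch, max_iter=300,
                            random_state=0)
        net.fit(X_tr, y_tr)
        score = net.score(X_te, y_te)
        if score > best_score:
            best_score, best_arch = score, arch
        print(f"trial {trial}: {arch} -> {score:.3f}")

    print("best:", best_arch, best_score)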

Otkrist Gupta, a researcher at the MIT Media Lab, believes that will change. He and MIT colleagues plan to open-source the software behind their own experiments, in which learning software designed deep-learning systems that matched human-crafted ones on standard tests for object recognition.

Gupta was inspired to work on the project by frustrating hours spent designing and testing machine-learning models. He thinks companies and researchers are well motivated to find ways to make automated machine learning practical.

“Easing the burden on the data scientist is a big payoff,” he says. “It could make you more productive, make you better models, and make you free to explore higher-level ideas.”

Deep Learning AI Listens to Machines For Signs of Trouble

By Hugo Angel,

ORIGINAL: Spectrum IEEE
By Jeremy Hsu, spectrum.ieee.org
December 27th, 2016
Image: 3DSignals

 

Driving your car until it breaks down on the road is never anyone’s favorite way to learn the need for routine maintenance. But preventive or scheduled maintenance checks often miss many of the problems that can come up. An Israeli startup has come up with a better idea: Use artificial intelligence to listen for early warning signs that a car might be nearing a breakdown.

3DSignals, a startup based in Kefar Sava, Israel, offers a service that relies on the artificial intelligence technique known as deep learning to understand the noise patterns of troubled machines and predict problems in advance. 3DSignals has already begun talking with leading European automakers about possibly using the deep learning service to detect possible trouble both in auto factory machinery and in the cars themselves. The startup has even chatted with companies about using its service to automatically detect problems in future taxi fleets of driverless cars.

Deep learning usually refers to software algorithms known as artificial neural networks. These neural networks can learn to become better at specific tasks by filtering relevant data through multiple (deep) layers of artificial neurons. Many companies such as Google and Facebook have used deep learning to develop AI systems that

Many tech giants have also applied deep learning to make their services become better at automatically recognizing the spoken sounds of different human languages. But few companies have bothered with using deep learning to develop AI that’s good at listening to other acoustic signals such as the sounds of machines or music. That’s where 3DSignals hopes it can become a big player with its deep learning focus on more general sound patterns, Lavi explains.

“I think most of the world is occupied with deep learning on images. This is by far the most popular application and the most recent. But part of the industry is doing deep learning on acoustics focused on speech recognition and conversation. I think we are probably in the very small group of companies doing acoustics which is more general. This is my aim, to be the world leader in general acoustics deep learning,” Lavi says.

For each client, 3DSignals installs ultrasonic microphones that can detect sounds ranging up to 100 kilohertz (human hearing range is between 20 hertz and 20 kilohertz). The startup’s “Internet of Things” service connects the microphones to a computing device that can process some of the data and then upload the information to an online network where the deep learning algorithms do their work. Clients can always check the status of their machines by using any Web-connected device such as a smartphone or tablet.

The first clients for 3DSignals include heavy industry companies operating machinery such as circular cutting blades in mills or hydroelectric turbines in power plants. These companies started out by purchasing the first tier of the 3DSignals service that does not use deep learning. Instead, this first tier of service uses software that relies on basic physics modeling of certain machine parts—such as circular cutting saws—to predict when some parts may start to wear out. That allows the clients to begin getting value from day one.

The second tier of the service uses a deep learning algorithm and the sounds coming from the microphones to help detect strange or unusual noises from the machines. The deep learning algorithms train on sound patterns that can signal general problems with the machines. But only the third tier of the service, also using deep learning, can classify the sounds as indicating specific types of problems. Before this can happen, though, the clients need to help train the deep learning algorithm by first labeling certain sound patterns as belonging to specific types of problems.
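To make the tiers concrete, here is a deliberately simple stand-in for the second tier: build a profile of a machine's normal sound from frequency-band energies, then flag recordings that deviate. 3DSignals uses deep networks for this step; the z-score baseline and all the signal parameters below are our own simplifications, chosen only to keep the sketch self-contained and runnable.

    # Simplified acoustic anomaly detection: profile "healthy" sound by
    # frequency-band energy, flag deviations. A statistical stand-in for
    # the deep networks 3DSignals describes; all parameters are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    SR = 8000  # sample rate in Hz (illustrative; real mics reach 100 kHz)

    def band_energies(signal, n_bands=8):
        spectrum = np.abs(np.fft.rfft(signal))
        bands = np.array_split(spectrum, n_bands)
        return np.array([float((b**2).mean()) for b in bands])

    def make_recording(hum_hz, noise=0.1):
        t = np.arange(SR) / SR  # one second of synthetic machine hum
        return np.sin(2 * np.pi * hum_hz * t) + noise * rng.normal(size=SR)

    # Learn what "healthy" sounds like from many normal recordings.
    normal = np.stack([band_energies(make_recording(120.0)) for _ in range(50)])
    mu, sigma = normal.mean(axis=0), normal.std(axis=0) + 1e-9

    def anomaly_score(signal):
        z = (band_energies(signal) - mu) / sigma
        return float(np.abs(z).max())

    print("healthy  :", anomaly_score(make_recording(120.0)))
    print("worn part:", anomaly_score(make_recording(2400.0)))  # new tone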

“After a while, we can not only say when problem type A happens, but we can say before it happens, you’re going to have problem type A in five hours,” Lavi says. “Some problems don’t happen instantly; there’s a deterioration.”

When trained, the 3DSignals deep learning algorithms are able to identify and predict specific problems in advance with 98 percent accuracy. But the current clients using the 3DSignals system have not yet begun taking advantage of this classification capability; they are still building their training datasets by having people manually label specific sound signatures as belonging to specific problems.

The one-year-old startup has just 15 employees, but it has grown fairly fast and raised $3.3 million so far from investors such as Dov Moran, the Israeli entrepreneur credited as one of the inventors of the USB flash drive. Lavi and his fellow co-founders are already eyeing several big markets that include automobiles and the energy sector beyond hydroelectric power plants. A series A funding round to attract venture capital is planned for sometime in 2017.

If all goes well, 3DSignals could expand its lead in the growing market for providing “predictive maintenance” to factories, power plants, and car owners. The impending arrival of driverless cars may put even more responsibility on the metaphorical shoulders of a deep learning AI that could listen for problems while the human passengers tune out from the driving experience. On top of all this, 3DSignals has the chance to pioneer the advancement of deep learning in listening to general sounds. Not bad for a small startup.

“It’s important for us to be specialists in general acoustic deep learning, because the research literature does not cover it,” Lavi says.

The Current State of Machine Intelligence 3.0

By Hugo Angel,

ORIGINAL: O’Reilly 

(originally published by O’Reilly here, this year in collaboration with my amazing partner James Cham! If you’re interested in enterprise implications of this chart please refer to Harvard Business Review’s The Competitive Landscape for Machine Intelligence)
Almost a year ago, we published our now-annual landscape of machine intelligence companies, and goodness have we seen a lot of activity since then. This year’s landscape has a third more companies than our first one did two years ago, and it feels even more futile to try to be comprehensive, since this just scratches the surface of all of the activity out there.
As has been the case for the last couple of years, our fund still obsesses over “problem first” machine intelligence—we’ve invested in 35 machine intelligence companies solving 35 meaningful problems in areas from security to recruiting to software development. (Our fund focuses on the future of work, so there are some machine intelligence domains where we invest more than others.)
At the same time, the hype around machine intelligence methods continues to grow: the words “deep learning” now equally represent a series of meaningful breakthroughs (wonderful) but also a hyped phrase like “big data” (not so good!). We care about whether a founder uses the right method to solve a problem, not the fanciest one. We favor those who apply technology thoughtfully.
What’s the biggest change in the last year? We are getting inbound inquiries from a different mix of people. For v1.0, we heard almost exclusively from founders and academics. Then came a healthy mix of investors, both private and public. Now overwhelmingly we have heard from existing companies trying to figure out how to transform their businesses using machine intelligence.
For the first time, a “one stop shop” of the machine intelligence stack is coming into view—even if it’s a year or two off from being neatly formalized. The maturing of that stack might explain why more established companies are more focused on building legitimate machine intelligence capabilities. Anyone who has their wits about them is still going to be making initial build-and-buy decisions, so we figured an early attempt at laying out these technologies is better than no attempt.
Ready player world
Many of the most impressive looking feats we’ve seen have been in the gaming world, from DeepMind beating Atari classics and the world’s best at Go, to the OpenAI gym, which allows anyone to train intelligent agents across an array of gaming environments.
The gaming world offers a perfect place to start machine intelligence work (e.g., constrained environments, explicit rewards, easy-to-compare results, looks impressive)—especially for reinforcement learning. And it is much easier to have a self-driving car agent go a trillion miles in a simulated environment than on actual roads. Now we’re seeing the techniques used to conquer the gaming world moving to the real world. A newsworthy example of game-tested technology entering the real world was when DeepMind used neural networks to make Google’s data centers more efficient. This raises the questions: What else in the world looks like a game? Or what else in the world can we reconfigure to make it look more like a game?
Early attempts are intriguing. Developers are dodging meter maids (brilliant—a modern day Paper Boy), categorizing cucumbers, sorting trash, and recreating the memories of loved ones as conversational bots. Otto’s self-driving trucks delivering beer on their first commercial ride even seems like a bonus level from Grand Theft Auto. We’re excited to see what new creative applications come in the next year.
Why even bot-her?
Ah, the great chatbot explosion of 2016, for better or worse—we liken it to the mobile app explosion we saw with the launch of iOS and Android. The dominant platforms (in the machine intelligence case, Facebook, Slack, Kik) race to get developers to build on their platforms. That means we’ll get some excellent bots but also many terrible ones—the joys of public experimentation.
The danger here, unlike the mobile app explosion (where we lacked expectations for what these widgets could actually do), is that we assume anything with a conversation interface will converse with us at near-human level. Most do not. This is going to lead to disillusionment over the course of the next year but it will clean itself up fairly quickly thereafter.
When our fund looks at this emerging field, we divide each technology into two components: the conversational interface itself and the “agent” behind the scenes that’s learning from data and transacting on a user’s behalf. While you certainly can’t drop the ball on the interface, we spend almost all our time thinking about that behind-the-scenes agent and whether it is actually solving a meaningful problem.
We get a lot of questions about whether there will be “one bot to rule them all.” To be honest, as with many areas at our fund, we disagree on this. We certainly believe there will not be one agent to rule them all, even if there is one interface to rule them all. For the time being, bots will be idiot savants: stellar for very specific applications.
We’ve written a bit about this, and the framework we use to think about how agents will evolve is a CEO and her support staff. Many Fortune 500 CEOs employ a scheduler, handler, a research team, a copy editor, a speechwriter, a personal shopper, a driver, and a professional coach. Each of these people performs a dramatically different function and has access to very different data to do their job. The bot / agent ecosystem will have a similar separation of responsibilities with very clear winners, and they will divide fairly cleanly along these lines. (Note that some CEOs have a chief of staff who coordinates among all these functions, so perhaps we will see examples of “one interface to rule them all.”)
You can also see, in our landscape, some of the corporate functions machine intelligence will re-invent (most often in interfaces other than conversational bots).
On to 11111100001
Successful use of machine intelligence at a large organization is surprisingly binary, like flipping a stubborn light switch. It’s hard to do, but once machine intelligence is enabled, an organization sees everything through the lens of its potential. Organizations like Google, Facebook, Apple, Microsoft, Amazon, Uber, and Bloomberg (our sole investor) bet heavily on machine intelligence and have made its capabilities pervasive throughout all of their products.
Other companies are struggling to figure out what to do, much as many boardrooms struggled with “what to do about the Internet” in 1997. Why is this so difficult for companies to wrap their heads around? Machine intelligence is different from traditional software. Unlike with big data, where you could buy a new capability, machine intelligence depends on deeper organizational and process changes. Companies need to decide whether they will trust machine intelligence analysis for one-off decisions or embed often-inscrutable machine intelligence models in core processes. Teams need to figure out how to test newfound capabilities, and applications need to change so they offer more than a system of record; they also need to coach employees and learn from the data they enter.
Unlike traditional hard-coded software, machine intelligence gives only probabilistic outputs. We want to ask machine intelligence to make subjective decisions based on imperfect information (eerily like what we trust our colleagues to do). As a result, this new machine intelligence software will make mistakes, just like we do, and we’ll need to be thoughtful about when to trust it and when not to.
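To make “probabilistic output” concrete, here is a minimal sketch (ours, not any particular company’s system) of the trust decision described above: the model returns a probability, and a hand-picked threshold decides when to act automatically and when to hand off to a person. The dataset, the model choice, and the 0.9 cutoff are all illustrative assumptions.

```python
# Minimal sketch: probabilistic output plus a trust threshold.
# The data, the model, and the 0.9 cutoff are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))             # toy feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # toy labels

model = LogisticRegression().fit(X, y)

THRESHOLD = 0.9  # a business-specific cutoff, not a universal value

def decide(features):
    """Act automatically only when the model is confident; otherwise escalate."""
    proba = model.predict_proba(features.reshape(1, -1))[0]
    label, confidence = int(proba.argmax()), float(proba.max())
    if confidence >= THRESHOLD:
        return label, "automated"
    return label, "escalated to a human"

print(decide(rng.normal(size=4)))
```

The interesting design question here is not the model but the threshold: set it too high and the software automates nothing; set it too low and you end up trusting mistakes you shouldn’t.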
The idea of this new machine trust is daunting and makes machine intelligence harder to adopt than traditional software. We’ve had a few people tell us that the biggest predictor of whether a company will successfully adopt machine intelligence is whether they have a C-Suite executive with an advanced math degree. These executives understand it isn’t magic—it is just (hard) math.
Machine intelligence business models are going to be different from licensed and subscription software, but we don’t know how. Unlike traditional software, we still lack frameworks for management to decide where to deploy machine intelligence. Economists like Ajay Agrawal, Joshua Gans, and Avi Goldfarb have taken the first steps toward helping managers understand the economics of machine intelligence and predict where it will be most effective. But there is still a lot of work to be done.
In the next few years, the danger here isn’t what we see in dystopian sci-fi movies. The real danger of machine intelligence is that executives will make bad decisions about what machine intelligence capabilities to build.
Peter Pan’s never-never land
We’ve been wondering about the path to grow into a large machine intelligence company. Unsurprisingly, there have been many machine intelligence acquisitions (Nervana by Intel, Magic Pony by Twitter, Turi by Apple, Metamind by Salesforce, Otto by Uber, Cruise by GM, SalesPredict by eBay, Viv by Samsung). Many of these happened fairly early in a company’s life and at quite a high price. Why is that?
Established companies struggle to understand machine intelligence technology, so it’s painful to sell to them, and the market for buyers who can use this technology in a self-service way is small. Then, if you do understand how this technology can supercharge your organization, you realize it’s so valuable that you want to hoard it. Businesses are saying to machine intelligence companies, “forget selling this technology to others; I’m going to buy the whole thing.”
This absence of a market today makes it difficult for machine intelligence startups, especially horizontal technology providers, to “grow up” (hence the Peter Pans). Companies we see successfully entering a long-term trajectory can package their technology as a new problem-specific application for the enterprise, or simply transform an industry themselves as a new entrant (we love this). We flagged a few of the industry categories where we believe startups might “go the distance” in this year’s landscape.
Inspirational machine intelligence
Once we do figure it out, machine intelligence can solve much more interesting problems than traditional software. We’re thrilled to see so many smart people applying machine intelligence for good.
Established players like Conservation Metrics and Vulcan Conservation have been using deep learning to protect endangered animal species, and the ever-inspiring team at Thorn is constantly coming up with creative algorithmic techniques to protect our children from online exploitation. The philanthropic arms of the tech titans have joined in, providing nonprofits with free storage, compute, and even developer time. Google partnered with nonprofits to found Global Fishing Watch to detect illegal fishing activity using satellite data in near real time, and satellite intelligence startup Orbital Insight (in which we are investors) partnered with Global Forest Watch to detect illegal logging and other causes of global forest degradation. Startups are getting into the action, too. The Creative Destruction Lab machine intelligence accelerator (with whom we work closely) has companies working on problems like earlier disease detection and injury prevention. One area where we have seen some activity, but would love to see more, is machine intelligence to assist the elderly.
The many people we’ve talked to who use machine intelligence for good all cite the critical role of open-source technologies. In the last year, we’ve seen the launch of OpenAI, which offers everyone access to world-class research and environments, and better and better releases of TensorFlow and Keras. Non-profits are always trying to do more with less, and machine intelligence has allowed them to extend the scope of their missions without extending their budgets. Algorithms allow non-profits to inexpensively scale what would not be affordable to do with people.
We also saw growth in universities and corporate think tanks, where new centers like USC’s Center for AI in Society, Berkeley’s Center for Human Compatible AI, and the multiple-corporation Partnership on AI study the ways in which machine intelligence can help humanity. The White House even got into the act: after a series of workshops around the U.S., they published a 48-page report outlining their recommendations for applying machine intelligence to safely and fairly address broad social problems.
On a lighter note, we’ve also heard whispers of more artisanal versions of machine intelligence. Folks are using it to do things like choose the best cocoa beans for high-grade chocolate, write poetry, cook steaks, and generate musicals.
Curious minds want to know. If you’re working on a unique or important application of machine intelligence we’d love to hear from you.
Looking forward
We see all this activity only continuing to accelerate. The world will give us more open-source and commercially available machine intelligence building blocks, there will be more data, there will be more people interested in learning these methods, and there will always be problems worth solving. We still need ways of explaining the difference between machine intelligence and traditional software, and we’re working on that. The value of code is different from the value of data, but what about the value of the model that code improves based on that data?
Once we understand machine intelligence deeply, we might look back on the era of traditional software and think it was just a prologue to what’s happening now. We look forward to seeing what the next year brings.
A massive thank you to the Bloomberg Beta team, David Klein, Adam Gibson, Ajay Agrawal, Alexandra Suich, Angela Tran Kingyens, Anthony Goldbloom, Avi Goldfarb, Beau Cronin, Ben Lorica, Chris Nicholson, Doug Fulop, Dror Berman, Dylan Tweney, Gary Kazantsev, Gideon Mann, Gordon Ritter, Jack Clark, John Lilly, Jon Lehr, Joshua Gans, Lauren Barless, Matt Turck, Matthew Granade, Mickey Graham, Nick Adams, Roger Magoulas, Sean Gourley, Shruti Gandhi, Steve Jurvetson, Vijay Sundaram, and Zavain Dar for the help and fascinating conversations that led to this year’s report!
Landscape designed by Heidi Skinner.
Disclosure: Bloomberg Beta is an investor in Alation, Arimo, Aviso, Brightfunnel, Context Relevant, Deep Genomics, Diffbot, Digital Genius, Domino Data Labs, Drawbridge, Gigster, Gradescope, Graphistry, Gridspace, Howdy, Kaggle, Kindred.ai, Mavrx, Motiva, PopUpArchive, Primer, Sapho, Shield.AI, Textio, and Tule.
 
The Current State of Machine Intelligence 2.0
A year ago, I published my original attempt at mapping the machine intelligence ecosystem. So much has happened since. I spent the last 12 months geeking out on every company and nibble of information I could find, chatting with hundreds of academics, entrepreneurs, and investors about machine intelligence. This year, given the explosion of activity, my focus is on highlighting areas of innovation rather than on trying to be comprehensive. Figure 1 showcases the new landscape of machine intelligence as we enter 2016:
Despite the noisy hype, which sometimes distracts, machine intelligence is already being used in several valuable ways. Machine intelligence already helps us get the important business information we need more quickly, monitors critical systems, feeds our population more efficiently, reduces the cost of health care, detects disease earlier, and so on.
The two biggest changes I’ve noted since I did this analysis last year are (1) the emergence of autonomous systems in both the physical and virtual world and (2) startups shifting away from building broad technology platforms to focusing on solving specific business problems.
Reflections on the landscape
With the focus moving from “machine intelligence as magic box” to delivering real value immediately, there are more ways to bring a machine intelligence startup to market. (There are as many ways to go to market as there are business problems to solve. I lay out many of the options here.) Most of these machine intelligence startups take well-worn machine intelligence techniques, some more than a decade old, and apply them to new data sets and workflows. It’s still true that big companies, with their massive data sets and contact with their customers, have inherent advantages, though startups are finding ways to enter.
Achieving autonomy
In last year’s roundup, the focus was almost exclusively on machine intelligence in the virtual world. This time we’re seeing it in the physical world, in the many flavors of autonomous systems: self-driving cars, autopilot drones, robots that can perform dynamic tasks without every action being hard-coded. It’s still very early days; most of these systems are just barely useful, though we expect that to change quickly.
These physical systems are emerging because they meld many now-maturing research avenues in machine intelligence. Computer vision, the combination of deep learning and reinforcement learning, natural language interfaces, and question-answering systems are all building blocks to make a physical system autonomous and interactive. Building these autonomous systems today is as much about integrating these methods as inventing new ones.
The new (in)human touch
The virtual world is becoming more autonomous, too. Virtual agents, sometimes called bots, use conversational interfaces (think of Her, without the charm). Some of these virtual agents are entirely automated; others are “human-in-the-loop” systems, where algorithms take on “machine-like” subtasks and a human adds creativity or execution. (In some, the human is training the bot while he or she works.) The user interacts with the system by either typing in natural language or speaking, and the agent responds in kind.
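As a rough sketch of that human-in-the-loop pattern (our illustration; the canned response, the confidence scores, and the 0.8 cutoff are all invented), the routing logic can be as simple as a confidence check, with every human answer saved as a new training example, so the human trains the bot while working:

```python
# Human-in-the-loop sketch: the bot answers what it can, a person handles
# the rest, and each human answer becomes a training example for the bot.
# All names, responses, and the 0.8 cutoff here are invented for illustration.
training_examples = []

def bot_answer(message):
    """Stand-in for a real model: returns (answer, confidence)."""
    canned = {"reset password": ("Here is the reset link: ...", 0.95)}
    return canned.get(message.lower(), ("", 0.10))

def handle(message, ask_human):
    answer, confidence = bot_answer(message)
    if confidence < 0.8:                              # low confidence: hand off
        answer = ask_human(message)                   # human adds the judgment
        training_examples.append((message, answer))   # and trains the bot
    return answer

print(handle("reset password", ask_human=lambda m: "(human reply)"))
print(handle("plan my offsite", ask_human=lambda m: "(human reply)"))
print(len(training_examples))  # 1: only the human-handled case was captured
```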
These services sometimes give customers confusing experiences, like mine the other day when I needed to contact customer service about my cell phone. I didn’t want to talk to anyone, so I opted for online chat. It was the most “human” customer service experience of my life, so weirdly perfect I found myself wondering whether I was chatting with a person, a bot, or some hybrid. Then I wondered if it even mattered. I had a fantastic experience and my issue was resolved. I felt gratitude to whatever it was on the other end, even if it was a bot.
On one hand, these agents can act utterly professional, helping us with customer support, research, project management, scheduling, and e-commerce transactions. On the other hand, they can be quite personal, and maybe we are getting closer to Her: with Microsoft’s romantic chatbot Xiaoice, automated emotional support is already here.
As these technologies warm up, they could transform new areas like education, psychiatry, and elder care, working alongside human beings to close the gap in care for students, patients, and the elderly.
50 shades of grey markets
At least I make myself laugh. 😉
Many machine intelligence technologies will transform the business world by starting in regulatory grey areas. On the short list: health care (automated diagnostics, early disease detection based on genomics, algorithmic drug discovery); agriculture (sensor- and vision-based intelligence systems, autonomous farming vehicles); transportation and logistics (self-driving cars, drone systems, sensor-based fleet management); and financial services (advanced credit decisioning).
To overcome the difficulties of entering grey markets, we’re seeing some unusual strategies:
Startups are engaging in global arbitrage (e.g., health care companies going to market in emerging markets, drone companies experimenting in the least regulated countries).
The “fly under the radar” strategy. Some startups are being very careful to stay on the safest side of the grey area, keep a low profile, and avoid the regulatory discussion as long as possible.
Big companies like Google, Apple, and IBM are seeking out these opportunities because they have the resources to be patient and are the most likely to be able to effect regulatory change; that ability is one of their advantages.
Startups are considering beefing up funding earlier than they would have, to fight inevitable legal battles and face regulatory hurdles sooner.
What’s your (business) problem?
A year ago, enterprises were struggling to make heads or tails of machine intelligence services (some of the most confusing were in the “platform” section of this landscape). When I spoke to potential enterprise customers, I often heard things like, “these companies are trying to sell me snake oil” or, “they can’t even explain to me what they do.”
The corporates wanted to know what current business problems these technologies could solve. They didn’t care about the technology itself. The machine intelligence companies, on the other hand, just wanted to talk about their algorithms and how their platform could solve hundreds of problems (this was often true, but that’s not the point!).
Two things have happened that are helping to create a more productive middle ground:
Enterprises have invested heavily in becoming “machine intelligence literate.” I’ve had roughly 100 companies reach out to get perspective on how they should think about machine intelligence. Their questions have been thoughtful, they’ve been changing their organizations to make use of these new technologies, and many different roles across the organization care about this topic (from CEOs to technical leads to product managers).
Many machine intelligence companies have figured out that they need to speak the language of solving a business problem. They are packaging solutions to specific business problems as separate products and branding them that way. They often work alongside a company to create a unique solution instead of just selling the technology itself, being one part educator and one part executor. Once businesses learn what new questions can be answered with machine intelligence, these startups may make a more traditional technology sale.
The great verticalization
I remember reading Who Says Elephants Can’t Dance and being blown away by the ability of a technology icon like IBM to risk it all. (This was one of the reasons I went to work for them out of college.) Now IBM seems poised to try another risk-it-all transformation — moving from a horizontal technology provider to directly transforming a vertical. And why shouldn’t Watson try to be a doctor or a concierge? It’s a brave attempt.
It’s not just IBM: you could probably make an entire machine intelligence landscape just of Google projects. (If anyone takes a stab, I’d love to see it!)
Your money is nice, but tell me more about your data
In the machine intelligence world, founders are selling their companies, as I suggested last year, but it’s about more than just money. I’ve heard from founders that they are only interested in an acquisition if the acquirer has the right data set to make their product work. We’re hearing things like, “I’m not taking acquisition conversations, but given our product, if X came calling it’d be hard to turn down.” “X” is most often Slack (!), Google, Facebook, or Twitter in these conversations: the companies that have the data.
(Eh)-I
Until recently, there’s been one secret in machine intelligence talent: Canada! During the “AI winter,” when this technology fell out of favor in the 80s and 90s, the Canadian government was one of a few entities funding AI research. This support sustained the formidable trio of Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, the godfathers of deep learning.
Canada continues to be central to the machine intelligence frontier. As an unapologetically proud Canadian, I’ve found it a pleasure to work with groups like AICML to commercialize advanced research and the Machine Learning Creative Destruction Lab to support startups, and to bring the machine intelligence world together at events like this one.
So what now?
Machine intelligence is even more of a story than last year, in large companies as well as startups. In the next year, the practical side of these technologies will flourish. Most new entrants will avoid generic technology solutions, and instead have a specific business purpose to which to put machine intelligence.
I can’t wait to see more combinations of the practical and eccentric. A few years ago, a company like Orbital Insight would have seemed farfetched — wait, you’re going to use satellites and computer vision algorithms to tell me what the construction growth rate is in China!? — and now it feels familiar.
Similarly, researchers are doing things that make us stop and say, “Wait, really?” They are tackling important problems we may not have imagined were possible, like creating fairy godmother drones to help the elderly, computer vision that detects the subtle signs of PTSD, autonomous surgical robots that remove cancerous lesions, and fixing airplane WiFi (just kidding, not even machine intelligence can do that).
Overall, agents will become more eloquent, autonomous systems more pervasive, machine intelligence more…intelligent. I expect more magic in the years to come.
Many thanks to those who helped me with this! Special thanks to Adam Spector, Ajay Agrawal, Angela Tran Kingyens, Beau Cronin, Chris Michel, Chris Nicholson, Dan Strickland, David Beyer, David Klein, Doug Fulop, Dror Berman, Jack Clark, James Cham, James Rattner, Jeffrey Chung, Jon Lehr, Karin Klein, Lauren Barless, Lynda Ting, Matt Turck, Mike Dauber, Morgan Polotan, Nick Adams, Pete Skomoroch, Roy Bahat, Sean Gourley, Shruti Gandhi, Zavain Dar, and Heidi Skinner (who designed this graphic).
 
Disclosure: Bloomberg Beta is an investor in Alation, Adatao, Aviso, BrightFunnel, Context Relevant, Deep Genomics, Diffbot, Domino Data Lab, Gigster, Graphistry, Howdy, Kaggle, Mavrx, Orbital Insight, Primer, Sapho, Textio, and Tule.
Machine Intelligence in the Real World
(This piece was originally posted on TechCrunch.)
I’ve been laser-focused on machine intelligence in the past few years. I’ve talked to hundreds of entrepreneurs, researchers and investors about helping machines make us smarter.
In the months since I shared my landscape of machine intelligence companies, folks keep asking me what I think of them — as if they’re all doing more or less the same thing. (I’m guessing this is how people talked about “dot coms” in 1997.)
People seem most concerned about how to interact with these technologies once they are out in the wild. This post will therefore focus on how these companies go to market, not on the methods they use.
In an attempt to explain the differences in how these companies go to market, I found myself using (admittedly colorful) nicknames. They ended up being useful, so I took a moment to spell them out in more detail; that way, in case you run into one of these companies or need a handy way to describe yours, you have the vernacular.
The categories aren’t airtight — this is a complex space — but this framework helps our fund (which invests in companies that make work better) be more thoughtful about how we think about and interact with machine intelligence companies.
“Panopticons” Collect A Broad Dataset
Machine intelligence starts with the data computers analyze, so the companies I call “panopticons” are assembling enormous, important new datasets. Defensible businesses tend to be global in nature. “Global” is very literal in the case of a company like Planet Labs, which has satellites physically orbiting the earth. Or it’s more metaphorical, in the case of a company like Premise, which is crowdsourcing data from many countries.
With many of these new datasets we can automatically get answers to questions we have struggled to answer before. There are massive barriers to entry because it’s difficult to amass a global dataset of significance.
However, it’s important to ask whether there is a “good enough” dataset that might provide a cheaper alternative, since data license businesses are at risk of being commoditized. Companies approaching this space should feel confident that either (1) no one else can or will collect a “good enough” alternative, or (2) they can successfully capture the intelligence layer on top of their own dataset and own the end user.
Examples include Planet Labs, Premise and Diffbot.
“Lasers” Collect A Focused Dataset
The companies I like to call “lasers” are also building new datasets, but in niches, to solve industry-specific problems with laser-like focus. Successful companies in this space provide more than just the dataset — they also must own the algorithms and user interface. They focus on narrower initial uses and must provide more value than just data to win customers.
The products immediately help users answer specific questions like, “how much should I water my crops?” or “which applicants are eligible for loans?” This category may spawn many, many companies — a hundred or more — because companies in it can produce business value right away.
With these technologies, many industries will be able to make decisions in a data-driven way for the first time. The power for good here is enormous: we’ve seen these technologies help us feed the world more efficiently, improve medical diagnostics, aid in conservation projects, and provide credit to those in the world who didn’t have access to it before.
But to succeed, these companies need to find a single “killer” (meant in the benevolent way) use case to solve, and solve that problem in a way that makes the user’s life simpler, not more complex.
Examples include Tule Technologies, Enlitic, InVenture, Conservation Metrics, Red Bird, Mavrx and Watson Health.
“Alchemists” Promise To Turn Your Data Into Gold
These companies have a simple pitch: Let me work with your data, and I will return gold. Rather than creating their own datasets, they use novel algorithms to enrich and draw insights from their customers’ data. They come in three forms:
Self-service API-based solutions.
Service providers who work on top of their customers’ existing stacks.
Full-stack solutions that deliver their own hardware-optimized stacks.
Because the alchemists see across an array of data types, they’re likely to get early insight into powerful applications of machine intelligence. If they go directly to customers to solve problems in a hands-on way (i.e., with consulting services), they often become trusted partners.
But be careful. This industry is nascent, and those using an API-based approach may struggle to scale as revenue sources can only go as far as the still-small user base. Many of the self-service companies have moved toward a more hands-on model to address this problem (and those people-heavy consulting services can sometimes be harder to scale).
Examples include Nervana Systems, Context Relevant, IBM Watson, Metamind, AlchemyAPI (acquired by IBM Watson), Skymind, Lucid.ai and Citrine.
“Gateways” Create New Use Cases From Specific Data Types
These companies allow enterprises to unlock insights from a type of data they had trouble dealing with before (e.g., image, audio, video, genomic data). They don’t collect their own data, but rather work with client data and/or a third-party data provider. Unlike the Alchemists, who tend to do analysis across an array of data types and use cases, these are specialists.
What’s most exciting here is that this is genuinely new intelligence. Enterprises have generally had this data, but they either weren’t storing it or didn’t have the ability to interpret it economically. All of that “lost” data can now be used.
Still, beware the “so what” problem. Just because we have the methods to extract new insights doesn’t make them valuable. We’ve seen companies that begin with the problem they want to solve, and others blinded by the magic of the method. The latter category struggles to get funding.
Examples include Clarifai, Gridspace, Orbital Insight, Descartes Labs, Deep Genomics and Atomwise.
“Magic Wands” Seamlessly Fix A Workflow
These are SaaS tools that make work more effective, not just by extracting insights from the data you provide but by seamlessly integrating those insights into your daily workflow, creating a level of machine intelligence assistance that feels like “magic.” They are similar to the Lasers in that they have an interface that helps the user solve a specific problem — but they tend to rely on a user’s or enterprise’s data rather than creating their own new dataset from scratch.
For example, Textio is a text editor that recommends improvements to job descriptions as you type. With it, I can go from a 40th percentile job description to a 90th percentile one in just a few minutes, all thanks to a beautifully presented machine learning algorithm.
I believe that in five years we all will be using these tools across different use cases. They make the user look like an instant expert by codifying lessons found in domain-specific data. They can aggregate intelligence and silently bake it into products. We expect this space to heat up, and can’t wait to see more Magic Wands.
The risk is that by relying on such tools, humans will lose expertise (in the same way that autopilot created the risk of pilots’ core skills decaying). To offset this, makers of these products should design their UIs to fortify the user’s knowledge rather than replace it (e.g., by educating the user while making a recommendation, or by using a double-blind interface).
Examples include Textio, RelateIQ (acquired by Salesforce), InboxVudu, Sigopt and The Grid.
“Navigators” Create Autonomous Systems For The Physical World
Machine intelligence plays a huge role in enabling autonomous systems like self-driving cars, drones and robots to augment processes in warehouses, agriculture and elderly care. This category is a mix of early stage companies and large established companies like Google, Apple, Uber and Amazon.
Such technologies give us the ability to rethink transportation and logistics entirely, especially in emerging market countries that lack robust physical infrastructure. We also can use them to complete tasks that were historically very dangerous for humans.
Before committing to this kind of technology, companies should feel confident that they can raise large amounts of capital and recruit the best minds in some of the most sought-after fields. Many of these problems require experts across varied specialties, like hardware, robotics, vision and audio. They also will have to deal with steep regulatory hurdles (e.g., self-driving car regulations).
Examples include Blue River Technologies, Airware, Clearpath Robotics, Kiva Systems (acquired by Amazon), 3DR, Skycatch, Cruise Automation and the self-driving car groups at Google, Uber, Apple and Tesla.
“Agents” Create Cyborgs And Bots To Help With Virtual Tasks
Sometimes the best way to use machine intelligence is to pair it with human intelligence. Cyborgs and bots are similar in that they both help you complete tasks, but a cyborg appears as if it were human (it blends human and machine intelligence behind the scenes, has a proper name, and attempts to interact like a person would), whereas a bot is explicitly non-human and relies on you to provide the human-level guidance that tells it what to do.
Cyborgs most often complete complex tasks, like customer service via real-time chat or meeting scheduling via email (e.g., Clara from Clara Labs or Amy from x.ai). Bots tend to help you perform basic research, complete online transactions and help your team stay on top of tasks (e.g., Howdy, the project management bot).
In both cases, this is the perfect blending of humans and machines: The computers take the transactional grunt work pieces of the task and interact with us for the higher-level decision-making and creativity.
Cyborg-based companies start as mostly manual services and, over time, become more machine-driven as technology matures. The risk is whether they can make that transition quickly enough. For both cyborgs and bots, privacy and security will be an ongoing concern, as we trust more and more of our data (e.g., calendars, email, documents, credit cards) to them.
Examples include Clara, x.ai, Facebook M, Digital Genius, Kasisto and Howdy.
“Pioneers” Are Very Smart
Some machine intelligence companies begin life as academic projects. When the teams — professors and graduate students with years of experience in the field — discover they have something marketable, they (or their universities) spin them out into companies.
Aggregating a team like that is, in itself, a viable market strategy, because there are so few people with 8-10 years of experience in this field. Their brains are so valuable that investors are willing to take the risk on the basis of the team alone — even if the business models still need some work.
In fact, there are many extremely important problems to solve that don’t line up with short-term use cases. These teams are the ones solving the problems that seem impossible, and they are among the few who can potentially make them possible!
This approach can work brilliantly if the team has a problem they are truly devoted to working on, but it is tough to keep the team together if they are banding together for the sake of solidarity and the prospect of an acqui-hire. They also need funders who are aligned with their longer-term vision.
Examples include DeepMind (acquired by Google), DNN Research (acquired by Google), Numenta, Vicarious, NNaiSense and Curious AI.
As you can see, machine intelligence is a very active space. Many companies out there may not fit neatly into one of these categories, but these are the ones we see most often.
The obvious question for all of these categories is which are most attractive for investment? Individual startups are outliers by definition, so it’s hard to make it black and white, and we’re so excited about this space that it’s really just different degrees of optimism. That said, I’m particularly excited about the Lasers and Magic Wands, because they can turn new types of data into actionable intelligence right now, and because they can take advantage of well-worn SaaS techniques.
More on these to come. Stay tuned.
Disclosure: Bloomberg Beta is an investor in Diffbot, Tule Technologies, Mavrx, Gridspace, Orbital Insight, Textio, Howdy and several other machine intelligence companies that are not mentioned in this article.
The Current State of Machine Intelligence
I spent the last three months learning about every artificial intelligence, machine learning, or data related startup I could find — my current list has 2,529 of them to be exact. Yes, I should find better things to do with my evenings and weekends but until then…
Why do this?
A few years ago, investors and startups were chasing “big data” (I helped put together a landscape on that industry). Now we’re seeing a similar explosion of companies calling themselves artificial intelligence, machine learning, or some such; collectively, I call these “machine intelligence” (I’ll get into the definitions in a second). Our fund, Bloomberg Beta, which is focused on the future of work, has been investing in these approaches. I created this landscape to start to put startups into context. I’m a thesis-oriented investor and it’s much easier to identify crowded areas and see white space once the landscape has some sort of taxonomy.
What is “machine intelligence,” anyway?
I mean “machine intelligence” as a unifying term for what others call machine learning and artificial intelligence. (Some others have used the term before, without quite describing it or understanding how laden this field has been with debates over descriptions.)
I would have preferred to avoid a different label, but when I tried either “artificial intelligence” or “machine learning,” both proved too narrow: when I called it “artificial intelligence,” too many people were distracted by whether certain companies were “true AI,” and when I called it “machine learning,” many thought I wasn’t doing justice to the more “AI-esque” methods, like the various flavors of deep learning. People have immediately grasped “machine intelligence,” so here we are. ☺
Computers are learning to think, read, and write. They’re also picking up human sensory function, with the ability to see and hear (arguably to touch, taste, and smell, though those have been of a lesser focus). Machine intelligence technologies cut across a vast array of problem types (from classification and clustering to natural language processing and computer vision) and methods (from support vector machines to deep belief networks). All of these technologies are reflected on this landscape.
What this landscape doesn’t include, however important, is “big data” technologies. Some have used this term interchangeably with machine learning and artificial intelligence, but I want to focus on the intelligence methods rather than data, storage, and computation pieces of the puzzle for this landscape (though of course data technologies enable machine intelligence).
Which companies are on the landscape?
I considered thousands of companies, so while the chart is crowded it’s still a small subset of the overall ecosystem. “Admissions rates” to the chart were fairly in line with those of Yale or Harvard, and perhaps equally arbitrary. ☺
I tried to pick companies that used machine intelligence methods as a defining part of their technology. Many of these companies clearly belong in multiple areas but for the sake of simplicity I tried to keep companies in their primary area and categorized them by the language they use to describe themselves (instead of quibbling over whether a company used “NLP” accurately in its self-description).
If you want to get a sense for innovations at the heart of machine intelligence, focus on the core technologies layer. Some of these companies have APIs that power other applications, some sell their platforms directly into enterprise, some are at the stage of cryptic demos, and some are so stealthy that all we have is a few sentences to describe them.
The most exciting part for me was seeing how much is happening in the application space. These companies separated nicely into those that reinvent the enterprise, industries, and ourselves.
If I were looking to build a company right now, I’d use this landscape to help figure out what core and supporting technologies I could package into a novel industry application. Everyone likes solving the sexy problems, but there are an incredible number of “unsexy” industry use cases with massive market opportunities, and powerful enabling technologies that are begging to be used for creative applications (e.g., Watson Developer Cloud, AlchemyAPI).
Reflections on the landscape:
We’ve seen a few great articles recently outlining why machine intelligence is experiencing a resurgence and documenting its enabling factors. (Kevin Kelly, for example, chalks it up to cheap parallel computing, large datasets, and better algorithms.) I focused on understanding the ecosystem on a company-by-company level and drawing implications from that.
Yes, it’s true, machine intelligence is transforming the enterprise, industries and humans alike.
On a high level it’s easy to understand why machine intelligence is important, but it wasn’t until I laid out what many of these companies are actually doing that I started to grok how much it is already transforming everything around us. As Kevin Kelly more provocatively put it, “the business plans of the next 10,000 startups are easy to forecast: Take X and add AI”. In many cases you don’t even need the X — machine intelligence will certainly transform existing industries, but will also likely create entirely new ones.
Machine intelligence is enabling applications we already expect like automated assistants (Siri), adorable robots (Jibo), and identifying people in images (like the highly effective but unfortunately named DeepFace). However, it’s also doing the unexpected: protecting children from sex trafficking, reducing the chemical content in the lettuce we eat, helping us buy shoes online that fit our feet precisely, and destroying 80’s classic video games.
Many companies will be acquired.
I was surprised to find that over 10% of the eligible (non-public) companies on the slide have been acquired. It was in stark contrast to the big data landscape we created, which had very few acquisitions at the time. No jaw will drop when I reveal that Google is the number one acquirer, though there were more than 15 different acquirers just for the companies on this chart. My guess is that by the end of 2015 almost another 10% will be acquired. For thoughts on which specific ones will get snapped up in the next year, you’ll have to twist my arm…
Big companies have a disproportionate advantage, especially those that build consumer products.
The giants in search (Google, Baidu), social networks (Facebook, LinkedIn, Pinterest), content (Netflix, Yahoo!), mobile (Apple) and e-commerce (Amazon) are in an incredible position. They have massive datasets and constant consumer interactions that enable tight feedback loops for their algorithms (and these factors combine to create powerful network effects) — and they have the most to gain from the low hanging fruit that machine intelligence bears.
Best-in-class personalization and recommendation algorithms have enabled these companies’ success (it’s both impressive and disconcerting that Facebook recommends you add the person you had a crush on in college and Netflix tees up that perfect guilty-pleasure sitcom). Now they are all competing in a new battlefield: the move to mobile. Winning mobile will require lots of machine intelligence: state-of-the-art natural language interfaces (like Apple’s Siri), visual search (like Amazon’s “FireFly”), and dynamic question-answering technology that tells you the answer instead of providing a menu of links (all of the search companies are wrestling with this). Large enterprise companies (IBM and Microsoft) have also made incredible strides in the field, though they don’t have the same human-facing requirements, so they are focusing their attention more on knowledge-representation tasks on large industry datasets, like IBM Watson’s application to assist doctors with diagnoses.
The talent’s in the New (AI)vy League.
In the last 20 years, most of the best minds in machine intelligence (especially the ‘hardcore AI’ types) worked in academia. They developed new machine intelligence methods, but there were few real world applications that could drive business value.
Now that real world applications of more complex machine intelligence methods like deep belief nets and hierarchical neural networks are starting to solve real world problems, we’re seeing academic talent move to corporate settings. Facebook recruited NYU professors Yann LeCun and Rob Fergus to their AI Lab, Google hired University of Toronto’s Geoffrey Hinton, Baidu wooed Andrew Ng. It’s important to note that they all still give back significantly to the academic community (one of LeCun’s lab mandates is to work on core research to give back to the community, Hinton spends half of his time teaching, Ng has made machine intelligence more accessible through Coursera) but it is clear that a lot of the intellectual horsepower is moving away from academia.
For aspiring minds in the space, these corporate labs not only offer lucrative salaries and access to the “godfathers” of the industry, but also the most important ingredient: data. These labs offer talent access to datasets they could never get otherwise (the ImageNet dataset is fantastic, but can’t compare to what Facebook, Google, and Baidu have in house).
As a result, we’ll likely see corporations become the home of many of the most important innovations in machine intelligence and recruit many of the graduate students and postdocs that would have otherwise stayed in academia.
There will be a peace dividend.
Big companies have an inherent advantage and it’s likely that the ones who will win the machine intelligence race will be even more powerful than they are today. However, the good news for the rest of the world is that the core technology they develop will rapidly spill into other areas, both via departing talent and published research.
Similar to the big data revolution, which was sparked by the release of Google’s BigTable and BigQuery papers, we will see corporations release equally groundbreaking new technologies into the community. Those innovations will be adapted to new industries and use cases that the Googles of the world don’t have the DNA or desire to tackle.
Opportunities for entrepreneurs:
“My company does deep learning for X”
Few words will make you more popular in 2015. That is, if you can credibly say them. Deep learning is a particularly popular method in the machine intelligence field that has been getting a lot of attention. Google, Facebook, and Baidu have achieved excellent results with the method for vision- and language-based tasks, and startups like Enlitic have shown promising results as well.
Yes, it will be an overused buzzword, with excitement running ahead of results and business models, but unlike with the hundreds of companies that say they do “big data,” it’s much easier here to cut to the chase in verifying credibility, if you’re paying attention. The most exciting part about the deep learning method is that, when applied with the appropriate levels of care and feeding, it can replace some of the intuition that comes from domain expertise with automatically learned features. The hope is that, in many cases, it will allow us to fundamentally rethink what a best-in-class solution is.
As an investor who is curious about the quirkier applications of data and machine intelligence, I can’t wait to see what creative problems deep learning practitioners try to solve. I completely agree with Jeff Hawkins when he says a lot of the killer applications of these types of technologies will sneak up on us. I fully intend to keep an open mind.
“Acquihire as a business model”
People say that data scientists are unicorns in short supply. The talent crunch in machine intelligence will make it look like we had a glut of data scientists. In the data field, many people gained industry experience over the past decade; most hardcore machine intelligence work, however, has happened only in academia. We won’t be able to grow this talent overnight.
This shortage of talent is a boon for founders who actually understand machine intelligence. A lot of companies in the space will get seed funding because there are early signs that the acquihire price for a machine intelligence expert is north of 5x that of a normal technical acquihire (take, for example, DeepMind, where the price per technical head was somewhere between $5–10M, if we choose to consider it in the acquihire category). I’ve had multiple friends ask me, only semi-jokingly, “Shivon, should I just round up all of my smartest friends in the AI world and call it a company?” To be honest, I’m not sure what to tell them. (At Bloomberg Beta, we’d rather back companies building for the long term, but that doesn’t mean this won’t be a lucrative strategy for many enterprising founders.)
A good demo is disproportionately valuable in machine intelligence
I remember watching Watson play Jeopardy. When it struggled at the beginning I felt really sad for it. When it started trouncing its competitors I remember cheering it on as if it were the Toronto Maple Leafs in the Stanley Cup finals (disclaimers: (1) I was an IBMer at the time so was biased towards my team (2) the Maple Leafs have not made the finals during my lifetime — yet — so that was purely a hypothetical).
Why do these awe-inspiring demos matter? The last wave of technology companies to IPO didn’t have demos that most of us would watch, so why should machine intelligence companies? The last wave of companies were very computer-like: database companies, enterprise applications, and the like. Sure, I’d like to see a 10x more performant database, but most people wouldn’t care. Machine intelligence wins and loses on demos because 1) the technology is very human, enough to inspire shock and awe, 2) business models tend to take a while to form, so companies need more funding for a longer period of time to get there, and 3) they are fantastic acquisition bait. Watson beat the world’s best humans at trivia, even if it thought Toronto was a US city. DeepMind blew people away by beating video games. Vicarious took on CAPTCHA. There are a few companies still in stealth that promise to impress beyond that, and I can’t wait to see if they get there.
Demo or not, I’d love to talk to anyone using machine intelligence to change the world. There’s no industry too unsexy, no problem too geeky. I’d love to be there to help, so don’t be shy. I hope this landscape chart sparks a conversation. The goal is to make this a living document, and I want to know if there are companies or categories missing. I welcome feedback and would like to put together a dynamic visualization where I can add more companies and dimensions to the data (methods used, data types, end users, investment to date, location, etc.) so that folks can interact with it to better explore the space.
Questions and comments: Please email me. Thank you to Andrew Paprocki, Aria Haghighi, Beau Cronin, Ben Lorica, Doug Fulop, David Andrzejewski, Eric Berlow, Eric Jonas, Gary Kazantsev, Gideon Mann, Greg Smithies, Heidi Skinner, Jack Clark, Jon Lehr, Kurt Keutzer, Lauren Barless, Pete Skomoroch, Pete Warden, Roger Magoulas, Sean Gourley, Stephen Purpura, Wes McKinney, Zach Bogue, the Quid team, and the Bloomberg Beta team for your ever-helpful perspectives!
Disclaimer: Bloomberg Beta is an investor in Adatao, Alation, Aviso, Context Relevant, Mavrx, Newsle, Orbital Insight, Pop Up Archive, and two others on the chart that are still undisclosed. We’re also investors in a few other machine intelligence companies that aren’t focusing on areas that were a fit for this landscape, so we left them off.
For the full resolution version of the landscape please click here.

The Competitive Landscape for Machine Intelligence

By Hugo Angel,

Three years ago, our venture capital firm began studying startups in artificial intelligence. AI felt misunderstood, burdened by expectations from science fiction, and so for the last two years we’ve tried to capture the most-important startups in the space in a one-page landscape. (We prefer the more neutral term “machine intelligence” over “AI.”)
In past years, we heard mostly from startup founders and academics — people who pay attention to early, far-reaching trends in technology. But this year was different. This year we’ve heard more from Fortune 500 executives with questions about machine intelligence than from startup founders.
These executives are asking themselves what to do. Over the past year, machine intelligence has exploded, with $5 billion in venture investment, a few big acquisitions, and hundreds of thousands of people reading our earlier research. As with the internet in the 1990s, executives are realizing that this new technology could change everything, but nobody knows exactly how or when.
If this year’s landscape shows anything, it’s that the impact of machine intelligence is already here. Almost every industry is already being affected, from agriculture to transportation. Every employee can use machine intelligence to become more productive with tools that exist today. Companies have at their disposal, for the first time, the full set of building blocks to begin embedding machine intelligence in their businesses.
And unlike with the internet, where latecomers often bested those who were first to market, the companies that get started immediately with machine intelligence could enjoy a lasting advantage.
So what should the Fortune 500 and other companies be doing to get started?
Make Talent More Productive
One way to immediately begin getting the value of machine intelligence is to support your talent with readily available machine intelligence productivity tools. Some of the earliest wins have been productivity tools tuned to specific areas of knowledge work — what we call “Enterprise Functions” in our landscape. With these tools, every employee can get some of the powers previously available only to CEOs.
These tools can aid with monitoring and predicting (e.g., companies like Clari forecasting client-by-client sales to help prioritize deals) and with coaching and training (Textio’s* predictive text-editing platform to help employees write more-effective documents).
Find Entirely New Sources of Data
The next step is to use machine intelligence to realize value from new sources of data, which we highlight in the “Enterprise Intelligence” section of the landscape. These new sources are now accessible because machine intelligence software can rapidly review enormous amounts of data in a way that would have been too difficult and expensive for people to do.
Imagine if you could afford to have someone listen to every audio recording of your salespeople and predict their performance, or have a team look at every satellite image taken from space and determine what macroeconomic indicators could be gleaned from them. These data sources might already be owned by your company (e.g., transcripts of customer service conversations or sensor data predicting outages and required maintenance), or they might be newly available in the outside world (data on the open web providing competitive information).
Rethink How You Build Software
Let’s say you’ve tried some new productivity tools and started to mine new sources of data for insight. The next frontier in capturing machine intelligence’s value is building a lasting competitive advantage based on this new kind of software.
But machine intelligence is not just about better software; it requires entirely new processes and a different mindset. Machine intelligence is a new discipline for managers to learn, one that demands a new class of software talent and a new organizational structure.
Most IT groups think in terms of applications and data. New machine intelligence IT groups will think about applications, data, and models. Think of software as the combination of code, data, and a model. “Model” here means business rules, like rules for approving loans or adjusting power consumption in data centers. In traditional software, programmers created these rules by hand. Today machine intelligence can use data and new algorithms to generate a model too complex for any human programmer to write.
With traditional software, the model changes only when programmers explicitly rewrite it. With machine intelligence, companies can create models that evolve much more regularly, allowing you to build a lasting advantage that strengthens over time as the model “learns.”
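As a minimal sketch of the contrast, using the loan-approval example above (the features, labels, and monthly update cadence are invented for illustration), compare a rule a programmer must rewrite by hand with a model that is re-estimated every time a new batch of outcomes arrives:

```python
# Sketch: a hand-coded rule vs. a model re-estimated as new data arrives.
# Features, labels, and the monthly cadence are invented for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

def hand_coded_rule(income, debt_ratio):
    return income > 50 and debt_ratio < 0.4   # frozen until a programmer edits it

model = SGDClassifier(loss="log_loss")        # a simple probabilistic model
rng = np.random.default_rng(1)

for month in range(12):                       # each month brings new outcomes
    X = rng.normal(size=(200, 2))                   # toy (income, debt) data
    y = (X[:, 0] - X[:, 1] > 0).astype(int)         # toy "repaid" labels
    model.partial_fit(X, y, classes=[0, 1])         # the model updates itself

print(model.predict([[1.0, 0.2]]))            # decision for a new applicant
```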
Think of these models as narrowly focused employees with great memories and not-so-great social skills — idiot savants. They can predict how best to grow the business, make customers happier, or cut costs. But they’ll often fail miserably if you try to apply them to something new, or, worse, they may degrade invisibly as your business and data change.
All of this means that the discipline of creating machine intelligence software differs from traditional software, and companies need to staff accordingly. Luckily, though finding the right talent may be hard, the tools that developers need to build this software are readily available.

For the first time, there is a maturing “Stack” (see our landscape) of building blocks that companies can use to practice the new discipline of machine intelligence. Many of these tools are available as free, open-source libraries from technology companies such as Google (TensorFlow), Microsoft (CNTK), and Amazon (DSSTNE). Others make it easier for data scientists to collaborate (see “Data Science”) and manage machine intelligence models (“Machine Learning”).
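To give a feel for what these building blocks look like in practice, here is a minimal, illustrative model using TensorFlow’s Keras API; the data, layer sizes, and training settings are our assumptions, not anything these libraries prescribe.

```python
# Minimal sketch using TensorFlow's Keras API; data and layer sizes are
# illustrative assumptions, not a recommended architecture.
import numpy as np
import tensorflow as tf

X = np.random.rand(256, 8).astype("float32")   # toy features
y = (X.sum(axis=1) > 4).astype("float32")      # toy binary labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print(model.predict(X[:3], verbose=0))  # predicted probabilities
```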

If your CEO is struggling to answer the question of how machine intelligence will change your industry, take a look at the range of markets in our landscape. The startups in these sections give a sense of how different industries may be altered. Machine intelligence’s first useful applications in an industry tend to use data that previously had lain dormant. Health care is a prime example: We’re seeing predictive models that run on patient data and computer vision that diagnoses disease from medical images and gleans lifesaving insights from genomic data. Next up will be finance, transportation, and agriculture because of the volume of data available and their sheer economic value.
Your company will still need to decide how much to trust these models and how much power to grant them in making business decisions. In some cases the risk of an error will be too great to justify the speed and new capabilities. Your company will also need to decide how often and with how much oversight to revise your models. But the companies that decide to invest in the right models and successfully embed machine intelligence in their organization will improve by default as their models learn from experience.
Economists have long wondered why the so-called computing revolution has failed to deliver productivity gains. Machine intelligence will finally realize computing’s promise. The C-suites and boardrooms that recognize that fact first — and transform their ways of working accordingly — will outrun and outlast their competitors.
*The authors’ fund has invested in this company.
Shivon Zilis is a partner and founding member of Bloomberg Beta, which invests heavily in the future of work. She focuses on early-stage data and machine intelligence investments.
James Cham is a Partner at Bloomberg Beta where he invests in data-centric and machine learning-related companies.