Category: Algorithms


Researchers take major step forward in Artificial Intelligence

By Hugo Angel,

The long-standing dream of using Artificial Intelligence (AI) to build an artificial brain has taken a significant step forward, as a team led by Professor Newton Howard from the University of Oxford has successfully prototyped a nanoscale, AI-powered artificial brain in the form factor of a high-bandwidth neural implant.
Professor Newton Howard (pictured above and below) holding parts of the implant device
In collaboration with INTENT LTD, Qualcomm Corporation, Intel Corporation, Georgetown University and the Brain Sciences Foundation, Professor Howard’s Oxford Computational Neuroscience Lab in the Nuffield Department of Surgical Sciences has developed the proprietary algorithms and the optoelectronics required for the device. Testing in rodents is on target to begin very soon.
This achievement caps over a decade of research by Professor Howard at MIT’s Synthetic Intelligence Lab and the University of Oxford, work that resulted in several issued US patents on the technologies and algorithms that power the device:
  • the Fundamental Code Unit of the Brain (FCU)
  • the Brain Code (BC)
  • the Biological Co-Processor (BCP)

Together, these form the latest foundations for any eventual merger between biological and machine intelligence. Ni2o (pronounced “Nitoo”) is the entity that Professor Howard has licensed to further develop, market and promote these technologies.

The Biological Co-Processor is unique in that it uses advanced nanotechnology, optogenetics and deep machine learning to intelligently map internal events, such as neural spiking activity, to external physiological, linguistic and behavioral expression. The implant contains over a million carbon nanotubes, each of which is 10,000 times smaller than the width of a human hair. Carbon nanotubes provide a natural, high-bandwidth interface, as they conduct heat, light and electricity, instantaneously updating the neural lace. They adhere to neuronal constructs and even promote neural growth. Qualcomm team leader Rudy Beraha commented, ‘Although the prototype unit shown today is tethered to external power, a commercial Brain Co-Processor unit will be wireless and inductively powered, enabling it to be administered with a minimally invasive procedure.’
The device uses a combination of methods to write to the brain, including 
  • pulsed electricity
  • light and 
  • various molecules that stimulate or inhibit the activation of specific neuronal groups.
These can be targeted to stimulate a desired response, such as releasing chemicals in patients suffering from a neurological disorder or imbalance. The BCP is designed as a fully integrated system to use the brain’s own internal systems and chemistries to pattern and mimic healthy brain behavior, an approach that stands in stark contrast to the current state of the art, which is to simply apply mild electrocution to problematic regions of the brain. 
Therapeutic uses
The Biological Co-Processor promises to provide relief for millions of patients suffering from neurological, psychiatric and psychological disorders as well as degenerative diseases. Initial therapeutic uses will likely be for patients with traumatic brain injuries and neurodegenerative disorders, such as Alzheimer’s, as the BCP will strengthen the weakened connections responsible for lost memories and skills. Once implanted, the device provides a closed-loop, self-learning platform able to both determine and administer the perfect balance of pharmaceutical, electroceutical, genomeceutical and optoceutical therapies.
Dr Richard Wirt, a Senior Fellow at Intel Corporation and Co-Founder of INTENT, Ni2o’s partner in bringing the BCP to market, commented on the device, saying, ‘In the immediate timeframe, this device will have many benefits for researchers, as it could be used to replicate an entire brain image, synchronously mapping internal and external expressions of human response. Over the long term, the potential therapeutic benefits are unlimited.’
The brain controls all organs and systems in the body, so the cure to nearly every disease resides there. – Professor Newton Howard
Rather than simply disrupting neural circuits, the machine learning systems within the BCP are designed to interpret these signals and intelligently read from and write to the surrounding neurons. These capabilities could be used to repair degenerative or trauma-induced damage and perhaps write these memories and skills to other, healthier areas of the brain.
One day, these capabilities could also be used in healthy patients to radically augment human ability and proactively improve health. As Professor Howard points out: ‘The brain controls all organs and systems in the body, so the cure to nearly every disease resides there.’ Speaking more broadly, Professor Howard sees the merging of man with machine as our inevitable destiny, claiming it to be ‘the next step on the blueprint that the author of it all built into our natural architecture.’
With the resurgence of neuroscience and AI-enhanced machine learning, there has been renewed interest in brain implants. This past March, Elon Musk and Bryan Johnson independently announced that they are focusing on, and investing in, the brain-computer interface domain.
When asked about these new competitors, Professor Howard said he is happy to see all these new startups and established names getting into the field – he only wonders what took them so long, stating: ‘I would like to see us all working together, as we have already established a mathematical foundation and software framework to solve so many of the challenges they will be facing. We could all get there faster if we could work together – after all, the patient is the priority.’
© 2017 Nuffield Department of Surgical Sciences, John Radcliffe Hospital, Headington, Oxford, OX3 9DU
ORIGINAL: NDS Oxford
2 June 2017 

The future of AI is neuromorphic. Meet the scientists building digital ‘brains’ for your phone

By Hugo Angel,

Neuromorphic chips are being designed to specifically mimic the human brain – and they could soon replace CPUs
Image: brain activity map (Neuroscape Lab)
AI services like Apple’s Siri and others operate by sending your queries to faraway data centers, which send back responses. The reason they rely on cloud-based computing is that today’s electronics don’t come with enough computing power to run the processing-heavy algorithms needed for machine learning. The typical CPUs most smartphones use could never handle a system like Siri on the device. But Dr. Chris Eliasmith, a theoretical neuroscientist and co-CEO of Canadian AI startup Applied Brain Research, is confident that a new type of chip is about to change that.
“Many have suggested Moore’s law is ending and that means we won’t get ‘more compute’ cheaper using the same methods,” Eliasmith says. He’s betting on the proliferation of ‘neuromorphics’ — a type of computer chip that is not yet widely known but already being developed by several major chip makers.
Traditional CPUs process instructions based on “clocked time” – information is transmitted at regular intervals, as if managed by a metronome. By packing in digital equivalents of neurons, neuromorphics communicate in parallel (and without the rigidity of clocked time) using “spikes” – bursts of electric current that can be sent whenever needed. Just like our own brains, the chip’s neurons communicate by processing incoming flows of electricity – each neuron able to determine from the incoming spike whether to send current out to the next neuron.
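To make the event-driven idea concrete, here is a minimal leaky integrate-and-fire neuron in plain Python. It is an illustrative sketch only: the constants are arbitrary and it does not model any particular vendor’s chip. Input current accumulates on a leaky ‘membrane’, and a spike is emitted only when a threshold is crossed.

```python
# Minimal leaky integrate-and-fire neuron: an illustrative sketch, not the
# circuit of any specific neuromorphic chip. All constants are arbitrary.

def simulate_lif(input_current, dt=1e-3, tau=0.02, threshold=1.0, v_reset=0.0):
    """Return the indices of the time steps at which the neuron spikes."""
    v = v_reset                      # membrane potential
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leaky integration: the potential decays toward rest and is driven by input.
        v += dt * (-v / tau + i_in)
        if v >= threshold:           # threshold crossed: emit a spike event
            spike_times.append(step)
            v = v_reset              # reset after spiking
    return spike_times

# A constant drive produces a regular train of spike events; nothing downstream
# needs a global clock -- it only has to react to spikes when they arrive.
print(simulate_lif([60.0] * 100))
```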
What makes this a big deal is that these chips require far less power to process AI algorithms. For example, one neuromorphic chip made by IBM contains five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of power. An Intel processor would use anywhere from 35 to 140 watts, or up to 2000 times more power.
Eliasmith points out that neuromorphics aren’t new and that their designs have been around since the 80s. Back then, however, the designs required specific algorithms be baked directly into the chip. That meant you’d need one chip for detecting motion, and a different one for detecting sound. None of the chips acted as a general processor in the way that our own cortex does.
This was partly because there wasn’t any way for programmers to design algorithms that could do much with a general purpose chip. So even as these brain-like chips were being developed, building algorithms for them remained a challenge.
 
Eliasmith and his team are keenly focused on building tools that would allow a community of programmers to deploy AI algorithms on these new cortical chips.
Central to these efforts is Nengo, a compiler that developers can use to build their own algorithms for AI applications that will operate on general purpose neuromorphic hardware. A compiler is a software tool that programmers use to write code and that translates that code into the complex instructions that get hardware to actually do something. What makes Nengo useful is its use of the familiar Python programming language – known for its intuitive syntax – and its ability to put the algorithms on many different hardware platforms, including neuromorphic chips. Pretty soon, anyone with an understanding of Python could be building sophisticated neural nets made for neuromorphic hardware.
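As a rough illustration of what building on the compiler looks like, here is a small toy model written against Nengo’s published Python API (the example is mine, not one of the systems mentioned in the article): one population of simulated spiking neurons represents a sine wave, and a second population is connected so that it decodes the square of that signal.

```python
import numpy as np
import nengo

# Toy Nengo model: two populations of simulated spiking neurons, one
# representing a sine wave and one decoding its square.
with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))    # input signal
    a = nengo.Ensemble(n_neurons=100, dimensions=1)       # represents x
    b = nengo.Ensemble(n_neurons=100, dimensions=1)       # represents x**2
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)     # decode the square
    probe = nengo.Probe(b, synapse=0.01)                  # filtered output

with nengo.Simulator(model) as sim:                       # CPU reference backend
    sim.run(1.0)
print(sim.data[probe][-5:])   # last decoded values, roughly sin(2*pi*t)**2
```

In principle, the same model description is what gets retargeted at other backends; which neuromorphic boards are supported at any given time is a property of the toolchain rather than of the script.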
“Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo,” Peter Suma, a trained computer scientist and the other co-CEO of Applied Brain Research, tells me.
Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise for being the most complex brain model ever simulated on a computer. Spaun demonstrated that computers could be made to interact fluidly with the environment, and perform human-like cognitive tasks like recognizing images and controlling a robot arm that writes down what it sees. The machine wasn’t perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9,000 times faster, using less energy than it would on conventional CPUs – and by the end of 2017, all of Spaun will be running on neuromorphic hardware.
Eliasmith won NSERC’s John C. Polanyi Award for that project – Canada’s highest recognition for a breakthrough scientific achievement – and once Suma came across the research, the pair joined forces to commercialize these tools.
“While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context-aware AIs,” says Suma. Suma points out that while today’s AIs like Siri remain offline until explicitly called into action, we’ll soon have artificial agents that are ‘always on’ and ever-present in our lives.
“Imagine a SIRI that listens and sees all of your conversations and interactions. You’ll be able to ask it for things like – ‘Who did I have that conversation about doing the launch for our new product in Tokyo?’ or ‘What was that idea for my wife’s birthday gift that Melissa suggested?’” he says.
When I raised concerns that some company might then have an uninterrupted window into even the most intimate parts of my life, I’m reminded that because the AI would be processed locally on the device, there’s no need for that information to touch a server owned by a big company. And for Eliasmith, this ‘always on’ component is a necessary step towards true machine cognition. “The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to, is the fact that the latter always operate in real-time. Bodies and brains are built to work with the physics of the world,” he says.
Already, major efforts are heating up across the IT industry to get AI services into the hands of users. Companies like Apple, Facebook, Amazon, and even Samsung are developing conversational assistants they hope will one day become digital helpers.
ORIGINAL: Wired
Monday 6 March 2017

Xnor.ai – Bringing Deep Learning AI to the Devices at the Edge of the Network

By Hugo Angel,

 

Photo  – The Xnor.ai Team

 

Today we announced our funding of Xnor.ai. We are excited to be working with Ali Farhadi, Mohammad Rastegari and their team on this new company. We are also looking forward to working with Paul Allen’s team at the Allen Institute for AI and, in particular, our good friend and CEO of AI2, Dr. Oren Etzioni, who is joining the board of Xnor.ai. Machine Learning and AI have been a key investment theme for us for the past several years, and bringing deep learning capabilities such as image and speech recognition to small devices is a huge challenge.

Mohammad and Ali and their team have developed a platform that enables low-resource devices to perform tasks that usually require large farms of GPUs in cloud environments. This, we believe, has the opportunity to change how we think about certain types of deep learning use cases as they get extended from the core to the edge. Image and voice recognition are great examples. These are broad areas of use out in the world – usually on a mobile device – but right now they require the device to be connected to the internet so that those large farms of GPUs can process all the information your device is capturing and sending, and then transmit the answer back. If you could do that on your phone (while preserving battery life), it would open up a new world of options.

It is just these kinds of inventions that put the greater Seattle area at the center of the revolution in machine learning and AI that is upon us. Xnor.ai came out of the outstanding work the team was doing at the Allen Institute for Artificial Intelligence (AI2), and Ali is a professor at the University of Washington. Between Microsoft, Amazon, the University of Washington and research institutes such as AI2, our region is leading the way as new types of intelligent applications take shape. Madrona is energized to play our role as company builder and supporter of these amazing inventors and founders.

ORIGINAL: Madrona
By Matt McIlwain

AI acceleration startup Xnor.ai collects $2.6M in funding

I was excited by the promise of Xnor.ai and its technique that drastically reduces the computing power necessary to perform complex operations like computer vision. Seems I wasn’t the only one: the company, just officially spun off from the Allen Institute for AI (AI2), has attracted $2.6 million in seed funding from its parent company and Madrona Venture Group.

The specifics of the product and process you can learn about in detail in my previous post, but the gist is this: machine learning models for things like object and speech recognition are notoriously computation-heavy, making them difficult to implement on smaller, less powerful devices. Xnor.ai’s researchers use a bit of mathematical trickery to reduce that computing load by an order of magnitude or two — something it’s easy to see the benefit of.
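The ‘mathematical trickery’ referred to here is, in the team’s published XNOR-Net work, the binarization of weights and activations so that the multiply-accumulate at the heart of a convolution collapses into an XNOR followed by a popcount. Below is a minimal sketch of that arithmetic (my own illustration, not Xnor.ai’s code).

```python
# Illustration of the XNOR + popcount trick behind binary neural networks:
# when both vectors contain only +1/-1 values, their dot product can be
# computed with bitwise operations instead of floating-point multiplications.

def binarize(vec):
    """Map real values to the bit 1 (for >= 0) or 0 (for < 0)."""
    return [1 if v >= 0 else 0 for v in vec]

def xnor_popcount_dot(bits_a, bits_b):
    """Dot product of the corresponding +1/-1 vectors, via XNOR and popcount."""
    n = len(bits_a)
    matches = sum(1 for a, b in zip(bits_a, bits_b) if a == b)  # XNOR, then popcount
    return 2 * matches - n   # each match contributes +1, each mismatch -1

a = [0.7, -1.2, 0.1, -0.4]
b = [0.3, 0.9, -0.2, -0.8]
print(xnor_popcount_dot(binarize(a), binarize(b)))   # -> 0
# Check: sign vectors are [+1,-1,+1,-1] and [+1,+1,-1,-1], whose dot product is 0.
```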

McIlwain will join AI2 CEO Oren Etzioni on the board of Xnor.ai; Ali Farhadi, who led the original project, will be the company’s CEO, and Mohammad Rastegari is CTO.
The new company aims to facilitate commercial applications of its technology (it isn’t quite plug and play yet), but the research that led up to it is, like other AI2 work, open source.

 

AI2 Repository:  https://github.com/allenai/

ORIGINAL: TechCrunch
by
2017/02/03

Deep Learning With Python & Tensorflow – PyConSG 2016

By Hugo Angel,

ORIGINAL: Pycon.SG
Jul 5, 2016
Speaker: Ian Lewis
Description
Python has lots of scientific, data analysis, and machine learning libraries. But there are many problems when starting out on a machine learning project. Which library do you use? How can you use a model that has been trained in your production app? In this talk I will discuss how you can use TensorFlow to create Deep Learning applications and how to deploy them into production.
Abstract
Python has lots of scientific, data analysis, and machine learning libraries. But there are many problems when starting out on a machine learning project. Which library do you use? How do they compare to each other? How can you use a model that has been trained in your production application?
TensorFlow is a new open-source framework created at Google for building Deep Learning applications. TensorFlow allows you to construct easy-to-understand data flow graphs in Python which form a mathematical and logical pipeline. Creating data flow graphs allows easier visualization of complicated algorithms as well as running the training operations over multiple GPUs in parallel.
In this talk I will discuss how you can use TensorFlow to create Deep Learning applications. I will discuss how it compares to other Python machine learning libraries like Theano or Chainer. Finally, I will discuss how trained TensorFlow models can be deployed into a production system using TensorFlow Serving.
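To make the data-flow-graph idea concrete, here is a minimal sketch written against the graph-and-session style API TensorFlow used in this era (a toy linear-regression graph of my own, not code from the talk):

```python
import tensorflow as tf  # written against the 1.x-era graph/session API

# Build a data flow graph for a one-variable linear model y = w * x + b.
x = tf.placeholder(tf.float32, shape=[None], name="x")
y_true = tf.placeholder(tf.float32, shape=[None], name="y_true")
w = tf.Variable(0.0, name="w")
b = tf.Variable(0.0, name="b")
y_pred = w * x + b
loss = tf.reduce_mean(tf.square(y_pred - y_true))           # mean squared error
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# The graph only executes inside a session; the same graph definition can be
# placed on CPUs or GPUs without changing the Python code above.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_op, feed_dict={x: [1.0, 2.0, 3.0], y_true: [2.0, 4.0, 6.0]})
    print(sess.run([w, b]))   # should approach [2.0, 0.0]
```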
Event Page: https://pycon.sg
Produced by Engineers.SG

IBM, Local Motors debut Olli, the first Watson-powered self-driving vehicle

By Hugo Angel,

Olli hits the road in the Washington, D.C. area and later this year in Miami-Dade County and Las Vegas.
Local Motors CEO and co-founder John B. Rogers, Jr. with “Olli” & IBM, June 15, 2016. Rich Riggins/Feature Photo Service for IBM

IBM, along with the Arizona-based manufacturer Local Motors, debuted the first-ever driverless vehicle to use the Watson cognitive computing platform. Dubbed “Olli,” the electric vehicle was unveiled at Local Motors’ new facility in National Harbor, Maryland, just outside of Washington, D.C.

Olli, which can carry up to 12 passengers, taps into four Watson APIs to interact with its riders:

  • Speech to Text
  • Natural Language Classifier
  • Entity Extraction
  • Text to Speech

It can answer questions like “Can I bring my children on board?” and respond to basic operational commands like, “Take me to the closest Mexican restaurant.” Olli can also give vehicle diagnostics, answering questions like, “Why are you stopping?”
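As a rough sketch of how such a pipeline composes, the snippet below chains stand-in stub functions in the order the four services would be used. The stubs are hypothetical placeholders, not Watson SDK calls and not Olli’s actual software; they exist only to show the speech-in, intent-and-entities, speech-out flow.

```python
# Hypothetical sketch of chaining the four services listed above.
# These helper functions are simple placeholder stubs, not Watson SDK calls
# and not Olli's real software.

def speech_to_text(audio):            # stand-in for the Speech to Text service
    return "take me to the closest mexican restaurant"

def classify_intent(text):            # stand-in for the Natural Language Classifier
    return "navigation" if "take me" in text else "question"

def extract_entities(text):           # stand-in for Entity Extraction
    return {"destination": "closest mexican restaurant"} if "restaurant" in text else {}

def text_to_speech(reply):            # stand-in for the Text to Speech service
    return f"<audio: {reply}>"

def handle_rider_utterance(audio):
    text = speech_to_text(audio)                  # 1. transcribe the rider's speech
    intent = classify_intent(text)                # 2. classify what the rider wants
    entities = extract_entities(text)             # 3. pull out the relevant entities
    if intent == "navigation":
        reply = f"Heading to the {entities['destination']}."
    else:
        reply = "Let me check that for you."
    return text_to_speech(reply)                  # 4. speak the reply back

print(handle_rider_utterance(b"..."))
```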

Olli learns from data produced by more than 30 sensors embedded throughout the vehicle, which will be added and adjusted to meet passenger needs and local preferences.
While Olli is the first self-driving vehicle to use IBM Watson Internet of Things (IoT), this isn’t Watson’s first foray into the automotive industry. IBM launched its IoT for Automotive unit in September of last year, and in March, IBM and Honda announced a deal for Watson technology and analytics to be used in the automaker’s Formula One (F1) cars and pits.
IBM demonstrated its commitment to IoT in March of last year, when it announced it was spending $3B over four years to establish a separate IoT business unit, which later became the Watson IoT business unit.
IBM says that starting Thursday, Olli will be used on public roads locally in Washington, D.C. and will be used in Miami-Dade County and Las Vegas later this year. Miami-Dade County is exploring a pilot program that would deploy several autonomous vehicles to shuttle people around Miami.
ORIGINAL: ZDnet
By Stephanie Condon for Between the Lines
June 16, 2016

Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free

By Hugo Angel,

 MICHAL CZERWONKA/REDUX
THE FRIDAY AFTERNOON news dump, a grand tradition observed by politicians and capitalists alike, is usually supposed to hide bad news. So it was a little weird that Elon Musk, founder of electric car maker Tesla, and Sam Altman, president of famed tech incubator Y Combinator, unveiled their new artificial intelligence company at the tail end of a weeklong AI conference in Montreal this past December.
But there was a reason they revealed OpenAI at that late hour. It wasn’t that no one was looking. It was that everyone was looking. When some of Silicon Valley’s most powerful companies caught wind of the project, they began offering tremendous amounts of money to OpenAI’s freshly assembled cadre of artificial intelligence researchers, intent on keeping these big thinkers for themselves. The last-minute offers—some made at the conference itself—were large enough to force Musk and Altman to delay the announcement of the new startup. “The amount of money was borderline crazy,” says Wojciech Zaremba, a researcher who was joining OpenAI after internships at both Google and Facebook and was among those who received big offers at the eleventh hour.
How many dollars is “borderline crazy”? 
Two years ago, as the market for the latest machine learning technology really started to heat up, Microsoft Research vice president Peter Lee said that the cost of a top AI researcher had eclipsed the cost of a top quarterback prospect in the National Football League—and he meant under regular circumstances, not when two of the most famous entrepreneurs in Silicon Valley were trying to poach your top talent. Zaremba says that as OpenAI was coming together, he was offered two or three times his market value.
OpenAI didn’t match those offers. But it offered something else: the chance to explore research aimed solely at the future instead of products and quarterly earnings, and to eventually share most—if not all—of this research with anyone who wants it. That’s right: Musk, Altman, and company aim to give away what may become the 21st century’s most transformative technology—and give it away for free.
Ilya Sutskever.
CHRISTIE HEMM KLOK/WIRED
Zaremba says those borderline crazy offers actually turned him off—despite his enormous respect for companies like Google and Facebook. He felt like the money was at least as much of an effort to prevent the creation of OpenAI as a play to win his services, and it pushed him even further towards the startup’s magnanimous mission. “I realized,” Zaremba says, “that OpenAI was the best place to be.”
That’s the irony at the heart of this story: even as the world’s biggest tech companies try to hold onto their researchers with the same fierceness that NFL teams try to hold onto their star quarterbacks, the researchers themselves just want to share. In the rarefied world of AI research, the brightest minds aren’t driven by—or at least not only by—the next product cycle or profit margin. They want to make AI better, and making AI better doesn’t happen when you keep your latest findings to yourself.
OpenAI is a billion-dollar effort to push AI as far as it will go.
This morning, OpenAI will release its first batch of AI software, a toolkit for building artificially intelligent systems by way of a technology called reinforcement learning—one of the key technologies that, among other things, drove the creation of AlphaGo, the Google AI that shocked the world by mastering the ancient game of Go. With this toolkit, you can build systems that simulate a new breed of robot, play Atari games, and, yes, master the game of Go.
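The toolkit described here is what was released publicly as OpenAI Gym. As an illustrative sketch of the kind of interface it exposes (assuming the gym package and the reset/step API of that era, not code from the article), here is a minimal random-agent loop on its classic CartPole task:

```python
import gym  # assumes the OpenAI Gym package and the era's CartPole-v0 environment

# Minimal reinforcement-learning loop: the agent acts, the environment returns
# an observation and a reward, and the episode ends when `done` is True.
env = gym.make("CartPole-v0")
for episode in range(3):
    observation = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()                  # random policy
        observation, reward, done, info = env.step(action)  # advance one step
        total_reward += reward
    print(f"episode {episode}: reward {total_reward}")
env.close()
```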
But game-playing is just the beginning. OpenAI is a billion-dollar effort to push AI as far as it will go. In both how the company came together and what it plans to do, you can see the next great wave of innovation forming. We’re a long way from knowing whether OpenAI itself becomes the main agent for that change. But the forces that drove the creation of this rather unusual startup show that the new breed of AI will not only remake technology, but remake the way we build technology.
AI Everywhere
Silicon Valley is not exactly averse to hyperbole. It’s always wise to meet bold-sounding claims with skepticism. But in the field of AI, the change is real. Inside places like Google and Facebook, a technology called deep learning is already helping Internet services identify faces in photos, recognize commands spoken into smartphones, and respond to Internet search queries. And this same technology can drive so many other tasks of the future. It can help machines understand natural language—the natural way that we humans talk and write. It can create a new breed of robot, giving automatons the power to not only perform tasks but learn them on the fly. And some believe it can eventually give machines something close to common sense—the ability to truly think like a human.
But along with such promise comes deep anxiety. Musk and Altman worry that if people can build AI that can do great things, then they can build AI that can do awful things, too. They’re not alone in their fear of robot overlords, but perhaps counterintuitively, Musk and Altman also think that the best way to battle malicious AI is not to restrict access to artificial intelligence but expand it. That’s part of what has attracted a team of young, hyper-intelligent idealists to their new project.
OpenAI began one evening last summer in a private room at Silicon Valley’s Rosewood Hotel—an upscale, urban, ranch-style hotel that sits, literally, at the center of the venture capital world along Sand Hill Road in Menlo Park, California. Elon Musk was having dinner with Ilya Sutskever, who was then working on the Google Brain, the company’s sweeping effort to build deep neural networks—artificially intelligent systems that can learn to perform tasks by analyzing massive amounts of digital data, including everything from recognizing photos to writing email messages to, well, carrying on a conversation. Sutskever was one of the top thinkers on the project. But even bigger ideas were in play.
Sam Altman, whose Y Combinator helped bootstrap companies like Airbnb, Dropbox, and Coinbase, had brokered the meeting, bringing together several AI researchers and a young but experienced company builder named Greg Brockman, previously the chief technology officer at the high-profile Silicon Valley digital payments startup Stripe, another Y Combinator company. It was an eclectic group. But they all shared a goal: to create a new kind of AI lab, one that would operate outside the control not only of Google, but of anyone else. “The best thing that I could imagine doing,” Brockman says, “was moving humanity closer to building real AI in a safe way.”
Musk is one of the loudest voices warning that we humans could one day lose control of systems powerful enough to learn on their own.
Musk was there because he’s an old friend of Altman’s—and because AI is crucial to the future of his various businesses and, well, the future as a whole. Tesla needs AI for its inevitable self-driving cars. SpaceX, Musk’s other company, will need it to put people in space and keep them alive once they’re there. But Musk is also one of the loudest voices warning that we humans could one day lose control of systems powerful enough to learn on their own.
The trouble was: so many of the people most qualified to solve all those problems were already working for Google (and Facebook and Microsoft and Baidu and Twitter). And no one at the dinner was quite sure that these thinkers could be lured to a new startup, even if Musk and Altman were behind it. But one key player was at least open to the idea of jumping ship. “I felt there were risks involved,” Sutskever says. “But I also felt it would be a very interesting thing to try.”

Breaking the Cycle
Emboldened by the conversation with Musk, Altman, and others at the Rosewood, Brockman soon resolved to build the lab they all envisioned. Taking on the project full-time, he approached Yoshua Bengio, a computer scientist at the University of Montreal and one of the founding fathers of the deep learning movement. The field’s other two pioneers—Geoff Hinton and Yann LeCun—are now at Google and Facebook, respectively, but Bengio is committed to life in the world of academia, largely outside the aims of industry. He drew up a list of the best researchers in the field, and over the next several weeks, Brockman reached out to as many on the list as he could, along with several others.
Greg Brockman, one of OpenAI’s founding fathers and its chief technology officer. CHRISTIE HEMM KLOK/WIRED
Many of these researchers liked the idea, but they were also wary of making the leap. In an effort to break the cycle, Brockman picked the ten researchers he wanted the most and invited them to spend a Saturday getting wined, dined, and cajoled at a winery in Napa Valley. For Brockman, even the drive into Napa served as a catalyst for the project. “An underrated way to bring people together are these times where there is no way to speed up getting to where you’re going,” he says. “You have to get there, and you have to talk.” And once they reached the wine country, that vibe remained. “It was one of those days where you could tell the chemistry was there,” Brockman says. Or as Sutskever puts it: “the wine was secondary to the talk.”
By the end of the day, Brockman asked all ten researchers to join the lab, and he gave them three weeks to think about it. By the deadline, nine of them were in. And they stayed in, despite those big offers from the giants of Silicon Valley. “They did make it very compelling for me to stay, so it wasn’t an easy decision,” Sutskever says of Google, his former employer. “But in the end, I decided to go with OpenAI, partly because of the very strong group of people and, to a very large extent, because of its mission.”
The deep learning movement began with academics. It’s only recently that companies like Google and Facebook and Microsoft have pushed into the field, as advances in raw computing power have made deep neural networks a reality, not just a theoretical possibility. People like Hinton and LeCun left academia for Google and Facebook because of the enormous resources inside these companies. But they remain intent on collaborating with other thinkers. Indeed, as LeCun explains, deep learning research requires this free flow of ideas. “When you do research in secret,” he says, “you fall behind.”
As a result, big companies now share a lot of their AI research. That’s a real change, especially for Google, which has long kept the tech at the heart of its online empire secret. Recently, Google open sourced the software engine that drives its neural networks. But it still retains the inside track in the race to the future. Brockman, Altman, and Musk aim to push the notion of openness further still, saying they don’t want one or two large corporations controlling the future of artificial intelligence.
The Limits of Openness
All of which sounds great. But for all of OpenAI’s idealism, the researchers may find themselves facing some of the same compromises they had to make at their old jobs. Openness has its limits. And the long-term vision for AI isn’t the only interest in play. OpenAI is not a charity. Musk’s companies could benefit greatly from the startup’s work, and so could many of the companies backed by Altman’s Y Combinator. “There are certainly some competing objectives,” LeCun says. “It’s a non-profit, but then there is a very close link with Y Combinator. And people are paid as if they are working in the industry.”
According to Brockman, the lab doesn’t pay the same astronomical salaries that AI researchers are now getting at places like Google and Facebook. But he says the lab does want to “pay them well,” and it’s offering to compensate researchers with stock options, first in Y Combinator and perhaps later in SpaceX (which, unlike Tesla, is still a private company).

Brockman insists that OpenAI won’t give special treatment to its sister companies.
Nonetheless, Brockman insists that OpenAI won’t give special treatment to its sister companies. OpenAI is a research outfit, he says, not a consulting firm. But when pressed, he acknowledges that OpenAI’s idealistic vision has its limits. The company may not open source everything it produces, though it will aim to share most of its research eventually, either through research papers or Internet services. “Doing all your research in the open is not necessarily the best way to go. You want to nurture an idea, see where it goes, and then publish it,” Brockman says. “We will produce a lot of open source code. But we will also have a lot of stuff that we are not quite ready to release.”
Both Sutskever and Brockman also add that OpenAI could go so far as to patent some of its work. “We won’t patent anything in the near term,” Brockman says. “But we’re open to changing tactics in the long term, if we find it’s the best thing for the world.” For instance, he says, OpenAI could engage in pre-emptive patenting, a tactic that seeks to prevent others from securing patents.
But to some, patents suggest a profit motive—or at least a weaker commitment to open source than OpenAI’s founders have espoused. “That’s what the patent system is about,” says Oren Etzioni, head of the Allen Institute for Artificial Intelligence. “This makes me wonder where they’re really going.”

The Super-Intelligence Problem
When Musk and Altman unveiled OpenAI, they also painted the project as a way to neutralize the threat of a malicious artificial super-intelligence. Of course, that super-intelligence could arise out of the tech OpenAI creates, but they insist that any threat would be mitigated because the technology would be usable by everyone. “We think it’s far more likely that many, many AIs will work to stop the occasional bad actors,” Altman says.
But not everyone in the field buys this. Nick Bostrom, the Oxford philosopher who, like Musk, has warned against the dangers of AI, points out that if you share research without restriction, bad actors could grab it before anyone has ensured that it’s safe. “If you have a button that could do bad things to the world,” Bostrom says, “you don’t want to give it to everyone.” If, on the other hand, OpenAI decides to hold back research to keep it from the bad guys, Bostrom wonders how it’s different from a Google or a Facebook.
If you share research without restriction, bad actors could grab it before anyone has ensured that it’s safe.
He does say that the not-for-profit status of OpenAI could change things—though not necessarily. The real power of the project, he says, is that it can indeed provide a check for the likes of Google and Facebook. “It can reduce the probability that super-intelligence would be monopolized,” he says. “It can remove one possible reason why some entity or group would have radically better AI than everyone else.”
But as the philosopher explains in a new paper, the primary effect of an outfit like OpenAI—an outfit intent on freely sharing its work—is that it accelerates the progress of artificial intelligence, at least in the short term. And it may speed progress in the long term as well, provided that it, for altruistic reasons, “opts for a higher level of openness than would be commercially optimal.”
“It might still be plausible that a philanthropically motivated R&D funder would speed progress more by pursuing open science,” he says.


Like Xerox PARC
In early January, Brockman’s nine AI researchers met up at his apartment in San Francisco’s Mission District. The project was so new that they didn’t even have white boards. (Can you imagine?) They bought a few that day and got down to work.
Brockman says OpenAI will begin by exploring reinforcement learning, a way for machines to learn tasks by repeating them over and over again and tracking which methods produce the best results. But the other primary goal is what’s called unsupervised learning—creating machines that can truly learn on their own, without a human hand to guide them. Today, deep learning is driven by carefully labeled data. If you want to teach a neural network to recognize cat photos, you must feed it a certain number of examples—and these examples must be labeled as cat photos. The learning is supervised by human labelers. But like many other researchers, OpenAI aims to create neural nets that can learn without carefully labeled data.
“If you have really good unsupervised learning, machines would be able to learn from all this knowledge on the Internet—just like humans learn by looking around, or reading books,” Brockman says.
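As a deliberately tiny illustration of the ‘repeat the task and track what works’ idea behind reinforcement learning, here is a tabular Q-learning sketch on a toy corridor task of my own devising; it shows the textbook update rule, not OpenAI’s code.

```python
import random

# Toy corridor: states 0..4, start at state 0, reward only for reaching state 4.
# Actions: 0 = step left, 1 = step right. Tabular Q-learning, textbook form.
N_STATES, ACTIONS = 5, (0, 1)
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for _ in range(500):                                       # repeat the task over and over...
    s, done = 0, False
    while not done:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)                     # occasionally explore
        else:
            best = max(Q[s])
            a = random.choice([x for x in ACTIONS if Q[s][x] == best])  # greedy, ties broken randomly
        s2, r, done = step(s, a)
        # ...and track which actions produced the best long-run results.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q[:4]])   # state values rise as states get closer to the goal
```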
He envisions OpenAI as the modern incarnation of Xerox PARC, the tech research lab that thrived in the 1970s. Just as PARC’s largely open and unfettered research gave rise to everything from the graphical user interface to the laser printer to object-oriented programming, Brockman and crew seek to delve even deeper into what we once considered science fiction. PARC was owned by, yes, Xerox, but it fed so many other companies, most notably Apple, because people like Steve Jobs were privy to its research. At OpenAI, Brockman wants to make everyone privy to its research.
This month, hoping to push this dynamic as far as it will go, Brockman and company snagged several other notable researchers, including Ian Goodfellow, another former senior researcher on the Google Brain team. “The thing that was really special about PARC is that they got a bunch of smart people together and let them go where they want,” Brockman says. “You want a shared vision, without central control.”
Giving up control is the essence of the open source ideal. If enough people apply themselves to a collective goal, the end result will trounce anything you concoct in secret. But if AI becomes as powerful as promised, the equation changes. We’ll have to ensure that new AIs adhere to the same egalitarian ideals that led to their creation in the first place. Musk, Altman, and Brockman are placing their faith in the wisdom of the crowd. But if they’re right, one day that crowd won’t be entirely human.
ORIGINAL: Wired

CADE METZ BUSINESS 
04.27.16 

Giving up control is the essence of the open source ideal. If enough people apply themselves to a collective goal, the end result will trounce anything you concoct in secret. But if AI becomes as powerful as promised, the equation changes. We’ll have to ensure that new AIs adhere to the same egalitarian ideals that led to their creation in the first place. Musk, Altman, and Brockman are placing their faith in the wisdom of the crowd. But if they’re right, one day that crowd won’t be entirely human.
ORIGINAL: Wired

CADE METZ BUSINESS 
04.27.16 

Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free
THE FRIDAY AFTERNOON news dump, a grand tradition observed by politicians and capitalists alike, is usually supposed to hide bad news. So it was a little weird that Elon Musk, founder of electric car maker Tesla, and Sam Altman, president of famed tech incubator Y Combinator, unveiled their new artificial intelligence company at the tail end of a weeklong AI conference in Montreal this past December.
But there was a reason they revealed OpenAI at that late hour. It wasn’t that no one was looking. It was that everyone was looking. When some of Silicon Valley’s most powerful companies caught wind of the project, they began offering tremendous amounts of money to OpenAI’s freshly assembled cadre of artificial intelligence researchers, intent on keeping these big thinkers for themselves. The last-minute offers—some made at the conference itself—were large enough to force Musk and Altman to delay the announcement of the new startup. “The amount of money was borderline crazy,” says Wojciech Zaremba, a researcher who was joining OpenAI after internships at both Google and Facebook and was among those who received big offers at the eleventh hour.
How many dollars is “borderline crazy”? 
Two years ago, as the market for the latest machine learning technology really started to heat up, Microsoft Research vice president Peter Lee said that the cost of a top AI researcher had eclipsed the cost of a top quarterback prospect in the National Football League—and he meant under regular circumstances, not when two of the most famous entrepreneurs in Silicon Valley were trying to poach your top talent. Zaremba says that as OpenAI was coming together, he was offered two or three times his market value.
OpenAI didn’t match those offers. But it offered something else: the chance to explore research aimed solely at the future instead of products and quarterly earnings, and to eventually share most—if not all—of this research with anyone who wants it. That’s right: Musk, Altman, and company aim to give away what may become the 21st century’s most transformative technology—and give it away for free.
Zaremba says those borderline crazy offers actually turned him off—despite his enormous respect for companies like Google and Facebook. He felt like the money was at least as much of an effort to prevent the creation of OpenAI as a play to win his services, and it pushed him even further towards the startup’s magnanimous mission. “I realized,” Zaremba says, “that OpenAI was the best place to be.”
That’s the irony at the heart of this story: even as the world’s biggest tech companies try to hold onto their researchers with the same fierceness that NFL teams try to hold onto their star quarterbacks, the researchers themselves just want to share. In the rarefied world of AI research, the brightest minds aren’t driven by—or at least not only by—the next product cycle or profit margin. They want to make AI better, and making AI better doesn’t happen when you keep your latest findings to yourself.
OpenAI is a billion-dollar effort to push AI as far as it will go.
This morning, OpenAI will release its first batch of AI software, a toolkit for building artificially intelligent systems by way of a technology called reinforcement learning—one of the key technologies that, among other things, drove the creation of AlphaGo, the Google AI that shocked the world by mastering the ancient game of Go. With this toolkit, you can build systems that simulate a new breed of robot, play Atari games, and, yes, master the game of Go.
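The article doesn’t name the software, but a reinforcement learning toolkit of this kind wraps games and control tasks behind a common reset-and-step interface that an agent interacts with over time. Below is a minimal sketch of that agent-environment loop; it assumes the classic gym package and its CartPole-v0 task, neither of which is named in the article, and uses a random policy in place of a real learning algorithm.

```python
# Minimal agent-environment loop in the style of a reinforcement learning toolkit.
# Assumes the classic gym API (pre-0.26) and the CartPole-v0 environment;
# the random policy is a placeholder, not a learner.
import gym

env = gym.make("CartPole-v0")
for episode in range(5):
    observation = env.reset()
    total_reward, done = 0.0, False
    while not done:
        action = env.action_space.sample()                # random action (placeholder policy)
        observation, reward, done, info = env.step(action)
        total_reward += reward                            # the signal a learner would optimize
    print("episode", episode, "return", total_reward)

env.close()
```

A real agent would replace the random sampling with a policy that is updated to maximize the accumulated reward—the “repeat the task and track what works” idea described above.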
But game-playing is just the beginning. OpenAI is a billion-dollar effort to push AI as far as it will go. In both how the company came together and what it plans to do, you can see the next great wave of innovation forming. We’re a long way from knowing whether OpenAI itself becomes the main agent for that change. But the forces that drove the creation of this rather unusual startup show that the new breed of AI will not only remake technology, but remake the way we build technology.
AI Everywhere
Silicon Valley is not exactly averse to hyperbole. It’s always wise to meet bold-sounding claims with skepticism. But in the field of AI, the change is real. Inside places like Google and Facebook, a technology called deep learning is already helping Internet services identify faces in photos, recognize commands spoken into smartphones, and respond to Internet search queries. And this same technology can drive so many other tasks of the future. It can help machines understand natural language—the natural way that we humans talk and write. It can create a new breed of robot, giving automatons the power to not only perform tasks but learn them on the fly. And some believe it can eventually give machines something close to common sense—the ability to truly think like a human.
But along with such promise comes deep anxiety. Musk and Altman worry that if people can build AI that can do great things, then they can build AI that can do awful things, too. They’re not alone in their fear of robot overlords, but perhaps counterintuitively, Musk and Altman also think that the best way to battle malicious AI is not to restrict access to artificial intelligence but expand it. That’s part of what has attracted a team of young, hyper-intelligent idealists to their new project.
OpenAI began one evening last summer in a private room at Silicon Valley’s Rosewood Hotel—an upscale, urban, ranch-style hotel that sits, literally, at the center of the venture capital world along Sand Hill Road in Menlo Park, California. Elon Musk was having dinner with Ilya Sutskever, who was then working on the Google Brain, the company’s sweeping effort to build deep neural networks—artificially intelligent systems that can learn to perform tasks by analyzing massive amounts of digital data, including everything from recognizing photos to writing email messages to, well, carrying on a conversation. Sutskever was one of the top thinkers on the project. But even bigger ideas were in play.
Sam Altman, whose Y Combinator helped bootstrap companies like Airbnb, Dropbox, and Coinbase, had brokered the meeting, bringing together several AI researchers and a young but experienced company builder named Greg Brockman, previously the chief technology officer at the high-profile Silicon Valley digital payments startup Stripe, another Y Combinator company. It was an eclectic group. But they all shared a goal: to create a new kind of AI lab, one that would operate outside the control not only of Google, but of anyone else. “The best thing that I could imagine doing,” Brockman says, “was moving humanity closer to building real AI in a safe way.”
Musk is one of the loudest voices warning that we humans could one day lose control of systems powerful enough to learn on their own.
Musk was there because he’s an old friend of Altman’s—and because AI is crucial to the future of his various businesses and, well, the future as a whole. Tesla needs AI for its inevitable self-driving cars. SpaceX, Musk’s other company, will need it to put people in space and keep them alive once they’re there. But Musk is also one of the loudest voices warning that we humans could one day lose control of systems powerful enough to learn on their own.
The trouble was: so many of the people most qualified to solve all those problems were already working for Google (and Facebook and Microsoft and Baidu and Twitter). And no one at the dinner was quite sure that these thinkers could be lured to a new startup, even if Musk and Altman were behind it. But one key player was at least open to the idea of jumping ship. “I felt there were risks involved,” Sutskever says. “But I also felt it would be a very interesting thing to try.”

Breaking the Cycle
Emboldened by the conversation with Musk, Altman, and others at the Rosewood, Brockman soon resolved to build the lab they all envisioned. Taking on the project full-time, he approached Yoshua Bengio, a computer scientist at the University of Montreal and one of the founding fathers of the deep learning movement. The field’s other two pioneers—Geoff Hinton and Yann LeCun—are now at Google and Facebook, respectively, but Bengio is committed to life in the world of academia, largely outside the aims of industry. He drew up a list of the best researchers in the field, and over the next several weeks, Brockman reached out to as many on the list as he could, along with several others.
Greg Brockman, one of OpenAI’s founding fathers and its chief technology officer.
Many of these researchers liked the idea, but they were also wary of making the leap. In an effort to break the cycle, Brockman picked the ten researchers he wanted the most and invited them to spend a Saturday getting wined, dined, and cajoled at a winery in Napa Valley. For Brockman, even the drive into Napa served as a catalyst for the project. “An underrated way to bring people together are these times where there is no way to speed up getting to where you’re going,” he says. “You have to get there, and you have to talk.” And once they reached the wine country, that vibe remained. “It was one of those days where you could tell the chemistry was there,” Brockman says. Or as Sutskever puts it: “the wine was secondary to the talk.”
By the end of the day, Brockman asked all ten researchers to join the lab, and he gave them three weeks to think about it. By the deadline, nine of them were in. And they stayed in, despite those big offers from the giants of Silicon Valley. “They did make it very compelling for me to stay, so it wasn’t an easy decision,” Sutskever says of Google, his former employer. “But in the end, I decided to go with OpenAI, partly because of the very strong group of people and, to a very large extent, because of its mission.”
The deep learning movement began with academics. It’s only recently that companies like Google and Facebook and Microsoft have pushed into the field, as advances in raw computing power have made deep neural networks a reality, not just a theoretical possibility. People like Hinton and LeCun left academia for Google and Facebook because of the enormous resources inside these companies. But they remain intent on collaborating with other thinkers. Indeed, as LeCun explains, deep learning research requires this free flow of ideas. “When you do research in secret,” he says, “you fall behind.”
As a result, big companies now share a lot of their AI research. That’s a real change, especially for Google, which has long kept the tech at the heart of its online empire secret. Recently, Google open sourced the software engine that drives its neural networks. But it still retains the inside track in the race to the future. Brockman, Altman, and Musk aim to push the notion of openness further still, saying they don’t want one or two large corporations controlling the future of artificial intelligence.
The Limits of Openness
All of which sounds great. But for all of OpenAI’s idealism, the researchers may find themselves facing some of the same compromises they had to make at their old jobs. Openness has its limits. And the long-term vision for AI isn’t the only interest in play. OpenAI is not a charity. Musk’s companies could benefit greatly from the startup’s work, and so could many of the companies backed by Altman’s Y Combinator. “There are certainly some competing objectives,” LeCun says. “It’s a non-profit, but then there is a very close link with Y Combinator. And people are paid as if they are working in the industry.”
According to Brockman, the lab doesn’t pay the same astronomical salaries that AI researchers are now getting at places like Google and Facebook. But he says the lab does want to “pay them well,” and it’s offering to compensate researchers with stock options, first in Y Combinator and perhaps later in SpaceX (which, unlike Tesla, is still a private company).

Brockman insists that OpenAI won’t give special treatment to its sister companies.
Nonetheless, Brockman insists that OpenAI won’t give special treatment to its sister companies. OpenAI is a research outfit, he says, not a consulting firm. But when pressed, he acknowledges that OpenAI’s idealistic vision has its limits. The company may not open source everything it produces, though it will aim to share most of its research eventually, either through research papers or Internet services. “Doing all your research in the open is not necessarily the best way to go. You want to nurture an idea, see where it goes, and then publish it,” Brockman says. “We will produce a lot of open source code. But we will also have a lot of stuff that we are not quite ready to release.”
Both Sutskever and Brockman also add that OpenAI could go so far as to patent some of its work. “We won’t patent anything in the near term,” Brockman says. “But we’re open to changing tactics in the long term, if we find it’s the best thing for the world.” For instance, he says, OpenAI could engage in pre-emptive patenting, a tactic that seeks to prevent others from securing patents.
But to some, patents suggest a profit motive—or at least a weaker commitment to open source than OpenAI’s founders have espoused. “That’s what the patent system is about,” says Oren Etzioni, head of the Allen Institute for Artificial Intelligence. “This makes me wonder where they’re really going.”

The Super-Intelligence Problem
When Musk and Altman unveiled OpenAI, they also painted the project as a way to neutralize the threat of a malicious artificial super-intelligence. Of course, that super-intelligence could arise out of the tech OpenAI creates, but they insist that any threat would be mitigated because the technology would be usable by everyone. “We think it’s far more likely that many, many AIs will work to stop the occasional bad actors,” Altman says.
But not everyone in the field buys this. Nick Bostrom, the Oxford philosopher who, like Musk, has warned against the dangers of AI, points out that if you share research without restriction, bad actors could grab it before anyone has ensured that it’s safe. “If you have a button that could do bad things to the world,” Bostrom says, “you don’t want to give it to everyone.” If, on the other hand, OpenAI decides to hold back research to keep it from the bad guys, Bostrom wonders how it’s different from a Google or a Facebook.
If you share research without restriction, bad actors could grab it before anyone has ensured that it’s safe.
He does say that the not-for-profit status of OpenAI could change things—though not necessarily. The real power of the project, he says, is that it can indeed provide a check on the likes of Google and Facebook. “It can reduce the probability that super-intelligence would be monopolized,” he says. “It can remove one possible reason why some entity or group would have radically better AI than everyone else.”
But as the philosopher explains in a new paper, the primary effect of an outfit like OpenAI—an outfit intent on freely sharing its work—is that it accelerates the progress of artificial intelligence, at least in the short term. And it may speed progress in the long term as well, provided that it, for altruistic reasons, “opts for a higher level of openness than would be commercially optimal.”
“It might still be plausible that a philanthropically motivated R&D funder would speed progress more by pursuing open science,” he says.


Like Xerox PARC
In early January, Brockman’s nine AI researchers met up at his apartment in San Francisco’s Mission District. The project was so new that they didn’t even have white boards. (Can you imagine?) They bought a few that day and got down to work.
Brockman says OpenAI will begin by exploring reinforcement learning, a way for machines to learn tasks by repeating them over and over again and tracking which methods produce the best results. But the other primary goal is what’s called unsupervised learning—creating machines that can truly learn on their own, without a human hand to guide them. Today, deep learning is driven by carefully labeled data. If you want to teach a neural network to recognize cat photos, you must feed it a certain number of examples—and these examples must be labeled as cat photos. The learning is supervised by human labelers. But like many other researchers, OpenAI aims to create neural nets that can learn without carefully labeled data.
“If you have really good unsupervised learning, machines would be able to learn from all this knowledge on the Internet—just like humans learn by looking around—or reading books,” Brockman says.
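To make the distinction concrete, here is a toy sketch of the two regimes side by side. It assumes scikit-learn and NumPy and uses made-up two-dimensional features as stand-ins for image data; it illustrates supervised versus unsupervised learning in general, not OpenAI’s code.

```python
# Supervised vs. unsupervised learning on the same toy data.
# Illustrative only: random 2-D features stand in for photo features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cats = rng.normal(loc=0.0, scale=1.0, size=(100, 2))   # "cat photo" features
dogs = rng.normal(loc=3.0, scale=1.0, size=(100, 2))   # "dog photo" features
X = np.vstack([cats, dogs])

# Supervised: every example arrives with a human-provided label.
y = np.array([0] * 100 + [1] * 100)
classifier = LogisticRegression().fit(X, y)

# Unsupervised: the algorithm sees only the data and must find structure itself.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)

print(classifier.predict(X[:3]))   # predictions learned from labels
print(clusters[:3])                # groupings discovered without labels
```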
He envisions OpenAI as the modern incarnation of Xerox PARC, the tech research lab that thrived in the 1970s. Just as PARC’s largely open and unfettered research gave rise to everything from the graphical user interface to the laser printer to object-oriented programming, Brockman and crew seek to delve even deeper into what we once considered science fiction. PARC was owned by, yes, Xerox, but it fed so many other companies, most notably Apple, because people like Steve Jobs were privy to its research. At OpenAI, Brockman wants to make everyone privy to its research.
This month, hoping to push this dynamic as far as it will go, Brockman and company snagged several other notable researchers, including Ian Goodfellow, another former senior researcher on the Google Brain team. “The thing that was really special about PARC is that they got a bunch of smart people together and let them go where they want,” Brockman says. “You want a shared vision, without central control.”
Giving up control is the essence of the open source ideal. If enough people apply themselves to a collective goal, the end result will trounce anything you concoct in secret. But if AI becomes as powerful as promised, the equation changes. We’ll have to ensure that new AIs adhere to the same egalitarian ideals that led to their creation in the first place. Musk, Altman, and Brockman are placing their faith in the wisdom of the crowd. But if they’re right, one day that crowd won’t be entirely human.
ORIGINAL: Wired

CADE METZ BUSINESS 
04.27.16 


Google Built Its Very Own Chips to Power Its AI Bots

By Hugo Angel,

GOOGLE HAS DESIGNED its own computer chip for driving deep neural networks, an AI technology that is reinventing the way Internet services operate.
This morning, at Google I/O, the centerpiece of the company’s year, CEO Sundar Pichai said that Google has designed an ASIC, or application-specific integrated circuit, that’s specific to deep neural nets. These are networks of hardware and software that can learn specific tasks by analyzing vast amounts of data. Google uses neural nets to identify objects and faces in photos, recognize the commands you speak into Android phones, or translate text from one language to another. This technology has even begun to transform the Google search engine.
Big Brains
Google calls its chip the Tensor Processing Unit, or TPU, because it underpins TensorFlow, the software engine that drives its deep learning services.
 
This past fall, Google released TensorFlow under an open-source license, which means anyone outside the company can use and even modify this software engine. It does not appear that Google will share the designs for the TPU, but outsiders can make use of Google’s own machine learning hardware and software via various Google cloud services.
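Because the engine is open source, anyone can run the same software that Google pairs with its TPUs. A minimal sketch in the graph-based TensorFlow 1.x style that was current when this article ran (later 2.x releases execute eagerly and drop tf.Session):

```python
# Tiny TensorFlow graph in the 1.x style of this article's era.
# Assumes TensorFlow 1.x; TensorFlow 2.x removed tf.placeholder and tf.Session.
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 3])   # a batch of 3-feature inputs
w = tf.Variable(tf.zeros([3, 1]))                 # learnable weights
y = tf.matmul(x, w)                               # a one-layer linear model

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))
```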
Google is just one of many companies adding deep learning to a wide range of Internet services, a list that includes everyone from Facebook and Microsoft to Twitter. Typically, these Internet giants drive their neural nets with graphics processing units, or GPUs, from chip makers like Nvidia. But some, including Microsoft, are also exploring the use of field-programmable gate arrays, or FPGAs, chips that can be programmed for specific tasks.
According to Google, on the massive hardware racks inside the data centers that power its online services, a TPU board fits into the same slot as a hard drive, and it provides an order of magnitude better-optimized performance per watt for machine learning than other hardware solutions.
“TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation,” the company says in a blog post. “Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models and apply these models more quickly, so users get more intelligent results more rapidly.”
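The post doesn’t spell out what “reduced computational precision” means in hardware terms; a common reading is quantization, in which 32-bit floating-point values are mapped onto small integers so that each multiply needs far fewer transistors. The sketch below shows the generic idea only and is not a description of Google’s actual TPU numerics.

```python
# Generic 8-bit quantization sketch: map float32 values to int8 and back.
# Illustrative only; not Google's TPU arithmetic.
import numpy as np

weights = np.random.randn(4).astype(np.float32)
scale = np.abs(weights).max() / 127.0                 # one scale factor per tensor

quantized = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale    # approximate reconstruction

print(weights)
print(dequantized)   # close to the originals, but carried in 8 bits per value
```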
This means, among other things, that Google is not using chips from companies like Nvidia—or is at least using fewer of them. It also indicates that Google is more than willing to build its own chips, which is bad news for any chipmaker, most notably the world’s largest: Intel. Intel processors power the vast majority of the computer servers inside Google, but the worry, for Intel, is that the Internet giant will one day design its own central processing units as well.
Google says it has been running TPUs for about a year, and that they were developed not long before that. After testing its first silicon, the company says, it had it running live applications inside its data centers within 22 days.
 
ORIGINAL: Wired
By Cade Metz
05.18.2016 

The Rise of Artificial Intelligence and the End of Code

By Hugo Angel,

Soon We Won’t Program Computers. We’ll Train Them Like Dogs
Before the invention of the computer, most experimental psychologists thought the brain was an unknowable black box. You could analyze a subject’s behavior—ring bell, dog salivates—but thoughts, memories, emotions? That stuff was obscure and inscrutable, beyond the reach of science. So these behaviorists, as they called themselves, confined their work to the study of stimulus and response, feedback and reinforcement, bells and saliva. They gave up trying to understand the inner workings of the mind. They ruled their field for four decades.
Then, in the mid-1950s, a group of rebellious psychologists, linguists, information theorists, and early artificial-intelligence researchers came up with a different conception of the mind. People, they argued, were not just collections of conditioned responses. They absorbed information, processed it, and then acted upon it. They had systems for writing, storing, and recalling memories. They operated via a logical, formal syntax. The brain wasn’t a black box at all. It was more like a computer.
The so-called cognitive revolution started small, but as computers became standard equipment in psychology labs across the country, it gained broader acceptance. By the late 1970s, cognitive psychology had overthrown behaviorism, and with the new regime came a whole new language for talking about mental life. Psychologists began describing thoughts as programs, ordinary people talked about storing facts away in their memory banks, and business gurus fretted about the limits of mental bandwidth and processing power in the modern workplace. 
This story has repeated itself again and again. As the digital revolution wormed its way into every part of our lives, it also seeped into our language and our deep, basic theories about how things work. Technology always does this. During the Enlightenment, Newton and Descartes inspired people to think of the universe as an elaborate clock. In the industrial age, it was a machine with pistons. (Freud’s idea of psychodynamics borrowed from the thermodynamics of steam engines.) Now it’s a computer. Which is, when you think about it, a fundamentally empowering idea. Because if the world is a computer, then the world can be coded. 
Code is logical. Code is hackable. Code is destiny. These are the central tenets (and self-fulfilling prophecies) of life in the digital age. As software has eaten the world, to paraphrase venture capitalist Marc Andreessen, we have surrounded ourselves with machines that convert our actions, thoughts, and emotions into data—raw material for armies of code-wielding engineers to manipulate. We have come to see life itself as something ruled by a series of instructions that can be discovered, exploited, optimized, maybe even rewritten. Companies use code to understand our most intimate ties; Facebook’s Mark Zuckerberg has gone so far as to suggest there might be a “fundamental mathematical law underlying human relationships that governs the balance of who and what we all care about.” In 2013, Craig Venter announced that, a decade after the decoding of the human genome, he had begun to write code that would allow him to create synthetic organisms. “It is becoming clear,” he said, “that all living cells that we know of on this planet are DNA-software-driven biological machines.” Even self-help literature insists that you can hack your own source code, reprogramming your love life, your sleep routine, and your spending habits.
In this world, the ability to write code has become not just a desirable skill but a language that grants insider status to those who speak it. They have access to what in a more mechanical age would have been called the levers of power. “If you control the code, you control the world,” wrote futurist Marc Goodman. (In Bloomberg Businessweek, Paul Ford was slightly more circumspect: “If coders don’t run the world, they run the things that run the world.” Tomato, tomahto.)
But whether you like this state of affairs or hate it—whether you’re a member of the coding elite or someone who barely feels competent to futz with the settings on your phone—don’t get used to it. Our machines are starting to speak a different language now, one that even the best coders can’t fully understand. 
Over the past several years, the biggest tech companies in Silicon Valley have aggressively pursued an approach to computing called machine learning. In traditional programming, an engineer writes explicit, step-by-step instructions for the computer to follow. With machine learning, programmers don’t encode computers with instructions. They train them. If you want to teach a neural network to recognize a cat, for instance, you don’t tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands and thousands of photos of cats, and eventually it works things out. If it keeps misclassifying foxes as cats, you don’t rewrite the code. You just keep coaching it.
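The contrast fits in a few lines of code: the traditional approach writes the decision rule by hand, while the machine-learning approach fits a rule to labeled examples. A toy sketch, assuming scikit-learn and invented “whisker” and “ear” scores rather than real photos:

```python
# Hand-written rule vs. learned rule on the same made-up features.
# Illustrative only; the features and labels are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def hand_coded_is_cat(features):
    # Traditional programming: an engineer states the rule explicitly.
    whiskers, ear_pointiness = features
    return whiskers > 0.5 and ear_pointiness > 0.5

rng = np.random.default_rng(1)
X = rng.random((200, 2))                        # made-up whisker/ear scores
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)       # "ground truth" labels for training

# Machine learning: show the examples and let the model work the rule out.
model = DecisionTreeClassifier().fit(X, y)

print(hand_coded_is_cat([0.9, 0.8]))            # rule the engineer wrote
print(model.predict([[0.9, 0.8]])[0])           # rule the machine learned
```

If the learned model keeps getting examples wrong, you add more data and retrain rather than editing a rule, which is the “coaching” described above.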
This approach is not new—it’s been around for decades—but it has recently become immensely more powerful, thanks in part to the rise of deep neural networks, massively distributed computational systems that mimic the multilayered connections of neurons in the brain. And already, whether you realize it or not, machine learning powers large swaths of our online activity. Facebook uses it to determine which stories show up in your News Feed, and Google Photos uses it to identify faces. Machine learning runs Microsoft’s Skype Translator, which converts speech to different languages in real time. Self-driving cars use machine learning to avoid accidents. Even Google’s search engine—for so many years a towering edifice of human-written rules—has begun to rely on these deep neural networks. In February the company replaced its longtime head of search with machine-learning expert John Giannandrea, and it has initiated a major program to retrain its engineers in these new techniques. “By building learning systems,” Giannandrea told reporters this fall, “we don’t have to write these rules anymore.”
 
Our machines speak a different language now, one that even the best coders can’t fully understand. 
But here’s the thing: With machine learning, the engineer never knows precisely how the computer accomplishes its tasks. The neural network’s operations are largely opaque and inscrutable. It is, in other words, a black box. And as these black boxes assume responsibility for more and more of our daily digital tasks, they are not only going to change our relationship to technology—they are going to change how we think about ourselves, our world, and our place within it.
If in the old view programmers were like gods, authoring the laws that govern computer systems, now they’re like parents or dog trainers. And as any parent or dog owner can tell you, that is a much more mysterious relationship to find yourself in.
Andy Rubin is an inveterate tinkerer and coder. The cocreator of the Android operating system, Rubin is notorious in Silicon Valley for filling his workplaces and home with robots. He programs them himself. “I got into computer science when I was very young, and I loved it because I could disappear in the world of the computer. It was a clean slate, a blank canvas, and I could create something from scratch,” he says. “It gave me full control of a world that I played in for many, many years.”
Now, he says, that world is coming to an end. Rubin is excited about the rise of machine learning—his new company, Playground Global, invests in machine-learning startups and is positioning itself to lead the spread of intelligent devices—but it saddens him a little too. Because machine learning changes what it means to be an engineer.
“People don’t linearly write the programs,” Rubin says. “After a neural network learns how to do speech recognition, a programmer can’t go in and look at it and see how that happened. It’s just like your brain. You can’t cut your head off and see what you’re thinking.” When engineers do peer into a deep neural network, what they see is an ocean of math: a massive, multilayer set of calculus problems that—by constantly deriving the relationship between billions of data points—generate guesses about the world.
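That “ocean of math” is visible even in a toy network: the program is nothing more than arrays of numbers that get multiplied and squashed, with no human-readable rule anywhere to inspect. A minimal two-layer forward pass with arbitrary sizes and random weights, shown purely as an illustration:

```python
# A network reduced to its essence: matrices of numbers, nothing resembling
# a human-readable rule. Illustrative only; the weights here are random, not trained.
import numpy as np

rng = np.random.default_rng(42)
W1 = rng.standard_normal((3, 8))     # first-layer weights
W2 = rng.standard_normal((8, 1))     # second-layer weights

def forward(x):
    hidden = np.tanh(x @ W1)         # nonlinearity between the layers
    return hidden @ W2               # the network's numerical "guess"

print(forward(np.array([0.2, -1.0, 0.7])))
```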
Artificial intelligence wasn’t supposed to work this way. Until a few years ago, mainstream AI researchers assumed that to create intelligence, we just had to imbue a machine with the right logic. Write enough rules and eventually we’d create a system sophisticated enough to understand the world. They largely ignored, even vilified, early proponents of machine learning, who argued in favor of plying machines with data until they reached their own conclusions. For years computers weren’t powerful enough to really prove the merits of either approach, so the argument became a philosophical one. “Most of these debates were based on fixed beliefs about how the world had to be organized and how the brain worked,” says Sebastian Thrun, the former Stanford AI professor who created Google’s self-driving car. “Neural nets had no symbols or rules, just numbers. That alienated a lot of people.”
The implications of an unparsable machine language aren’t just philosophical. For the past two decades, learning to code has been one of the surest routes to reliable employment—a fact not lost on all those parents enrolling their kids in after-school code academies. But a world run by neurally networked deep-learning machines requires a different workforce. Analysts have already started worrying about the impact of AI on the job market, as machines render old skills irrelevant. Programmers might soon get a taste of what that feels like themselves.
Just as Newtonian physics wasn’t obviated by quantum mechanics, code will remain a powerful tool set to explore the world. 
“I was just having a conversation about that this morning,” says tech guru Tim O’Reilly when I ask him about this shift. “I was pointing out how different programming jobs would be by the time all these STEM-educated kids grow up.” Traditional coding won’t disappear completely—indeed, O’Reilly predicts that we’ll still need coders for a long time yet—but there will likely be less of it, and it will become a meta skill, a way of creating what Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, calls the “scaffolding” within which machine learning can operate. Just as Newtonian physics wasn’t obviated by the discovery of quantum mechanics, code will remain a powerful, if incomplete, tool set to explore the world. But when it comes to powering specific functions, machine learning will do the bulk of the work for us.
Of course, humans still have to train these systems. But for now, at least, that’s a rarefied skill. The job requires both a high-level grasp of mathematics and an intuition for pedagogical give-and-take. “It’s almost like an art form to get the best out of these systems,” says Demis Hassabis, who leads Google’s DeepMind AI team. “There’s only a few hundred people in the world that can do that really well.” But even that tiny number has been enough to transform the tech industry in just a couple of years.
Whatever the professional implications of this shift, the cultural consequences will be even bigger. If the rise of human-written software led to the cult of the engineer, and to the notion that human experience can ultimately be reduced to a series of comprehensible instructions, machine learning kicks the pendulum in the opposite direction. The code that runs the universe may defy human analysis. Right now Google, for example, is facing an antitrust investigation in Europe that accuses the company of exerting undue influence over its search results. Such a charge will be difficult to prove when even the company’s own engineers can’t say exactly how its search algorithms work in the first place.
This explosion of indeterminacy has been a long time coming. It’s not news that even simple algorithms can create unpredictable emergent behavior—an insight that goes back to chaos theory and random number generators. Over the past few years, as networks have grown more intertwined and their functions more complex, code has come to seem more like an alien force, the ghosts in the machine ever more elusive and ungovernable. Planes grounded for no reason. Seemingly unpreventable flash crashes in the stock market. Rolling blackouts.
These forces have led technologist Danny Hillis to declare the end of the age of Enlightenment, our centuries-long faith in logic, determinism, and control over nature. Hillis says we’re shifting to what he calls the age of Entanglement. “As our technological and institutional creations have become more complex, our relationship to them has changed,” he wrote in the Journal of Design and Science. “Instead of being masters of our creations, we have learned to bargain with them, cajoling and guiding them in the general direction of our goals. We have built our own jungle, and it has a life of its own.” The rise of machine learning is the latest—and perhaps the last—step in this journey.
This can all be pretty frightening. After all, coding was at least the kind of thing that a regular person could imagine picking up at a boot camp. Coders were at least human. Now the technological elite is even smaller, and their command over their creations has waned and become indirect. Already the companies that build this stuff find it behaving in ways that are hard to govern. Last summer, Google rushed to apologize when its photo recognition engine started tagging images of black people as gorillas. The company’s blunt first fix was to keep the system from labeling anything as a gorilla.

To nerds of a certain bent, this all suggests a coming era in which we forfeit authority over our machines. “One can imagine such technology 

  • outsmarting financial markets, 
  • out-inventing human researchers, 
  • out-manipulating human leaders, and 
  • developing weapons we cannot even understand,” 

wrote Stephen Hawking—sentiments echoed by Elon Musk and Bill Gates, among others. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” 

 
But don’t be too scared; this isn’t the dawn of Skynet. We’re just learning the rules of engagement with a new technology. Already, engineers are working out ways to visualize what’s going on under the hood of a deep-learning system. But even if we never fully understand how these new machines think, that doesn’t mean we’ll be powerless before them. In the future, we won’t concern ourselves as much with the underlying sources of their behavior; we’ll learn to focus on the behavior itself. The code will become less important than the data we use to train it.
This isn’t the dawn of Skynet. We’re just learning the rules of engagement with a new technology. 
If all this seems a little familiar, that’s because it looks a lot like good old 20th-century behaviorism. In fact, the process of training a machine-learning algorithm is often compared to the great behaviorist experiments of the early 1900s. Pavlov triggered his dog’s salivation not through a deep understanding of hunger but simply by repeating a sequence of events over and over. He provided data, again and again, until the code rewrote itself. And say what you will about the behaviorists, they did know how to control their subjects.
In the long run, Thrun says, machine learning will have a democratizing influence. In the same way that you don’t need to know HTML to build a website these days, you eventually won’t need a PhD to tap into the insane power of deep learning. Programming won’t be the sole domain of trained coders who have learned a series of arcane languages. It’ll be accessible to anyone who has ever taught a dog to roll over. “For me, it’s the coolest thing ever in programming,” Thrun says, “because now anyone can program.”
For much of computing history, we have taken an inside-out view of how machines work. First we write the code, then the machine expresses it. This worldview implied plasticity, but it also suggested a kind of rules-based determinism, a sense that things are the product of their underlying instructions. Machine learning suggests the opposite, an outside-in view in which code doesn’t just determine behavior, behavior also determines code. Machines are products of the world.
Ultimately we will come to appreciate both the power of handwritten linear code and the power of machine-learning algorithms to adjust it—the give-and-take of design and emergence. It’s possible that biologists have already started figuring this out. Gene-editing techniques like Crispr give them the kind of code-manipulating power that traditional software programmers have wielded. But discoveries in the field of epigenetics suggest that genetic material is not in fact an immutable set of instructions but rather a dynamic set of switches that adjusts depending on the environment and experiences of its host. Our code does not exist separate from the physical world; it is deeply influenced and transmogrified by it. Venter may believe cells are DNA-software-driven machines, but epigeneticist Steve Cole suggests a different formulation: “A cell is a machine for turning experience into biology.”
“A cell is a machine for turning experience into biology.”
Steve Cole
And now, 80 years after Alan Turing first sketched his designs for a problem-solving machine, computers are becoming devices for turning experience into technology. For decades we have sought the secret code that could explain and, with some adjustments, optimize our experience of the world. But our machines won’t work that way for much longer—and our world never really did. We’re about to have a more complicated but ultimately more rewarding relationship with technology. We will go from commanding our devices to parenting them.

Editor at large Jason Tanz (@jasontanz) wrote about Andy Rubin’s new company, Playground, in issue 24.03.
This article appears in the June issue.
ORIGINAL: Wired

First Human Tests of Memory Boosting Brain Implant—a Big Leap Forward

By Hugo Angel,

“You have to begin to lose your memory, if only bits and pieces, to realize that memory is what makes our lives. Life without memory is no life at all.” — Luis Buñuel Portolés, Filmmaker
Image Credit: Shutterstock.com
Every year, hundreds of millions of people experience the pain of a failing memory.
The reasons are many:

  • traumatic brain injury, which haunts a disturbingly high number of veterans and football players; 
  • stroke or Alzheimer’s disease, which often plagues the elderly; or 
  • even normal brain aging, which inevitably touches us all.
Memory loss seems to be inescapable. But one maverick neuroscientist is working hard on an electronic cure. Funded by DARPA, Dr. Theodore Berger, a biomedical engineer at the University of Southern California, is testing a memory-boosting implant that mimics the kind of signal processing that occurs when neurons are laying down new long-term memories.
The revolutionary implant, already shown to help memory encoding in rats and monkeys, is now being tested in human patients with epilepsy — an exciting first that may blow the field of memory prosthetics wide open.
To get here, however, the team first had to crack the memory code.

Deciphering Memory
From the very onset, Berger knew he was facing a behemoth of a problem.
“We weren’t looking to match everything the brain does when it processes memory, but to at least come up with a decent mimic,” said Berger.
“Of course people asked: can you model it and put it into a device? Can you get that device to work in any brain? It’s those things that lead people to think I’m crazy. They think it’s too hard,” he said.
But the team had a solid place to start.
The hippocampus, a region buried deep within the folds and grooves of the brain, is the critical gatekeeper that transforms memories from short-lived to long-term. In dogged pursuit, Berger spent most of the last 35 years trying to understand how neurons in the hippocampus accomplish this complicated feat.
“At its heart, a memory is a series of electrical pulses that occur over time that are generated by a given number of neurons,” said Berger. “This is important — it suggests that we can reduce it to mathematical equations and put it into a computational framework,” he said.
Berger hasn’t been alone in his quest.
By listening to the chatter of neurons as an animal learns, teams of neuroscientists have begun to decipher the flow of information within the hippocampus that supports memory encoding. Key to this process is a strong electrical signal that travels from CA3, the “input” part of the hippocampus, to CA1, the “output” node.
“This signal is impaired in people with memory disabilities,” said Berger, “so of course we thought if we could recreate it using silicon, we might be able to restore — or even boost — memory.”

Bridging the Gap
Yet the brain’s memory code proved extremely tough to crack.
The problem lies in the non-linear nature of neural networks: signals are often noisy and constantly overlap in time, which leads to some inputs being suppressed or accentuated. In a network of hundreds of thousands of neurons, any small change can be greatly amplified and lead to vastly different outputs.
“It’s a chaotic black box,” laughed Berger.
With the help of modern computing techniques, however, Berger believes he may have a crude solution in hand. His proof?
Use his mathematical theorems to program a chip, and then see if the brain accepts the chip as a replacement — or additional — memory module.
Berger and his team began with a simple task using rats. They trained the animals to push one of two levers to get a tasty treat, and recorded the series of CA3 to CA1 electronic pulses in the hippocampus as the animals learned to pick the correct lever. The team carefully captured the way the signals were transformed as the session was laid down into long-term memory, and used that information — the electrical “essence” of the memory — to program an external memory chip.
They then injected the animals with a drug that temporarily disrupted their ability to form and access long-term memories, causing the animals to forget the reward-associated lever. Next, implanting microelectrodes into the hippocampus, the team pulsed CA1, the output region, with their memory code.
The results were striking — powered by an external memory module, the animals regained their ability to pick the right lever.
Encouraged by the results, Berger next tried his memory implant in monkeys, this time focusing on a brain region called the prefrontal cortex, which receives and modulates memories encoded by the hippocampus.
Placing electrodes into the monkeys’ brains, the team showed the animals a series of semi-repeated images, and captured the prefrontal cortex’s activity when the animals recognized an image they had seen earlier. Then, with a hefty dose of cocaine, the team inhibited that particular brain region, which disrupted the animals’ recall.
Next, using electrodes programmed with the “memory code,” the researchers guided the brain’s signal processing back on track — and the animal’s performance improved significantly.
A year later, the team further validated their memory implant by showing it could also rescue memory deficits due to hippocampal malfunction in the monkey brain.

A Human Memory Implant
Last year, the team cautiously began testing their memory implant prototype in human volunteers.
Because of the risks associated with brain surgery, the team recruited 12 patients with epilepsy who already had electrodes implanted in their brains to track down the source of their seizures.
Repeated seizures steadily destroy critical parts of the hippocampus needed for long-term memory formation, explained Berger. So if the implant works, it could benefit these patients as well.
The team asked the volunteers to look through a series of pictures, and then recall which ones they had seen 90 seconds later. As the participants learned, the team recorded the firing patterns in both CA1 and CA3 — that is, the input and output nodes.
Using these data, the team extracted an algorithm — a specific human “memory code” — that could predict the pattern of activity in CA1 cells based on CA3 input. Compared to the brain’s actual firing patterns, the algorithm generated correct predictions roughly 80% of the time.
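To make the shape of that prediction problem concrete, here is a minimal sketch in which a generic regressor stands in for the team’s actual nonlinear model: it is fit on recorded CA3 activity and then scored on how often its predicted CA1 pattern matches the recording. The array sizes, the scikit-learn MLPRegressor, and the rounded match-rate metric are illustrative assumptions, not the published method.

```python
# Minimal sketch (not the team's published model): fit a mapping from recorded
# CA3 firing patterns to CA1 firing patterns, then score how often the
# predicted CA1 pattern matches the recording. Shapes and metric are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical data: one row per trial, columns = binned spike counts per electrode.
ca3_activity = rng.poisson(2.0, size=(600, 32)).astype(float)   # "input" node
ca1_activity = rng.poisson(1.5, size=(600, 16)).astype(float)   # "output" node

train, test = slice(0, 500), slice(500, 600)

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(ca3_activity[train], ca1_activity[train])

predicted = model.predict(ca3_activity[test])

# Crude match criterion: a CA1 channel counts as correct when the predicted
# spike count, rounded, equals the recorded count.
match_rate = (np.rint(predicted) == ca1_activity[test]).mean()
print(f"per-channel match rate: {match_rate:.2f}")
```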
It’s not perfect, said Berger, but it’s a good start.
Using this algorithm, the researchers have begun to stimulate the output cells with an approximation of the transformed input signal.
“We have already used the pattern to zap the brain of one woman with epilepsy,” said Dr. Dong Song, an associate professor working with Berger. But he remained coy about the result, only saying that although promising, it’s still too early to tell.
Song’s caution is warranted. Unlike the motor cortex, with its clear structured representation of different body parts, the hippocampus is not organized in any obvious way.
“It’s hard to understand why stimulating input locations can lead to predictable results,” said Dr. Thomas McHugh, a neuroscientist at the RIKEN Brain Science Institute. It’s also difficult to tell whether such an implant could save the memory of those who suffer from damage to the output node of the hippocampus.
“That said, the data is convincing,” McHugh acknowledged.
Berger, on the other hand, is ecstatic. “I never thought I’d see this go into humans,” he said.
But the work is far from done. Within the next few years, Berger wants to see whether the chip can help build long-term memories in a variety of different situations. After all, the algorithm was based on the team’s recordings of one specific task — what if the so-called memory code is not generalizable, instead varying based on the type of input that it receives?
Berger acknowledges that it’s a possibility, but he remains hopeful.
“I do think that we will find a model that’s a pretty good fit for most conditions,” he said. After all, the brain is restricted by its own biophysics — “there’s only so many ways that electrical signals in the hippocampus can be processed,” he said.
“The goal is to improve the quality of life for somebody who has a severe memory deficit,” said Berger. “If I can give them the ability to form new long-term memories for half the conditions that most people live in, I’ll be happy as hell, and so will most patients.”
ORIGINAL: Singularity Hub

Next Rembrandt

By Hugo Angel,

01 GATHERING THE DATA
To distill the artistic DNA of Rembrandt, an extensive database of his paintings was built and analyzed, pixel by pixel.
FUN FACT:
150 Gigabytes of digitally rendered graphics

BUILDING AN EXTENSIVE POOL OF DATA
It’s been almost four centuries since the world lost the talent of one of its most influential classical painters, Rembrandt van Rijn. To bring him back, we distilled the artistic DNA from his work and used it to create The Next Rembrandt.
We examined the entire collection of Rembrandt’s work, studying the contents of his paintings pixel by pixel. To get this data, we analyzed a broad range of materials like high resolution 3D scans and digital files, which were upscaled by deep learning algorithms to maximize resolution and quality. This extensive database was then used as the foundation for creating The Next Rembrandt.
“Data is used by many people today to help them be more efficient and knowledgeable about their daily work, and about the decisions they need to make. But in this project it’s also used to make life itself more beautiful. It really touches the human soul.”
– Ron Augustus, Microsoft
02 DETERMINING THE SUBJECT
Data from Rembrandt’s body of work showed the way to the subject of the new painting.
FUN FACT:
346 Paintings were studied


DELVING INTO REMBRANDT VAN RIJN
  • 49% FEMALE
  • 51% MALE
Throughout his life, Rembrandt painted a great number of self-portraits, commissioned portraits and group shots, Biblical scenes, and even a few landscapes. He’s known for painting brutally honest and unforgiving portrayals of his subjects, utilizing a limited color palette for facial emphasis, and innovating the use of light and shadows.
“There’s a lot of Rembrandt data available — you have this enormous amount of technical data from all these paintings from various collections. And can we actually create something out of it that looks like Rembrandt? That’s an appealing question.”
– Joris Dik, Technical University Delft
BREAKING DOWN THE DEMOGRAPHICS IN REMBRANDT’S WORK
To create new artwork using data from Rembrandt’s paintings, we had to maximize the data pool from which to pull information. Because he painted more portraits than any other subject, we narrowed down our exploration to these paintings.
Then we found the period in which the majority of these paintings were created: between 1632 and 1642. Next, we defined the demographic segmentation of the people in these works and saw which elements occurred in the largest sample of paintings. We funneled down that selection starting with gender and then went on to analyze everything from age and head direction, to the amount of facial hair present.
After studying the demographics, the data led us to a conclusive subject: a portrait of a Caucasian male with facial hair, between the ages of thirty and forty, wearing black clothes with a white collar and a hat, facing to the right.
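To illustrate that funneling step, the sketch below filters a hypothetical table of portrait metadata to the 1632–1642 window and takes the most common value of each attribute. The column names and rows are invented for the example and are not the project’s actual data.

```python
# Illustrative only: a made-up metadata table standing in for the project's
# database of Rembrandt portraits.
import pandas as pd

portraits = pd.DataFrame([
    {"year": 1634, "gender": "male",   "age": 35, "facing": "right", "facial_hair": True},
    {"year": 1640, "gender": "male",   "age": 38, "facing": "right", "facial_hair": True},
    {"year": 1636, "gender": "female", "age": 29, "facing": "left",  "facial_hair": False},
    # ... many more rows in practice
])

# Narrow to the densest decade, then take the most common value of each attribute.
subset = portraits[(portraits.year >= 1632) & (portraits.year <= 1642)]
subject = {col: subset[col].mode()[0] for col in ["gender", "facing", "facial_hair"]}
subject["age_range"] = (int(subset.age.quantile(0.25)), int(subset.age.quantile(0.75)))
print(subject)   # e.g. a male subject, facing right, with facial hair
```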
03 GENERATING THE FEATURES
A software system was designed to understand Rembrandt’s style and generate new features.
FUN FACT:
500+ Hours of rendering
MASTERING THE STYLE OF REMBRANDT
In creating the new painting, it was imperative to stay accurate to Rembrandt’s unique style. As “The Master of Light and Shadow,” Rembrandt relied on his innovative use of lighting to shape the features in his paintings. By using very concentrated light sources, he essentially created a “spotlight effect” that gave great attention to the lit elements and left the rest of the painting shrouded in shadows. This resulted in some of the features being very sharp and in focus and others becoming soft and almost blurry, an effect that had to be replicated in the new artwork.
“When you want to make a new painting you have some idea of how it’s going to look. But in our case we started from basically nothing — we had to create a whole painting using just data from Rembrandt’s paintings.”
– Ben Haanstra, Developer
GENERATING FEATURES BASED ON DATA
To master his style, we designed a software system that could understand Rembrandt based on his use of geometry, composition, and painting materials. A facial recognition algorithm identified and classified the most typical geometric patterns used by Rembrandt to paint human features. It then used the learned principles to replicate the style and generate new facial features for our painting.
CONSTRUCTING A FACE OUT OF THE NEW FEATURES
Once we generated the individual features, we had to assemble them into a fully formed face and bust according to Rembrandt’s use of proportions. An algorithm measured the distances between the facial features in Rembrandt’s paintings and expressed them as percentages of the face. Next, the features were transformed, rotated, and scaled, then accurately placed within the frame of the face. Finally, we rendered the light based on gathered data in order to cast authentic shadows on each feature.
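A toy version of that assembly step might look like the sketch below, which scales, rotates, and pastes a generated feature at a position given as a fraction of the face frame. The blank canvas, the stand-in feature raster, and the particular proportions are assumptions made for illustration, not values measured from Rembrandt’s paintings.

```python
# Illustrative placement of one generated feature inside the face frame.
import numpy as np
from scipy import ndimage

def place_feature(canvas, feature, center_frac, scale, angle_deg):
    """Scale and rotate a feature, then paste it at a position given as a
    fraction of the face frame (x fraction, y fraction)."""
    transformed = ndimage.zoom(feature, scale)
    transformed = ndimage.rotate(transformed, angle_deg, reshape=True)
    h, w = transformed.shape
    cy = int(center_frac[1] * canvas.shape[0])
    cx = int(center_frac[0] * canvas.shape[1])
    y0, x0 = cy - h // 2, cx - w // 2
    region = canvas[y0:y0 + h, x0:x0 + w]
    canvas[y0:y0 + h, x0:x0 + w] = np.maximum(region, transformed)
    return canvas

face = np.zeros((900, 700))        # blank face frame
eye = np.ones((40, 60))            # stand-in for a generated feature
face = place_feature(face, eye, center_frac=(0.35, 0.40), scale=1.1, angle_deg=-4.0)
```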
04 BRINGING IT TO LIFE
CREATING ACCURATE DEPTH AND TEXTURE
We now had a digital file true to Rembrandt’s style in content, shapes, and lighting. But paintings aren’t just 2D — they have a remarkable three-dimensionality that comes from brushstrokes and layers of paint. To recreate this texture, we had to study 3D scans of Rembrandt’s paintings and analyze the intricate layers on top of the canvas.
“We looked at a number of Rembrandt paintings, and we scanned their surface texture, their elemental composition, and what kinds of pigments were used. That’s the kind of information you need if you want to generate a painting by Rembrandt virtually.”
– Joris Dik, Technical University Delft
USING A HEIGHT MAP TO PRINT IN 3D
We created a height map using two different algorithms that found texture patterns of canvas surfaces and layers of paint. That information was transformed into height data, allowing us to mimic the brushstrokes used by Rembrandt.
We then used an elevated printing technique on a 3D printer that output multiple layers of paint-based UV ink. The final height map determined how much ink was released onto the canvas during each layer of the printing process. In the end, we printed thirteen layers of ink, one on top of the other, to create a painting texture true to Rembrandt’s style.
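As a rough sketch of how such a height map can drive those thirteen passes, the snippet below slices an artificial, randomly generated height map into layers and reports where each pass would deposit ink. The dimensions and layer thickness are arbitrary stand-ins for the project’s scan-derived data.

```python
# Illustrative slicing of a height map into printing passes; the surface here
# is random noise, not the project's scan-derived map.
import numpy as np

rng = np.random.default_rng(1)
height_map = rng.random((1200, 1600)) * 0.5      # surface height in mm (made up)

n_layers = 13
layer_thickness = height_map.max() / n_layers

for layer in range(n_layers):
    printed_so_far = layer * layer_thickness
    ink_mask = height_map > printed_so_far        # where this pass deposits ink
    print(f"layer {layer + 1:2d}: covers {ink_mask.mean():5.1%} of the canvas")
```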

ORIGINAL: Next Rembrandt

Have we hit a major artificial intelligence milestone?

By Hugo Angel,

Image: REUTERS/China Daily. Students play the board game “Go”, known as “Weiqi” in Chinese, during a competition.
Google’s computer program AlphaGo has defeated a top-ranked Go player in the first round of five historic matches – marking a significant achievement in the development of artificial intelligence.
AlphaGo’s victory over a human champion shows an artificial intelligence system has mastered the most complex game ever designed. The ancient Chinese board game is vastly more complicated than chess and is said to have more possible configurations than there are atoms in the Universe.
The battle between AlphaGo, developed by Google’s DeepMind unit, and South Korea’s Lee Se-dol was said by commentators to be close, with both sides making some mistakes.
Game playing is an important way to measure AI advances, demonstrating that machines can outperform humans at intellectual tasks.
AlphaGo’s win follows in the footsteps of the legendary 1997 victory of IBM supercomputer Deep Blue over world chess champion Garry Kasparov. But Go, which relies heavily on players’ intuition to choose among vast numbers of board positions, is far more challenging for artificial intelligence than chess.
Speaking in the lead-up to the first match, Se-dol, who is currently ranked second in the world behind fellow South Korean Lee Chang-ho, said: “Having learned today how its algorithms narrow down possible choices, I have a feeling that AlphaGo can imitate human intuition to a certain degree.”
Demis Hassabis, founder and CEO of DeepMind, which was acquired by Google in 2014, previously described Go as “the pinnacle of game AI research” and the “holy grail” of AI since Deep Blue beat Kasparov.

Experts had predicted it would take another decade for AI systems to beat professional Go players. But in January, the journal Nature reported that AlphaGo won a five-game match against European champion Fan Hui. Since then the computer program’s performance has steadily improved.
Mastering the game of Go. Nature
While DeepMind’s team built AlphaGo to learn in a more human-like way, it still needs much more practice than a human expert, millions of games rather than thousands.
Potential future uses of AI programs like AlphaGo could include improving smartphone assistants such as Apple’s Siri, medical diagnostics, and possibly even working with human scientists in research.
by Rosamond Hutt, Senior Producer, Formative Content
9 March 2016

Monster Machine Cracks The Game Of Go

By Hugo Angel,

Illustration: Google DeepMind/Nature
A computer program has defeated a master of the ancient Chinese game of Go, achieving one of the loftiest of the Grand Challenges of AI at least a decade earlier than anyone had thought possible.
The programmers, at Google’s DeepMind laboratory in London, write in today’s issue of Nature that their program AlphaGo defeated Fan Hui, the European Go champion, 5 games to nil, in a match held last October in the company’s offices. Earlier, the program had won 494 out of 495 games against the best rival Go programs.
AlphaGo’s creators now hope to seal their victory with a 5-game match against Lee Se-dol, the best Go player in the world. That match, for a $1 million prize fund, is scheduled to take place in March in Seoul, South Korea.
The program’s victory marks the rise not merely of the machines but of new methods of computer programming based on self-training neural networks. In support of their claim that this method can be applied broadly, the researchers cited their success, which we reported a year ago, in getting neural networks to learn how to play an entire set of video games from Atari. Future applications, they say, may include financial trading, climate modeling and medical diagnosis.
Not all of AlphaGo’s skill is self-taught. First, the programmers jumpstarted the training by having the program predict moves in a database of master games. It eventually reached a success rate of 57 percent, versus 44 percent for the best rival programs.
Then, to go beyond mere human performance, the program conducted its own research through a trial-and-error approach that involved playing millions of games against itself. In this fashion it discovered, one by one, many of the rules of thumb that textbooks have been imparting to Go students for centuries. Google DeepMind calls the self-guided method reinforcement learning, and it is carried out with the same “deep learning” networks that have become the current AI buzzword.
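The self-play idea can be shown on a much smaller game. The sketch below runs a plain REINFORCE policy-gradient loop on tic-tac-toe with a linear policy: the current policy plays both sides, and the winner’s moves are reinforced. It illustrates the general technique only; it is not AlphaGo’s training pipeline, which used deep convolutional networks and far more machinery.

```python
# Self-play reinforcement, shrunk to tic-tac-toe with a linear policy.
# Everything here is a stand-in for the general technique, not AlphaGo's code.
import numpy as np

rng = np.random.default_rng(0)
W = np.zeros((9, 9))     # linear policy: board (9 cells, +1/-1/0) -> move logits
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        s = board[a] + board[b] + board[c]
        if abs(s) == 3:
            return int(np.sign(s))
    return 0

def choose_move(board, player):
    logits = (board * player) @ W                 # board seen from the mover's side
    legal = np.flatnonzero(board == 0)
    p = np.exp(logits[legal] - logits[legal].max())
    p /= p.sum()
    return rng.choice(legal, p=p), p, legal

def play_one_game():
    board, history, player = np.zeros(9), [], 1
    while winner(board) == 0 and (board == 0).any():
        move, p, legal = choose_move(board, player)
        history.append((board * player, move, legal, p))
        board[move] = player
        player = -player
    return history, winner(board)

for game in range(5000):                          # self-play: the policy plays itself
    history, result = play_one_game()
    if result == 0:
        continue                                  # draws teach nothing in this toy
    for i, (state, move, legal, p) in enumerate(history):
        reward = result if i % 2 == 0 else -result    # +1 if this mover won
        grad = np.zeros(9)
        grad[legal] = -p
        grad[move] += 1.0                         # gradient of the move's log-probability
        W += 0.01 * reward * np.outer(state, grad)    # reinforce the winner's choices
```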
Not only can self-trained machines surpass the game-playing powers of their creators, they can do so in ways that programmers can’t even explain. It’s a different world from the one that AI’s founders envisaged decades ago.
Commenting on the death yesterday of AI pioneer Marvin Minsky, Demis Hassabis, the lead author of the Nature paper, said: “It would be interesting to see what he would have said. I suspect he would have been pretty surprised at how quickly this has arrived.”
That’s because, as programmers would say, Go is such a bear. Then again, even chess was a bear, at first. Back in 1957, the late Herbert Simon famously predicted that a computer would beat the world champion at chess within a decade. But it was only in 1997 that World Chess Champion Garry Kasparov lost to IBM’s Deep Blue—a multimillion-dollar, purpose-built machine that filled a small room. Today you can download a $100 program to a decently powered laptop and watch it utterly cream any chess player in the world.
Go is harder for machines because the positions are harder to judge and there are a whole lot more positions.
Judgement is harder because the pieces, or “stones,” are all of equal value, whereas those in chess have varying values—a Queen, for instance, is worth nine times more than a pawn, on average. Chess programmers can thus add up those values (and throw in numerical estimates for the placement of pieces and pawns) to arrive at a quick-and-dirty score of a game position. No such expedient exists for Go.
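For the flavor of that quick-and-dirty scoring, here is a minimal material-count evaluator. The piece-list encoding is invented for this example; real chess engines add many positional terms on top of a raw material sum.

```python
# Quick-and-dirty material count of the kind described above. The piece-list
# string format is invented for this example.
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_score(pieces):
    """pieces: White's pieces, a space, then Black's, e.g. 'KQRRBNNPPPPPPPP kqrbnnpppppppp'."""
    white, black = pieces.split()
    score = sum(PIECE_VALUES[c.lower()] for c in white)
    score -= sum(PIECE_VALUES[c.lower()] for c in black)
    return score          # positive means a material edge for White

print(material_score("KQRRBNNPPPPPPPP kqrbnnpppppppp"))   # 5: White is up a rook
```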
There are vastly more positions to judge than in chess because Go offers on average 10 times more options at every move and there are about three times as many moves in a game. The number of possible board configurations in Go is estimated at 10 to the 170th power—“more than the number of atoms in the universe,” said Hassabis.
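The arithmetic behind those figures is easy to reproduce. The branching factors and game lengths below are the usual ballpark estimates rather than exact values, so the outputs only land near the commonly quoted numbers.

```python
# Ballpark arithmetic only; exact figures depend on how positions are counted.
import math

chess_branching, chess_moves = 35, 80     # typical estimates
go_branching, go_moves = 250, 150

print(f"chess game tree ~ 10^{chess_moves * math.log10(chess_branching):.0f}")   # about 10^124
print(f"go game tree    ~ 10^{go_moves * math.log10(go_branching):.0f}")          # about 10^360
# Legal board configurations in Go are a separate, smaller count, on the order of 10^170.
```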
Some researchers tried to adapt some of the forward-search techniques devised for chess to Go; others relied on random simulations of games in the aptly named Monte Carlo method. The Google DeepMind people leapfrogged them all with deep, or convolutional, neural networks, so named because they imitate the brain (up to a point).
A neural network links units that are the computing equivalent of a brain’s neurons—first by putting them into layers, then by stacking the layers. AlphaGo’s are 12 layers deep. Each “neuron” connects to its neighbors in its own layer and also to those in the layers directly above and below it. A signal sent to one neuron causes it to strengthen or weaken its connections to other ones, so over time, the network changes its configuration.
To train the system:

  • you first expose it to input data;
  • next, you test the output signal against the metric you’re using—say, predicting a master’s move; and
  • you reinforce correct decisions by strengthening the underlying connections.

Over time, the system produces better outputs. You might say that it learns. (A minimal sketch of this loop follows.)
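The sketch below mirrors those steps with a single-layer softmax classifier that tries to predict a recorded move from a flattened 19-by-19 board and strengthens its weights toward that move. The boards and moves here are synthetic, so the printed accuracy is meaningless; it only marks where the 57 percent figure mentioned earlier would be measured on real master games.

```python
# Synthetic stand-in for "predict the master's move" training; the real system
# was a 12-layer network trained on millions of positions.
import numpy as np

rng = np.random.default_rng(0)
W = np.zeros((361, 361))          # flattened 19x19 board -> one score per move

def training_step(W, board, master_move, lr=0.05):
    logits = board @ W
    p = np.exp(logits - logits.max())
    p /= p.sum()
    predicted_correctly = p.argmax() == master_move
    grad = -p
    grad[master_move] += 1.0
    W += lr * np.outer(board, grad)   # "strengthen the underlying connections"
    return predicted_correctly

# Random boards and moves stand in for a database of master games.
boards = rng.choice([-1.0, 0.0, 1.0], size=(5000, 361))
moves = rng.integers(0, 361, size=5000)
accuracy = np.mean([training_step(W, b, m) for b, m in zip(boards, moves)])
print(f"move-prediction accuracy over this pass: {accuracy:.1%}")
```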
AlphaGo has two networks. “The policy network cuts down on the number of moves to look at, and the evaluation network allows you to cut short the depth of that search,” or the number of moves the machine must look ahead, Hassabis said.

“Both neural networks together make the search tractable.”

The main difference from the system that played Atari is the inclusion of a search-ahead function: “In Atari you can do well by reacting quickly to current events,” said Hassabis. “In Go you need a plan.”
After exhaustive training, the two networks, taken by themselves, could play Go as well as any program did. But when the researchers coupled the neural networks to a forward-searching algorithm, the machine was able to dominate rival programs completely. Not only did it win all but one of the hundreds of games it played against them, it was even able to give them a handicap of four extra moves, made at the beginning of the game, and still beat them.
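The coupling can be sketched abstractly: a policy function prunes which candidate moves get explored (breadth) and a value function scores positions at the search horizon (depth). The toy game and both stand-in “networks” below are invented purely to show the shape of the idea; this is not AlphaGo’s actual Monte Carlo tree search.

```python
# Toy search: the "policy" prunes breadth, the "value" bounds depth. Both
# functions and the game itself are invented for illustration.
import numpy as np

def policy_net(state, moves):
    """Stand-in policy: a probability for each candidate move."""
    scores = np.array([np.sin(state * 0.1 + m) for m in moves])
    p = np.exp(scores)
    return p / p.sum()

def value_net(state):
    """Stand-in evaluation: an estimated score for the player to move."""
    return float(np.tanh(np.cos(state * 0.05)))

def legal_moves(state):
    return list(range(1, 10))         # toy game: each move adds 1..9 to the state

def play(state, move):
    return state + move

def search(state, depth, breadth):
    """Look `depth` moves ahead, but only down the `breadth` most likely moves."""
    if depth == 0:
        return value_net(state)
    moves = legal_moves(state)
    p = policy_net(state, moves)
    best = [moves[i] for i in np.argsort(p)[-breadth:]]      # policy prunes the width
    # Negamax form: the opponent's best reply is our worst case.
    return max(-search(play(state, m), depth - 1, breadth) for m in best)

print(search(state=0, depth=4, breadth=3))
```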
About that one defeat: “The search has a stochastic [random] element, so there’s always a possibility that it will make a mistake,” David Silver said. “As we improve, we reduce the probability of making a mistake, but mistakes happen. As in that one particular game.”
Anyone might cock an eyebrow at the claim that AlphaGo will have practical spin-offs. Games programmers have often justified their work by promising such things but so far they’ve had little to show for their efforts. IBM’s Deep Blue did nothing but play chess, and IBM’s Watson—designed to beat the television game show Jeopardy!—will need laborious retraining to be of service in its next appointed task of helping doctors diagnose and treat patients.
But AlphaGo’s creators say that because of the generalized nature of their approach, direct spin-offs really will come—this time for sure. And they’ll get started on them just as soon as the March match against the world champion is behind them.
ORIGINAL: IEEE Spectrum
Posted 27 Jan 2016