NVIDIA DRIVE PX 2. NVIDIA Accelerates Race to Autonomous Driving at CES 2016

By Hugo Angel,

NVIDIA today shifted its autonomous-driving leadership into high gear.
At a press event kicking off CES 2016, we unveiled artificial-intelligence technology that will let cars sense the world around them and pilot a safe route forward.
Dressed in his trademark black leather jacket, speaking to a crowd of some 400 automakers, media and analysts, NVIDIA CEO Jen-Hsun Huang revealed DRIVE PX 2, an automotive supercomputing platform that processes 24 trillion deep learning operations a second. That’s 10 times the performance of the first-generation DRIVE PX, now being used by more than 50 companies in the automotive world.
The new DRIVE PX 2 delivers 8 teraflops of processing power. It has the processing power of 150 MacBook Pros. And it’s the size of a lunchbox in contrast to earlier autonomous-driving technology being used today, which takes up the entire trunk of a mid-sized sedan.
“Self-driving cars will revolutionize society,” Huang said at the beginning of his talk. “And NVIDIA’s vision is to enable them.”
 
Volvo to Deploy DRIVE PX in Self-Driving SUVs
As part of its quest to eliminate traffic fatalities, Volvo will be the first automaker to deploy DRIVE PX 2.
Huang announced that Volvo – known worldwide for safety and reliability – will be the first automaker to deploy DRIVE PX 2.
In the world’s first public trial of autonomous driving, the Swedish automaker next year will lease 100 XC90 luxury SUVs outfitted with DRIVE PX 2 technology. The technology will help the vehicles drive autonomously around Volvo’s hometown of Gothenburg, and semi-autonomously elsewhere.
DRIVE PX 2 has the power to harness a host of sensors to get a 360 degree view of the environment around the car.
“The rear-view mirror is history,” Jen-Hsun said.
Drive Safely, by Not Driving at All
Not so long ago, pundits had questioned the safety of technology in cars. Now, with Volvo incorporating autonomous vehicles into its plan to end traffic fatalities, that script has been flipped. Autonomous cars may be vastly safer than human-piloted vehicles.
Car crashes – an estimated 93 percent of them caused by human error – kill 1.3 million drivers each year. More American teenagers die from texting while driving than any other cause, including drunk driving.
There’s also a productivity issue. Americans waste some 5.5 billion hours of time each year in traffic, costing the U.S. about $121 billion, according to an Urban Mobility Report from Texas A&M. And inefficient use of roads by cars wastes even vaster sums spent on infrastructure.
Deep Learning Hits the Road
Self-driving solutions based on computer vision can provide some answers. But tackling the infinite permutations that a driver needs to react to – stray pets, swerving cars, slashing rain, steady road construction crews – is far too complex a programming challenge.
Deep learning enabled by NVIDIA technology can address these challenges. A highly trained deep neural network – residing on supercomputers in the cloud – captures the experience of many tens of thousands of hours of road time.
Huang noted that a number of automotive companies are already using NVIDIA’s deep learning technology to power their efforts, getting speedups of 30-40X in training their networks compared with other technology. BMW, Daimler and Ford are among them, along with innovative Japanese startups like Preferred Networks and ZMP. And Audi said it was able to do in four hours training that had taken two years with a competing solution.
  NVIDIA DRIVE PX 2 is part of an end-to-end platform that brings deep learning to the road.
NVIDIA’s end-to-end solution for deep learning starts with NVIDIA DIGITS, a supercomputer that can be used to train deep neural networks by exposing them to data collected on the road. On the other end is DRIVE PX 2, which draws on this training to make inferences that enable the car to progress safely down the road. In the middle is NVIDIA DriveWorks, a suite of software tools, libraries and modules that accelerates development and testing of autonomous vehicles.
DriveWorks enables sensor calibration, acquisition of surround data, synchronization, recording and then processing streams of sensor data through a complex pipeline of algorithms running on all of the DRIVE PX 2’s specialized and general-purpose processors.
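DriveWorks itself is proprietary, so the sketch below is only an illustration of what such a staged sensor pipeline looks like in outline; the stage names, data layout and fusion-window parameter are invented for this example and are not NVIDIA's API.

```python
# Illustrative sketch only: the stage names and data layout below are invented,
# not NVIDIA DriveWorks APIs. It shows a calibrate -> synchronize -> fuse flow
# for frames arriving from multiple sensors.
from dataclasses import dataclass
from typing import List

@dataclass
class SensorFrame:
    sensor_id: str      # e.g. "front_camera", "lidar_top"
    timestamp_us: int   # capture time in microseconds
    payload: bytes      # raw sensor data

def calibrate(frame: SensorFrame) -> SensorFrame:
    # Placeholder for applying per-sensor calibration (intrinsics/extrinsics).
    return frame

def synchronize(frames: List[SensorFrame], window_us: int = 50_000) -> List[SensorFrame]:
    # Keep only frames whose timestamps fall within one fusion window.
    if not frames:
        return []
    t0 = min(f.timestamp_us for f in frames)
    return [f for f in frames if f.timestamp_us - t0 <= window_us]

def fuse(frames: List[SensorFrame]) -> dict:
    # Placeholder for stitching a single surround view from all sensors.
    return {"sources": [f.sensor_id for f in frames]}

def process(frames: List[SensorFrame]) -> dict:
    calibrated = [calibrate(f) for f in frames]
    return fuse(synchronize(calibrated))

if __name__ == "__main__":
    demo = [SensorFrame("front_camera", 1_000, b""),
            SensorFrame("lidar_top", 1_020, b""),
            SensorFrame("radar_front", 900_000, b"")]  # arrives too late for this window
    print(process(demo))  # {'sources': ['front_camera', 'lidar_top']}
```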
During the event, Huang reminded the audience that machines are already beating humans at tasks once considered impossible for computers, such as image recognition. Systems trained with deep learning can now correctly classify images more than 96 percent of the time, exceeding what humans can do on similar tasks.
He used the event to show what deep learning can do for autonomous vehicles.
A series of demos drove this home, showing in three steps how DRIVE PX 2 harnesses a host of sensors – lidar, radar, cameras and ultrasonic – to understand the world around it, in real time, and plan a safe and efficient path forward.
The World’s Biggest Infotainment System
 
The highlight of the demos was what Huang called the world’s largest car infotainment system — an elegant block the size of a medium-sized bedroom wall, mounted with a long horizontal screen and a tall vertical one.
While a third, larger screen showed the scene that a driver would take in, the wide demo screen showed how the car — using deep learning and sensor fusion — “viewed” the very same scene in real time, stitched together from its array of sensors. On its right, the huge portrait-oriented screen showed a highly precise map that marked the car’s progress.
It’s a demo that will leave an impression on an audience that’s going to hear a lot about the future of driving in the week ahead.
ORIGINAL: Nvidia
By Bob Sherbin on January 3, 2016

Robots are learning from YouTube tutorials

By Hugo Angel,

Do it yourself, robot. (Reuters/Kim Kyung-Hoon)
For better or worse, we’ve taught robots to mimic human behavior in countless ways. They can perform tasks as rudimentary as picking up objects, or as creative as dreaming their own dreams. They can identify bullying, and even play jazz. Now, we’ve taught robots the most human task of all: how to teach themselves to make Jell-O shots from watching YouTube videos.
Ever go to YouTube and type in something like, “How to make pancakes,” or, “How to mount a TV”? Sure you have. While many such tutorials are awful—and some are just deliberately misleading—the sheer number of instructional videos offers strong odds of finding one that’s genuinely helpful. And when all those videos are aggregated and analyzed simultaneously, it’s not hard for a robot to figure out what the correct steps are.

Researchers at Cornell University have taught robots to do just that with a system called RoboWatch. By watching and scanning multiple videos of the same “how-to” activity (with subtitles enabled), bots can
  • identify common steps, 
  • put them in order, and 
  • learn how to do whatever the tutorials are teaching.
Robot learning is not new, but what’s unusual here is that these robots can learn without human supervision, as Phys.Org points out.
Similar research usually requires human overseers to introduce and explain words, or captions, for the robots to parse. RoboWatch, however, needs no human help, save that someone ensures all the videos analyzed fall into a single category (pdf). The idea is that a human could one day tell a robot to perform a task and then the robot would independently research and learn how to carry out that task.
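As a rough illustration of the kind of unsupervised grouping involved (not the actual RoboWatch code), the sketch below clusters invented subtitle lines from several hypothetical videos of the same task into candidate common steps, using TF-IDF features and k-means from scikit-learn.

```python
# Illustrative sketch, not the actual RoboWatch system: group similar subtitle
# lines from several how-to videos into candidate "common steps" using TF-IDF
# features and k-means clustering (requires scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented subtitle snippets from hypothetical "how to make pancakes" videos.
subtitles = [
    "mix the flour and eggs in a bowl",
    "whisk flour eggs and milk together",
    "combine the flour with eggs and milk",
    "heat the pan with a little butter",
    "warm up a pan and add some butter",
    "pour the batter into the hot pan",
    "pour some batter onto the pan and cook",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(subtitles)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Lines landing in the same cluster are treated as the same candidate step.
for step in range(3):
    members = [s for s, c in zip(subtitles, labels) if c == step]
    print(f"step candidate {step}: {members}")
```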
So next time you get frustrated watching a video on how to change a tire, don’t fret. Soon, a robot will do all that for you. We just have to make sure it doesn’t watch any videos about “how to take over the world.”
ORIGINAL: QZ
December 22, 2015

Scientists have discovered brain networks linked to intelligence for the first time

By Hugo Angel,

Image credit: Ralwel/Shutterstock.com
And we may even be able to manipulate them.
For the first time ever, scientists have identified clusters of genes in the brain that are believed to be linked to human intelligence.
The two clusters, called M1 and M3, are networks each consisting of hundreds of individual genes, and are thought to influence our cognitive functions, including:
  • memory,
  • attention,
  • processing speed, and
  • reasoning.
Most provocatively, the researchers who identified M1 and M3 say that these clusters are probably under the control of master switches that regulate how the gene networks function. If this hypothesis is correct and scientists can indeed find these switches, we might even be able to manipulate our genetic intelligence and boost our cognitive capabilities.
“We know that genetics plays a major role in intelligence but until now haven’t known which genes are relevant,” said neurologist Michael Johnson, at Imperial College London in the UK. “This research highlights some of the genes involved in human intelligence, and how they interact with each other.”
The researchers made their discovery by examining the brains of patients who had undergone neurosurgery for the treatment of epilepsy. They analysed thousands of genes expressed in the brain and combined the findings with two sets of data: genetic information from healthy people who had performed IQ tests, and from people with neurological disorders and intellectual disability.
Comparing the results, the researchers discovered that some of the genes that influence human intelligence in healthy people can also cause significant neurological problems if they end up mutating.
“Traits such as intelligence are governed by large groups of genes working together – like a football team made up of players in different positions,” said Johnson. “We used computer analysis to identify the genes in the human brain that work together to influence our cognitive ability to make new memories or sensible decisions when faced with lots of complex information. We found that some of these genes overlap with those that cause severe childhood onset epilepsy or intellectual disability.”
The research, which is reported in Nature Neuroscience, is at an early stage, but the authors believe their analysis could have a significant impact – not only on how we understand and treat brain diseases, but one day perhaps altering brainpower itself.
“Eventually, we hope that this sort of analysis will provide new insights into better treatments for neurodevelopmental diseases such as epilepsy, and ameliorate or treat the cognitive impairments associated with these devastating diseases,” said Johnson. “Our research suggests that it might be possible to work with these genes to modify intelligence, but that is only a theoretical possibility at the moment – we have just taken a first step along that road.”
ORIGINAL: Science Alert
PETER DOCKRILL
22 DEC 2015

Scientists have built a functional ‘hybrid’ logic gate for use in quantum computers

By Hugo Angel,

An ion trap used in NIST quantum computing experiments. Credit: Blakestad/NIST
Here’s how to solve the problem of quantum memory.
As conventional computers draw ever closer to their theoretical limit, the race is on to build a machine that can truly harness the unprecedented processing power of quantum computing. And now two research teams have independently demonstrated how entangling atoms from different elements can address the problem of quantum memory errors while functioning within a logic gate framework, and also pass the all-important test of true entanglement. 
“Hybrid quantum computers allow the unique advantages of different types of quantum systems to be exploited together in a single platform,” said lead author Ting Rei Tan. “Each ion species is unique, and certain ones are better suited for certain tasks such as memory storage, while others are more suited to provide interconnects for data transfer between remote systems.”
In the computers we use today, data is processed and stored as binary bits, with each individual bit taking on a state of either 0 or 1. Because these states are set, there’s a finite amount of information that can ultimately be processed, and we’re quickly approaching the point where this isn’t going to be enough.
Quantum computers, on the other hand, store data as qubits, which can be in the state of 0 or 1, or can take on another state called superposition, which allows them to be both 0 and 1 at the same time. If we can figure out how to build a machine that integrates this phenomenon with data-processing capabilities, we’re looking at computers that are hundreds of millions of times faster than the supercomputers of today.
The qubits used in this set-up are actually atomic ions (atoms with an electron removed), and their states are determined by their spin – spin up is 1, spin down is 0. Each atomic ion is paired off, and if the control ion takes on the state of superposition, it will become entangled with its partner, so anything you do to one ion will affect the other.
This can pose problems, particularly when it comes to memory, and there’s no point storing and processing information if you can’t reliably retain it. If you’ve got an entire system built on pairs of the same atomic ions, you leave yourself open to constant errors, because if one ion is affected by a malfunction, this will also affect its partner. At the same time, using the same atomic ions in a pair makes it very difficult for them to perform separate functions.
So researchers from the University of Oxford in the UK, and a second team from the National Institute of Standards and Technology (NIST) and the University of Washington, have figured out which combinations of different elements can function together as pairs in a quantum set-up.
“Each trapped ion is used to represent one ‘quantum bit’ of information. The quantum states of the ions are controlled with laser pulses of precise frequency and duration,” says one of the researchers, David Lucas from the University of Oxford. “Two different species of ion are needed in the computer:
  • one to store information, a ‘memory qubit’, and
  • one to link different parts of the computer together via photons, an ‘interface qubit’.”
While the Oxford team achieved this using two different isotopes of calcium (the abundant isotope calcium-40 and the rare isotope calcium-43), the second team went even further by pairing up entirely different atoms – magnesium and beryllium. Each one is sensitive to a different wavelength of light, which means zapping one with a laser pulse to control its function won’t affect its partner.
The teams then went on to demonstrate for the first time that these pairs could have their 0, 1, or superposition states controlled by two different types of logic gates, called the CNOT gate and the SWAP gate. Logic gates are crucial components of any digital circuit, because they’re able to record two input values and provide a new output based on programmed logic.
“A CNOT gate flips the second (target) qubit if the first (control) qubit is a 1; if it is a 0, the target bit is unchanged,” the NIST press release explains. “If the control qubit is in a superposition, the ions become entangled. A SWAP gate interchanges the qubit states, including superpositions.”
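For concreteness, the standard matrix forms of these two gates can be applied to a two-qubit state directly. The short NumPy sketch below shows CNOT turning a superposed control qubit and a |0⟩ target into an entangled Bell state; it illustrates the textbook gates, not the trapped-ion experiments themselves.

```python
# Textbook CNOT and SWAP matrices applied to a two-qubit state with NumPy.
# Basis order is |00>, |01>, |10>, |11>; the first qubit is the control.
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

# Control in an equal superposition, target in |0>: (|00> + |10>) / sqrt(2)
state = np.array([1, 0, 1, 0]) / np.sqrt(2)

bell = CNOT @ state
print(bell)         # [0.707 0 0 0.707] -> (|00> + |11>)/sqrt(2), an entangled Bell state

print(SWAP @ bell)  # unchanged, because this Bell state is symmetric in the two qubits
```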
The Oxford team demonstrated ion pairing in this set-up for about 60 seconds, while the NIST/Washington team managed to keep theirs entangled for 1.5 seconds. That doesn’t sound like much, but that’s relatively stable when it comes to qubits.
“Both teams confirm that their two atoms are entangled with a very high probability; 0.998 for one, 0.979 for the other (of a maximum of one),” John Timmer reports for Ars Technica. “The NIST team even showed that it could track the beryllium atom as it changed state by observing the state of the magnesium atom.”
Further, both teams were able to successfully perform a Bell test by using the logic gate to entangle the pairs of different-species ions, and then manipulating and measuring them independently.
“[W]e show that quantum logic gates between different isotopic species are possible, can be driven by a relatively simple laser system, and can work with precision beyond the so-called ‘fault-tolerant threshold’ precision of approximately 99 percent – the precision necessary to implement the techniques of quantum error correction, without which a quantum computer of useful size cannot be built,” said Lucas in an Oxford press release.
Of course, we don’t have proper quantum computers to actually test these components in the context of a functioning system – that will have to be the next step, and international teams of scientists and engineers are racing to get us there. We can’t wait to see it when they do.
The papers have been published in Nature here and here.
ORIGINAL: ScienceAlert
BEC CREW
18 DEC 2015

Forward to the Future: Visions of 2045

By Hugo Angel,

DARPA asked the world and our own researchers what technologies they expect to see 30 years from now—and received insightful, sometimes funny predictions
Today—October 21, 2015—is famous in popular culture as the date 30 years in the future when Marty McFly and Doc Brown arrive in their time-traveling DeLorean in the movie “Back to the Future Part II.” The film got some things right about 2015, including in-home videoconferencing and devices that recognize people by their voices and fingerprints. But it also predicted trunk-sized fusion reactors, hoverboards and flying cars—game-changing technologies that, despite the advances we’ve seen in so many fields over the past three decades, still exist only in our imaginations.
A big part of DARPA’s mission is to envision the future and make the impossible possible. So ten days ago, as the “Back to the Future” day approached, we turned to social media and asked the world to predict: What technologies might actually surround us 30 years from now? We pointed people to presentations from DARPA’s Future Technologies Forum, held last month in St. Louis, for inspiration and a reality check before submitting their predictions.
Well, you rose to the challenge and the results are in. So in honor of Marty and Doc (little known fact: he is a DARPA alum) and all of the world’s innovators past and future, we present here some highlights from your responses, in roughly descending order by number of mentions for each class of futuristic capability:
  • Space: Interplanetary and interstellar travel, including faster-than-light travel; missions and permanent settlements on the Moon, Mars and the asteroid belt; space elevators
  • Transportation & Energy: Self-driving and electric vehicles; improved mass transit systems and intercontinental travel; flying cars and hoverboards; high-efficiency solar and other sustainable energy sources
  • Medicine & Health: Neurological devices for memory augmentation, storage and transfer, and perhaps to read people’s thoughts; life extension, including virtual immortality via uploading brains into computers; artificial cells and organs; “Star Trek”-style tricorder for home diagnostics and treatment; wearable technology, such as exoskeletons and augmented-reality glasses and contact lenses
  • Materials & Robotics: Ubiquitous nanotechnology, 3-D printing and robotics; invisibility and cloaking devices; energy shields; anti-gravity devices
  • Cyber & Big Data: Improved artificial intelligence; optical and quantum computing; faster, more secure Internet; better use of data analytics to improve use of resources
A few predictions inspired us to respond directly:
  • “Pizza delivery via teleportation”—DARPA took a close look at this a few years ago and decided there is plenty of incentive for the private sector to handle this challenge.
  • “Time travel technology will be close, but will be closely guarded by the military as a matter of national security”—We already did this tomorrow.
  • “Systems for controlling the weather”—Meteorologists told us it would be a job killer and we didn’t want to rain on their parade.
  • “Space colonies…and unlimited cellular data plans that won’t be slowed by your carrier when you go over a limit”—We appreciate the idea that these are equally difficult, but they are not. We think likable cell-phone data plans are beyond even DARPA and a total non-starter.
So seriously, as an adjunct to this crowd-sourced view of the future, we asked three DARPA researchers from various fields to share their visions of 2045, and why getting there will require a group effort with players not only from academia and industry but from forward-looking government laboratories and agencies:

Pam Melroy, an aerospace engineer, former astronaut and current deputy director of DARPA’s Tactical Technologies Office (TTO), foresees technologies that would enable machines to collaborate with humans as partners on tasks far more complex than those we can tackle today.
Justin Sanchez, a neuroscientist and program manager in DARPA’s Biological Technologies Office (BTO), imagines a world where neurotechnologies could enable users to interact with their environment and other people by thought alone.
Stefanie Tompkins, a geologist and director of DARPA’s Defense Sciences Office, envisions building substances from the atomic or molecular level up to create “impossible” materials with previously unattainable capabilities.
Check back with us in 2045—or sooner, if that time machine stuff works out—for an assessment of how things really turned out in 30 years.
Associated images posted on www.darpa.mil and video posted at www.youtube.com/darpatv may be reused according to the terms of the DARPA User Agreement, available here: http://www.darpa.mil/policy/usage-policy.
ORIGINAL: DARPA
10/21/2015

Computer Learns to Write Its ABCs

By Hugo Angel,

Photo-illustration: Danqing Wang
A new computer model can now mimic the human ability to learn new concepts from a single example instead of the hundreds or thousands of examples it takes other machine learning techniques, researchers say.

The new model learned how to write invented symbols from the animated show Futurama as well as dozens of alphabets from across the world. It also showed it could invent symbols of its own in the style of a given language.
The researchers suggest their model could also learn other kinds of concepts, such as speech and gestures.

Although scientists have made great advances in machine learning in recent years, people remain much better at learning new concepts than machines.

“People can learn new concepts extremely quickly, from very little data, often from only one or a few examples. You show even a young child a horse, a school bus, a skateboard, and they can get it from one example,” says study co-author Joshua Tenenbaum at the Massachusetts Institute of Technology. In contrast, “standard algorithms in machine learning require tens, hundreds or even thousands of examples to perform similarly.”

To shorten machine learning, researchers sought to develop a model that better mimicked human learning, which makes generalizations from very few examples of a concept. They focused on learning simple visual concepts — handwritten symbols from alphabets around the world.

“Our work has two goals: to better understand how people learn — to reverse engineer learning in the human mind — and to build machines that learn in more humanlike ways,” Tenenbaum says.

Whereas standard pattern recognition algorithms represent symbols as collections of pixels or arrangements of features, the new model the researchers developed represented each symbol as a simple computer program. For instance, the letter “A” is represented by a program that generates examples of that letter stroke by stroke when the program is run. No programmer is needed during the learning process — the model generates these programs itself.

Moreover, each program is designed to generate variations of each symbol whenever the programs are run, helping it capture the way instances of such concepts might vary, such as the differences between how two people draw a letter.
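The published BPL model is far richer than anything that fits here, but the toy Python sketch below conveys the core idea: a character is a small program over stroke primitives, and every run perturbs the strokes so that each generated example is a slightly different, plausible instance. The stroke coordinates and jitter value are invented for illustration.

```python
# Toy sketch of the idea only (not the authors' BPL code): represent a character
# as a program over stroke primitives, and perturb the strokes on every run so
# each generated example is a plausible variation rather than an exact copy.
import random

def stroke(points, jitter=0.03):
    """Return a copy of a stroke with small random perturbations of its control points."""
    return [(x + random.uniform(-jitter, jitter),
             y + random.uniform(-jitter, jitter)) for x, y in points]

def draw_letter_A():
    """A 'program' for the letter A: two diagonal strokes plus a crossbar."""
    left  = stroke([(0.5, 1.0), (0.1, 0.0)])     # apex down to bottom-left
    right = stroke([(0.5, 1.0), (0.9, 0.0)])     # apex down to bottom-right
    bar   = stroke([(0.28, 0.45), (0.72, 0.45)])
    return [left, right, bar]

# Every call yields a different handwritten-looking instance of the same concept.
for i in range(2):
    print(f"example {i}: {draw_letter_A()}")
```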

“The idea for this algorithm came from a surprising finding we had while collecting a data set of handwritten characters from around the world. We found that if you ask a handful of people to draw a novel character, there is remarkable consistency in the way people draw,” says study lead author Brenden Lake at New York University. “When people learn or use or interact with these novel concepts, they do not just see characters as static visual objects. Instead, people see richer structure — something like a causal model, or a sequence of pen strokes — that describes how to efficiently produce new examples of the concept.”

The model also applies knowledge from previous concepts to learn new concepts faster. For instance, the model can use knowledge learned from the Latin alphabet to learn the Greek alphabet. The researchers call their model the Bayesian program learning, or BPL, framework.

The researchers applied their model to more than 1,600 types of handwritten characters in 50 writing systems, including Sanskrit, Tibetan, Gujarati, Glagolitic, and even invented characters such as those from the animated series Futurama and the online game Dark Horizon. In a kind of Turing test, scientists found that volunteers recruited via Amazon’s Mechanical Turk had difficulty distinguishing machine-written characters from human-written ones.

The scientists also had their model focus on creative tasks. They asked their system to create whole new concepts — for instance, creating a new Tibetan letter based on what it knew about letters in the Tibetan alphabet. The researchers found human volunteers rated machine-written characters on par with ones developed by humans recruited for the same task.

“We got human-level performance on this creative task,” says study co-author Ruslan Salakhutdinov at the University of Toronto.

Potential applications for this model could include

  • handwriting recognition,
  • speech recognition,
  • gesture recognition and
  • object recognition.
“Ultimately we’re trying to figure out how we can get systems that come closer to displaying human-like intelligence,” Salakhutdinov says. “We’re still very, very far from getting there, though.” The scientists detailed their findings in the December 11 issue of the journal Science.

ORIGINAL: IEEE Spectrum

By Charles Q. Choi
Posted 10 Dec 2015 | 20:00 GMT

Elon Musk And Sam Altman Launch OpenAI, A Nonprofit That Will Use AI To ‘Benefit Humanity’

By Hugo Angel,

Led by an all-star team of Silicon Valley’s best and brightest, OpenAI already has $1 billion in funding.
Silicon Valley is in the midst of an artificial intelligence war, as giants like Facebook and Google attempt to outdo each other by deploying machine learning and AI to automate services. But a brand-new organization called OpenAI—helmed by Elon Musk and a posse of prominent techies—aims to use AI to “benefit humanity,” without worrying about profit.
Musk, the CEO of SpaceX and Tesla, took to Twitter to announce OpenAI on Friday afternoon.

The organization, the formation of which has been in discussions for quite a while, came together in earnest over the last couple of months, co-chair and Y Combinator CEO Sam Altman told Fast Company. It is launching with $1 billion in funding from the likes of Altman, Musk, LinkedIn founder Reid Hoffman, and Palantir chairman Peter Thiel. In an introductory blog post, the OpenAI team said “we expect to only spend a tiny fraction of this in the next few years.”

Noting that it’s not yet clear what it will accomplish, OpenAI explains that its nonprofit status should afford it more flexibility. “Since our research is free from financial obligations, we can better focus on a positive human impact,” the blog post reads. “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.”

The organization features an all-star group of leaders: Musk and Altman are co-chairs, while Google research scientist Ilya Sutskever is research director and Greg Brockman is CTO, a role he formerly held at payments company Stripe.

For nearly everyone involved in OpenAI, the project will be full-time work, Altman explained. For his part, it will be a “major commitment,” while Musk is expected to “come in every week, every other week, something like that.”

Altman explained that everything OpenAI works on—including any intellectual property it creates—will be made public. The one exception, he said, is if it could pose a risk. “Generally speaking,” Altman told Fast Company, “we’ll make all our patents available to the world.”

Companies like Facebook and Google are working fast to use AI. Just yesterday, Facebook announced it is open-sourcing new computing hardware, known as “Big Sur,” that doubles the power and efficiency of computers currently available for AI research. Facebook has also recently talked about using AI to help its blind users, as well as to make broad tasks easier on the giant social networking service. Google, according to Recode, has also put significant efforts into AI research and development, but has been somewhat less willing to give away the fruits of its labor.

Altman said he imagines that OpenAI will work with both of those companies, as well as any others interested in AI. “One of the nice things about our structure is that because there is no fiduciary duty,” he said, “we can collaborate with anyone.”

For now, there are no specific collaborations in the works, Altman added, though he expects that to change quickly now that OpenAI has been announced.

Ultimately, while many companies are working on artificial intelligence as part of for-profit projects, Altman said he thinks OpenAI’s mission—and funding—shouldn’t threaten anyone. “I would be very concerned if they didn’t like our mission,” he said. “We’re just trying to create new knowledge and give it to the world.”

ORIGINAL: FastCompany
By Daniel Terdiman

OpenAI’s research director is Ilya Sutskever, one of the world experts in machine learning. Our CTO is Greg Brockman, formerly the CTO of Stripe. The group’s other founding members are world-class research engineers and scientists: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group. OpenAI’s co-chairs are Sam Altman and Elon Musk. Sam, Greg, Elon, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research are donating to support OpenAI. In total, these funders have committed $1 billion, although we expect to only spend a tiny fraction of this in the next few years.

You can follow us on Twitter at @open_ai or email us at [email protected].

Scaling up synthetic-biology innovation

By Hugo Angel,

Gen9’s BioFab platform synthesizes small DNA fragments on silicon chips and uses other technologies to build longer DNA constructs from those fragments. Done in parallel, this produces hundreds to thousands of DNA constructs simultaneously. Shown here is an automated liquid-handling instrument that dispenses DNA onto the chips. Courtesy of Gen9
MIT professor’s startup makes synthesizing genes many times more cost effective.
Inside and outside of the classroom, MIT professor Joseph Jacobson has become a prominent figure in — and advocate for — the emerging field of synthetic biology.

As head of the Molecular Machines group at the MIT Media Lab, Jacobson has focused on, among other things, developing technologies for the rapid fabrication of DNA molecules. In 2009, he spun out some of his work into Gen9, which aims to boost synthetic-biology innovation by offering scientists more cost-effective tools and resources.
Headquartered in Cambridge, Massachusetts, Gen9 has developed a method for synthesizing DNA on silicon chips, which significantly cuts costs and accelerates the creation and testing of genes. Commercially available since 2013, the platform is now being used by dozens of scientists and commercial firms worldwide.
Synthetic biologists synthesize genes by combining strands of DNA. These new genes can be inserted into microorganisms such as yeast and bacteria. Using this approach, scientists can tinker with the cells’ metabolic pathways, enabling the microbes to perform new functions, including testing new antibodies, sensing chemicals in an environment, or creating biofuels.

But conventional gene-synthesizing methods can be time-consuming and costly. Chemical-based processes, for instance, cost roughly 20 cents per base pair — DNA’s key building block — and produce one strand of DNA at a time. This adds up in time and money when synthesizing genes comprising 100,000 base pairs.

Gen9’s chip-based DNA, however, drops the price to roughly 2 cents per base pair, Jacobson says. Additionally, hundreds of thousands of base pairs can be tested and compiled in parallel, as opposed to testing and compiling each pair individually through conventional methods.
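A quick back-of-envelope calculation with the per-base prices quoted above shows what that factor of ten means for a 100,000-base-pair construct (the figures are rough, as in the article):

```python
# Back-of-envelope arithmetic using the approximate per-base-pair prices above.
gene_length_bp = 100_000

conventional = gene_length_bp * 0.20   # ~20 cents per base pair
chip_based   = gene_length_bp * 0.02   # ~2 cents per base pair

print(f"conventional synthesis: ${conventional:,.0f}")  # $20,000
print(f"chip-based synthesis:   ${chip_based:,.0f}")    # $2,000
```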

This means faster testing and development of new pathways — which usually takes many years — for applications such as advanced therapeutics, and more effective enzymes for detergents, food processing, and biofuels, Jacobson says. “If you can build thousands of pathways on a chip in parallel, and can test them all at once, you get to a working metabolic pathway much faster,” he says.

Over the years, Jacobson and Gen9 have earned many awards and honors. In November, Jacobson was also inducted into the National Inventors Hall of Fame for co-inventing E Ink, the electronic ink used for Amazon’s Kindle e-reader display.

Scaling gene synthesizing
Throughout the early and mid-2000s, a few important pieces of research came together to allow for the scaling up of gene synthesis, which ultimately led to Gen9.

First, Jacobson and his students Chris Emig and Brian Chow began developing chips with thousands of “spots,” which each contained about 100 million copies of a different DNA sequence.

Then, Jacobson and another student, David Kong, created a process that used a certain enzyme as a catalyst to assemble those small DNA fragments into larger DNA strands inside microfluidics devices — “which was the first microfluidics assembly of DNA ever,” Jacobson says.

Despite the novelty, however, the process still wasn’t entirely cost effective. On average, it produced a 99 percent yield, meaning that about 1 percent of the base pairs didn’t match when constructing larger strands. That’s not so bad for making genes with 100 base pairs. “But if you want to make something that’s 10,000 or 100,000 bases long, that’s no good anymore,” Jacobson says.

Around 2004, Jacobson and then-postdoc Peter Carr, along with several other students, found a way to drastically increase yields by taking a cue from a natural error-correcting protein, Mut-S, which recognizes mismatches in DNA base pairing that occur when two DNA strands form a double helix. For synthetic DNA, the protein can detect and extract mismatches arising in base pairs synthesized on the chip, improving yields. In a paper published that year in Nucleic Acids Research, the researchers wrote that this process reduces the frequency of errors, from one in every 100 base pairs to around one in every 10,000.
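A rough calculation shows why that improvement matters so much for long constructs. Assuming errors at each base are independent, the fraction of error-free strands falls off exponentially with length:

```python
# Rough illustration of how the per-base error rate limits construct length,
# assuming errors at each base are independent.
for n in (100, 1_000, 10_000):
    p_ok_before = 0.99 ** n      # ~1 error per 100 bases
    p_ok_after  = 0.9999 ** n    # ~1 error per 10,000 bases after error correction
    print(f"{n:>6} bases: error-free fraction {p_ok_before:.1%} -> {p_ok_after:.1%}")
```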

With these innovations, Jacobson launched Gen9 with two co-founders: George Church of Harvard University, who was also working on synthesizing DNA on microchips, and Drew Endy of Stanford University, a world leader in synthetic-biology innovations.

Together with employees, they created a platform called BioFab and several other tools for synthetic biologists. Today, clients use an online portal to order gene sequences. Then Gen9 designs and fabricates those sequences on chips and delivers them to customers. Recently, the startup updated the portal to allow drag-and-drop capabilities and options for editing and storing gene sequences.

This allows users to “make these very extensive libraries that have been inaccessible previously,” Jacobson says.


Fueling big ideas

Many published studies have already used Gen9’s tools, several of which are posted to the startup’s website. Notable ones, Jacobson says, include designing proteins for therapeutics. In those cases, the researcher needs to make 10 million or 100 million versions of a protein, each comprising maybe 50,000 pieces of DNA, to see which ones work best.

Instead of making and testing DNA sequences one at a time with conventional methods, Gen9 lets researchers test hundreds of thousands of sequences at once on a chip. This should increase the chances of finding the right protein more quickly. “If you just have one shot you’re very unlikely to hit the target,” Jacobson says. “If you have thousands or tens of thousands of shots on a goal, you have a much better chance of success.”


Currently, all the world’s synthetic-biology methods produce only about 300 million bases per year. About 10 of the chips Gen9 uses to make DNA can hold the same amount of content, Jacobson says. In principle, he says, the platform used to make Gen9’s chips — based on collaboration with manufacturing firm Agilent — could produce enough chips to cover about 200 billion bases. This is about the equivalent capacity of GenBank, an open-access database of DNA bases and gene sequences that has been constantly updated since the 1980s.

Such technology could soon be worth a pretty penny: According to a study published in November by MarketsandMarkets, a major marketing research firm, the market for synthesizing short DNA strands is expected to reach roughly $1.9 billion by 2020.

Still, Gen9 is pushing to drop costs for synthesis to under 1 cent per base pair, Jacobson says. Additionally, for the past few years, the startup has hosted an annual G-Prize Competition, which awards 1 million base pairs of DNA to researchers with creative synthetic-biology ideas. That’s a prize worth roughly $100,000.

The aim, Jacobson says, is to remove cost barriers for synthetic biologists to boost innovation. “People have lots of ideas but are unable to try out those ideas because of cost,” he says. “This encourages people to think about bigger and bigger ideas.”

ORIGINAL: MIT News

Rob Matheson | MIT News Office
December 10, 2015

Facebook Joins Stampede of Tech Giants Giving Away Artificial Intelligence Technology

By Hugo Angel,

Leading computing companies are helping both themselves and others by open-sourcing AI tools.
Facebook designed this server to put new power behind the simulated neurons that enable software to do smart things like recognize speech or the content of photos.
Facebook is releasing for free the designs of a powerful new computer server it crafted to put more power behind artificial-intelligence software. Serkan Piantino, an engineering director in Facebook’s AI Research group, says the new servers are twice as fast as those Facebook used before. “We will discover more things in machine learning and AI as a result,” he says.

The social network’s giveaway is the latest in a recent flurry of announcements by tech giants that are open-sourcing artificial-intelligence technology, which is becoming vital to consumer and business-computing services. Opening up the technology is seen as a way to accelerate progress in the broader field, while also helping tech companies to boost their reputations and make key hires.

In November, Google opened up software called TensorFlow, used to power the company’s speech recognition and image search (see “Here’s What Developers Are Doing with Google’s AI Brain”). Just three days later Microsoft released software that distributes machine-learning software across multiple machines to make it more powerful. Not long after, IBM announced the fruition of an earlier promise to open-source SystemML, originally developed to use machine learning to find useful patterns in corporate databanks.

Facebook’s new server design, dubbed Big Sur, was created to power deep-learning software, which processes data using roughly simulated neurons (see “Teaching Computers to Understand Us”). The invention of ways to put more power behind deep learning, using graphics processors, or GPUs, was crucial to recent leaps in the ability of computers to understand speech, images, and language. Facebook worked closely with Nvidia, a leading manufacturer of GPUs, on its new server designs, which have been stripped down to cram in more of the chips. The hardware can be used to run Google’s TensorFlow software.

Yann LeCun, director of Facebook’s AI Research group, says that one reason to open up the Big Sur designs is that the social network is well placed to slurp up any new ideas it can unlock. “Companies like us actually thrive on fast progress; the faster the progress can be made, the better it is for us,” says LeCun. Facebook open-sourced deep-learning software of its own in February of this year.

LeCun says that opening up Facebook’s technology also helps attract leading talent. A company can benefit by being seen as benevolent, and also by encouraging people to become familiar with a particular way of working and thinking. As Google, Facebook, and other companies have increased their investments in artificial intelligence, competition to hire experts in the technology has intensified (see “Is Google Cornering the Market in Deep Learning?”).

Derek Schoettle, general manager of IBM’s Cloud Data Services unit, which offers tools to help companies analyze data, says that machine-learning technology has to be opened up for it to become widespread. Open-source projects have played a major role in establishing large-scale databases and data analysis as the bedrock of modern computing companies large and small, he says. Real value tends to lie in what companies can do with the tools, not the tools themselves.

“What’s going to be interesting and valuable is the data that’s moving in that system and the ways people can find value in that data,” he says. Late last month, IBM transferred its SystemML machine-learning software, designed around techniques other than deep learning, to the Apache Software Foundation, which supports several major open-source projects.

Facebook’s Big Sur server design will be submitted to the Open Compute Project, a group started by the social network through which companies including Apple and Microsoft share designs of computing infrastructure to drive down costs (see “Inside Facebook’s Not-So-Secret New Data Center”).

ORIGINAL: Technology Review
By Tom Simonite
December 10, 2015

Quantum Computing

By Hugo Angel,

Image credit: Yuri Samoilov on Flickr
Scientists are exploiting the laws of quantum mechanics to create computers with an exponential increase in computing power.
Quantum computing
Since their creation in the 1950s and 1960s, digital computers have become a mainstay of modern life. Originally taking up entire rooms and taking many hours to perform simple calculations, they have become both highly portable and extremely powerful. Computers can now be found in many people’s pockets, on their desks, in their watches, their televisions and their cars. Our demand for processing power continues to increase as more people connect to the internet and the integration of computing into our lives increases.
Video source: In a nutshell – Kurzgesagt / YouTube.
When Moore’s Law meets quantum mechanics
In 1965, Gordon Moore, co-founder of Intel, one of the world’s largest computer companies, first described what has now become known as Moore’s Law. An observation rather than a physical law, Moore noticed that the number of components that could fit on a computer chip doubled roughly every two years, and this observation has proven to hold true over the decades. Accordingly, the processing power and memory capacity of computers has doubled every two years as well.
Starting from computer chips that held a few thousand components in the 1960s, chips today hold several billion components. There is a physical limit to how small these components can get, and as they get near the size of an atom, the quirky rules that govern quantum mechanics come into play. These rules that govern the quantum world are so different from those of the macro world that our traditional understanding of binary logic in a computer doesn’t really work effectively any more. Quantum laws are based on probabilities, so a computer on this scale no longer works in a ‘deterministic’ manner, meaning one that gives us a definite answer. Rather, it starts to behave in a ‘probabilistic’ way—the answer the computer would give us is based on probabilities; each result could fluctuate, and we would have to try several times to get a reliable answer.
So if we want to keep increasing computer power, we are going to have to find a new way. Instead of being stymied or trying to avoid the peculiarities of quantum mechanics, we must find ways to exploit them.
Source: TEDx Talks on YouTube.
Bits and qubits
In the computer that sits on your desk, your smartphone, or the biggest supercomputer in the world, information, be it text, pictures or sound, is stored very simply as a number. The computer does its job by performing arithmetic calculations upon all these numbers. For example, every pixel in a photo is assigned numbers that represent its colour or brightness, numbers that can then be used in calculations to change or alter the image.
The computer saves these numbers in binary form instead of the decimal form that we use every day. In binary, there are only two numbers: 0 and 1. In a computer, these are known as ‘bits’, short for ‘binary digits’. Every piece of information in your computer is stored as a string of these 0s and 1s. As there are only two options, the 1 or the 0, it’s easy to store these using a number of different methods—for example, as magnetic dots on a hard drive, where the bit is either magnetised one way (1) or another (0), or where the bit has a tiny amount of electrical charge (1) or no charge (0). These combinations of 0s and 1s can represent almost anything, including letters, sounds and commands that tell the computer what to do.
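A tiny example of that encoding: any character or number can be written out as the string of 0s and 1s a computer actually stores.

```python
# The letter "A" and the number 202 written out as 8-bit binary strings.
print(format(ord("A"), "08b"))  # 01000001 -- ASCII code 65 in binary
print(format(202, "08b"))       # 11001010
```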
Instead of binary bits, a quantum computer uses qubits. These are particles, such as an atom, ion or photon, where the information is stored by manipulating the particles’ quantum properties, such as spin or polarisation states.
In a normal computer the many steps of a calculation are carried out one after the other. Even if the computer might work on several calculations in parallel, each calculation has to be done one step at a time. A quantum computer works differently. The qubits are programmed with a complex set of conditions, which formulates the question, and these conditions then evolve following the rules of the quantum world—Schrödinger’s wave equation—to find the answer. Each programmed qubit evolves simultaneously; all the steps of the calculation are taken at the same time. Mathematicians have found that this approach can solve a number of computational tasks that are very hard or time consuming on a classical computer. The speed advantage is enormous—and grows with the complexity we can program (i.e. the number of qubits the quantum computer has).
Superposition
Individually, each qubit has its own quantum properties, such as spin. This has two values, +1 and -1, but can also be in what’s called a superposition: partly +1 and partly -1. If you think of a globe, you can point to the North Pole (+1) or the South Pole (-1) or any other point in between: London, or Sydney. A quantum particle can be in a state that is part North Pole and part South Pole.
A qubit with superposition is in a much more complex state than the simple 1 or 0 of a binary bit. More parameters are required to describe that state, and this translates to the amount of information a qubit can hold and process.
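A minimal NumPy sketch of that difference: a classical bit is simply 0 or 1, while a qubit state is described by two complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1 (here, an equal superposition).

```python
# A classical bit versus a qubit state, sketched with NumPy.
import numpy as np

bit = 1                                  # a classical bit is just 0 or 1

qubit = np.array([1, 1j]) / np.sqrt(2)   # equal superposition of |0> and |1>
probs = np.abs(qubit) ** 2               # measurement probabilities

print(probs)        # [0.5 0.5]
print(probs.sum())  # 1.0 -- the state is normalised
```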
Entanglement
Even more interesting is the fact that we can link many particles, each in their state of superposition, together. We can create a link, called entanglement, where all of these particles are dependent upon each other; all their properties exist at the same time. All the particles together are in one big state that evolves, according to the rules of quantum mechanics, as a single system. This is what gives quantum computers their power of parallel processing—the qubits all evolve, individual yet linked, simultaneously.
Imagine the complexity of all these combinations, all the superpositions. The number of parameters needed to fully describe N qubits grows as 2 to the power N. Basically, this means that for each qubit you add to the computer, the information required to describe the assembly of qubits doubles. Just 50 qubits would require more than a billion numbers to describe their collective states or contents. This is where the supreme power of a quantum computer lies, since the evolution in time of these qubits corresponds to a bigger calculation, without costing more time.
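Working the arithmetic out directly makes the scaling vivid: describing the joint state of N qubits takes on the order of 2 to the power N numbers.

```python
# Number of amplitudes needed to describe the joint state of N qubits grows as 2**N.
for n in (10, 30, 50, 300):
    print(f"{n:>3} qubits -> {2**n:.3e} amplitudes")
# 50 qubits already need about 1e15 numbers (well over a billion, as noted above),
# and 300 qubits need more numbers than there are atoms in the observable universe.
```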
For the particular tasks suited to quantum computers, a quantum computer with 30 qubits would be more powerful than the world’s most powerful supercomputer, and a 300 qubit quantum computer would be more powerful than every computer in the world connected together.
A delicate operation
An important feature of these quantum rules is that they are very sensitive to outside interference. The qubits must be kept completely isolated, so they are only being controlled by the laws of quantum mechanics, and not influenced by any environmental factors. Any disturbance to the qubits will cause them to leave their state of superposition—this is called decoherence. If the qubits decohere, the computation will break down. Creating a totally quiet, isolated environment is one of the great challenges of building a quantum computer.
Another challenge is transferring information from the quantum processor to some sort of quantum memory system that can preserve the information so that we can then read the answer. Researchers are working on developing ‘non-demolition’ readouts—ways to read the output of a computation without breaking the computation.
What are quantum computers useful for?
A lot of coverage of the applications of quantum computers talks about the huge gains in processing power over classical computers. Many statements have been made about being able to effortlessly solve hard problems instantaneously, but it’s not clear if all the promises will hold up. Rather than being able to solve all of the world’s financial, medical and scientific questions at the press of a button, it’s much more likely that, as with many major scientific projects, the knowledge gained from building the computers will prove just as valuable as their potential applications.
The nearest-term and most likely applications for quantum computers will be within quantum mechanics itself. Quantum computers will provide a useful new way of simulating and testing the workings of quantum theory, with implications for chemistry, biochemistry, nanotechnology and drug design. Search engine optimisation for internet searches, management of other types of big data, and the optimisation of other systems, such as fleet routing and manufacturing processes, could also be impacted by quantum computing.
Another area where large scale quantum computers are predicted to have a big impact is that of data security. In a world where so much of our personal information is online, keeping our data—bank details or our medical records—secure is crucial. To keep it safe, our data is protected by encryption algorithms that the recipient needs to ‘unlock’ with a key. Prime number factoring is one method used to create encryption algorithms. The key is based on knowing the prime number factors of a large number. This sounds pretty basic, but it’s actually very difficult to figure out what the prime number factors of a large number are.
Classical computers can very easily multiply two prime numbers to find their product. But their only option when performing the operation in reverse is a repetitive process of checking one number after another. Even performing billions of calculations per second, this can take an extremely long time when the numbers get especially large. Once numbers reach over 1000 digits, figuring out their prime number factors is generally considered to take too long for a classical computer to calculate—the data encryption is ‘uncrackable’ and our data is kept safe and sound.
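The brute-force checking described here is just trial division: try one candidate factor after another. The sketch below shows it succeeding quickly on a small semiprime (the example numbers are arbitrary); it is exactly this one-candidate-at-a-time search that becomes hopeless for thousand-digit numbers.

```python
# Trial division: the one-number-after-another checking described above.
def trial_division(n: int):
    """Return the smallest nontrivial factor of n, or None if n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None

# Product of the 10,000th and 100,000th primes -- easy for trial division,
# but the same approach is hopeless once the factors have hundreds of digits.
semiprime = 104_729 * 1_299_709
print(trial_division(semiprime))  # 104729, found after ~100,000 candidate checks
```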
However, the superposed qubits of quantum computers change everything. In 1994, mathematician Peter Shor came up with an algorithm that would enable quantum computers to find the prime factors of large numbers significantly faster than by classical methods. As quantum computing advances we may need to change the way we secure our data so that quantum computers can’t access it.
Beyond these applications that we can foretell, there will undoubtedly be many new applications appearing as the technology develops. With classical computers, it was impossible to predict the advances of the internet, voice recognition and touch interfaces that are today so commonplace. Similarly, the most important breakthroughs to come from quantum computing are likely still unknown.
Quantum computer research in Australia
There are several centres of quantum computing research in Australia, working all over the country on a wide range of different problems. The Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology (CQC2T) and the Centre of Excellence for Engineered Quantum Systems (EQuS) are both at the forefront of research in this field.
Two teams from the University of NSW (UNSW) are using silicon to create extremely coherent qubits in new ways, which opens the door to creating quantum computers using easy to manufacture components. One team is focussing on using silicon transistors like those in our laptops and smartphones.
The other UNSW-led team is working to create qubits from phosphorus atoms embedded in silicon. In 2012, this team created atom-sized components ten years ahead of schedule by making the world’s smallest transistor.
Source: UNSWTV on YouTube.
They placed a single phosphorus atom on a sheet of silicon with all the necessary atomic-sized components needed to apply a voltage to the phosphorus atom, setting its spin state so that it can function as a qubit.

The nuclear spins of single phosphorus atoms have been shown to have
  • the highest fidelity (>99%) and
  • the longest coherence time (>35 seconds)
of any qubit in the solid state, making them extremely attractive for a scalable system.

A team at the University of Queensland is working to develop quantum computing techniques using single photons as qubits. In 2010, this team also conducted the first quantum chemistry simulation. This sort of task involves computing the complex quantum interactions between electrons and requires such complicated equations that performing the calculations with a classical computer necessarily requires a trade-off between accuracy and computational feasibility. Qubits, being in a quantum state themselves, are much more capable of representing these systems, and so offer great potential to the field of quantum chemistry.
This group has also performed a demonstration of a quantum device using photons capable of performing a task that is factorially difficult – i.e. one of the specific tasks that classical computers get stuck with.
Large sums of money are being invested into Australian quantum computing research. In 2014, the Commonwealth Bank made an investment of $5 million towards the Centre of Excellence for Quantum Computation and Communication Technology at the University of New South Wales. Microsoft has invested more than $10M in engineered quantum systems at the University of Sydney, also in 2014.
It’s not very likely that in 20 years we’ll all be walking around with quantum devices in our pockets. Most likely, the first quantum computers will be servers that people will access to undertake complex calculations. However, it is not easy to predict the future: who would have thought, fifty years ago, that we would enjoy the power and functionality of today’s computers, like the smartphones that so many of us now depend upon? Who can tell what technology will be at our beck and call if the power of quantum mechanics can be harnessed?
ORIGINAL: NOVA.org.au

Here’s What Developers Are Doing with Google’s AI Brain

By Hugo Angel,

Researchers outside Google are testing the software that the company uses to add artificial intelligence to many of its products.
WHY IT MATTERS
Tech companies are racing to set the standard for machine learning, and to attract technical talent.
Jeff Dean speaks at a Google event in 2007. Credit: Photo by Niall Kennedy / CC BY-NC 2.0
An artificial intelligence engine that Google uses in many of its products, and that it made freely available last month, is now being used by others to perform some neat tricks, including 
  • translating English into Chinese, 
  • reading handwritten text, and 
  • even generating original artwork.
The AI software, called TensorFlow, provides a straightforward way for users to train computers to perform tasks by feeding them large amounts of data. The software incorporates various methods for efficiently building and training simulated “deep learning” neural networks across different computer hardware.
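As a rough illustration of that workflow, the sketch below defines and trains a simple softmax classifier in the style of the graph-based TensorFlow 1.x API of the era (current releases favour the Keras interface). It is a minimal sketch, not Google’s own example code, and the batch-feeding line is left as a comment because no dataset is loaded here.

    import tensorflow as tf

    # Placeholders for the data fed in at training time: 784-pixel images, 10 classes.
    x = tf.placeholder(tf.float32, [None, 784])
    y_true = tf.placeholder(tf.float32, [None, 10])

    # Model parameters, tuned as the network is trained.
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y_pred = tf.nn.softmax(tf.matmul(x, W) + b)

    # Cross-entropy loss and a plain gradient-descent training step.
    loss = -tf.reduce_mean(tf.reduce_sum(y_true * tf.log(y_pred), axis=1))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # sess.run(train_step, feed_dict={x: batch_images, y_true: batch_labels})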
Deep learning is an extremely effective technique for training computers to recognize patterns in images or audio, enabling machines to perform useful tasks, such as recognizing faces or objects in images, with human-like competence. Recently, deep learning has also shown significant promise for parsing natural language, enabling machines to respond to spoken or written queries in meaningful ways.
Speaking at the Neural Information Processing Systems (NIPS) conference in Montreal this week, Jeff Dean, the computer scientist at Google who leads the TensorFlow effort, said that the software is being used for a growing number of experimental projects outside the company.
These include software that generates captions for images and code that translates the documentation for TensorFlow into Chinese. Another project uses TensorFlow to generate artificial artwork. “It’s still pretty early,” Dean said after the talk. “People are trying to understand what it’s best at.”
TensorFlow grew out of a project at Google, called Google Brain, aimed at applying various kinds of neural network machine learning to products and services across the company. The reach of Google Brain has grown dramatically in recent years. Dean said that the number of projects at Google that involve Google Brain has grown from a handful in early 2014 to more than 600 today.
Most recently, Google Brain helped develop Smart Reply, a system that automatically recommends a quick response to messages in Gmail after scanning the text of an incoming message. The neural network technique used to develop Smart Reply was presented by Google researchers at the NIPS conference last year.
Dean expects deep learning and machine learning to have a similar impact on many other companies. “There is a vast array of ways in which machine learning is influencing lots of different products and industries,” he said. For example, the technique is being tested in many industries that try to make predictions from large amounts of data, ranging from retail to insurance.
Google was able to give away the code for TensorFlow because the data it owns is a far more valuable asset for building a powerful AI engine. The company hopes that the open-source code will help it establish itself as a leader in machine learning and foster relationships with collaborators and future employees. TensorFlow “gives us a common language to speak, in some sense,” Dean said. “We get benefits from having people we hire who have been using TensorFlow. It’s not like it’s completely altruistic.”
A neural network consists of layers of virtual neurons that fire in a cascade in response to input. A network “learns” as the sensitivity of these neurons is tuned to match particular input and output, and having many layers makes it possible to recognize more abstract features, such as a face in a photograph.
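That cascade can be sketched in a few lines of NumPy (a toy illustration written for this article, unrelated to TensorFlow itself): each layer multiplies its input by a weight matrix and applies a non-linearity before passing the result on, and “learning” amounts to tuning those weights.

    # Toy forward pass: an input flows through two layers of "virtual neurons".
    import numpy as np

    rng = np.random.default_rng(1)
    W1 = rng.normal(size=(784, 128))          # weights of layer 1 (untrained)
    W2 = rng.normal(size=(128, 10))           # weights of layer 2 (untrained)

    def forward(pixels):
        hidden = np.maximum(0, pixels @ W1)   # layer 1: neurons "fire" on low-level features
        return hidden @ W2                    # layer 2: combines them into 10 class scores

    print(forward(rng.random(784)).round(2))  # training would adjust W1 and W2 to fit data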
TensorFlow is now one of several open-source deep learning software libraries, and its performance currently lags behind some other libraries for certain tasks. However, it is designed to be easy to use, and it can easily be ported between different hardware. And Dean says his team is hard at work trying to improve its performance.
In the race to dominate machine learning and attract the best talent, however, other companies may release competing AI engines of their own.
December 8, 2015

Google says its quantum computer is more than 100 million times faster than a regular computer chip

By Hugo Angel,

Above: The D-Wave 2X quantum computer at NASA Ames Research Lab in Mountain View, California, on December 8.
Image Credit: Jordan Novet/VentureBeat
Google appears to be more confident about the technical capabilities of its D-Wave 2X quantum computer, which it operates alongside NASA at the U.S. space agency’s Ames Research Center in Mountain View, California.
D-Wave’s machines are the closest thing we have today to quantum computing, which works with quantum bits, or qubits — each of which can be zero, one, or both at once — instead of conventional bits. The superposition of these qubits enables such machines to carry out great numbers of computations simultaneously, making a quantum computer highly desirable for certain types of problems.
The Google-NASA Quantum Artificial Intelligence Lab today announced that, in two tests, it found the D-Wave machine to be considerably faster than simulated annealing — a classical optimization algorithm, run on a conventional chip, that serves as the quantum annealer’s classical counterpart.
Google director of engineering Hartmut Neven went over the results of the tests in a blog post today:
We found that for problem instances involving nearly 1,000 binary variables, quantum annealing significantly outperforms its classical counterpart, simulated annealing. It is more than 10^8 times faster than simulated annealing running on a single core. We also compared the quantum hardware to another algorithm called Quantum Monte Carlo. This is a method designed to emulate the behavior of quantum systems, but it runs on conventional processors. While the scaling with size between these two methods is comparable, they are again separated by a large factor sometimes as high as 10^8.
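Simulated annealing, the classical baseline Neven refers to, can itself be sketched in a few lines (a generic illustration, not Google’s benchmark code): the algorithm repeatedly flips bits of a candidate solution, always accepts improvements, and accepts worse moves with a probability that shrinks as a “temperature” parameter is lowered.

    # Minimal simulated annealing over binary variables.
    import math, random

    def simulated_annealing(cost, n_bits, steps=20000, t_start=2.0, t_end=0.01):
        state = [random.randint(0, 1) for _ in range(n_bits)]
        current = cost(state)
        best, best_cost = state[:], current
        for k in range(steps):
            t = t_start * (t_end / t_start) ** (k / steps)       # geometric cooling schedule
            i = random.randrange(n_bits)
            state[i] ^= 1                                        # propose: flip one bit
            candidate = cost(state)
            if candidate <= current or random.random() < math.exp((current - candidate) / t):
                current = candidate                              # accept the move
                if current < best_cost:
                    best, best_cost = state[:], current
            else:
                state[i] ^= 1                                    # reject: undo the flip
        return best, best_cost

    # Example: minimise a toy cost function over 20 binary variables.
    print(simulated_annealing(lambda s: sum(s), 20))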
Google has also published a paper on the findings.
If nothing else, this is a positive signal for venture-backed D-Wave, which has also sold quantum computers to Lockheed Martin and Los Alamos National Laboratory. At an event at NASA Ames today where reporters looked at the D-Wave machine, chief executive Vern Brownell sounded awfully pleased with the result. Without question, the number 100,000,000 is impressive. It’s certainly the kind of figure the startup can point to when it attempts to woo IT buyers and explain why its technology might well succeed in disrupting legacy chipmakers such as Intel.
But Google continues to work with NASA on quantum computing, and meanwhile Google also has its own quantum computing hardware lab. And in that initiative, Google is still in the early days.
“I would say building a quantum computer is really, really hard, so first of all, we’re just trying to get it to work and not worry about cost or size or whatever,” said John Martinis, the person heading up Google’s hardware program and a professor of physics at the University of California, Santa Barbara.
Commercial applications of this technology might not happen overnight, but it’s possible that they could eventually speed up things like image recognition, which is used inside many Google services. The tool could also come in handy for more traditional work, like cleaning up dirty data. Outside of Google, quantum speed-ups could translate into improvements in planning, scheduling and air traffic management, said David Bell, director of the Universities Space Research Association’s Research Institute for Advanced Computer Science, which also works on the D-Wave machine at NASA Ames.
ORIGINAL: VentureBeat
DECEMBER 8, 2015