|As part of its quest to eliminate traffic fatalities, Volvo will be the first automaker to deploy DRIVE PX 2.|
|NVIDIA Drive PX-2|
|Do it yourself, robot. (Reuters/Kim Kyung-Hoon)|
Researchers at Cornell University have taught robots to do just that with a system called RoboWatch. By watching and scanning multiple videos of the same “how-to” activity (with subtitles enabled), bots can learn the sequence of steps needed to carry out the task themselves.
An ion trap used in NIST quantum computing experiments. Credit: Blakestad/NIST
|Photo-illustration: Danqing Wang|
Although scientists have made great advances in machine learning in recent years, people remain much better at learning new concepts than machines.
“People can learn new concepts extremely quickly, from very little data, often from only one or a few examples. You show even a young child a horse, a school bus, a skateboard, and they can get it from one example,” says study co-author Joshua Tenenbaum at the Massachusetts Institute of Technology. In contrast, “standard algorithms in machine learning require tens, hundreds or even thousands of examples to perform similarly.”
To speed up machine learning, researchers sought to develop a model that better mimics human learning, which generalizes from very few examples of a concept. They focused on learning simple visual concepts: handwritten symbols from alphabets around the world.
“Our work has two goals: to better understand how people learn — to reverse engineer learning in the human mind — and to build machines that learn in more humanlike ways,” Tenenbaum says.
Whereas standard pattern recognition algorithms represent symbols as collections of pixels or arrangements of features, the new model the researchers developed represented each symbol as a simple computer program. For instance, the letter “A” is represented by a program that generates examples of that letter stroke by stroke when the program is run. No programmer is needed during the learning process — the model generates these programs itself.
Moreover, each program is designed to generate variations of each symbol whenever the programs are run, helping it capture the way instances of such concepts might vary, such as the differences between how two people draw a letter.
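To make the idea concrete, here is a minimal, hypothetical sketch in Python (not the authors’ actual BPL code): the concept “A” is stored as a tiny stroke-generating program, and each run of that program emits a slightly different instance of the letter.

```python
import random

# Hypothetical sketch of the core idea: a concept is stored not as pixels
# but as a small generative "program" -- here, a list of idealized pen
# strokes -- and running the program with random perturbations yields new,
# naturally varying examples.

# An idealized "A" as three strokes, each a pair of (x, y) endpoints.
LETTER_A_STROKES = [
    ((0.0, 0.0), (0.5, 1.0)),    # left diagonal
    ((0.5, 1.0), (1.0, 0.0)),    # right diagonal
    ((0.25, 0.5), (0.75, 0.5)),  # crossbar
]

def jitter(point, noise=0.05):
    """Perturb a point slightly, mimicking motor variability between writers."""
    x, y = point
    return (x + random.gauss(0, noise), y + random.gauss(0, noise))

def generate_example(strokes, noise=0.05):
    """Run the 'program': emit one new handwritten-style instance of the letter."""
    return [(jitter(start, noise), jitter(end, noise)) for start, end in strokes]

# Each call produces a different but recognizable variant of the same concept.
for i in range(3):
    print(f"example {i}: {generate_example(LETTER_A_STROKES)}")
```

Because every run varies, a single stored program can stand in for the many ways different people might draw the same character.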
“The idea for this algorithm came from a surprising finding we had while collecting a data set of handwritten characters from around the world. We found that if you ask a handful of people to draw a novel character, there is remarkable consistency in the way people draw,” says study lead author Brenden Lake at New York University. “When people learn or use or interact with these novel concepts, they do not just see characters as static visual objects. Instead, people see richer structure — something like a causal model, or a sequence of pen strokes — that describe how to efficiently produce new examples of the concept.”
The model also applies knowledge from previously learned concepts to pick up new ones faster. For instance, the model can use knowledge gained from the Latin alphabet to learn the Greek alphabet. The researchers call their model the Bayesian program learning (BPL) framework.
The researchers applied their model to more than 1,600 types of handwritten characters in 50 writing systems, including Sanskrit, Tibetan, Gujarati, Glagolitic, and even invented characters such as those from the animated series Futurama and the online game Dark Horizon. In a kind of Turing test, scientists found that volunteers recruited via Amazon’s Mechanical Turk had difficulty distinguishing machine-written characters from human-written ones.
The scientists also had their model focus on creative tasks. They asked their system to create whole new concepts — for instance, creating a new Tibetan letter based on what it knew about letters in the Tibetan alphabet. The researchers found human volunteers rated machine-written characters on par with ones developed by humans recruited for the same task.
“We got human-level performance on this creative task,” says study co-author Ruslan Salakhutdinov at the University of Toronto.
Potential applications for this model could include
ORIGINAL: IEEE Spectrum
Led by an all-star team of Silicon Valley’s best and brightest, OpenAI already has $1 billion in funding.
Silicon Valley is in the midst of an artificial intelligence war, as giants like Facebook and Google attempt to outdo each other by deploying machine learning and AI to automate services. But a brand-new organization called OpenAI—helmed by Elon Musk and a posse of prominent techies—aims to use AI to “benefit humanity,” without worrying about profit.
Musk, the CEO of SpaceX and Tesla, took to Twitter to announce OpenAI on Friday afternoon.
The organization, the formation of which has been in discussions for quite a while, came together in earnest over the last couple of months, co-chair and Y Combinator CEO Sam Altman told Fast Company. It is launching with $1 billion in funding from the likes of Altman, Musk, LinkedIn founder Reid Hoffman, and Palantir chairman Peter Thiel. In an introductory blog post, the OpenAI team said “we expect to only spend a tiny fraction of this in the next few years.”
Noting that it’s not yet clear what it will accomplish, OpenAI explains that its nonprofit status should afford it more flexibility. “Since our research is free from financial obligations, we can better focus on a positive human impact,” the blog post reads. “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.”
The organization features an all-star group of leaders: Musk and Altman are co-chairs, while Google research scientist Ilya Sutskever is research director and Greg Brockman is CTO, a role he formerly held at payments company Stripe.
For nearly everyone involved in OpenAI, the project will be full-time work, Altman explained. For Altman himself, it will be a “major commitment,” while Musk is expected to “come in every week, every other week, something like that.”
Altman explained that everything OpenAI works on—including any intellectual property it creates—will be made public. The one exception, he said, is if it could pose a risk. “Generally speaking,” Altman told Fast Company, “we’ll make all our patents available to the world.”
Companies like Facebook and Google are working fast to use AI. Just yesterday, Facebook announced it is open-sourcing new computing hardware, known as “Big Sur,” that doubles the power and efficiency of computers currently available for AI research. Facebook has also recently talked about using AI to help its blind users, as well as to make broad tasks easier on the giant social networking service. Google, according to Recode, has also put significant efforts into AI research and development, but has been somewhat less willing to give away the fruits of its labor.
Altman said he imagines that OpenAI will work with both of those companies, as well as any others interested in AI. “One of the nice things about our structure is that because there is no fiduciary duty,” he said, “we can collaborate with anyone.”
For now, there are no specific collaborations in the works, Altman added, though he expects that to change quickly now that OpenAI has been announced.
Ultimately, while many companies are working on artificial intelligence as part of for-profit projects, Altman said he thinks OpenAI’s mission—and funding—shouldn’t threaten anyone. “I would be very concerned if they didn’t like our mission,” he said. “We’re just trying to create new knowledge and give it to the world.”
|Gen9’s BioFab platform synthesizes small DNA fragments on silicon chips and uses other technologies to build longer DNA constructs from those fragments. Done in parallel, this produces hundreds to thousands of DNA constructs simultaneously. Shown here is an automated liquid-handling instrument that dispenses DNA onto the chips. Courtesy of Gen9|
As head of the Molecular Machines group at the MIT Media Lab, Joseph Jacobson has focused on, among other things, developing technologies for the rapid fabrication of DNA molecules. In 2009, he spun out some of his work into Gen9, which aims to boost synthetic-biology innovation by offering scientists more cost-effective tools and resources.
Headquartered in Cambridge, Massachusetts, Gen9 has developed a method for synthesizing DNA on silicon chips, which significantly cuts costs and accelerates the creation and testing of genes. Commercially available since 2013, the platform is now being used by dozens of scientists and commercial firms worldwide.
Synthetic biologists synthesize genes by combining strands of DNA. These new genes can be inserted into microorganisms such as yeast and bacteria. Using this approach, scientists can tinker with the cells’ metabolic pathways, enabling the microbes to perform new functions, including testing new antibodies, sensing chemicals in an environment, or creating biofuels.
But conventional gene-synthesizing methods can be time-consuming and costly. Chemical-based processes, for instance, cost roughly 20 cents per base pair — DNA’s key building block — and produce one strand of DNA at a time. This adds up in time and money when synthesizing genes comprising 100,000 base pairs.
Gen9’s chip-based DNA, however, drops the price to roughly 2 cents per base pair, Jacobson says. Additionally, hundreds of thousands of base pairs can be tested and compiled in parallel, as opposed to testing and compiling each pair individually through conventional methods.
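A quick back-of-the-envelope calculation, using only the per-base figures quoted above, shows what that price drop means for the 100,000-base-pair gene in the earlier example:

```python
# Back-of-the-envelope cost comparison using the per-base figures quoted above.
GENE_LENGTH_BP = 100_000  # base pairs in the example gene

conventional_cost = GENE_LENGTH_BP * 0.20  # ~20 cents per base pair
chip_based_cost = GENE_LENGTH_BP * 0.02    # ~2 cents per base pair

print(f"conventional synthesis: ${conventional_cost:,.0f}")  # $20,000
print(f"chip-based synthesis:   ${chip_based_cost:,.0f}")    # $2,000
```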
This means faster testing and development of new pathways — which usually takes many years — for applications such as advanced therapeutics, and more effective enzymes for detergents, food processing, and biofuels, Jacobson says. “If you can build thousands of pathways on a chip in parallel, and can test them all at once, you get to a working metabolic pathway much faster,” he says.
Over the years, Jacobson and Gen9 have earned many awards and honors. In November, Jacobson was also inducted into the National Inventors Hall of Fame for co-inventing E Ink, the electronic ink used for Amazon’s Kindle e-reader display.
Scaling gene synthesizing
Throughout the early and mid-2000s, a few important pieces of research came together to allow for the scaling up of gene synthesis, which ultimately led to Gen9.
First, Jacobson and his students Chris Emig and Brian Chow began developing chips with thousands of “spots,” which each contained about 100 million copies of a different DNA sequence.
Then, Jacobson and another student, David Kong, created a process that used a certain enzyme as a catalyst to assemble those small DNA fragments into larger DNA strands inside microfluidics devices — “which was the first microfluidics assembly of DNA ever,” Jacobson says.
Despite the novelty, however, the process still wasn’t entirely cost effective. On average, it produced a 99 percent yield, meaning that about 1 percent of the base pairs didn’t match when constructing larger strands. That’s not so bad for making genes with 100 base pairs. “But if you want to make something that’s 10,000 or 100,000 bases long, that’s no good anymore,” Jacobson says.
Around 2004, Jacobson and then-postdoc Peter Carr, along with several other students, found a way to drastically increase yields by taking a cue from a natural error-correcting protein, MutS, which recognizes mismatches in DNA base pairing that occur when two DNA strands form a double helix. For synthetic DNA, the protein can detect and extract mismatches arising in base pairs synthesized on the chip, improving yields. In a paper published that year in Nucleic Acids Research, the researchers wrote that this process reduces the frequency of errors, from one in every 100 base pairs to around one in every 10,000.
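A simple calculation shows why a one-in-100 error rate is “no good anymore” at longer lengths. Under the simplifying assumption that per-base errors are independent, the fraction of error-free constructs falls off exponentially with length:

```python
def error_free_fraction(per_base_error, length_bp):
    """Probability that a construct of length_bp bases contains no errors,
    assuming independent per-base errors (a simplifying assumption)."""
    return (1 - per_base_error) ** length_bp

for length in (100, 10_000, 100_000):
    before = error_free_fraction(1 / 100, length)     # ~99% per-base yield
    after = error_free_fraction(1 / 10_000, length)   # after MutS-style correction
    print(f"{length:>7} bp: {before:.2e} error-free before, {after:.2e} after")
```

At 100 bases, about a third of constructs come out clean even at the old error rate; at 10,000 bases, essentially none do, while roughly a third survive at the corrected rate.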
With these innovations, Jacobson launched Gen9 with two co-founders: George Church of Harvard University, who was also working on synthesizing DNA on microchips, and Drew Endy of Stanford University, a world leader in synthetic-biology innovations.
Together with employees, they created a platform called BioFab and several other tools for synthetic biologists. Today, clients use an online portal to order gene sequences. Then Gen9 designs and fabricates those sequences on chips and delivers them to customers. Recently, the startup updated the portal to allow drag-and-drop capabilities and options for editing and storing gene sequences.
This allows users to “make these very extensive libraries that have been inaccessible previously,” Jacobson says.
Fueling big ideas
Many published studies have already used Gen9’s tools, several of which are posted to the startup’s website. Notable ones, Jacobson says, include designing proteins for therapeutics. In those cases, the researcher needs to make 10 million or 100 million versions of a protein, each comprising maybe 50,000 pieces of DNA, to see which ones work best.
Instead of making and testing DNA sequences one at a time with conventional methods, Gen9 lets researchers test hundreds of thousands of sequences at once on a chip. This should increase chances of finding the right protein, more quickly. “If you just have one shot you’re very unlikely to hit the target,” Jacobson says. “If you have thousands or tens of thousands of shots on a goal, you have a much better chance of success.”
Currently, all the world’s synthetic-biology methods produce only about 300 million bases per year. About 10 of the chips Gen9 uses to make DNA can hold the same amount of content, Jacobson says. In principle, he says, the platform used to make Gen9’s chips — based on collaboration with manufacturing firm Agilent — could produce enough chips to cover about 200 billion bases. This is about the equivalent capacity of GenBank, an open-access database of DNA bases and gene sequences that has been constantly updated since the 1980s.
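Taking those figures at face value, the implied per-chip capacity and the number of chips needed to reach GenBank scale follow from simple arithmetic (approximate, for illustration only):

```python
# Rough arithmetic from the figures quoted above; all values are approximate.
WORLD_ANNUAL_BASES = 300e6    # ~300 million bases synthesized per year, all methods
CHIPS_MATCHING_WORLD = 10     # ~10 Gen9 chips hold the same amount of content
GENBANK_SCALE_BASES = 200e9   # ~200 billion bases, roughly GenBank's capacity

per_chip_bases = WORLD_ANNUAL_BASES / CHIPS_MATCHING_WORLD  # ~30 million per chip
chips_for_genbank = GENBANK_SCALE_BASES / per_chip_bases    # ~6,700 chips

print(f"implied capacity per chip: {per_chip_bases:,.0f} bases")
print(f"chips to cover GenBank-scale content: {chips_for_genbank:,.0f}")
```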
Such technology could soon be worth a pretty penny: According to a study published in November by MarketsandMarkets, a major marketing research firm, the market for synthesizing short DNA strands is expected to reach roughly $1.9 billion by 2020.
Still, Gen9 is pushing to drop costs for synthesis to under 1 cent per base pair, Jacobson says. Additionally, for the past few years, the startup has hosted an annual G-Prize Competition, which awards 1 million base pairs of DNA to researchers with creative synthetic-biology ideas. That’s a prize worth roughly $100,000.
The aim, Jacobson says, is to remove cost barriers for synthetic biologists to boost innovation. “People have lots of ideas but are unable to try out those ideas because of cost,” he says. “This encourages people to think about bigger and bigger ideas.”
ORIGINAL: MIT News
|Facebook designed this server to put new power behind the simulated neurons that enable software to do smart things like recognize speech or the content of photos.|
The social network’s giveaway is the latest in a recent flurry of announcements by tech giants that are open-sourcing artificial-intelligence technology, which is becoming vital to consumer and business-computing services. Opening up the technology is seen as a way to accelerate progress in the broader field, while also helping tech companies to boost their reputations and make key hires.
In November, Google opened up software called TensorFlow, used to power the company’s speech recognition and image search (see “Here’s What Developers Are Doing with Google’s AI Brain”). Just three days later, Microsoft released software that distributes machine-learning software across multiple machines to make it more powerful. Not long after, IBM announced the fruition of an earlier promise to open-source SystemML, originally developed to use machine learning to find useful patterns in corporate databanks.
Facebook’s new server design, dubbed Big Sur, was created to power deep-learning software, which processes data using roughly simulated neurons (see “Teaching Computers to Understand Us”). The invention of ways to put more power behind deep learning, using graphics processors, or GPUs, was crucial to recent leaps in the ability of computers to understand speech, images, and language. Facebook worked closely with Nvidia, a leading manufacturer of GPUs, on its new server designs, which have been stripped down to cram in more of the chips. The hardware can be used to run Google’s TensorFlow software.
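Since Big Sur can run TensorFlow, a minimal sketch (assuming a current TensorFlow installation; this is illustrative, not Facebook’s own code) shows how such software targets GPUs like the ones packed into the server:

```python
# Minimal sketch of how TensorFlow code targets GPU hardware like Big Sur's.
# Assumes a current TensorFlow installation; illustrative only.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"GPUs visible to TensorFlow: {len(gpus)}")

# Pin a simple matrix multiply to the first GPU if one is present.
device = "/GPU:0" if gpus else "/CPU:0"
with tf.device(device):
    a = tf.random.normal([1024, 1024])
    b = tf.random.normal([1024, 1024])
    c = tf.matmul(a, b)  # the kind of dense math deep learning leans on
print(f"ran matmul on {device}, result shape: {c.shape}")
```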
Yann LeCun, director of Facebook’s AI Research group, says that one reason to open up the Big Sur designs is that the social network is well placed to slurp up any new ideas it can unlock. “Companies like us actually thrive on fast progress; the faster the progress can be made, the better it is for us,” says LeCun. Facebook open-sourced deep-learning software of its own in February of this year.
LeCun says that opening up Facebook’s technology also helps attract leading talent. A company can benefit by being seen as benevolent, and also by encouraging people to become familiar with a particular way of working and thinking. As Google, Facebook, and other companies have increased their investments in artificial intelligence, competition to hire experts in the technology has intensified (see “Is Google Cornering the Market in Deep Learning?”).
Derek Schoettle, general manager of IBM’s Cloud Data Services unit, which offers tools to help companies analyze data, says that machine-learning technology has to be opened up for it to become widespread. Open-source projects have played a major role in establishing large-scale databases and data analysis as the bedrock of modern computing for companies large and small, he says. Real value tends to lie in what companies can do with the tools, not the tools themselves.
“What’s going to be interesting and valuable is the data that’s moving in that system and the ways people can find value in that data,” he says. Late last month, IBM transferred its SystemML machine-learning software, designed around techniques other than deep learning, to the Apache Software Foundation, which supports several major open-source projects.
Facebook’s Big Sur server design will be submitted to the Open Compute Project, a group started by the social network through which companies including Apple and Microsoft share designs of computing infrastructure to drive down costs (see “Inside Facebook’s Not-So-Secret New Data Center”).
|Image credit: Yuri Samoilov on Flickr|
The nuclear spins of single phosphorus atoms have been shown to have the longest coherence times of any qubit in the solid state, making them extremely attractive for a scalable system.
|Jeff Dean speaks at a Google event in 2007. Credit: Photo by Niall Kennedy / CC BY-NC 2.0|
Above: The D-Wave 2X quantum computer at NASA Ames Research Lab in Mountain View, California, on December 8.
Image Credit: Jordan Novet/VentureBeat