Chef CTO Adam Jacob. Christie Hemm Klok/WIRED
In the ’70s, Alan Kay was a researcher at Xerox PARC, where he helped develop the notion of personal computing, the laptop, the now ubiquitous overlapping-window interface, and object-oriented programming.
Image Credit: Computer History Museum
Above: KnuEdge’s first chip has 256 cores.
Image Credit: KnuEdge
Above: Daniel Goldin, CEO of KnuEdge.
Image Credit: KnuEdge
,” he said. “It’s going to be too expensive. It’s not propulsion. It’s not environmental control. It’s not power. This software business is a very big problem, and that nation couldn’t afford it.”
Illustration by Sophia Foster-Dimino
Illustration by Edward C. Monaghan
To nerds of a certain bent, this all suggests a coming era in which we forfeit authority over our machines. “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand,” wrote Stephen Hawking—sentiments echoed by Elon Musk and Bill Gates, among others. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
Image Credit: Shutterstock.com
- traumatic brain injury, which haunts a disturbingly high number of veterans and football players;
- stroke or Alzheimer’s disease, which often plagues the elderly; or
- even normal brain aging, which inevitably touches us all.
- Space: Interplanetary and interstellar travel, including faster-than-light travel; missions and permanent settlements on the Moon, Mars and the asteroid belt; space elevators
- Transportation & Energy: Self-driving and electric vehicles; improved mass transit systems and intercontinental travel; flying cars and hoverboards; high-efficiency solar and other sustainable energy sources
- Medicine & Health: Neurological devices for memory augmentation, storage and transfer, and perhaps to read people’s thoughts; life extension, including virtual immortality via uploading brains into computers; artificial cells and organs; “Star Trek”-style tricorder for home diagnostics and treatment; wearable technology, such as exoskeletons and augmented-reality glasses and contact lenses
- Materials & Robotics: Ubiquitous nanotechnology, 3-D printing and robotics; invisibility and cloaking devices; energy shields; anti-gravity devices
- Cyber & Big Data: Improved artificial intelligence; optical and quantum computing; faster, more secure Internet; better use of data analytics to improve use of resources
- “Pizza delivery via teleportation”—DARPA took a close look at this a few years ago and decided there is plenty of incentive for the private sector to handle this challenge.
- “Time travel technology will be close, but will be closely guarded by the military as a matter of national security”—We already did this tomorrow.
- “Systems for controlling the weather”—Meteorologists told us it would be a job killer and we didn’t want to rain on their parade.
- “Space colonies…and unlimited cellular data plans that won’t be slowed by your carrier when you go over a limit”—We appreciate the idea that these are equally difficult, but they are not. We think likable cell-phone data plans are beyond even DARPA and a total non-starter.
Photo-illustration: Danqing Wang
The new model learned how to write invented symbols from the animated show Futurama as well as dozens of alphabets from across the world. It also showed it could invent symbols of its own in the style of a given language. The researchers suggest their model could also learn other kinds of concepts, such as speech and gestures.
Although scientists have made great advances in machine learning in recent years, people remain much better at learning new concepts than machines.
“People can learn new concepts extremely quickly, from very little data, often from only one or a few examples. You show even a young child a horse, a school bus, a skateboard, and they can get it from one example,” says study co-author Joshua Tenenbaum at the Massachusetts Institute of Technology. In contrast, “standard algorithms in machine learning require tens, hundreds or even thousands of examples to perform similarly.”
To shorten the machine-learning process, researchers sought to develop a model that better mimicked human learning, which makes generalizations from very few examples of a concept. They focused on learning simple visual concepts — handwritten symbols from alphabets around the world.
“Our work has two goals: to better understand how people learn — to reverse engineer learning in the human mind — and to build machines that learn in more humanlike ways,” Tenenbaum says.
Whereas standard pattern recognition algorithms represent symbols as collections of pixels or arrangements of features, the new model the researchers developed represented each symbol as a simple computer program. For instance, the letter “A” is represented by a program that generates examples of that letter stroke by stroke when the program is run. No programmer is needed during the learning process — the model generates these programs itself.
Moreover, each program is designed to generate variations of each symbol whenever the programs are run, helping it capture the way instances of such concepts might vary, such as the differences between how two people draw a letter.
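The idea of a concept-as-program can be made concrete with a small sketch. This is a hypothetical illustration, not the authors' actual BPL code: here a "concept" is a tiny program — a list of ideal strokes — and each run of the program emits a fresh, slightly perturbed example, mimicking the natural variation between two people's handwriting.

```python
import random

def make_symbol_program(strokes):
    """strokes: list of (x0, y0, x1, y1) line segments defining the ideal form."""
    def generate(jitter=0.05):
        example = []
        for (x0, y0, x1, y1) in strokes:
            # Perturb each endpoint so every run yields a new variant,
            # like two people drawing the same letter differently.
            example.append(tuple(c + random.uniform(-jitter, jitter)
                                 for c in (x0, y0, x1, y1)))
        return example
    return generate

# An "A" as three strokes: two slanted sides and a crossbar.
draw_A = make_symbol_program([(0.0, 0.0, 0.5, 1.0),
                              (0.5, 1.0, 1.0, 0.0),
                              (0.25, 0.5, 0.75, 0.5)])

variant1 = draw_A()
variant2 = draw_A()  # a different rendition of the same concept
```

Running `draw_A` twice produces two renditions of the same underlying concept, which is the property the article describes: the program, not any single image, is the representation.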
“The idea for this algorithm came from a surprising finding we had while collecting a data set of handwritten characters from around the world. We found that if you ask a handful of people to draw a novel character, there is remarkable consistency in the way people draw,” says study lead author Brenden Lake at New York University. “When people learn or use or interact with these novel concepts, they do not just see characters as static visual objects. Instead, people see richer structure — something like a causal model, or a sequence of pen strokes — that describe how to efficiently produce new examples of the concept.”
The model also applies knowledge from previous concepts to speed the learning of new ones. For instance, the model can use knowledge learned from the Latin alphabet to learn the Greek alphabet. The researchers call their model the Bayesian program learning (BPL) framework.
The researchers applied their model to more than 1,600 types of handwritten characters in 50 writing systems, including Sanskrit, Tibetan, Gujarati, Glagolitic, and even invented characters such as those from the animated series Futurama and the online game Dark Horizon. In a kind of Turing test, scientists found that volunteers recruited via Amazon’s Mechanical Turk had difficulty distinguishing machine-written characters from human-written ones.
The scientists also had their model focus on creative tasks. They asked their system to create whole new concepts — for instance, creating a new Tibetan letter based on what it knew about letters in the Tibetan alphabet. The researchers found human volunteers rated machine-written characters on par with ones developed by humans recruited for the same task.
“We got human-level performance on this creative task,” says study co-author Ruslan Salakhutdinov at the University of Toronto.
Potential applications for this model could include
- handwriting recognition,
- speech recognition,
- gesture recognition and
- object recognition.
ORIGINAL: IEEE Spectrum
Posted 10 Dec 2015 | 20:00 GMT
Inside and outside of the classroom, MIT professor Joseph Jacobson has become a prominent figure in — and advocate for — the emerging field of synthetic biology.
As head of the Molecular Machines group at the MIT Media Lab, Jacobson’s work has focused on, among other things, developing technologies for the rapid fabrication of DNA molecules. In 2009, he spun out some of his work into Gen9, which aims to boost synthetic-biology innovation by offering scientists more cost-effective tools and resources.
Headquartered in Cambridge, Massachusetts, Gen9 has developed a method for synthesizing DNA on silicon chips, which significantly cuts costs and accelerates the creation and testing of genes. Commercially available since 2013, the platform is now being used by dozens of scientists and commercial firms worldwide.
Synthetic biologists synthesize genes by combining strands of DNA. These new genes can be inserted into microorganisms such as yeast and bacteria. Using this approach, scientists can tinker with the cells’ metabolic pathways, enabling the microbes to perform new functions, including testing new antibodies, sensing chemicals in an environment, or creating biofuels.
But conventional gene-synthesizing methods can be time-consuming and costly. Chemical-based processes, for instance, cost roughly 20 cents per base pair — DNA’s key building block — and produce one strand of DNA at a time. This adds up in time and money when synthesizing genes comprising 100,000 base pairs.
Gen9’s chip-based DNA, however, drops the price to roughly 2 cents per base pair, Jacobson says. Additionally, hundreds of thousands of base pairs can be tested and compiled in parallel, as opposed to testing and compiling each pair individually through conventional methods.
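The cost gap is easy to quantify from the per-base-pair prices quoted above; a quick back-of-the-envelope calculation (illustrative only, using the article's rounded figures):

```python
gene_length = 100_000           # base pairs in the example gene
conventional_cents_per_bp = 20  # chemical synthesis: ~$0.20 per base pair
chip_cents_per_bp = 2           # Gen9 chip-based: ~$0.02 per base pair

conventional_dollars = gene_length * conventional_cents_per_bp / 100
chip_dollars = gene_length * chip_cents_per_bp / 100

print(conventional_dollars)  # 20000.0 -> $20,000 with chemical synthesis
print(chip_dollars)          # 2000.0  -> $2,000 on chips
```

At this scale the tenfold price difference amounts to roughly $18,000 per gene, before accounting for the parallel-testing savings described next.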
This means faster testing and development of new pathways — which usually takes many years — for applications such as advanced therapeutics, and more effective enzymes for detergents, food processing, and biofuels, Jacobson says. “If you can build thousands of pathways on a chip in parallel, and can test them all at once, you get to a working metabolic pathway much faster,” he says.
Over the years, Jacobson and Gen9 have earned many awards and honors. In November, Jacobson was also inducted into the National Inventors Hall of Fame for co-inventing E Ink, the electronic ink used for Amazon’s Kindle e-reader display.
Scaling gene synthesis

Throughout the early and mid-2000s, a few important pieces of research came together to allow for the scaling up of gene synthesis, which ultimately led to Gen9.
First, Jacobson and his students Chris Emig and Brian Chow began developing chips with thousands of “spots,” which each contained about 100 million copies of a different DNA sequence.
Then, Jacobson and another student, David Kong, created a process that used a certain enzyme as a catalyst to assemble those small DNA fragments into larger DNA strands inside microfluidics devices — “which was the first microfluidics assembly of DNA ever,” Jacobson says.
Despite the novelty, however, the process still wasn’t entirely cost effective. On average, it produced a 99 percent yield, meaning that about 1 percent of the base pairs didn’t match when constructing larger strands. That’s not so bad for making genes with 100 base pairs. “But if you want to make something that’s 10,000 or 100,000 bases long, that’s no good anymore,” Jacobson says.
Around 2004, Jacobson and then-postdoc Peter Carr, along with several other students, found a way to drastically increase yields by taking a cue from a natural error-correcting protein, Mut-S, which recognizes mismatches in DNA base pairing that occur when two DNA strands form a double helix. For synthetic DNA, the protein can detect and extract mismatches arising in base pairs synthesized on the chip, improving yields. In a paper published that year in Nucleic Acids Research, the researchers wrote that this process reduces the frequency of errors, from one in every 100 base pairs to around one in every 10,000.
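A short calculation shows why that error-rate improvement matters so much for long constructs. Assuming errors at each base are independent (a simplifying assumption for illustration, not a claim from the paper), the chance that a strand comes out entirely error-free falls off exponentially with length:

```python
def error_free_probability(n, p):
    """Probability a strand of n bases has no errors, given per-base error rate p."""
    return (1 - p) ** n

# Before error correction: ~1 error per 100 base pairs.
before = error_free_probability(10_000, 1 / 100)
# After Mut-S-based correction: ~1 error per 10,000 base pairs.
after = error_free_probability(10_000, 1 / 10_000)

print(before)  # vanishingly small: essentially no clean 10,000-base strand
print(after)   # ~0.37: long constructs become practical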
With these innovations, Jacobson launched Gen9 with two co-founders: George Church of Harvard University, who was also working on synthesizing DNA on microchips, and Drew Endy of Stanford University, a world leader in synthetic-biology innovations.
Together with employees, they created a platform called BioFab and several other tools for synthetic biologists. Today, clients use an online portal to order gene sequences. Then Gen9 designs and fabricates those sequences on chips and delivers them to customers. Recently, the startup updated the portal to allow drag-and-drop capabilities and options for editing and storing gene sequences.
This allows users to “make these very extensive libraries that have been inaccessible previously,” Jacobson says.
Fueling big ideas
Many published studies have already used Gen9’s tools, several of which are posted to the startup’s website. Notable ones, Jacobson says, include designing proteins for therapeutics. In those cases, the researcher needs to make 10 million or 100 million versions of a protein, each comprising maybe 50,000 pieces of DNA, to see which ones work best.
Instead of making and testing DNA sequences one at a time with conventional methods, Gen9 lets researchers test hundreds of thousands of sequences at once on a chip. This should increase chances of finding the right protein, more quickly. “If you just have one shot you’re very unlikely to hit the target,” Jacobson says. “If you have thousands or tens of thousands of shots on a goal, you have a much better chance of success.”
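The "shots on goal" point can be sketched numerically. Assuming, purely for illustration, that each candidate sequence independently succeeds with the same small probability, the chance of at least one hit grows dramatically with the number of candidates screened in parallel:

```python
def chance_of_at_least_one_hit(p, shots):
    """Probability of >= 1 success in `shots` independent trials of probability p."""
    return 1 - (1 - p) ** shots

p = 1e-4  # assumed per-candidate success rate, chosen for illustration
one_shot = chance_of_at_least_one_hit(p, 1)           # 0.0001
ten_thousand = chance_of_at_least_one_hit(p, 10_000)  # roughly 0.63
```

With one candidate the odds are negligible; with ten thousand tested on a chip, success becomes more likely than not, which is the intuition behind Jacobson's goal analogy.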
Currently, all the world’s synthetic-biology methods produce only about 300 million bases per year. About 10 of the chips Gen9 uses to make DNA can hold the same amount of content, Jacobson says. In principle, he says, the platform used to make Gen9’s chips — based on collaboration with manufacturing firm Agilent — could produce enough chips to cover about 200 billion bases. This is about the equivalent capacity of GenBank, an open-access database of DNA bases and gene sequences that has been constantly updated since the 1980s.
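The capacity figures above imply a per-chip density worth spelling out (rounded, illustrative arithmetic from the numbers quoted):

```python
industry_output_per_year = 300_000_000  # bases: all current synthesis methods combined
chips_matching_industry = 10            # Gen9 chips said to hold that much content

bases_per_chip = industry_output_per_year // chips_matching_industry
print(bases_per_chip)                   # 30000000 -> ~30 million bases per chip

genbank_scale = 200_000_000_000         # ~200 billion bases, roughly GenBank's size
chips_for_genbank = genbank_scale // bases_per_chip
print(chips_for_genbank)                # 6666 -> a few thousand chips
```

In other words, matching a database built up over three decades would take on the order of thousands of chips, not millions.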
Such technology could soon be worth a pretty penny: According to a study published in November by MarketsandMarkets, a major marketing research firm, the market for synthesizing short DNA strands is expected to reach roughly $1.9 billion by 2020.
Still, Gen9 is pushing to drop costs for synthesis to under 1 cent per base pair, Jacobson says. Additionally, for the past few years, the startup has hosted an annual G-Prize Competition, which awards 1 million base pairs of DNA to researchers with creative synthetic-biology ideas. That’s a prize worth roughly $100,000.
The aim, Jacobson says, is to remove cost barriers for synthetic biologists to boost innovation. “People have lots of ideas but are unable to try out those ideas because of cost,” he says. “This encourages people to think about bigger and bigger ideas.”
ORIGINAL: MIT News
December 10, 2015
Image Credit: diez artwork/Shutterstock
- There’s Enswarm, a UK startup that is using swarm technologies to assist with recruitment and employment decisions.
- There’s Swarm.fund, a startup using swarming and crypto-currencies like Bitcoin as a new model for fundraising.
- And the human swarming company I founded, Unanimous A.I., creates a unified intellect from any group of networked users.
Clearly, we lack the natural ability to form closed-loop swarms, but like many other skills we can’t do naturally, emerging technologies are filling a void. Leveraging our vast networking infrastructure, new software techniques are allowing online groups to form artificial swarms that can work in synchrony to answer questions, reach decisions, and make predictions, all while exhibiting the same types of intelligence amplifications as seen in nature. The approach is sometimes called “blended intelligence” because it combines the hardware and software technologies used by AI systems with populations of real people, creating human-machine systems that have the potential of outsmarting both humans and pure-software AIs alike.
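The closed-loop dynamic described here can be caricatured in a few lines. This is a toy model of swarm convergence, not Unanimous A.I.'s actual system: each participant repeatedly nudges a shared estimate toward their own view, and the feedback loop pulls the group to a consensus value.

```python
import random

def swarm_estimate(opinions, rounds=200, pull=0.1):
    """Toy closed-loop swarm: agents iteratively tug a shared estimate."""
    estimate = sum(opinions) / len(opinions)   # start at the naive average
    for _ in range(rounds):
        agent = random.choice(opinions)        # one participant reacts
        estimate += pull * (agent - estimate)  # nudge the shared estimate
    return estimate

group = [62, 70, 58, 65, 75]  # individual answers to some numeric question
print(swarm_estimate(group))  # settles within the group's range of views
```

The key property, even in this caricature, is that the answer emerges from continuous mutual feedback rather than a one-shot poll — the "synchrony" the article attributes to natural swarms.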
Although heavily reliant on hardware and software, swarming keeps human sensibilities and moralities as an integral part of the processes. As a result, this “human-in-the-loop” approach to AI combines the benefits of computational infrastructure and software efficiencies with the unique values that each person brings to the table:
- morality.
And because swarm-based intelligence is rooted in human input, the resulting intelligence is far more likely to be aligned with humanity – not just with our values and morals, but also with our goals and objectives.
That’s still an open question, but with the potential to engage millions, even billions of people around the globe, each brimming with unique ideas and insights, swarm intelligence may be society’s best hope for staying one step ahead of the pure machine intelligences that emerge from busy AI labs around the world.
Illustration by Julia Suits, The New Yorker Cartoonist & author of The Extraordinary Catalog of Peculiar Inventions.
In this video, watch how novel robotic insects developed by a team of Seoul National University and Harvard scientists can jump directly off water’s surface. The robots emulate the natural locomotion of water strider insects, which skim on and jump off the surface of water. Credit: Wyss Institute at Harvard University
From left, Seoul National University (SNU) professors Ho-Young Kim, Ph.D., and Kyu Jin Cho, Ph.D., observe the semi-aquatic jumping robotic insects developed by an SNU and Harvard team. Credit: Seoul National University.