Category: Biomimicry


Scientists Have Created an Artificial Synapse That Can Learn Autonomously

By Hugo Angel,

Sergey Tarasov/Shutterstock
Advances in artificial intelligence (AI) have been due in large part to technologies that mimic how the human brain works. In the world of information technology, such AI systems are called neural networks.
These contain algorithms that can be trained, among other things, to imitate how the brain recognises speech and images. However, running an artificial neural network consumes a lot of time and energy.
Now, researchers from the National Centre for Scientific Research (CNRS), Thales, and the Universities of Bordeaux, Paris-Sud, and Evry have developed an artificial synapse called a memristor directly on a chip.
It paves the way for intelligent systems that require less time and energy to learn, and that can learn autonomously.
In the human brain, synapses work as connections between neurons. The connections are reinforced and learning is improved the more these synapses are stimulated.
The memristor works in a similar fashion. It’s made up of a thin ferroelectric layer (which can be spontaneously polarised) that is enclosed between two electrodes.
Using voltage pulses, the memristor's resistance can be adjusted, much as a biological synapse's strength changes. The synaptic connection is strong when resistance is low, and vice versa.
Figure 1
(a) Sketch of pre- and post-neurons connected by a synapse. The synaptic transmission is modulated by the causality (Δt) of neuron spikes. (b) Sketch of the ferroelectric memristor where a ferroelectric tunnel barrier of BiFeO3 (BFO) is sandwiched between a bottom electrode of (Ca,Ce)MnO3 (CCMO) and a top submicron pillar of Pt/Co. YAO stands for YAlO3. (c) Single-pulse hysteresis loop of the ferroelectric memristor displaying clear voltage thresholds. (d) Measurements of STDP in the ferroelectric memristor. Modulation of the device conductance (ΔG) as a function of the delay (Δt) between pre- and post-synaptic spikes. Seven data sets were collected on the same device showing the reproducibility of the effect. The total length of each pre- and post-synaptic spike is 600 ns.
Source: Nature Communications
The memristor’s capacity for learning is based on this adjustable resistance.
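The learning rule at work here, spike-timing-dependent plasticity (STDP, shown in Figure 1d), can be sketched in a few lines of code. The constants and the exponential form below are illustrative choices, not the values measured in the study:

```python
import math

def stdp_delta_g(dt_ns, a_plus=0.05, a_minus=0.05, tau_ns=150.0):
    """Toy spike-timing-dependent plasticity rule.

    dt_ns: delay between pre- and post-synaptic spikes (ns).
    Positive dt (pre fires before post) strengthens the connection
    (conductance up); negative dt weakens it; the effect decays
    with the size of the delay.
    """
    if dt_ns > 0:
        return a_plus * math.exp(-dt_ns / tau_ns)
    elif dt_ns < 0:
        return -a_minus * math.exp(dt_ns / tau_ns)
    return 0.0

# A memristor's "weight" is its conductance G = 1/R: low resistance
# means a strong synaptic connection, as described above.
g = 1.0
for dt in (50, 50, -200):   # two causal pairings, one anti-causal
    g += stdp_delta_g(dt)
```

The net conductance change depends on how often, and in which order, the two neurons fire, which is exactly the adjustable-resistance behaviour the article describes.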
AI systems have developed considerably in the past couple of years. Neural networks built with learning algorithms are now capable of performing tasks which synthetic systems previously could not do.
For instance, intelligent systems can now compose music, play games and beat human players, or do your taxes. Some can even identify suicidal behaviour, or differentiate between what is lawful and what isn’t.
This is all thanks to AI’s capacity to learn, the only limitation of which is the amount of time and effort it takes to consume the data that serve as its springboard.
With the memristor, this learning process can be greatly improved. Work continues on the memristor, particularly on exploring ways to optimise its function.
For starters, the researchers have successfully built a physical model to help predict how it functions.
Their work is published in the journal Nature Communications.
ORIGINAL: ScienceAlert
DOM GALEON, FUTURISM
7 APR 2017

The Quest to Make Code Work Like Biology Just Took A Big Step

By Hugo Angel,

Chef CTO Adam Jacob. CHRISTIE HEMM KLOK/WIRED
IN THE EARLY 1970s, at Silicon Valley’s Xerox PARC, Alan Kay envisioned computer software as something akin to a biological system, a vast collection of small cells that could communicate via simple messages. Each cell would perform its own discrete task. But in communicating with the rest, it would form a more complex whole. “This is an almost foolproof way of operating,” Kay once told me. Computer programmers could build something large by focusing on something small. That’s a simpler task, and in the end, the thing you build is stronger and more efficient. 
The result was a programming language called SmallTalk. Kay called it an object-oriented language—the “objects” were the cells—and it spawned so many of the languages that programmers use today, from Objective-C and Swift, which run all the apps on your Apple iPhone, to Java, Google’s language of choice on Android phones. Kay’s vision of code as biology is now the norm. It’s how the world’s programmers think about building software.

In the ’70s, Alan Kay was a researcher at Xerox PARC, where he helped develop the notion of personal computing, the laptop, the now ubiquitous overlapping-window interface, and object-oriented programming.
COMPUTER HISTORY MUSEUM
But Kay’s big idea extends well beyond individual languages like Swift and Java. This is also how Google, Twitter, and other Internet giants now think about building and running their massive online services. The Google search engine isn’t software that runs on a single machine. Serving millions upon millions of people around the globe, it’s software that runs on thousands of machines spread across multiple computer data centers. Google runs this entire service like a biological system, as a vast collection of self-contained pieces that work in concert. It can readily spread those cells of code across all those machines, and when machines break—as they inevitably do—it can move code to new machines and keep the whole alive. 
Now, Adam Jacob wants to bring this notion to every other business on earth. Jacob is a bearded former comic-book-store clerk who, in the grand tradition of Alan Kay, views technology like a philosopher. He’s also the chief technology officer and co-founder of Chef, a Seattle company that has long helped businesses automate the operation of their online services through a techno-philosophy known as “DevOps.” Today, he and his company unveiled a new creation they call Habitat. Habitat is a way of packaging entire applications into something akin to Alan Kay’s biological cells, squeezing in not only the application code but everything needed to run, oversee, and update that code—all its “dependencies,” in programmer-speak. Then you can deploy hundreds or even thousands of these cells across a network of machines, and they will operate as a whole, with Habitat handling all the necessary communication between each cell. “With Habitat,” Jacob says, “all of the automation travels with the application itself.” 
That’s something that will at least capture the imagination of coders. And if it works, it will serve the rest of us too. If businesses push their services towards the biological ideal, then we, the people who use those services, will end up with technology that just works better—that coders can improve more easily and more quickly than before.
Reduce, Reuse, Repackage 
Habitat is part of a much larger effort to remake any online business in the image of Google. Alex Polvi, CEO and founder of a startup called CoreOS, calls this movement GIFEE—or Google Infrastructure For Everyone Else—and it includes tools built by CoreOS as well as such companies as Docker and Mesosphere, not to mention Google itself. The goal: to create tools that more efficiently juggle software across the vast computer networks that drive the modern digital world. 
But Jacob seeks to shift this idea’s center of gravity. He wants to make it as easy as possible for businesses to run their existing applications in this enormously distributed manner. He wants businesses to embrace this ideal even if they’re not willing to rebuild these applications or the computer platforms they run on. He aims to provide a way of wrapping any code—new or old—in an interface that can run on practically any machine. Rather than rebuilding your operation in the image of Google, Jacob says, you can simply repackage it.
“If what I want is an easier application to manage, why do I need to change the infrastructure for that application?” he says. It’s yet another extension of Alan Kay’s biological metaphor—as he himself will tell you. When I describe Habitat to Kay—now revered as one of the founding fathers of the PC, alongside so many other PARC researchers—he says it does what SmallTalk did so long ago.
The Unknown Programmer 
Kay traces the origins of SmallTalk to his time in the Air Force. In 1961, he was stationed at Randolph Air Force Base near San Antonio, Texas, and he worked as a programmer, building software for a vacuum-tube computer called the Burroughs 220. In those days, computers didn’t have operating systems. No Apple iOS. No Windows. No Unix. And data didn’t come packaged in standard file formats. No .doc. No .xls. No .txt. But the Air Force needed a way of sending files between bases so that different machines could read them. Sometime before Kay arrived, another Air Force programmer—whose name is lost to history—cooked up a good way. 
This unnamed programmer—“almost certainly an enlisted man,” Kay says, “because officers didn’t program back then”—would put data on a magnetic-tape reel along with all the procedures needed to read that data. Then, he tacked on a simple interface—a few “pointers,” in programmer-speak—that allowed the machine to interact with those procedures. To read the data, all the machine needed to understand were the pointers—not a whole new way of doing things. In this way, someone like Kay could read the tape from any machine on any Air Force base. 
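The airman’s trick can be mimicked in a few lines of modern code: the data travels with its own reader procedures, and the host machine touches nothing but a small pointer table. All names here are invented for illustration:

```python
# A self-describing "tape": the data ships with the procedures
# needed to read it, and the consuming machine only needs to know
# the small pointer table at the front.

def make_tape(records):
    def read(i):          # procedure shipped alongside the data
        return records[i]
    def count():
        return len(records)
    # The "pointers": a fixed, minimal interface any machine understands.
    return {"read": read, "count": count}

tape = make_tape(["sortie A", "sortie B"])
# Any machine can consume the tape through the pointers alone,
# without knowing how the records are actually stored:
names = [tape["read"](i) for i in range(tape["count"]())]
```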
Kay’s programming objects worked in a similar way. Each did its own thing, but could communicate with the outside world through a simple interface. That meant coders could readily plug an old object into a new program, or reuse it several times across the same program. Today, this notion is fundamental to software design. And now, Habitat wants to recreate this dynamic on a higher level: not within an application, but in a way that allows an application to run across a vast computer network.
Because Habitat wraps an application in a package that includes everything needed to run and oversee the application—while fronting this package with a simple interface—you can potentially run that application on any machine. Or, indeed, you can spread tens, hundreds, or even thousands of packages across a vast network of machines. Software called the Habitat Supervisor sits on each machine, running each package and ensuring it can communicate with the rest. Chef wrote the Supervisor in Rust, a new programming language suited to modern online systems, and designed it specifically to juggle code on an enormous scale.
But the important stuff lies inside those packages. Each package includes everything you need to orchestrate the application, as modern coders say, across myriad machines. Once you deploy your packages across a network, Jacob says, they can essentially orchestrate themselves. Instead of overseeing the application from one central nerve center, you can distribute the task—the ultimate aim of Kay’s biological system. That’s simpler and less likely to fail, at least in theory. 
What’s more, each package includes everything you need to modify the application—to, say, update the code or apply new security rules. This is what Jacob means when he says that all the automation travels with the application. “Having the management go with the package,” he says, “means I can manage in the same way, no matter where I choose to run it.” That’s vital in the modern world. Online code is constantly changing, and this system is designed for change.
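The idea that “all of the automation travels with the application itself” can be sketched as follows. The class and method names are invented for illustration and are not Habitat’s actual API:

```python
# Toy model of "automation travels with the application": each
# package bundles the app plus its own management hooks, and a
# supervisor on each machine only ever speaks the shared interface.

class Package:
    def __init__(self, name, version):
        self.name, self.version, self.running = name, version, False
    def start(self):
        self.running = True
    def health(self):
        return self.running
    def update(self, version):     # management logic rides along
        self.version = version

class Supervisor:
    """One per machine; restarts any package whose health check fails."""
    def __init__(self, packages):
        self.packages = packages
    def tick(self):
        for pkg in self.packages:
            if not pkg.health():
                pkg.start()

sup = Supervisor([Package("web", "1.0"), Package("worker", "1.0")])
sup.tick()                         # brings everything up
```

Because each package carries its own `update` hook, management works the same way no matter which machine the package lands on, which is the point Jacob is making.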

‘Grownup Containers’ 
The idea at the heart of Habitat is similar to concepts that drive Mesosphere, Google’s Kubernetes, and Docker’s Swarm. All of these increasingly popular tools run software inside Linux “containers”—walled-off spaces within the Linux operating system that provide ways to orchestrate discrete pieces of code across myriad machines. Google uses containers in running its own online empire, and the rest of Silicon Valley is following suit. 
But Chef is taking a different tack. Rather than centering Habitat around Linux containers, they’ve built a new kind of package designed to run in other ways too. You can run Habitat packages atop Mesosphere or Kubernetes. You can also run them atop virtual machines, such as those offered by Amazon or Google on their cloud services. Or you can just run them on your own servers. “We can take all the existing software in the world, which wasn’t built with any of this new stuff in mind, and make it behave,” Jacob says. 
Jon Cowie, senior operations engineer at the online marketplace Etsy, is among the few outsiders who have kicked the tires on Habitat. He calls it “grownup containers.” Building an application around containers can be a complicated business, he explains. Habitat, he says, is simpler. You wrap your code, old or new, in a new interface and run it where you want to run it. “They are giving you a flexible toolkit,” he says.
That said, container systems like Mesosphere and Kubernetes can still be a very important thing. These tools include “schedulers” that spread code across myriad machines in a hyper-efficient way, finding machines that have available resources and actually launching the code. Habitat doesn’t do that. It handles everything after the code is in place. 
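What such a scheduler does can be shown in miniature. This toy placement function is purely illustrative; real schedulers like those in Mesosphere and Kubernetes weigh far more constraints:

```python
# Miniature cluster scheduler: place each task on a machine with
# enough spare capacity, largest tasks first.

def schedule(tasks, machines):
    """tasks: {name: cpu_needed}; machines: {name: cpu_free}.
    Returns {task: machine}, or raises if a task cannot be placed."""
    placement = {}
    free = dict(machines)
    for task, cpu in sorted(tasks.items(), key=lambda kv: -kv[1]):
        host = max(free, key=free.get)       # most-free machine first
        if free[host] < cpu:
            raise RuntimeError(f"no capacity for {task}")
        free[host] -= cpu
        placement[task] = host
    return placement

plan = schedule({"api": 2, "db": 4, "cache": 1},
                {"m1": 4, "m2": 4})
```

Habitat deliberately leaves this placement-and-launch step to other tools and takes over once the code is on the machine.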
Jacob sees Habitat as a tool that runs in tandem with a Mesosphere or a Kubernetes—or atop other kinds of systems. He sees it as a single tool that can run any application on anything. But you may have to tweak Habitat so it will run on your infrastructure of choice. In packaging your app, Habitat must use a format that can speak to each type of system you want it to run on (the inputs and outputs for a virtual machine are different, say, from the inputs and outputs for Kubernetes), and at the moment, it only offers certain formats. If it doesn’t handle your format of choice, you’ll have to write a little extra code of your own.
Jacob says writing this code is “trivial.” And for seasoned developers, it may be. Habitat’s overarching mission is to bring the biological imperative to as many businesses as possible. But of course, the mission isn’t everything. The importance of Habitat will really come down to how well it works.

Promise Theory 
Whatever the case, the idea behind Habitat is enormously powerful. The biological ideal has driven the evolution of computing systems for decades—and will continue to drive their evolution. Jacob and Chef are taking a concept that computer coders are intimately familiar with, and they’re applying it to something new. 
“They’re trying to take away more of the complexity—and do this in a way that matches the cultural affiliation of developers,” says Mark Burgess, a computer scientist, physicist, and philosopher whose ideas helped spawn Chef and other DevOps projects.
Burgess compares this phenomenon to what he calls Promise Theory, where humans and autonomous agents work together to solve problems by striving to fulfill certain intentions, or promises. He sees computer automation not just as a cooperation of code, but of people and code. That’s what Jacob is striving for. You share your intentions with Habitat, and its autonomous agents work to realize them—a flesh-and-blood biological system combining with its idealized counterpart in code. 
ORIGINAL: Wired
AUTHOR: CADE METZ
DATE OF PUBLICATION: 06.14.16

Former NASA chief unveils $100 million neural chip maker KnuEdge

By Hugo Angel,

Daniel Goldin
It’s not all that easy to call KnuEdge a startup. Created a decade ago by Daniel Goldin, the former head of the National Aeronautics and Space Administration, KnuEdge is only now coming out of stealth mode. It has already raised $100 million in funding to build a “neural chip” that Goldin says will make data centers more efficient in a hyperscale age.
Goldin, who founded the San Diego, California-based company with the former chief technology officer of NASA, said he believes the company’s brain-like chip will be far more cost and power efficient than current chips based on the computer design popularized by computer architect John von Neumann. In von Neumann machines, memory and processor are separated and linked via a data pathway known as a bus. Over the years, von Neumann machines have gotten faster by sending more and more data at higher speeds across the bus as processor and memory interact. But the speed of a computer is often limited by the capacity of that bus, leading to what some computer scientists call the “von Neumann bottleneck.” IBM has seen the same problem, and it has a research team working on brain-like data center chips. Both efforts are part of an attempt to deal with the explosion of data driven by artificial intelligence and machine learning.
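Back-of-envelope arithmetic makes the bottleneck concrete. The numbers below are illustrative, not measurements of any real chip:

```python
# Illustration of the von Neumann bottleneck: if every operand must
# cross the memory bus, bus bandwidth caps throughput no matter how
# fast the processor's arithmetic units are.

def bus_limited_ops_per_sec(bus_bytes_per_sec, bytes_per_op):
    """Max operations/sec when every operand crosses the bus."""
    return bus_bytes_per_sec / bytes_per_op

peak_processor_ops = 1e12          # what the ALUs could do: 1 Top/s
bus_bandwidth = 50e9               # a 50 GB/s memory bus
achieved = min(peak_processor_ops,
               bus_limited_ops_per_sec(bus_bandwidth, bytes_per_op=8))
# The bus, not the processor, sets the ceiling: 6.25e9 ops/s here,
# more than two orders of magnitude below the processor's peak.
```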
Goldin’s company is doing something similar to IBM, but only on the surface. Its approach is much different, and it has been secretly funded by unknown angel investors. And Goldin said in an interview with VentureBeat that the company has already generated $20 million in revenue and is actively engaged with hyperscale computing companies and Fortune 500 companies in the aerospace, banking, health care, hospitality, and insurance industries. The mission is a fundamental transformation of the computing world, Goldin said.
“It all started over a mission to Mars,” Goldin said.

Above: KnuEdge’s first chip has 256 cores. Image Credit: KnuEdge
Back in the year 2000, Goldin saw that the time delay for controlling a space vehicle would be too long, so the vehicle would have to operate itself. He calculated that a mission to Mars would take software that would push technology to the limit, with more than tens of millions of lines of code.
Above: Daniel Goldin, CEO of KnuEdge.
Image Credit: KnuEdge
“I thought, holy smokes,” he said. “It’s going to be too expensive. It’s not propulsion. It’s not environmental control. It’s not power. This software business is a very big problem, and the nation couldn’t afford it.”
So Goldin looked further into the brains of the robots, and that’s when he started thinking about the computing it would take.
Asked if it was easier to run NASA or a startup, Goldin let out a guffaw.
“I love them both, but they’re both very different,” Goldin said. “At NASA, I spent a lot of time on non-technical issues. I had a project every quarter, and I didn’t want to become dull technically. I tried to always take on a technical job doing architecture, working with a design team, and always doing something leading edge. I grew up at a time when you graduated from a university and went to work for someone else. If I ever come back to this earth, I would graduate and become an entrepreneur. This is so wonderful.”
Back in 1992, Goldin was planning on starting a wireless company as an entrepreneur. But then he got the call to “go serve the country,” and he did that work for a decade. He started KnuEdge (previously called Intellisis) in 2005, and he got very patient capital.
“When I went out to find investors, I knew I couldn’t use the conventional Silicon Valley approach (impatient capital),” he said. “It is a fabulous approach that has generated incredible wealth. But I wanted to undertake revolutionary technology development. To build the future tools for next-generation machine learning, improving the natural interface between humans and machines. So I got patient capital that wanted to see lightning strike. Between all of us, we have a board of directors that can contact almost anyone in the world. They’re fabulous business people and technologists. We knew we had a ten-year run-up.”
But he’s not saying who those people are yet.
KnuEdge’s chips are part of a larger platform. KnuEdge is also unveiling KnuVerse, a military-grade voice recognition and authentication technology that unlocks the potential of voice interfaces to power next-generation computing, Goldin said.
While the voice technology market has exploded over the past five years due to the introductions of Siri, Cortana, Google Home, Echo, and ViV, the aspirations of most commercial voice technology teams are still on hold because of security and noise issues. KnuVerse solutions are based on patented authentication techniques using the human voice — even in extremely noisy environments — as one of the most secure forms of biometrics. Secure voice recognition has applications in industries such as banking, entertainment, and hospitality.
KnuEdge says it is now possible to authenticate to computers, web and mobile apps, and Internet of Things devices (or everyday objects that are smart and connected) with only a few words spoken into a microphone — in any language, no matter how loud the background environment or how many other people are talking nearby. In addition to KnuVerse, KnuEdge offers Knurld.io for application developers, a software development kit, and a cloud-based voice recognition and authentication service that can be integrated into an app typically within two hours.
And KnuEdge is announcing KnuPath with LambdaFabric computing. KnuEdge’s first chip, built with an older manufacturing technology, has 256 cores, or neuron-like brain cells, on a single chip. Each core is a tiny digital signal processor. The LambdaFabric makes it possible to instantly connect those cores to each other — a trick that helps overcome one of the major problems of multicore chips, Goldin said. The LambdaFabric is designed to connect up to 512,000 devices, enabling the system to be used in the most demanding computing environments. From rack to rack, the fabric has a latency (or interaction delay) of only 400 nanoseconds. And the whole system is designed to use a low amount of power.
All of the company’s designs are built on biological principles about how the brain gets a lot of computing work done with a small amount of power. The chip is based on what Goldin calls “sparse matrix heterogeneous machine learning algorithms.” And it will run C++ software, something that is already very popular. Programmers can program each one of the cores with a different algorithm to run simultaneously, for the “ultimate in heterogeneity.” It’s multiple input, multiple data, and “that gives us some of our power,” Goldin said.
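A sparse matrix workload is a natural fit for a many-core design, since each row of the product can go to its own core. The sketch below is purely illustrative and has nothing to do with KnuEdge’s actual implementation:

```python
# Minimal sparse matrix-vector product, the kind of workload a
# many-core chip can split row-per-core.

def sparse_matvec(rows, x):
    """rows: list of rows, each a list of (col_index, value) pairs
    for the nonzero entries only. Each output row is independent,
    so a 256-core chip could assign one row (or a block of rows)
    to each core and run them simultaneously."""
    return [sum(v * x[j] for j, v in row) for row in rows]

# The 3x3 matrix [[2,0,0],[0,0,1],[0,3,0]] stored sparsely:
rows = [[(0, 2.0)], [(2, 1.0)], [(1, 3.0)]]
y = sparse_matvec(rows, [1.0, 2.0, 3.0])   # -> [2.0, 3.0, 6.0]
```

Storing only the nonzero entries is what keeps both the memory traffic and the arithmetic proportional to the useful data, which matters on a chip built around a low power budget.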

Above: KnuEdge’s KnuPath chip.
Image Credit: KnuEdge
“KnuEdge is emerging out of stealth mode to aim its new Voice and Machine Learning technologies at key challenges in IoT, cloud based machine learning and pattern recognition,” said Paul Teich, principal analyst at Tirias Research, in a statement. “Dan Goldin used his experience in transforming technology to charter KnuEdge with a bold idea, with the patience of longer development timelines and away from typical startup hype and practices. The result is a new and cutting-edge path for neural computing acceleration. There is also a refreshing surprise element to KnuEdge announcing a relevant new architecture that is ready to ship… not just a concept or early prototype.”
Today, Goldin said the company is ready to show off its designs. The first chip was ready last December, and KnuEdge is sharing it with potential customers. That chip was built with a 32-nanometer manufacturing process, and even though that’s an older technology, it is a powerful chip, Goldin said. Even at 32 nanometers, the chip has something like a two-times to six-times performance advantage over similar chips, KnuEdge said.
“The human brain has a couple of hundred billion neurons, and each neuron is connected to at least 10,000 to 100,000 neurons,” Goldin said. “And the brain is the most energy efficient and powerful computer in the world. That is the metaphor we are using.”
KnuEdge has a new version of its chip under design. And the company has already generated revenue from sales of the prototype systems. Each board has about four chips.
As for the competition from IBM, Goldin said, “I believe we made the right decision and are going in the right direction. IBM’s approach is very different from what we have. We are not aiming at anyone. We are aiming at the future.”
In his NASA days, Goldin had a lot of successes. There, he redesigned and delivered the International Space Station, tripled the number of space flights, and put a record number of people into space, all while reducing the agency’s planned budget by 25 percent. He also spent 25 years at TRW, where he led the development of satellite television services.
KnuEdge has 100 employees, but Goldin said the company outsources almost everything. Goldin said he is planning to raise a round of funding late this year or early next year. The company collaborated with the University of California at San Diego and UCSD’s California Institute for Telecommunications and Information Technology.
With computers that can handle natural language systems, many people in the world who can’t read or write will be able to fend for themselves more easily, Goldin said.
“I want to be able to take machine learning and help people communicate and make a living,” he said. “This is just the beginning. This is the Wild West. We are talking to very large companies about this, and they are getting very excited.”
A sample application is a home that has much greater self-awareness. If there’s something wrong in the house, the KnuEdge system could analyze it and figure out if it needs to alert the homeowner.
Goldin said it was hard to keep the company secret.
“I’ve been biting my lip for ten years,” he said.
As for whether KnuEdge’s technology could be used to send people to Mars, Goldin said, “This is available to whoever is going to Mars. I tried twice. I would love it if they use it to get there.”
ORIGINAL: Venture Beat

holy smokes

,” he said. “It’s going to be too expensive. It’s not propulsion. It’s not environmental control. It’s not power. This software business is a very big problem, and that nation couldn’t afford it.

So Goldin looked further into the brains of the robotics, and that’s when he started thinking about the computing it would take.
Asked if it was easier to run NASA or a startup, Goldin let out a guffaw.
I love them both, but they’re both very different,” Goldin said. “At NASA, I spent a lot of time on non-technical issues. I had a project every quarter, and I didn’t want to become dull technically. I tried to always take on a technical job doing architecture, working with a design team, and always doing something leading edge. I grew up at a time when you graduated from a university and went to work for someone else. If I ever come back to this earth, I would graduate and become an entrepreneur. This is so wonderful.
Back in 1992, Goldin was planning on starting a wireless company as an entrepreneur. But then he got the call to “go serve the country,” and he did that work for a decade. He started KnuEdge (previously called Intellisis) in 2005, and he got very patient capital.
When I went out to find investors, I knew I couldn’t use the conventional Silicon Valley approach (impatient capital),” he said. “It is a fabulous approach that has generated incredible wealth. But I wanted to undertake revolutionary technology development. To build the future tools for next-generation machine learning, improving the natural interface between humans and machines. So I got patient capital that wanted to see lightning strike. Between all of us, we have a board of directors that can contact almost anyone in the world. They’re fabulous business people and technologists. We knew we had a ten-year run-up.
But he’s not saying who those people are yet.
KnuEdge’s chips are part of a larger platform. KnuEdge is also unveiling KnuVerse, a military-grade voice recognition and authentication technology that unlocks the potential of voice interfaces to power next-generation computing, Goldin said.
While the voice technology market has exploded over the past five years due to the introductions of Siri, Cortana, Google Home, Echo, and ViV, the aspirations of most commercial voice technology teams are still on hold because of security and noise issues. KnuVerse solutions are based on patented authentication techniques using the human voice — even in extremely noisy environments — as one of the most secure forms of biometrics. Secure voice recognition has applications in industries such as banking, entertainment, and hospitality.
KnuEdge says it is now possible to authenticate to computers, web and mobile apps, and Internet of Things devices (or everyday objects that are smart and connected) with only a few words spoken into a microphone — in any language, no matter how loud the background environment or how many other people are talking nearby. In addition to KnuVerse, KnuEdge offers Knurld.io for application developers, a software development kit, and a cloud-based voice recognition and authentication service that can be integrated into an app typically within two hours.
And KnuEdge is announcing KnuPath with LambdaFabric computing. KnuEdge’s first chip, built with an older manufacturing technology, has 256 cores, or neuron-like brain cells, on a single chip. Each core is a tiny digital signal processor. The LambdaFabric makes it possible to instantly connect those cores to each other — a trick that helps overcome one of the major problems of multicore chips, Goldin said. The LambdaFabric is designed to connect up to 512,000 devices, enabling the system to be used in the most demanding computing environments. From rack to rack, the fabric has a latency (or interaction delay) of only 400 nanoseconds. And the whole system is designed to use a low amount of power.
All of the company’s designs are built on biological principles about how the brain gets a lot of computing work done with a small amount of power. The chip is based on what Goldin calls “sparse matrix heterogeneous machine learning algorithms.” And it will run C++ software, something that is already very popular. Programmers can program each one of the cores with a different algorithm to run simultaneously, for the “ultimate in heterogeneity.” It’s multiple input, multiple data, and “that gives us some of our power,” Goldin said.

Above: KnuEdge’s KnuPath chip.
Image Credit: KnuEdge
“KnuEdge is emerging out of stealth mode to aim its new Voice and Machine Learning technologies at key challenges in IoT, cloud based machine learning and pattern recognition,” said Paul Teich, principal analyst at Tirias Research, in a statement. “Dan Goldin used his experience in transforming technology to charter KnuEdge with a bold idea, with the patience of longer development timelines and away from typical startup hype and practices. The result is a new and cutting-edge path for neural computing acceleration. There is also a refreshing surprise element to KnuEdge announcing a relevant new architecture that is ready to ship… not just a concept or early prototype.”
Today, Goldin said, the company is ready to show off its designs. The first chip was ready last December, and KnuEdge is sharing it with potential customers. That chip was built with an older 32-nanometer manufacturing process, but Goldin said it is still powerful: even at 32 nanometers, KnuEdge claims a two-times to six-times performance advantage over similar chips.
“The human brain has a couple of hundred billion neurons, and each neuron is connected to at least 10,000 to 100,000 neurons,” Goldin said. “And the brain is the most energy efficient and powerful computer in the world. That is the metaphor we are using.”
KnuEdge has a new version of its chip under design. And the company has already generated revenue from sales of the prototype systems. Each board has about four chips.
As for the competition from IBM, Goldin said, “I believe we made the right decision and are going in the right direction. IBM’s approach is very different from what we have. We are not aiming at anyone. We are aiming at the future.”
In his NASA days, Goldin had a lot of successes. There, he redesigned and delivered the International Space Station, tripled the number of space flights, and put a record number of people into space, all while reducing the agency’s planned budget by 25 percent. He also spent 25 years at TRW, where he led the development of satellite television services.
KnuEdge has 100 employees, but Goldin said the company outsources almost everything. Goldin said he is planning to raise a round of funding late this year or early next year. The company collaborated with the University of California at San Diego and UCSD’s California Institute for Telecommunications and Information Technology.
With computers that can handle natural language systems, many people in the world who can’t read or write will be able to fend for themselves more easily, Goldin said.
“I want to be able to take machine learning and help people communicate and make a living,” he said. “This is just the beginning. This is the Wild West. We are talking to very large companies about this, and they are getting very excited.”
A sample application is a home that has much greater self-awareness. If there’s something wrong in the house, the KnuEdge system could analyze it and figure out if it needs to alert the homeowner.
Goldin said it was hard to keep the company secret.
“I’ve been biting my lip for ten years,” he said.
As for whether KnuEdge’s technology could be used to send people to Mars, Goldin said, “This is available to whoever is going to Mars. I tried twice. I would love it if they use it to get there.”
ORIGINAL: Venture Beat

Inside Vicarious, the Secretive AI Startup Bringing Imagination to Computers

By Hugo Angel,

By reinventing the neural network, the company hopes to help computers make the leap from processing words and symbols to comprehending the real world.
Life would be pretty dull without imagination. In fact, maybe the biggest problem for computers is that they don’t have any.
That’s the belief motivating the founders of Vicarious, an enigmatic AI company backed by some of the most famous and successful names in Silicon Valley. Vicarious is developing a new way of processing data, inspired by the way information seems to flow through the brain. The company’s leaders say this gives computers something akin to imagination, which they hope will help make the machines a lot smarter.
Vicarious is also, essentially, betting against the current boom in AI. Companies including Google, Facebook, Amazon, and Microsoft have made stunning progress in the past few years by feeding huge quantities of data into large neural networks in a process called “deep learning.” When trained on enough examples, for instance, deep-learning systems can learn to recognize a particular face or type of animal with very high accuracy (see “10 Breakthrough Technologies 2013: Deep Learning”). But those neural networks are only very crude approximations of what’s found inside a real brain.
Illustration by Sophia Foster-Dimino
Vicarious has introduced a new kind of neural-network algorithm designed to take into account more of the features that appear in biology. An important one is the ability to picture what the information it’s learned should look like in different scenarios—a kind of artificial imagination. The company’s founders believe a fundamentally different design will be essential if machines are to demonstrate more humanlike intelligence: computers will have to be able to learn from less data, and to recognize stimuli or concepts more easily.
Despite generating plenty of early excitement, Vicarious has been quiet over the past couple of years. But this year, the company says, it will publish details of its research, and it promises some eye-popping demos that will show just how useful a computer with an imagination could be.
The company’s headquarters don’t exactly seem like the epicenter of a revolution in artificial intelligence. Located in Union City, a short drive across the San Francisco Bay from Palo Alto, the offices are plain—a stone’s throw from a McDonald’s and a couple of floors up from a dentist. Inside, though, are all the trappings of a vibrant high-tech startup. A dozen or so engineers were hard at work when I visited, several using impressive treadmill desks. Microsoft Kinect 3-D sensors sat on top of some of the engineers’ desks.
D. Scott Phoenix, the company’s 33-year-old CEO, speaks in suitably grandiose terms. “We are really rapidly approaching the amount of computational power we need to be able to do some interesting things in AI,” he told me shortly after I walked through the door. “In 15 years, the fastest computer will do more operations per second than all the neurons in all the brains of all the people who are alive. So we are really close.”
Vicarious is about more than just harnessing more computer power, though. Its mathematical innovations, Phoenix says, will more faithfully mimic the information processing found in the human brain. It’s true enough that the relationship between the neural networks currently used in AI and the neurons, dendrites, and synapses found in a real brain is tenuous at best.
One of the most glaring shortcomings of artificial neural networks, Phoenix says, is that information flows only one way. “If you look at the information flow in a classic neural network, it’s a feed-forward architecture,” he says. “There are actually more feedback connections in the brain than feed-forward connections—so you’re missing more than half of the information flow.”
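Phoenix’s point about information flow can be sketched in a few lines of toy code. Everything here is our own illustration, not Vicarious’s model: the single weights, the sigmoid, and the settling loop are invented simply to show how adding a feedback (top-down) connection changes the computation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def feed_forward(x, w1, w2):
    """Classic one-way flow: input -> hidden -> output, one sweep."""
    hidden = sigmoid(w1 * x)
    return sigmoid(w2 * hidden)

def with_feedback(x, w1, w2, w_back, steps=5):
    """A top-down weight lets the output re-shape the hidden layer.

    The hidden unit now sees the input *plus* the previous output, so
    the network settles toward a joint interpretation over several
    iterations instead of computing a single forward sweep.
    """
    out = 0.0
    for _ in range(steps):
        hidden = sigmoid(w1 * x + w_back * out)
        out = sigmoid(w2 * hidden)
    return out

print(feed_forward(0.8, 1.5, 2.0))
print(with_feedback(0.8, 1.5, 2.0, 1.0))
```

With the feedback weight set to zero the second function reduces exactly to the first; with a nonzero weight, higher-level state feeds back into lower-level activity, which is the missing half of the flow Phoenix describes.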
It’s undeniably alluring to think that imagination—a capability so fundamentally human it sounds almost mystical in a computer—could be the key to the next big advance in AI.
Vicarious has so far shown that its approach can create a visual system capable of surprisingly deft interpretation. In 2013 it showed that the system could solve any captcha (the visual puzzles that are used to prevent spam-bots from signing up for e-mail accounts and the like). As Phoenix explains it, the feedback mechanism built into Vicarious’s system allows it to imagine what a character would look like if it weren’t distorted or partly obscured (see “AI Startup Says It Has Defeated Captchas”).
Phoenix sketched out some of the details of the system at the heart of this approach on a whiteboard. But he is keeping further details quiet until a scientific paper outlining the captcha approach is published later this year.
In principle, this visual system could be put to many other practical uses, like recognizing objects on shelves more accurately or interpreting real-world scenes more intelligently. The founders of Vicarious also say that their approach extends to other, much more complex areas of intelligence, including language and logical reasoning.
Phoenix says his company may give a demo later this year involving robots. And indeed, the job listings on the company’s website include several postings for robotics experts. Currently robots are bad at picking up unfamiliar, oddly arranged, or partly obscured objects, because they have trouble recognizing what they are. “If you look at people who are picking up objects in an Amazon facility, most of the time they aren’t even looking at what they’re doing,” he explains. “And they’re imagining—using their sensory motor simulator—where the object is, and they’re imagining at what point their finger will touch it.”
While Phoenix is the company’s leader, his cofounder, Dileep George, might be considered its technical visionary. George was born in India and received a PhD in electrical engineering from Stanford University, where he turned his attention to neuroscience toward the end of his doctoral studies. In 2005 he cofounded Numenta with Jeff Hawkins, the creator of Palm Computing. But in 2010 George left to pursue his own ideas about the mathematical principles behind information processing in the brain, founding Vicarious with Phoenix the same year.
I bumped into George in the elevator when I first arrived. He is unassuming and speaks quietly, with a thick accent. But he’s also quite matter-of-fact about what seem like very grand objectives.
George explained that imagination could help computers process language by tying words, or symbols, to low-level physical representations of real-world things. In theory, such a system might automatically understand the physical properties of something like water, for example, which would make it better able to discuss the weather. “When I utter a word, you know what it means because you can simulate the concept,” he says.
This ambitious vision for the future of AI has helped Vicarious raise an impressive $72 million so far. Its list of investors also reads like a who’s who of the tech world. Early cash came from Dustin Moskovitz, ex-CTO of Facebook, and Adam D’Angelo, cofounder of Quora. Further funding came from Peter Thiel, Mark Zuckerberg, Jeff Bezos, and Elon Musk.
Many people are itching to see what Vicarious has done beyond beating captchas. “I would love it if they showed us something new this year,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence in Seattle.
In contrast to the likes of Google, Facebook, or Baidu, Vicarious hasn’t published any papers or released any tools that researchers can play with. “The people [involved] are great, and the problems [they are working on] are great,” says Etzioni. “But it’s time to deliver.”
For those who’ve put their money behind Vicarious, the company’s remarkable goals should make the wait well worth it. Even if progress takes a while, the potential payoffs seem so huge that the bet makes sense, says Matt Ocko, a partner at Data Collective, a venture firm that has backed Vicarious. A better machine-learning approach could be applied in just about any industry that handles large amounts of data, he says. “Vicarious sat us down and demonstrated the most credible pathway to reasoning machines that I have ever seen.”
Ocko adds that Vicarious has demonstrated clear evidence it can commercialize what it’s working on. “We approached it with a crapload of intellectual rigor,” he says.
It will certainly be interesting to see if Vicarious can inspire this kind of confidence among other AI researchers and technologists with its papers and demos this year. If it does, then the company could quickly go from one of the hottest prospects in the Valley to one of its fastest-growing businesses.
That’s something the company’s founders would certainly like to imagine.
ORIGINAL: MIT Tech Review
by Will Knight. Senior Editor, AI
May 19, 2016

The Rise of Artificial Intelligence and the End of Code

By Hugo Angel,

EDWARD C. MONAGHAN
Soon We Won’t Program Computers. We’ll Train Them Like Dogs
Before the invention of the computer, most experimental psychologists thought the brain was an unknowable black box. You could analyze a subject’s behavior—ring bell, dog salivates—but thoughts, memories, emotions? That stuff was obscure and inscrutable, beyond the reach of science. So these behaviorists, as they called themselves, confined their work to the study of stimulus and response, feedback and reinforcement, bells and saliva. They gave up trying to understand the inner workings of the mind. They ruled their field for four decades.
Then, in the mid-1950s, a group of rebellious psychologists, linguists, information theorists, and early artificial-intelligence researchers came up with a different conception of the mind. People, they argued, were not just collections of conditioned responses. They absorbed information, processed it, and then acted upon it. They had systems for writing, storing, and recalling memories. They operated via a logical, formal syntax. The brain wasn’t a black box at all. It was more like a computer.
The so-called cognitive revolution started small, but as computers became standard equipment in psychology labs across the country, it gained broader acceptance. By the late 1970s, cognitive psychology had overthrown behaviorism, and with the new regime came a whole new language for talking about mental life. Psychologists began describing thoughts as programs, ordinary people talked about storing facts away in their memory banks, and business gurus fretted about the limits of mental bandwidth and processing power in the modern workplace. 
This story has repeated itself again and again. As the digital revolution wormed its way into every part of our lives, it also seeped into our language and our deep, basic theories about how things work. Technology always does this. During the Enlightenment, Newton and Descartes inspired people to think of the universe as an elaborate clock. In the industrial age, it was a machine with pistons. (Freud’s idea of psychodynamics borrowed from the thermodynamics of steam engines.) Now it’s a computer. Which is, when you think about it, a fundamentally empowering idea. Because if the world is a computer, then the world can be coded. 
Code is logical. Code is hackable. Code is destiny. These are the central tenets (and self-fulfilling prophecies) of life in the digital age. As software has eaten the world, to paraphrase venture capitalist Marc Andreessen, we have surrounded ourselves with machines that convert our actions, thoughts, and emotions into data—raw material for armies of code-wielding engineers to manipulate. We have come to see life itself as something ruled by a series of instructions that can be discovered, exploited, optimized, maybe even rewritten. Companies use code to understand our most intimate ties; Facebook’s Mark Zuckerberg has gone so far as to suggest there might be a “fundamental mathematical law underlying human relationships that governs the balance of who and what we all care about.” In 2013, Craig Venter announced that, a decade after the decoding of the human genome, he had begun to write code that would allow him to create synthetic organisms. “It is becoming clear,” he said, “that all living cells that we know of on this planet are DNA-software-driven biological machines.” Even self-help literature insists that you can hack your own source code, reprogramming your love life, your sleep routine, and your spending habits.
In this world, the ability to write code has become not just a desirable skill but a language that grants insider status to those who speak it. They have access to what in a more mechanical age would have been called the levers of power. “If you control the code, you control the world,” wrote futurist Marc Goodman. (In Bloomberg Businessweek, Paul Ford was slightly more circumspect: “If coders don’t run the world, they run the things that run the world.” Tomato, tomahto.)
But whether you like this state of affairs or hate it—whether you’re a member of the coding elite or someone who barely feels competent to futz with the settings on your phone—don’t get used to it. Our machines are starting to speak a different language now, one that even the best coders can’t fully understand. 
Over the past several years, the biggest tech companies in Silicon Valley have aggressively pursued an approach to computing called machine learning. In traditional programming, an engineer writes explicit, step-by-step instructions for the computer to follow. With machine learning, programmers don’t encode computers with instructions. They train them. If you want to teach a neural network to recognize a cat, for instance, you don’t tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands and thousands of photos of cats, and eventually it works things out. If it keeps misclassifying foxes as cats, you don’t rewrite the code. You just keep coaching it.
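The cat example can be made concrete with a classic perceptron, which is trained by coaching rather than rule-writing. The features, the numbers, and the cat-versus-fox framing below are all invented for illustration; no real vision system reduces an animal to three hand-picked measurements.

```python
# Each example: ([ear_pointiness, whisker_length, snout_length], label)
# label 1 = cat, 0 = fox. All values are made-up toy measurements.
examples = [
    ([0.9, 0.8, 0.2], 1),
    ([0.8, 0.9, 0.3], 1),
    ([0.7, 0.3, 0.9], 0),
    ([0.6, 0.2, 0.8], 0),
]

weights = [0.0, 0.0, 0.0]
bias = 0.0

def predict(features):
    s = bias + sum(w * f for w, f in zip(weights, features))
    return 1 if s > 0 else 0

# "Coaching": no rules about whiskers are ever written. We just show
# the examples repeatedly and nudge the weights after each mistake.
for _ in range(20):
    for features, label in examples:
        error = label - predict(features)   # -1, 0, or +1
        if error:
            bias += 0.1 * error
            for i, f in enumerate(features):
                weights[i] += 0.1 * error * f

print([predict(f) for f, _ in examples])    # → [1, 1, 0, 0]
```

If the perceptron keeps misclassifying a fox as a cat, nothing is rewritten by hand; the same update loop simply keeps adjusting the weights, which is the shift from programming to training in miniature.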
This approach is not new—it’s been around for decades—but it has recently become immensely more powerful, thanks in part to the rise of deep neural networks, massively distributed computational systems that mimic the multilayered connections of neurons in the brain. And already, whether you realize it or not, machine learning powers large swaths of our online activity. Facebook uses it to determine which stories show up in your News Feed, and Google Photos uses it to identify faces. Machine learning runs Microsoft’s Skype Translator, which converts speech to different languages in real time. Self-driving cars use machine learning to avoid accidents. Even Google’s search engine—for so many years a towering edifice of human-written rules—has begun to rely on these deep neural networks. In February the company replaced its longtime head of search with machine-learning expert John Giannandrea, and it has initiated a major program to retrain its engineers in these new techniques. “By building learning systems,” Giannandrea told reporters this fall, “we don’t have to write these rules anymore.”
 
But here’s the thing: With machine learning, the engineer never knows precisely how the computer accomplishes its tasks. The neural network’s operations are largely opaque and inscrutable. It is, in other words, a black box. And as these black boxes assume responsibility for more and more of our daily digital tasks, they are not only going to change our relationship to technology—they are going to change how we think about ourselves, our world, and our place within it.
If in the old view programmers were like gods, authoring the laws that govern computer systems, now they’re like parents or dog trainers. And as any parent or dog owner can tell you, that is a much more mysterious relationship to find yourself in.
Andy Rubin is an inveterate tinkerer and coder. The cocreator of the Android operating system, Rubin is notorious in Silicon Valley for filling his workplaces and home with robots. He programs them himself. “I got into computer science when I was very young, and I loved it because I could disappear in the world of the computer. It was a clean slate, a blank canvas, and I could create something from scratch,” he says. “It gave me full control of a world that I played in for many, many years.”
Now, he says, that world is coming to an end. Rubin is excited about the rise of machine learning—his new company, Playground Global, invests in machine-learning startups and is positioning itself to lead the spread of intelligent devices—but it saddens him a little too. Because machine learning changes what it means to be an engineer.
“People don’t linearly write the programs,” Rubin says. “After a neural network learns how to do speech recognition, a programmer can’t go in and look at it and see how that happened. It’s just like your brain. You can’t cut your head off and see what you’re thinking.” When engineers do peer into a deep neural network, what they see is an ocean of math: a massive, multilayer set of calculus problems that—by constantly deriving the relationship between billions of data points—generate guesses about the world.
Artificial intelligence wasn’t supposed to work this way. Until a few years ago, mainstream AI researchers assumed that to create intelligence, we just had to imbue a machine with the right logic. Write enough rules and eventually we’d create a system sophisticated enough to understand the world. They largely ignored, even vilified, early proponents of machine learning, who argued in favor of plying machines with data until they reached their own conclusions. For years computers weren’t powerful enough to really prove the merits of either approach, so the argument became a philosophical one. “Most of these debates were based on fixed beliefs about how the world had to be organized and how the brain worked,” says Sebastian Thrun, the former Stanford AI professor who created Google’s self-driving car. “Neural nets had no symbols or rules, just numbers. That alienated a lot of people.”
The implications of an unparsable machine language aren’t just philosophical. For the past two decades, learning to code has been one of the surest routes to reliable employment—a fact not lost on all those parents enrolling their kids in after-school code academies. But a world run by neurally networked deep-learning machines requires a different workforce. Analysts have already started worrying about the impact of AI on the job market, as machines render old skills irrelevant. Programmers might soon get a taste of what that feels like themselves.
“I was just having a conversation about that this morning,” says tech guru Tim O’Reilly when I ask him about this shift. “I was pointing out how different programming jobs would be by the time all these STEM-educated kids grow up.” Traditional coding won’t disappear completely—indeed, O’Reilly predicts that we’ll still need coders for a long time yet—but there will likely be less of it, and it will become a meta skill, a way of creating what Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, calls the “scaffolding” within which machine learning can operate. Just as Newtonian physics wasn’t obviated by the discovery of quantum mechanics, code will remain a powerful, if incomplete, tool set to explore the world. But when it comes to powering specific functions, machine learning will do the bulk of the work for us.
Of course, humans still have to train these systems. But for now, at least, that’s a rarefied skill. The job requires both a high-level grasp of mathematics and an intuition for pedagogical give-and-take. “It’s almost like an art form to get the best out of these systems,” says Demis Hassabis, who leads Google’s DeepMind AI team. “There’s only a few hundred people in the world that can do that really well.” But even that tiny number has been enough to transform the tech industry in just a couple of years.
Whatever the professional implications of this shift, the cultural consequences will be even bigger. If the rise of human-written software led to the cult of the engineer, and to the notion that human experience can ultimately be reduced to a series of comprehensible instructions, machine learning kicks the pendulum in the opposite direction. The code that runs the universe may defy human analysis. Right now Google, for example, is facing an antitrust investigation in Europe that accuses the company of exerting undue influence over its search results. Such a charge will be difficult to prove when even the company’s own engineers can’t say exactly how its search algorithms work in the first place.
This explosion of indeterminacy has been a long time coming. It’s not news that even simple algorithms can create unpredictable emergent behavior—an insight that goes back to chaos theory and random number generators. Over the past few years, as networks have grown more intertwined and their functions more complex, code has come to seem more like an alien force, the ghosts in the machine ever more elusive and ungovernable. Planes grounded for no reason. Seemingly unpreventable flash crashes in the stock market. Rolling blackouts.
These forces have led technologist Danny Hillis to declare the end of the age of Enlightenment, our centuries-long faith in logic, determinism, and control over nature. Hillis says we’re shifting to what he calls the age of Entanglement. “As our technological and institutional creations have become more complex, our relationship to them has changed,” he wrote in the Journal of Design and Science. “Instead of being masters of our creations, we have learned to bargain with them, cajoling and guiding them in the general direction of our goals. We have built our own jungle, and it has a life of its own.” The rise of machine learning is the latest—and perhaps the last—step in this journey.
This can all be pretty frightening. After all, coding was at least the kind of thing that a regular person could imagine picking up at a boot camp. Coders were at least human. Now the technological elite is even smaller, and their command over their creations has waned and become indirect. Already the companies that build this stuff find it behaving in ways that are hard to govern. Last summer, Google rushed to apologize when its photo recognition engine started tagging images of black people as gorillas. The company’s blunt first fix was to keep the system from labeling anything as a gorilla.

To nerds of a certain bent, this all suggests a coming era in which we forfeit authority over our machines. “One can imagine such technology 

  • outsmarting financial markets, 
  • out-inventing human researchers, 
  • out-manipulating human leaders, and 
  • developing weapons we cannot even understand,” 

wrote Stephen Hawking—sentiments echoed by Elon Musk and Bill Gates, among others. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” 

 
But don’t be too scared; this isn’t the dawn of Skynet. We’re just learning the rules of engagement with a new technology. Already, engineers are working out ways to visualize what’s going on under the hood of a deep-learning system. But even if we never fully understand how these new machines think, that doesn’t mean we’ll be powerless before them. In the future, we won’t concern ourselves as much with the underlying sources of their behavior; we’ll learn to focus on the behavior itself. The code will become less important than the data we use to train it.
If all this seems a little familiar, that’s because it looks a lot like good old 20th-century behaviorism. In fact, the process of training a machine-learning algorithm is often compared to the great behaviorist experiments of the early 1900s. Pavlov triggered his dog’s salivation not through a deep understanding of hunger but simply by repeating a sequence of events over and over. He provided data, again and again, until the code rewrote itself. And say what you will about the behaviorists, they did know how to control their subjects.
In the long run, Thrun says, machine learning will have a democratizing influence. In the same way that you don’t need to know HTML to build a website these days, you eventually won’t need a PhD to tap into the insane power of deep learning. Programming won’t be the sole domain of trained coders who have learned a series of arcane languages. It’ll be accessible to anyone who has ever taught a dog to roll over. “For me, it’s the coolest thing ever in programming,” Thrun says, “because now anyone can program.”
For much of computing history, we have taken an inside-out view of how machines work. First we write the code, then the machine expresses it. This worldview implied plasticity, but it also suggested a kind of rules-based determinism, a sense that things are the product of their underlying instructions. Machine learning suggests the opposite, an outside-in view in which code doesn’t just determine behavior, behavior also determines code. Machines are products of the world.
Ultimately we will come to appreciate both the power of handwritten linear code and the power of machine-learning algorithms to adjust it—the give-and-take of design and emergence. It’s possible that biologists have already started figuring this out. Gene-editing techniques like Crispr give them the kind of code-manipulating power that traditional software programmers have wielded. But discoveries in the field of epigenetics suggest that genetic material is not in fact an immutable set of instructions but rather a dynamic set of switches that adjusts depending on the environment and experiences of its host. Our code does not exist separate from the physical world; it is deeply influenced and transmogrified by it. Venter may believe cells are DNA-software-driven machines, but epigeneticist Steve Cole suggests a different formulation: “A cell is a machine for turning experience into biology.
And now, 80 years after Alan Turing first sketched his designs for a problem-solving machine, computers are becoming devices for turning experience into technology. For decades we have sought the secret code that could explain and, with some adjustments, optimize our experience of the world. But our machines won’t work that way for much longer—and our world never really did. We’re about to have a more complicated but ultimately more rewarding relationship with technology. We will go from commanding our devices to parenting them.

Editor at large Jason Tanz (@jasontanz) wrote about Andy Rubin’s new company, Playground, in issue 24.03.
This article appears in the June issue.
ORIGINAL: Wired

First Human Tests of Memory Boosting Brain Implant—a Big Leap Forward

By Hugo Angel,

“You have to begin to lose your memory, if only bits and pieces, to realize that memory is what makes our lives. Life without memory is no life at all.” — Luis Buñuel Portolés, Filmmaker
Image Credit: Shutterstock.com
Every year, hundreds of millions of people experience the pain of a failing memory.
The reasons are many:

  • traumatic brain injury, which haunts a disturbingly high number of veterans and football players; 
  • stroke or Alzheimer’s disease, which often plagues the elderly; or 
  • even normal brain aging, which inevitably touches us all.
Memory loss seems to be inescapable. But one maverick neuroscientist is working hard on an electronic cure. Funded by DARPA, Dr. Theodore Berger, a biomedical engineer at the University of Southern California, is testing a memory-boosting implant that mimics the kind of signal processing that occurs when neurons are laying down new long-term memories.
The revolutionary implant, already shown to help memory encoding in rats and monkeys, is now being tested in human patients with epilepsy — an exciting first that may blow the field of memory prosthetics wide open.
To get here, however, the team first had to crack the memory code.

Deciphering Memory
From the very onset, Berger knew he was facing a behemoth of a problem.
“We weren’t looking to match everything the brain does when it processes memory, but to at least come up with a decent mimic,” said Berger.
“Of course people asked: can you model it and put it into a device? Can you get that device to work in any brain? It’s those things that lead people to think I’m crazy. They think it’s too hard,” he said.
But the team had a solid place to start.
The hippocampus, a region buried deep within the folds and grooves of the brain, is the critical gatekeeper that transforms memories from short-lived to long-term. In dogged pursuit, Berger spent most of the last 35 years trying to understand how neurons in the hippocampus accomplish this complicated feat.
“At its heart, a memory is a series of electrical pulses that occur over time that are generated by a given number of neurons,” said Berger. “This is important — it suggests that we can reduce it to mathematical equations and put it into a computational framework,” he said.
Berger hasn’t been alone in his quest.
By listening to the chatter of neurons as an animal learns, teams of neuroscientists have begun to decipher the flow of information within the hippocampus that supports memory encoding. Key to this process is a strong electrical signal that travels from CA3, the “input” part of the hippocampus, to CA1, the “output” node.
“This signal is impaired in people with memory disabilities,” said Berger, “so of course we thought if we could recreate it using silicon, we might be able to restore — or even boost — memory.”

Bridging the Gap
Yet the brain’s memory code proved to be extremely tough to crack.
The problem lies in the non-linear nature of neural networks: signals are often noisy and constantly overlap in time, which leads to some inputs being suppressed or accentuated. In a network of hundreds and thousands of neurons, any small change could be greatly amplified and lead to vastly different outputs.
“It’s a chaotic black box,” laughed Berger.
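The sensitivity Berger describes can be seen in even the simplest nonlinear system. The sketch below uses the logistic map, a textbook chaotic system, purely as a stand-in for the far messier dynamics of real neurons; it is not a model of the hippocampus.

```python
# A microscopic change at the input of a nonlinear feedback system grows
# into a macroscopic difference at the output -- the core difficulty in
# modeling the hippocampal "black box" directly.
def trajectory(x, steps=80, r=3.9):
    """Iterate the logistic map x -> r*x*(1-x) and record every state."""
    states = []
    for _ in range(steps):
        x = r * x * (1 - x)   # simple nonlinear update, chaotic for r = 3.9
        states.append(x)
    return states

a = trajectory(0.2)
b = trajectory(0.2 + 1e-9)    # input differs by one part in a billion
divergence = max(abs(p - q) for p, q in zip(a, b))
print(f"largest gap between the two runs: {divergence:.3f}")
```

The two trajectories track each other closely at first, then separate completely: the hallmark of the amplification of small changes described above.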
With the help of modern computing techniques, however, Berger believes he may have a crude solution in hand. His proof?
Use his mathematical theorems to program a chip, and then see if the brain accepts the chip as a replacement — or additional — memory module.
Berger and his team began with a simple task using rats. They trained the animals to push one of two levers to get a tasty treat, and recorded the series of CA3 to CA1 electronic pulses in the hippocampus as the animals learned to pick the correct lever. The team carefully captured the way the signals were transformed as the session was laid down into long-term memory, and used that information — the electrical “essence” of the memory — to program an external memory chip.
They then injected the animals with a drug that temporarily disrupted their ability to form and access long-term memories, causing the animals to forget the reward-associated lever. Next, implanting microelectrodes into the hippocampus, the team pulsed CA1, the output region, with their memory code.
The results were striking — powered by an external memory module, the animals regained their ability to pick the right lever.
Encouraged by the results, Berger next tried his memory implant in monkeys, this time focusing on a brain region called the prefrontal cortex, which receives and modulates memories encoded by the hippocampus.
Placing electrodes into the monkeys’ brains, the team showed the animals a series of semi-repeated images, and captured the prefrontal cortex’s activity when the animals recognized an image they had seen earlier. Then, with a hefty dose of cocaine, the team inhibited that particular brain region, which disrupted the animals’ recall.
Next, using electrodes programmed with the “memory code,” the researchers guided the brain’s signal processing back on track — and the animal’s performance improved significantly.
A year later, the team further validated their memory implant by showing it could also rescue memory deficits due to hippocampal malfunction in the monkey brain.

A Human Memory Implant
Last year, the team cautiously began testing their memory implant prototype in human volunteers.
Because of the risks associated with brain surgery, the team recruited 12 patients with epilepsy, who already have electrodes implanted into their brain to track down the source of their seizures.
“Repeated seizures steadily destroy critical parts of the hippocampus needed for long-term memory formation,” explained Berger. So if the implant works, it could benefit these patients as well.
The team asked the volunteers to look through a series of pictures, and then recall which ones they had seen 90 seconds later. As the participants learned, the team recorded the firing patterns in both CA1 and CA3 — that is, the input and output nodes.
Using these data, the team extracted an algorithm — a specific human “memory code” — that could predict the pattern of activity in CA1 cells based on CA3 input. Compared to the brain’s actual firing patterns, the algorithm generated correct predictions roughly 80% of the time.
“It’s not perfect,” said Berger, “but it’s a good start.”
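The prediction task can be framed very simply: given a CA3 firing pattern, predict the CA1 pattern, and score the fraction of correct predictions. Berger’s actual model is a nonlinear multi-input multi-output (MIMO) model; the toy below substitutes a plain linear fit on synthetic data purely to illustrate that framing, with all sizes and noise levels invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "recordings": binary CA3 input patterns, CA1 outputs produced
# by an unknown transformation plus noise (standing in for real data).
n_trials, n_ca3, n_ca1 = 500, 20, 5
ca3 = rng.integers(0, 2, size=(n_trials, n_ca3)).astype(float)
hidden_w = rng.normal(size=(n_ca3, n_ca1))          # the transformation to recover
noise = 0.5 * rng.normal(size=(n_trials, n_ca1))
ca1 = (ca3 @ hidden_w + noise > 0).astype(float)

# Fit on the first half of the trials, score on the held-out second half.
train, test = slice(0, 250), slice(250, 500)
w, *_ = np.linalg.lstsq(ca3[train], ca1[train], rcond=None)
pred = (ca3[test] @ w > 0.5).astype(float)

accuracy = (pred == ca1[test]).mean()
print(f"fraction of held-out CA1 bins predicted correctly: {accuracy:.2f}")
```

The "roughly 80% correct" figure quoted above is exactly this kind of held-out accuracy, computed against the brain’s actual firing rather than synthetic data.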
Using this algorithm, the researchers have begun to stimulate the output cells with an approximation of the transformed input signal.
“We have already used the pattern to zap the brain of one woman with epilepsy,” said Dr. Dong Song, an associate professor working with Berger. But he remained coy about the result, saying only that although promising, it’s still too early to tell.
Song’s caution is warranted. Unlike the motor cortex, with its clear structured representation of different body parts, the hippocampus is not organized in any obvious way.
“It’s hard to understand why stimulating input locations can lead to predictable results,” said Dr. Thomas McHugh, a neuroscientist at the RIKEN Brain Science Institute. It’s also difficult to tell whether such an implant could save the memory of those who suffer from damage to the output node of the hippocampus.
“That said, the data is convincing,” McHugh acknowledged.
Berger, on the other hand, is ecstatic. “I never thought I’d see this go into humans,” he said.
But the work is far from done. Within the next few years, Berger wants to see whether the chip can help build long-term memories in a variety of different situations. After all, the algorithm was based on the team’s recordings of one specific task — what if the so-called memory code is not generalizable, instead varying based on the type of input that it receives?
Berger acknowledges that it’s a possibility, but he remains hopeful.
“I do think that we will find a model that’s a pretty good fit for most conditions,” he said. “After all, the brain is restricted by its own biophysics — there are only so many ways that electrical signals in the hippocampus can be processed.”
“The goal is to improve the quality of life for somebody who has a severe memory deficit,” said Berger. “If I can give them the ability to form new long-term memories for half the conditions that most people live in, I’ll be happy as hell, and so will most patients.”
ORIGINAL: Singularity Hub

Forward to the Future: Visions of 2045

By Hugo Angel,

DARPA asked the world and our own researchers what technologies they expect to see 30 years from now—and received insightful, sometimes funny predictions
Today—October 21, 2015—is famous in popular culture as the date 30 years in the future when Marty McFly and Doc Brown arrive in their time-traveling DeLorean in the movie “Back to the Future Part II.” The film got some things right about 2015, including in-home videoconferencing and devices that recognize people by their voices and fingerprints. But it also predicted trunk-sized fusion reactors, hoverboards and flying cars—game-changing technologies that, despite the advances we’ve seen in so many fields over the past three decades, still exist only in our imaginations.
A big part of DARPA’s mission is to envision the future and make the impossible possible. So ten days ago, as the “Back to the Future” day approached, we turned to social media and asked the world to predict: What technologies might actually surround us 30 years from now? We pointed people to presentations from DARPA’s Future Technologies Forum, held last month in St. Louis, for inspiration and a reality check before submitting their predictions.
Well, you rose to the challenge and the results are in. So in honor of Marty and Doc (little known fact: he is a DARPA alum) and all of the world’s innovators past and future, we present here some highlights from your responses, in roughly descending order by number of mentions for each class of futuristic capability:
  • Space: Interplanetary and interstellar travel, including faster-than-light travel; missions and permanent settlements on the Moon, Mars and the asteroid belt; space elevators
  • Transportation & Energy: Self-driving and electric vehicles; improved mass transit systems and intercontinental travel; flying cars and hoverboards; high-efficiency solar and other sustainable energy sources
  • Medicine & Health: Neurological devices for memory augmentation, storage and transfer, and perhaps to read people’s thoughts; life extension, including virtual immortality via uploading brains into computers; artificial cells and organs; “Star Trek”-style tricorder for home diagnostics and treatment; wearable technology, such as exoskeletons and augmented-reality glasses and contact lenses
  • Materials & Robotics: Ubiquitous nanotechnology, 3-D printing and robotics; invisibility and cloaking devices; energy shields; anti-gravity devices
  • Cyber & Big Data: Improved artificial intelligence; optical and quantum computing; faster, more secure Internet; better use of data analytics to improve use of resources
A few predictions inspired us to respond directly:
  • “Pizza delivery via teleportation”—DARPA took a close look at this a few years ago and decided there is plenty of incentive for the private sector to handle this challenge.
  • “Time travel technology will be close, but will be closely guarded by the military as a matter of national security”—We already did this tomorrow.
  • “Systems for controlling the weather”—Meteorologists told us it would be a job killer and we didn’t want to rain on their parade.
  • “Space colonies…and unlimited cellular data plans that won’t be slowed by your carrier when you go over a limit”—We appreciate the idea that these are equally difficult, but they are not. We think likable cell-phone data plans are beyond even DARPA and a total non-starter.
So seriously, as an adjunct to this crowd-sourced view of the future, we asked three DARPA researchers from various fields to share their visions of 2045, and why getting there will require a group effort with players not only from academia and industry but from forward-looking government laboratories and agencies:

Pam Melroy, an aerospace engineer, former astronaut and current deputy director of DARPA’s Tactical Technologies Office (TTO), foresees technologies that would enable machines to collaborate with humans as partners on tasks far more complex than those we can tackle today.
Justin Sanchez, a neuroscientist and program manager in DARPA’s Biological Technologies Office (BTO), imagines a world where neurotechnologies could enable users to interact with their environment and other people by thought alone.
Stefanie Tompkins, a geologist and director of DARPA’s Defense Sciences Office, envisions building substances from the atomic or molecular level up to create “impossible” materials with previously unattainable capabilities.
Check back with us in 2045—or sooner, if that time machine stuff works out—for an assessment of how things really turned out in 30 years.
# # #
Associated images posted on www.darpa.mil and video posted at www.youtube.com/darpatv may be reused according to the terms of the DARPA User Agreement, available here:http://www.darpa.mil/policy/usage-policy.
Tweet @darpa
ORIGINAL: DARPA
[email protected]
10/21/2015

Computer Learns to Write Its ABCs

By Hugo Angel,

Photo-illustration: Danqing Wang
A new computer model can now mimic the human ability to learn new concepts from a single example instead of the hundreds or thousands of examples it takes other machine learning techniques, researchers say.

The new model learned how to write invented symbols from the animated show Futurama as well as dozens of alphabets from across the world. It also showed it could invent symbols of its own in the style of a given language. The researchers suggest their model could also learn other kinds of concepts, such as speech and gestures.

Although scientists have made great advances in machine learning in recent years, people remain much better at learning new concepts than machines.

“People can learn new concepts extremely quickly, from very little data, often from only one or a few examples. You show even a young child a horse, a school bus, a skateboard, and they can get it from one example,” says study co-author Joshua Tenenbaum at the Massachusetts Institute of Technology. In contrast, “standard algorithms in machine learning require tens, hundreds or even thousands of examples to perform similarly.”

To shorten machine learning, researchers sought to develop a model that better mimicked human learning, which makes generalizations from very few examples of a concept. They focused on learning simple visual concepts — handwritten symbols from alphabets around the world.

“Our work has two goals: to better understand how people learn — to reverse engineer learning in the human mind — and to build machines that learn in more humanlike ways,” Tenenbaum says.

Whereas standard pattern recognition algorithms represent symbols as collections of pixels or arrangements of features, the new model the researchers developed represented each symbol as a simple computer program. For instance, the letter “A” is represented by a program that generates examples of that letter stroke by stroke when the program is run. No programmer is needed during the learning process — the model generates these programs itself.

Moreover, each program is designed to generate variations of each symbol whenever the programs are run, helping it capture the way instances of such concepts might vary, such as the differences between how two people draw a letter.
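The two ideas above, a character as a program and built-in variation on every run, can be sketched in a few lines. The stroke coordinates and jitter scale below are invented for this sketch; the real BPL model composes strokes from learned primitives rather than hand-written templates.

```python
import random

random.seed(0)

# A character represented as a small generative "program": a sequence of
# pen strokes, each a list of (x, y) control points.
LETTER_A = [                       # three strokes: left leg, right leg, crossbar
    [(0.0, 0.0), (0.5, 1.0)],
    [(0.5, 1.0), (1.0, 0.0)],
    [(0.25, 0.5), (0.75, 0.5)],
]

def draw(strokes, jitter=0.03):
    """Run the character program: emit each stroke with small random
    perturbations, mimicking variation between two people's handwriting."""
    return [[(x + random.uniform(-jitter, jitter),
              y + random.uniform(-jitter, jitter)) for (x, y) in stroke]
            for stroke in strokes]

sample1 = draw(LETTER_A)
sample2 = draw(LETTER_A)
print("same program, distinct instances:", sample1 != sample2)
```

Each call to `draw` yields a new instance of the concept “A”, which is exactly the property a pixel-grid representation lacks.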

“The idea for this algorithm came from a surprising finding we had while collecting a data set of handwritten characters from around the world. We found that if you ask a handful of people to draw a novel character, there is remarkable consistency in the way people draw,” says study lead author Brenden Lake at New York University. “When people learn or use or interact with these novel concepts, they do not just see characters as static visual objects. Instead, people see richer structure — something like a causal model, or a sequence of pen strokes — that describes how to efficiently produce new examples of the concept.”

The model also applies knowledge from previous concepts to speed the learning of new ones. For instance, the model can use knowledge learned from the Latin alphabet to learn the Greek alphabet. The researchers call their model the Bayesian program learning (BPL) framework.

The researchers applied their model to more than 1,600 types of handwritten characters in 50 writing systems, including Sanskrit, Tibetan, Gujarati, Glagolitic, and even invented characters such as those from the animated series Futurama and the online game Dark Horizon. In a kind of Turing test, scientists found that volunteers recruited via Amazon’s Mechanical Turk had difficulty distinguishing machine-written characters from human-written ones.

The scientists also had their model focus on creative tasks. They asked their system to create whole new concepts — for instance, creating a new Tibetan letter based on what it knew about letters in the Tibetan alphabet. The researchers found human volunteers rated machine-written characters on par with ones developed by humans recruited for the same task.

“We got human-level performance on this creative task,” says study co-author Ruslan Salakhutdinov at the University of Toronto.

Potential applications for this model could include

  • handwriting recognition,
  • speech recognition,
  • gesture recognition and
  • object recognition.
“Ultimately we’re trying to figure out how we can get systems that come closer to displaying human-like intelligence,” Salakhutdinov says. “We’re still very, very far from getting there, though.”
The scientists detailed their findings in the December 11 issue of the journal Science.

ORIGINAL: IEEE Spectrum

By Charles Q. Choi
Posted 10 Dec 2015 | 20:00 GMT

Scaling up synthetic-biology innovation

By Hugo Angel,

Gen9’s BioFab platform synthesizes small DNA fragments on silicon chips and uses other technologies to build longer DNA constructs from those fragments. Done in parallel, this produces hundreds to thousands of DNA constructs simultaneously. Shown here is an automated liquid-handling instrument that dispenses DNA onto the chips. Courtesy of Gen9
MIT professor’s startup makes synthesizing genes many times more cost effective.
Inside and outside of the classroom, MIT professor Joseph Jacobson has become a prominent figure in — and advocate for — the emerging field of synthetic biology.

As head of the Molecular Machines group at the MIT Media Lab, Jacobson has focused on, among other things, developing technologies for the rapid fabrication of DNA molecules. In 2009, he spun out some of his work into Gen9, which aims to boost synthetic-biology innovation by offering scientists more cost-effective tools and resources.
Headquartered in Cambridge, Massachusetts, Gen9 has developed a method for synthesizing DNA on silicon chips, which significantly cuts costs and accelerates the creation and testing of genes. Commercially available since 2013, the platform is now being used by dozens of scientists and commercial firms worldwide.
Synthetic biologists synthesize genes by combining strands of DNA. These new genes can be inserted into microorganisms such as yeast and bacteria. Using this approach, scientists can tinker with the cells’ metabolic pathways, enabling the microbes to perform new functions, including testing new antibodies, sensing chemicals in an environment, or creating biofuels.

But conventional gene-synthesizing methods can be time-consuming and costly. Chemical-based processes, for instance, cost roughly 20 cents per base pair — DNA’s key building block — and produce one strand of DNA at a time. This adds up in time and money when synthesizing genes comprising 100,000 base pairs.

Gen9’s chip-based DNA, however, drops the price to roughly 2 cents per base pair, Jacobson says. Additionally, hundreds of thousands of base pairs can be tested and compiled in parallel, as opposed to testing and compiling each pair individually through conventional methods.
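The quoted prices make the savings easy to check for the 100,000-base-pair project mentioned above:

```python
# Back-of-the-envelope check of the cost figures quoted in the article.
bases = 100_000                  # a large gene-synthesis project, in base pairs
chemical_cost = 0.20 * bases     # conventional chemical synthesis, ~20 cents/bp
chip_cost = 0.02 * bases         # Gen9 chip-based synthesis, ~2 cents/bp
print(f"conventional: ${chemical_cost:,.0f}   chip-based: ${chip_cost:,.0f}")
# → conventional: $20,000   chip-based: $2,000
```

A tenfold price drop per base pair translates directly into a tenfold drop per project, before counting the additional speedup from parallel testing.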

This means faster testing and development of new pathways — which usually takes many years — for applications such as advanced therapeutics, and more effective enzymes for detergents, food processing, and biofuels, Jacobson says. “If you can build thousands of pathways on a chip in parallel, and can test them all at once, you get to a working metabolic pathway much faster,” he says.

Over the years, Jacobson and Gen9 have earned many awards and honors. In November, Jacobson was also inducted into the National Inventors Hall of Fame for co-inventing E Ink, the electronic ink used for Amazon’s Kindle e-reader display.

Scaling gene synthesizing

Throughout the early- and mid-2000s, a few important pieces of research came together to allow for the scaling up of gene synthesis, which ultimately led to Gen9.

First, Jacobson and his students Chris Emig and Brian Chow began developing chips with thousands of “spots,” which each contained about 100 million copies of a different DNA sequence.

Then, Jacobson and another student, David Kong, created a process that used a certain enzyme as a catalyst to assemble those small DNA fragments into larger DNA strands inside microfluidics devices — “which was the first microfluidics assembly of DNA ever,” Jacobson says.

Despite the novelty, however, the process still wasn’t entirely cost effective. On average, it produced a 99 percent yield, meaning that about 1 percent of the base pairs didn’t match when constructing larger strands. That’s not so bad for making genes with 100 base pairs. “But if you want to make something that’s 10,000 or 100,000 bases long, that’s no good anymore,” Jacobson says.

Around 2004, Jacobson and then-postdoc Peter Carr, along with several other students, found a way to drastically increase yields by taking a cue from a natural error-correcting protein, Mut-S, which recognizes mismatches in DNA base pairing that occur when two DNA strands form a double helix. For synthetic DNA, the protein can detect and extract mismatches arising in base pairs synthesized on the chip, improving yields. In a paper published that year in Nucleic Acids Research, the researchers wrote that this process reduces the frequency of errors, from one in every 100 base pairs to around one in every 10,000.
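The quoted error rates show why the Mut-S correction step was decisive: the chance that every base in a construct is correct falls exponentially with length.

```python
# Why a 99 percent per-base yield "is no good anymore" for long constructs,
# using the error rates reported in the article.
per_base_raw = 1 - 1 / 100            # ~1 error per 100 bp, before correction
per_base_corrected = 1 - 1 / 10_000   # ~1 error per 10,000 bp, after Mut-S

length = 10_000                       # a long construct, in base pairs
p_raw = per_base_raw ** length        # probability the whole construct is error-free
p_corrected = per_base_corrected ** length
print(f"chance of an error-free 10 kb construct: "
      f"raw {p_raw:.1e}, corrected {p_corrected:.1%}")
```

At the uncorrected rate, an error-free 10,000-base construct is effectively impossible; after correction, better than a third of constructs come out clean.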

With these innovations, Jacobson launched Gen9 with two co-founders: George Church of Harvard University, who was also working on synthesizing DNA on microchips, and Drew Endy of Stanford University, a world leader in synthetic-biology innovations.

Together with employees, they created a platform called BioFab and several other tools for synthetic biologists. Today, clients use an online portal to order gene sequences. Then Gen9 designs and fabricates those sequences on chips and delivers them to customers. Recently, the startup updated the portal to allow drag-and-drop capabilities and options for editing and storing gene sequences.

This allows users to “make these very extensive libraries that have been inaccessible previously,” Jacobson says.


Fueling big ideas

Many published studies have already used Gen9’s tools, several of which are posted to the startup’s website. Notable ones, Jacobson says, include designing proteins for therapeutics. In those cases, the researcher needs to make 10 million or 100 million versions of a protein, each comprising maybe 50,000 pieces of DNA, to see which ones work best.

Instead of making and testing DNA sequences one at a time with conventional methods, Gen9 lets researchers test hundreds of thousands of sequences at once on a chip. This should increase the chances of finding the right protein more quickly. “If you just have one shot you’re very unlikely to hit the target,” Jacobson says. “If you have thousands or tens of thousands of shots on a goal, you have a much better chance of success.”


Currently, all the world’s synthetic-biology methods produce only about 300 million bases per year. About 10 of the chips Gen9 uses to make DNA can hold the same amount of content, Jacobson says. In principle, he says, the platform used to make Gen9’s chips — based on collaboration with manufacturing firm Agilent — could produce enough chips to cover about 200 billion bases. This is about the equivalent capacity of GenBank, an open-access database of DNA bases and gene sequences that has been constantly updated since the 1980s.
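The capacity figures in that paragraph can be sanity-checked directly from the article’s numbers:

```python
# Capacity comparison using the figures quoted in the article.
world_annual_bases = 300e6      # current global synthetic-DNA output per year
chips_for_world_output = 10     # chips said to hold that same amount of content
bases_per_chip = world_annual_bases / chips_for_world_output
platform_capacity = 200e9       # bases the chip platform could reach in principle
print(f"one chip holds about {bases_per_chip:.0e} bases; "
      f"the platform is about {platform_capacity / world_annual_bases:.0f}x "
      f"current world output")
```

So a single chip holds roughly 30 million bases, and the claimed platform ceiling is several hundred times today’s entire annual output, consistent with the GenBank comparison.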

Such technology could soon be worth a pretty penny: According to a study published in November by MarketsandMarkets, a major marketing research firm, the market for synthesizing short DNA strands is expected to reach roughly $1.9 billion by 2020.

Still, Gen9 is pushing to drop costs for synthesis to under 1 cent per base pair, Jacobson says. Additionally, for the past few years, the startup has hosted an annual G-Prize Competition, which awards 1 million base pairs of DNA to researchers with creative synthetic-biology ideas. That’s a prize worth roughly $100,000.

The aim, Jacobson says, is to remove cost barriers for synthetic biologists to boost innovation. “People have lots of ideas but are unable to try out those ideas because of cost,” he says. “This encourages people to think about bigger and bigger ideas.”

ORIGINAL: MIT News

Rob Matheson | MIT News Office
December 10, 2015

How swarm intelligence could save us from the dangers of AI

By Hugo Angel,

Image Credit: diez artwork/Shutterstock
We’ve heard a lot of talk recently about the dangers of artificial intelligence. From Stephen Hawking and Bill Gates, to Elon Musk, and Steve Wozniak, luminaries around the globe have been sounding the alarm, warning that we could lose control over this powerful technology — after all, AI is about creating systems that have minds of their own. A true AI could one day adopt goals and aspirations that harm us.
But what if we could enjoy the benefits of AI while ensuring that human values and sensibilities remain an integral part of the system?
This is where something called Artificial Swarm Intelligence comes in – a method for building intelligent systems that keeps humans in the loop, merging the power of computational algorithms with the wisdom, creativity, and intuition of real people. A number of companies around the world are already exploring swarms.

  • There’s Enswarm, a UK startup that is using swarm technologies to assist with recruitment and employment decisions.
  • There’s Swarm.fund, a startup using swarming and crypto-currencies like Bitcoin as a new model for fundraising.
  • And the human swarming company I founded, Unanimous A.I., creates a unified intellect from any group of networked users.
This swarm intelligence technology may sound like science fiction, but it has its roots in nature.
It all goes back to the birds and the bees – fish and ants too. Across countless species, social groups have developed methods of amplifying their intelligence by working together in closed-loop systems. Known commonly as flocks, schools, colonies, and swarms, these natural systems enable groups to combine their insights and thereby outperform individual members when solving problems and making decisions. Scientists call this “Swarm Intelligence” and it supports the old adage that many minds are better than one.
But what about us humans?
Clearly, we lack the natural ability to form closed-loop swarms, but like many other skills we can’t do naturally, emerging technologies are filling a void. Leveraging our vast networking infrastructure, new software techniques are allowing online groups to form artificial swarms that can work in synchrony to answer questions, reach decisions, and make predictions, all while exhibiting the same types of intelligence amplifications as seen in nature. The approach is sometimes called “blended intelligence” because it combines the hardware and software technologies used by AI systems with populations of real people, creating human-machine systems that have the potential of outsmarting both humans and pure-software AIs alike.
It should be noted that “swarming” is different from traditional “crowdsourcing,” which generally uses votes, polls, or surveys to aggregate opinions. While such methods are valuable for characterizing populations, they don’t employ the real-time feedback loops used by artificial swarms to enable a unique intelligent system to emerge. It’s the difference between measuring what the average member of a group thinks versus allowing that group to think together and draw conclusions based upon their combined knowledge and intuition.
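That structural difference can be sketched in code: a poll averages one-shot answers, while a closed-loop swarm lets agents keep adjusting in real time, with better-informed (more confident) agents yielding less to the group. All the numbers and the update rule below are invented purely to show the feedback structure; real swarming platforms use much richer dynamics.

```python
import random

random.seed(7)

TRUE_VALUE = 100.0
N = 40
noises = [random.uniform(2.0, 30.0) for _ in range(N)]        # private error scales
estimates = [TRUE_VALUE + random.gauss(0.0, s) for s in noises]
firmness = [1.0 / s for s in noises]                          # confident agents move less

poll_answer = sum(estimates) / N                              # static, one-shot aggregation

swarm = list(estimates)
for _ in range(300):                                          # real-time feedback rounds
    group = sum(swarm) / N
    swarm = [e + 0.3 * (1.0 - min(1.0, f)) * (group - e)      # nudge toward consensus
             for e, f in zip(swarm, firmness)]
swarm_answer = sum(swarm) / N

print(f"one-shot poll: {poll_answer:.1f}   swarm consensus: {swarm_answer:.1f}")
```

Whether the consensus lands nearer the true value than the poll depends on how well confidence tracks accuracy; the point of the sketch is only the closed-loop structure that separates swarming from vote aggregation.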
Outside of the companies I mentioned above, where else can such collective technologies be applied? One area that’s currently being explored is medical diagnosis, a process that requires deep factual knowledge along with the experiential wisdom of the practitioner. Can we merge the knowledge and wisdom of many doctors into a single emergent diagnosis that outperforms the diagnosis of a single practitioner? The answer appears to be yes. In a recent study conducted by Humboldt-University of Berlin and RAND Corporation, a computational collective of radiologists outperformed single practitioners when viewing mammograms, reducing false positives and false negatives. In a separate study conducted by John Carroll University and the Cleveland Clinic, a collective of 12 radiologists diagnosed skeletal abnormalities. As a computational collective, the radiologists produced a significantly higher rate of correct diagnosis than any single practitioner in the group. Of course, the potential of artificially merging many minds into a single unified intelligence extends beyond medical diagnosis to any field where we aim to exceed natural human abilities when making decisions, generating predictions, and solving problems.
Now, back to the original question of why Artificial Swarm Intelligence is a safer form of AI.
Although heavily reliant on hardware and software, swarming keeps human sensibilities and moralities as an integral part of the processes. As a result, this “human-in-the-loop” approach to AI combines the benefits of computational infrastructure and software efficiencies with the unique values that each person brings to the table:

  • creativity, 
  • empathy, 
  • morality, and 
  • justice. 

And because swarm-based intelligence is rooted in human input, the resulting intelligence is far more likely to be aligned with humanity – not just with our values and morals, but also with our goals and objectives.

How smart can an Artificial Swarm Intelligence get?
That’s still an open question, but with the potential to engage millions, even billions of people around the globe, each brimming with unique ideas and insights, swarm intelligence may be society’s best hope for staying one step ahead of the pure machine intelligences that emerge from busy AI labs around the world.
Louis Rosenberg is CEO of swarm intelligence company Unanimous A.I. He did his doctoral work at Stanford University in robotics, virtual reality, and human-computer interaction. He previously developed the first immersive augmented reality system as a researcher for the U.S. Air Force in the early 1990s and founded the VR company Immersion Corp and the 3D digitizer company Microscribe.
ORIGINAL: VentureBeat
NOVEMBER 22, 2015

DNA Is Multibillion-Year-Old Software

By Hugo Angel,

Illustration by Julia Suits, The New Yorker Cartoonist & author of The Extraordinary Catalog of Peculiar Inventions.
Illustration by Julia Suits, The New Yorker Cartoonist & author of The Extraordinary Catalog of Peculiar Inventions.
Nature invented software billions of years before we did. “The origin of life is really the origin of software,” says Gregory Chaitin. Life requires what software does (it’s foundationally algorithmic).

1. “DNA is multibillion-year-old software,” says Chaitin (inventor of mathematical metabiology). We’re surrounded by software, but couldn’t see it until we had suitable thinking tools.
2. Alan Turing described modern software in 1936, inspiring John Von Neumann to connect software to biology. Before DNA was understood, Von Neumann saw that self-reproducing automata needed software. We now know DNA stores information; it’s a biochemical version of Turing’s software tape, but more generally: All that lives must process information. Biology’s basic building blocks are processes that make decisions.
3. Casting life as software provides many technomorphic insights (and mis-analogies), but let’s consider just its informational complexity. Do life’s patterns fit the tools of simpler sciences, like physics? How useful are experiments? Algebra? Statistics?
4. The logic of life is more complex than the inanimate sciences need. The deep structures of life’s interactions are algorithmic (loosely, algorithms = logic with if-then-else controls). Can physics-friendly algebra capture life’s biochemical computations?
5. Describing its “pernicious influence” on science, Jack Schwartz says, mathematics succeeds in only “the simplest of situations” or when “rare good fortune makes [a] complex situation hinge upon a few dominant simple factors.”
6. Physics has low “causal density” — a great Jim Manzi coinage. Nothing in physics chooses. Or changes how it chooses. A few simple factors dominate, operating on properties that generally combine in simple ways. Its parameters are independent. Its algebra-friendly patterns generalize well (its equations suit stable categories and equilibrium states).
7. Higher-causal-density domains mean harder experiments (many hard-to-control factors that often can’t be varied independently). Fields like medicine can partly counter their complexity by randomized trials, but reliable generalization requires biological “uniformity of response.”
8. Social sciences have even higher causal densities, so “generalizing from even properly randomized experiments” is “hazardous,” Manzi says. “Omitted variable bias” in human systems is “massive.” Randomization ≠ guaranteed representativeness of results.
9. Complexity economist Brian Arthur says science’s pattern-grasping toolbox is becoming “more algorithmic … and less equation-based.” But the nascent algorithmic era hasn’t had its Newton yet.
10. With studies in high-causal-density fields, always consider how representative data is, and ponder if uniform or stable responses are plausible. Human systems are often highly variable; our behaviors aren’t homogenous; they can change types; they’re often not in equilibrium.
11. Bad examples: Malcolm Gladwell puts entertainment first (again) by asserting that “the easiest way to raise people’s scores” is to make a test less readable (n = 40 study, later debunked). Also succumbing to unwarranted extrapolation, leading data-explainer Ezra Klein said, “Cutting-edge research shows that the more information partisans get, the deeper their disagreements.” That study neither represents all kinds of information, nor is a uniform response likely (in fact, assuming that would be ridiculous). Such rash generalizations = a far-from-spotless record.
Mismatched causal density and thinking tools create errors. Entire fields are built on assuming such (mismatched) metaphors and methods.
Related
Policausal sciences; Newton pattern vs. Darwin pattern; the two kinds of data (history ≠ nomothetic); life = game theoretic = fundamentally algorithmic.
(Hat tip to Bryan Atkins @postgenetic for pointer to Brian Arthur).
ORIGINAL: Big Think
5 MONTHS AGO

Robotic insect mimics Nature’s extreme moves

By Hugo Angel,

An international team of Seoul National University and Harvard researchers looked to water strider insects to develop robots that jump off water’s surface
(SEOUL and BOSTON) — The concept of walking on water might sound supernatural, but in fact it is a quite natural phenomenon. Many small living creatures leverage water’s surface tension to maneuver themselves around. One of the most complex maneuvers, jumping on water, is achieved by a species of semi-aquatic insects called water striders that not only skim along water’s surface but also generate enough upward thrust with their legs to launch themselves airborne from it.


In this video, watch how novel robotic insects developed by a team of Seoul National University and Harvard scientists can jump directly off water’s surface. The robots emulate the natural locomotion of water strider insects, which skim on and jump off the surface of water. Credit: Wyss Institute at Harvard University
Now, emulating this natural form of water-based locomotion, an international team of scientists from Seoul National University, Korea (SNU), Harvard’s Wyss Institute for Biologically Inspired Engineering, and the Harvard John A. Paulson School of Engineering and Applied Sciences, has unveiled a novel robotic insect that can jump off of water’s surface. In doing so, they have revealed new insights into the natural mechanics that allow water striders to jump from rigid ground or fluid water with the same amount of power and height. The work is reported in the July 31 issue of Science.
“Water’s surface needs to be pressed at the right speed for an adequate amount of time, up to a certain depth, in order to achieve jumping,” said the study’s co–senior author Kyu Jin Cho, Associate Professor in the Department of Mechanical and Aerospace Engineering and Director of the Biorobotics Laboratory at Seoul National University. “The water strider is capable of doing all these things flawlessly.”
The water strider, whose legs have slightly curved tips, employs a rotational leg movement to aid in its takeoff from the water’s surface, discovered co–senior author Ho–Young Kim, who is Professor in SNU’s Department of Mechanical and Aerospace Engineering and Director of SNU’s Micro Fluid Mechanics Lab. Kim, a former Wyss Institute Visiting Scholar, worked with the study’s co–first author Eunjin Yang, a graduate researcher at SNU’s Micro Fluid Mechanics lab, to collect water striders and take extensive videos of their movements to analyze the mechanics that enable the insects to skim on and jump off water’s surface.
It took the team several trial-and-error attempts to fully understand the mechanics of the water strider, using robotic prototypes to test and shape their hypotheses.
“If you apply as much force as quickly as possible on water, the limbs will break through the surface and you won’t get anywhere,” said Robert Wood, Ph.D., who is a co–author on the study, a Wyss Institute Core Faculty member, the Charles River Professor of Engineering and Applied Sciences at the Harvard Paulson School, and founder of the Harvard Microrobotics Lab.
But by studying water striders in comparison to iterative prototypes of their robotic insect, the SNU and Harvard team discovered that the best way to jump off of water is to maintain leg contact on the water for as long as possible during the jump motion.
“Using its legs to push down on water, the natural water strider exerts the maximum amount of force just below the threshold that would break the water’s surface,” said the study’s co-first author Je-Sung Koh, Ph.D., who was pursuing his doctoral degree at SNU during the majority of this research and is now a Postdoctoral Fellow at the Wyss Institute and the Harvard Paulson School.
Mimicking these mechanics, the robotic insect built by the team can exert up to 16 times its own body weight on the water’s surface without breaking through, and can do so without complicated controls. Many natural organisms such as the water strider can perform extreme styles of locomotion – such as flying, floating, swimming, or jumping on water – with great ease despite a lack of complex cognitive skills.

From left, Seoul National University (SNU) professors Ho-Young Kim, Ph.D., and Kyu Jin Cho, Ph.D., observe the semi-aquatic jumping robotic insects developed by an SNU and Harvard team. Credit: Seoul National University.
“This is due to their natural morphology,” said Cho. “It is a form of embodied or physical intelligence, and we can learn from this kind of physical intelligence to build robots that are similarly capable of performing extreme maneuvers without highly–complex controls or artificial intelligence.”
The robotic insect was built using a “torque reversal catapult mechanism” inspired by the way a flea jumps, which allows this kind of extreme locomotion without intelligent control. It was first reported by Cho, Wood, and Koh at the 2013 International Conference on Intelligent Robots and Systems.
For the robotic insect to jump off water, the lightweight catapult mechanism uses a burst of momentum coupled with limited thrust to propel the robot off the water without breaking the water’s surface. An automatic triggering mechanism, built from composite materials and actuators, was employed to activate the catapult.
To produce the body of the robotic insect, “pop-up” manufacturing was used to create folded composite structures that self-assemble much like the foldable components that “pop–up” in 3D books. Devised by engineers at the Harvard Paulson School and the Wyss Institute, this ingenious layering and folding process enables the rapid fabrication of microrobots and a broad range of electromechanical devices.
“The resulting robotic insects can achieve the same momentum and height that could be generated during a rapid jump on firm ground – but instead can do so on water – by spreading out the jumping thrust over a longer amount of time and in sustaining prolonged contact with the water’s surface,” said Wood.
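The tradeoff Wood describes is an impulse argument: takeoff momentum equals average force times contact time, so the same momentum can come from a brief hard push (fine on rigid ground) or a longer, gentler one that stays below the force that would rupture the water's surface. A minimal sketch with entirely hypothetical numbers (none are measured values from the study):

```python
# Toy impulse model: momentum gained = average force x contact time.
mass = 0.068e-3        # kg, hypothetical robot mass (~68 mg)
takeoff_speed = 1.6    # m/s, hypothetical target takeoff speed

needed_impulse = mass * takeoff_speed  # kg*m/s, same for any jump style

# Option A: ground-style jump -- large force over a short time
force_a = 0.022                        # N, hypothetical
time_a = needed_impulse / force_a      # s

# Option B: water jump -- smaller force sustained for longer
force_b = 0.005                        # N, hypothetical
time_b = needed_impulse / force_b      # s

surface_break_force = 0.010  # N, hypothetical surface-tension limit

print(force_a > surface_break_force)  # True: this push would punch through
print(force_b > surface_break_force)  # False: stays below the threshold
# Both strategies deliver the identical impulse, hence the same momentum:
print(abs(force_a * time_a - force_b * time_b) < 1e-12)  # True
```

Only option B is available on water, which is why prolonged leg contact, rather than a faster or harder push, is the winning strategy the team observed in both the insects and their robot.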
“This international collaboration of biologists and roboticists has not only looked into nature to develop a novel, semi–aquatic bioinspired robot that performs a new extreme form of robotic locomotion, but has also provided us with new insights on the natural mechanics at play in water striders,” said Wyss Institute Founding Director Donald Ingber, M.D., Ph.D.
Additional co–authors of the study include Gwang–Pil Jung, a Ph.D. candidate in SNU’s Biorobotics Laboratory; Sun–Pill Jung, an M.S. candidate in SNU’s Biorobotics Laboratory; Jae Hak Son, who earned his Ph.D. in SNU’s Laboratory of Behavioral Ecology and Evolution; Sang–Im Lee, Ph.D., who is Research Associate Professor at SNU’s Institute of Advanced Machines and Design and Adjunct Research Professor at the SNU’s Laboratory of Behavioral Ecology and Evolution; and Piotr Jablonski, Ph.D., who is Professor in SNU’s Laboratory of Behavioral Ecology and Evolution.
This work was supported by the National Research Foundation of Korea, Bio–Mimetic Robot Research Center funding from the Defense Acquisition Program Administration, and the Wyss Institute for Biologically Inspired Engineering at Harvard University.
IMAGE AND VIDEO AVAILABLE
###
PRESS CONTACTS
Seoul National University College of Engineering
Wyss Institute for Biologically Inspired Engineering at Harvard University
Harvard University John A. Paulson School of Engineering and Applied Sciences
The Seoul National University College of Engineering (SNU CE) (http://eng.snu.ac.kr/english/index.php) aims to foster leaders in global industry and society. In CE, professors from all over the world are applying their passion for education and research. Graduates of the college are taking on important roles in society as the CEOs of conglomerates, founders of venture businesses, and prominent engineers, contributing to the country’s industrial development. Globalization is the trend of a new era, and engineering in particular is a field of boundless competition and cooperation. The role of engineers is crucial to our 21st century knowledge and information society, and engineers contribute to the continuous development of Korea toward a central role on the world stage. CE, which provides enhanced curricula in a variety of major fields, has now become the environment in which future global leaders are cultivated.
The Wyss Institute for Biologically Inspired Engineering at Harvard University (http://wyss.harvard.edu) uses Nature’s design principles to develop bioinspired materials and devices that will transform medicine and create a more sustainable world. Wyss researchers are developing innovative new engineering solutions for healthcare, energy, architecture, robotics, and manufacturing that are translated into commercial products and therapies through collaborations with clinical investigators, corporate alliances, and formation of new start–ups. The Wyss Institute creates transformative technological breakthroughs by engaging in high risk research, and crosses disciplinary and institutional barriers, working as an alliance that includes Harvard’s Schools of Medicine, Engineering, Arts & Sciences and Design, and in partnership with Beth Israel Deaconess Medical Center, Brigham and Women’s Hospital, Boston Children’s Hospital, Dana–Farber Cancer Institute, Massachusetts General Hospital, the University of Massachusetts Medical School, Spaulding Rehabilitation Hospital, Boston University, Tufts University, and Charité – Universitätsmedizin Berlin, University of Zurich and Massachusetts Institute of Technology.
The Harvard University John A. Paulson School of Engineering and Applied Sciences (http://seas.harvard.edu) serves as the connector and integrator of Harvard’s teaching and research efforts in engineering, applied sciences, and technology. Through collaboration with researchers from all parts of Harvard, other universities, and corporate and foundational partners, we bring discovery and innovation directly to bear on improving human life and society.
ORIGINAL: Wyss Institute
Jul 30, 2015