Category: Biology

How a Japanese cucumber farmer is using deep learning and TensorFlow.

By Hugo Angel,

by Kaz Sato, Developer Advocate, Google Cloud Platform
August 31, 2016
It’s not hyperbole to say that use cases for machine learning and deep learning are only limited by our imaginations. About one year ago, a former embedded systems designer from the Japanese automobile industry named Makoto Koike started helping out at his parents’ cucumber farm, and was amazed by the amount of work it takes to sort cucumbers by size, shape, color and other attributes.
Makoto’s father is very proud of his thorny cucumbers, for instance, having dedicated his life to delivering fresh and crispy cucumbers with many prickles still on them. Straight and thick cucumbers with a vivid color and lots of prickles are considered premium grade and command much higher prices on the market.
But Makoto learned very quickly that sorting cucumbers is as hard and tricky as actually growing them. “Each cucumber has a different color, shape, quality and freshness,” Makoto says.
Cucumbers from retail stores
Cucumbers from Makoto’s farm
In Japan, each farm has its own classification standard and there’s no industry standard. At Makoto’s farm, they sort them into nine different classes, and his mother sorts them all herself — spending up to eight hours per day at peak harvesting times.
“The sorting work is not an easy task to learn. You have to look at not only the size and thickness, but also the color, texture, small scratches, whether or not they are crooked and whether they have prickles. It takes months to learn the system and you can’t just hire part-time workers during the busiest period. I myself only recently learned to sort cucumbers well,” Makoto said.
Distorted or crooked cucumbers are ranked as low-quality product
There are also some automatic sorters on the market, but they have limitations in terms of performance and cost, and small farms don’t tend to use them.
Makoto doesn’t think sorting is an essential task for cucumber farmers. “Farmers want to focus and spend their time on growing delicious vegetables. I’d like to automate the sorting tasks before taking the farm business over from my parents.”
Makoto Koike, center, with his parents at the family cucumber farm
Makoto Koike, family cucumber farm
The many uses of deep learning
Makoto first got the idea to explore machine learning for sorting cucumbers from a completely different use case: Google’s AlphaGo competing against the world’s top professional Go player.
“When I saw Google’s AlphaGo, I realized something really serious is happening here,” said Makoto. “That was the trigger for me to start developing the cucumber sorter with deep learning technology.”
Using deep learning for image recognition allows a computer to learn from a training data set what the important “features” of the images are. By using a hierarchy of numerous artificial neurons, deep learning can automatically classify images with a high degree of accuracy. Thus, neural networks can recognize different species of cats, or models of cars or airplanes from images. Sometimes neural networks can exceed the performance of the human eye for certain applications. (For more information, check out my previous blog post Understanding neural networks with TensorFlow Playground.)

TensorFlow democratizes the power of deep learning
But can computers really learn mom’s art of cucumber sorting? Makoto set out to see whether he could use deep learning technology for sorting using Google’s open source machine learning library, TensorFlow.
“Google had just open-sourced TensorFlow, so I started trying it out with images of my cucumbers,” Makoto said. “This was the first time I tried out machine learning or deep learning technology, and right away I got much higher accuracy than I expected. That gave me the confidence that it could solve my problem.”
With TensorFlow, you don’t need to be knowledgeable about the advanced math models and optimization algorithms needed to implement deep neural networks. Just download the sample code and read the tutorials and you can get started in no time. The library lowers the barrier to entry for machine learning significantly, and since Google open-sourced TensorFlow last November, many “non ML” engineers have started playing with the technology with their own datasets and applications.

Cucumber sorting system design
Here’s a systems diagram of the cucumber sorter that Makoto built. The system uses a Raspberry Pi 3 as the main controller to take images of the cucumbers with a camera. It then:

  • runs a small-scale neural network on TensorFlow to detect whether or not the image contains a cucumber, and
  • forwards the image to a larger TensorFlow neural network running on a Linux server to perform a more detailed classification.
Systems diagram of the cucumber sorter
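The two-phase split above can be sketched in a few lines. This is an illustrative mock-up, not Makoto's code: the stand-in rules below replace what are really two TensorFlow models, and all field names are hypothetical. The point is the control flow: a cheap gate on the Raspberry Pi rejects non-cucumber frames so that only real candidates are sent to the server-side classifier.

```python
# Sketch of the two-phase sorting pipeline. The real system runs two
# TensorFlow networks; simple rules stand in for them here, and the
# image fields ("shape", "length_cm", ...) are invented for illustration.

def is_cucumber(image):
    # Phase 1 (on the Raspberry Pi): a small, cheap check that filters
    # out frames with no cucumber before anything crosses the network.
    return image.get("shape") == "elongated"

def classify_grade(image):
    # Phase 2 (on the Linux server): the detailed classifier that would
    # assign one of the farm's nine grades; a toy two-grade rule here.
    if image["length_cm"] >= 20 and image["straightness"] >= 0.9:
        return "premium"
    return "standard"

def sort_cucumber(image):
    if not is_cucumber(image):
        return None  # nothing to sort in this frame
    return classify_grade(image)

frame = {"shape": "elongated", "length_cm": 22, "straightness": 0.95}
print(sort_cucumber(frame))  # premium
```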
Makoto used the sample TensorFlow code Deep MNIST for Experts with minor modifications to the convolution, pooling and last layers, changing the network design to adapt to the pixel format of cucumber images and the number of cucumber classes.
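Much of that adaptation is arithmetic. The tutorial network applies two 2x2 max-pooling layers, so a 28x28 MNIST image shrinks to 7x7 before the fully connected layer, while an 80x80 cucumber image shrinks to 20x20; the dense layer's input size and the output layer's class count must change accordingly. A rough sketch of the calculation (the 64 feature maps follow the tutorial network; the cucumber-side figures are inferred from the article, not confirmed details of Makoto's model):

```python
# Layer-size arithmetic for adapting the Deep MNIST network (28x28
# input, 10 classes) to 80x80 cucumber images and 9 classes.

def pooled_size(side, n_pools):
    # Each 2x2 max-pool (with SAME-padded convolutions) halves the
    # spatial dimensions.
    for _ in range(n_pools):
        side = side // 2
    return side

mnist_side = pooled_size(28, 2)      # 28 -> 14 -> 7
cucumber_side = pooled_size(80, 2)   # 80 -> 40 -> 20

# Flattened input to the first fully connected layer, assuming the
# tutorial's 64 feature maps after the second convolution:
print(mnist_side * mnist_side * 64)        # 3136 for MNIST
print(cucumber_side * cucumber_side * 64)  # 25600 for 80x80 images
# Output layer: 10 classes for MNIST vs. 9 cucumber grades.
```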
Here’s Makoto’s cucumber sorter, which went live in July:
Here’s a close-up of the sorting arm, and the camera interface:

And here is the cucumber sorter in action:

Pushing the limits of deep learning
One of the current challenges with deep learning is that you need a large amount of training data. To train the model, Makoto spent about three months taking 7,000 pictures of cucumbers sorted by his mother, but even that is probably not enough.
“When I did a validation with the test images, the recognition accuracy exceeded 95%. But if you apply the system to real use cases, the accuracy drops down to about 70%. I suspect the neural network model has the issue of ‘overfitting’ (the phenomenon in neural networks where the model is trained to fit only the small training dataset) because of the insufficient number of training images.”
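When taking more photographs is impractical, one common low-cost mitigation for this kind of overfitting is data augmentation: generating extra training examples by flipping or shifting the originals. A generic sketch of the idea, with plain Python lists standing in for image arrays (the article does not say whether Makoto used augmentation):

```python
# Data augmentation sketch: triple a small image dataset with flips.
# Images are toy 2x3 grids of pixel values (lists of lists).

def flip_horizontal(img):
    return [row[::-1] for row in img]

def flip_vertical(img):
    return img[::-1]

def augment(dataset):
    out = []
    for img in dataset:
        out.extend([img, flip_horizontal(img), flip_vertical(img)])
    return out

images = [[[1, 2, 3],
           [4, 5, 6]]]
augmented = augment(images)
print(len(augmented))   # 3 examples from 1 original
print(augmented[1])     # [[3, 2, 1], [6, 5, 4]]
```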
The second challenge of deep learning is that it consumes a lot of computing power. The current sorter uses a typical Windows desktop PC to train the neural network model. Although it converts the cucumber image into 80 x 80 pixel low-resolution images, it still takes two to three days to complete training the model with 7,000 images.
“Even with this low-res image, the system can only classify a cucumber based on its shape, length and level of distortion. It can’t recognize color, texture, scratches and prickles,” Makoto explained. Increasing image resolution by zooming into the cucumber would result in much higher accuracy, but would also increase the training time significantly.
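The trade-off can be made concrete with a toy nearest-neighbor downscaler (plain Python lists standing in for pixel arrays; real pipelines would use an image library): shrinking an image discards most of its pixels, which is exactly why fine detail like texture, scratches and prickles becomes invisible to the model.

```python
# Nearest-neighbor downscaling sketch: the kind of resolution
# reduction described above trades fine detail for speed and memory.

def resize_nearest(img, new_h, new_w):
    old_h, old_w = len(img), len(img[0])
    # For each target pixel, copy the nearest source pixel.
    return [[img[r * old_h // new_h][c * old_w // new_w]
             for c in range(new_w)]
            for r in range(new_h)]

# A 4x4 toy "image" shrunk to 2x2: only one pixel in four survives.
img = [[1, 1, 2, 2],
       [1, 1, 2, 2],
       [3, 3, 4, 4],
       [3, 3, 4, 4]]
print(resize_nearest(img, 2, 2))  # [[1, 2], [3, 4]]
```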
To improve deep learning, some large enterprises have started doing large-scale distributed training, but those servers come at an enormous cost. Google offers Cloud Machine Learning (Cloud ML), a low-cost cloud platform for training and prediction that dedicates hundreds of cloud servers to training a network with TensorFlow. With Cloud ML, Google handles building a large-scale cluster for distributed training, and you just pay for what you use, making it easier for developers to try out deep learning without making a significant capital investment.
These specialized servers were used in the AlphaGo match
Makoto is eagerly awaiting Cloud ML. “I could use Cloud ML to try training the model with much higher resolution images and more training data. Also, I could try changing the various configurations, parameters and algorithms of the neural network to see how that improves accuracy. I can’t wait to try it.”

A lab founded by a tech billionaire just unveiled a major leap forward in cracking your brain’s code


This is definitely not a scene from “A Clockwork Orange.” Allen Brain Observatory
As the mice watched a computer screen, their glowing neurons pulsed through glass windows in their skulls.
Using a device called a two-photon microscope, researchers at the Allen Institute for Brain Science could peer through those windows and record, layer by layer, the workings of their little minds.
The result, announced July 13, is a real-time record of the visual cortex — a brain region shared in similar form across mammalian species — at work. The data set that emerged is so massive and complete that its creators have named it the Allen Brain Observatory.
Bred for the lab, the mice were genetically modified so that specific cells in their brains would fluoresce when they became active. Researchers had installed the brain-windows surgically, slicing away tiny chunks of the rodents’ skulls and replacing them with five-millimeter skylights.
Sparkling neurons of the mouse visual cortex shone through the glass as images and short films flashed across the screen. Each point of light the researchers saw translated, with hours of careful processing, into data: 
  • Which cell lit up? 
  • Where in the brain? 
  • How long did it glow? 
  • What was the mouse doing at the time? 
  • What was on the screen?

The researchers imaged the neurons in small groups, building a map of one microscopic layer before moving down to the next. When they were finished, the activities of 18,000 cells from several dozen mice were recorded in their database.

“This is the first data set where we’re watching large populations of neurons’ activity in real time, at the cellular level,” said Saskia de Vries, a scientist who worked on the project at the private research center launched by Microsoft co-founder Paul Allen.
The problem the Brain Observatory wants to solve is straightforward. Science still does not understand the brain’s underlying code very well, and individual studies may turn up odd results that are difficult to interpret in the context of the whole brain.
A decade ago, for example, a widely-reported study appeared to find a single neuron in a human brain that always — and only — winked on when presented with images of Halle Berry. Few scientists suggested that this single cell actually stored the subject’s whole knowledge of Berry’s face. But without more context about what the cells around it were doing, a more complete explanation remained out of reach.
“When you’re listening to a cell with an electrode, all you’re hearing is [its activity level] spiking,” said Shawn Olsen, another researcher on the project. “And you don’t know where exactly that cell is, you don’t know its precise location, you don’t know its shape, you don’t know who it connects to.”
Imagine trying to assemble a complete understanding of a computer given only facts like “under certain circumstances, clicking the mouse makes lights on the printer blink.”
To get beyond that kind of feeling around in the dark, the Allen Institute has taken what Olsen calls an “industrial” approach to mapping out the brain’s activity.
“Our goal is to systematically march through the different cortical layers, and the different cell types, and the different areas of the cortex to produce a systematic, mostly comprehensive survey of the activity,” Olsen explained. “It doesn’t just describe how one cell type is responding or one particular area, but characterizes as much as we can a complete population of cells that will allow us to draw inferences that you couldn’t describe if you were just looking at one cell at a time.”
In other words, this project makes its impact through the grinding power of time and effort.
A visualization of cells examined in the project. Allen Brain Observatory

Researchers showed the mice moving horizontal or vertical lines, light and dark dots on a surface, natural scenes, and even clips from Hollywood movies.

The more abstract displays target how the mind sees and interprets light and dark, lines, and motion, building on existing neuroscience. Researchers have known for decades that particular cells appear to correspond to particular kinds of motion or shape, or positions in the visual field. This research helps them place the activity of those cells in context.
One of the most obvious results was that the brain is noisy, messy, and confusing.
“Even though we showed the same image, we could get dramatically different responses from the same cell. On one trial it may have a strong response, on another it may have a weak response,” Olsen said.
All that noise in their data is one of the things that differentiates it from a typical study, de Vries said.
“If you’re inserting an electrode you’re going to keep advancing until you find a cell that kind of responds the way you want it to,” she said. “By doing a survey like this we’re going to see a lot of cells that don’t respond to the stimuli in the way that we think they should. We’re realizing that the cartoon model that we have of the cortex isn’t completely accurate.”

Olsen said they suspect a lot of that noise emerges from whatever the mouse is thinking about or doing that has nothing to do with what’s on screen. They recorded videos of the mice during data collection to help researchers combing their data learn more about those effects.
The best evidence for this suspicion? When they showed the mice more interesting visuals, like pictures of animals or clips from the film “Touch of Evil,” the neurons behaved much more consistently.
“We would present each [clip] ten different times,” de Vries said. “And we can see from trial to trial many cells at certain times almost always respond — reliable, repeatable, robust responses.”
In other words, it appears the mice were paying attention.
Allen Brain Observatory

The Brain Observatory was turned loose on the internet Wednesday, with its data available for researchers and the public to comb through, explore, and maybe critique.

But the project isn’t over.
In the next year-and-a-half, the researchers intend to add more types of cells and more regions of the visual cortex to their observatory. And their long-term ambitions are even grander.
“Ultimately,” Olsen said, “we want to understand how this visual information in the mouse’s brain gets used to guide behavior and memory and cognition.”
Right now, the mice just watch screens. But by training them to perform tasks based on what they see, he said they hope to crack the mysteries of memory, decision-making, and problem-solving. Another parallel observatory created using electrode arrays instead of light through windows will add new levels of richness to their data.
So the underlying code of mouse — and human — brains remains largely a mystery, but the map that we’ll need to unlock it grows richer by the day.
ORIGINAL: Tech Insider

Jul. 13, 2016

The Quest to Make Code Work Like Biology Just Took A Big Step



IN THE EARLY 1970s, at Silicon Valley’s Xerox PARC, Alan Kay envisioned computer software as something akin to a biological system, a vast collection of small cells that could communicate via simple messages. Each cell would perform its own discrete task. But in communicating with the rest, it would form a more complex whole. “This is an almost foolproof way of operating,” Kay once told me. Computer programmers could build something large by focusing on something small. That’s a simpler task, and in the end, the thing you build is stronger and more efficient. 
The result was a programming language called SmallTalk. Kay called it an object-oriented language—the “objects” were the cells—and it spawned so many of the languages that programmers use today, from Objective-C and Swift, which run all the apps on your Apple iPhone, to Java, Google’s language of choice on Android phones. Kay’s vision of code as biology is now the norm. It’s how the world’s programmers think about building software.

In the ’70s, Alan Kay was a researcher at Xerox PARC, where he helped develop the notion of personal computing, the laptop, the now ubiquitous overlapping-window interface, and object-oriented programming.
But Kay’s big idea extends well beyond individual languages like Swift and Java. This is also how Google, Twitter, and other Internet giants now think about building and running their massive online services. The Google search engine isn’t software that runs on a single machine. Serving millions upon millions of people around the globe, it’s software that runs on thousands of machines spread across multiple computer data centers. Google runs this entire service like a biological system, as a vast collection of self-contained pieces that work in concert. It can readily spread those cells of code across all those machines, and when machines break—as they inevitably do—it can move code to new machines and keep the whole alive. 
Now, Adam Jacob wants to bring this notion to every other business on earth. Jacob is a bearded former comic-book-store clerk who, in the grand tradition of Alan Kay, views technology like a philosopher. He’s also the chief technology officer and co-founder of Chef, a Seattle company that has long helped businesses automate the operation of their online services through a techno-philosophy known as “DevOps.” Today, he and his company unveiled a new creation they call Habitat. Habitat is a way of packaging entire applications into something akin to Alan Kay’s biological cells, squeezing in not only the application code but everything needed to run, oversee, and update that code—all its “dependencies,” in programmer-speak. Then you can deploy hundreds or even thousands of these cells across a network of machines, and they will operate as a whole, with Habitat handling all the necessary communication between each cell. “With Habitat,” Jacob says, “all of the automation travels with the application itself.” 
That’s something that will at least capture the imagination of coders. And if it works, it will serve the rest of us too. If businesses push their services towards the biological ideal, then we, the people who use those services, will end up with technology that just works better—that coders can improve more easily and more quickly than before.
Reduce, Reuse, Repackage 
Habitat is part of a much larger effort to remake any online business in the image of Google. Alex Polvi, CEO and founder of a startup called CoreOS, calls this movement GIFEE—or Google Infrastructure For Everyone Else—and it includes tools built by CoreOS as well as such companies as Docker and Mesosphere, not to mention Google itself. The goal: to create tools that more efficiently juggle software across the vast computer networks that drive the modern digital world. 
But Jacob seeks to shift this idea’s center of gravity. He wants to make it as easy as possible for businesses to run their existing applications in this enormously distributed manner. He wants businesses to embrace this ideal even if they’re not willing to rebuild those applications or the computer platforms they run on. He aims to provide a way of wrapping any code—new or old—in an interface that can run on practically any machine. Rather than rebuilding your operation in the image of Google, Jacob says, you can simply repackage it.
“If what I want is an easier application to manage, why do I need to change the infrastructure for that application?” he says. It’s yet another extension of Alan Kay’s biological metaphor—as Kay himself will tell you. When I describe Habitat to Kay—now revered as one of the founding fathers of the PC, alongside so many other PARC researchers—he says it does what SmallTalk did so long ago.
The Unknown Programmer 
Kay traces the origins of SmallTalk to his time in the Air Force. In 1961, he was stationed at Randolph Air Force Base near San Antonio, Texas, and he worked as a programmer, building software for a vacuum-tube computer called the Burroughs 220. In those days, computers didn’t have operating systems. No Apple iOS. No Windows. No Unix. And data didn’t come packaged in standard file formats. No .doc. No .xls. No .txt. But the Air Force needed a way of sending files between bases so that different machines could read them. Sometime before Kay arrived, another Air Force programmer—whose name is lost to history—cooked up a good way. 
This unnamed programmer—“almost certainly an enlisted man,” Kay says, “because officers didn’t program back then”—would put data on a magnetic-tape reel along with all the procedures needed to read that data. Then, he tacked on a simple interface—a few “pointers,” in programmer-speak—that allowed the machine to interact with those procedures. To read the data, all the machine needed to understand were the pointers—not a whole new way of doing things. In this way, someone like Kay could read the tape from any machine on any Air Force base. 
Kay’s programming objects worked in a similar way. Each did its own thing, but could communicate with the outside world through a simple interface. That meant coders could readily plug an old object into a new program, or reuse it several times across the same program. Today, this notion is fundamental to software design. And now, Habitat wants to recreate this dynamic on a higher level: not within an application, but in a way that allows an application to run across a vast computer network.
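The object idea described above can be sketched in a few lines of Python (hypothetical toy classes, purely for illustration): each object hides its own state and procedures, and the outside world reaches it only through one small, uniform interface, so any object can be plugged in wherever that interface is understood.

```python
# Sketch of Kay-style objects: internals are private, and everything
# is reached through the same tiny message-passing interface.

class TapeReader:
    """Bundles data with the procedure needed to read it, like the
    Air Force programmer's self-describing tape."""
    def __init__(self, records):
        self._records = records  # private state

    def handle(self, message):
        if message == "read":
            return list(self._records)
        raise ValueError(f"unknown message: {message}")

class Counter:
    def __init__(self):
        self._n = 0

    def handle(self, message):
        if message == "increment":
            self._n += 1
            return self._n
        raise ValueError(f"unknown message: {message}")

# Any object that answers handle() plugs into the same calling code:
for obj, msg in [(TapeReader(["a", "b"]), "read"),
                 (Counter(), "increment")]:
    print(obj.handle(msg))
```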
Because Habitat wraps an application in a package that includes everything needed to run and oversee the application—while fronting this package with a simple interface—you can potentially run that application on any machine. Or, indeed, you can spread tens, hundreds, or even thousands of packages across a vast network of machines. Software called the Habitat Supervisor sits on each machine, running each package and ensuring it can communicate with the rest. Chef wrote this Supervisor in Rust, a new programming language suited to modern online systems, and designed it specifically to juggle code on an enormous scale.
But the important stuff lies inside those packages. Each package includes everything you need to orchestrate the application, as modern coders say, across myriad machines. Once you deploy your packages across a network, Jacob says, they can essentially orchestrate themselves. Instead of overseeing the application from one central nerve center, you can distribute the task—the ultimate aim of Kay’s biological system. That’s simpler and less likely to fail, at least in theory. 
What’s more, each package includes everything you need to modify the application—to, say, update the code or apply new security rules. This is what Jacob means when he says that all the automation travels with the application. “Having the management go with the package,” he says, “means I can manage in the same way, no matter where I choose to run it.” That’s vital in the modern world. Online code is constantly changing, and this system is designed for change.

‘Grownup Containers’ 
The idea at the heart of Habitat is similar to concepts that drive Mesosphere, Google’s Kubernetes, and Docker’s Swarm. All of these increasingly popular tools run software inside Linux “containers”—walled-off spaces within the Linux operating system that provide ways to orchestrate discrete pieces of code across myriad machines. Google uses containers in running its own online empire, and the rest of Silicon Valley is following suit. 
But Chef is taking a different tack. Rather than centering Habitat around Linux containers, they’ve built a new kind of package designed to run in other ways too. You can run Habitat packages atop Mesosphere or Kubernetes. You can also run them atop virtual machines, such as those offered by Amazon or Google on their cloud services. Or you can just run them on your own servers. “We can take all the existing software in the world, which wasn’t built with any of this new stuff in mind, and make it behave,” Jacob says. 
Jon Cowie, senior operations engineer at the online marketplace Etsy, is among the few outsiders who have kicked the tires on Habitat. He calls it “grownup containers.” Building an application around containers can be a complicated business, he explains. Habitat, he says, is simpler. You wrap your code, old or new, in a new interface and run it where you want to run it. “They are giving you a flexible toolkit,” he says.
That said, container systems like Mesosphere and Kubernetes can still be a very important thing. These tools include “schedulers” that spread code across myriad machines in a hyper-efficient way, finding machines that have available resources and actually launching the code. Habitat doesn’t do that. It handles everything after the code is in place. 
Jacob sees Habitat as a tool that runs in tandem with a Mesosphere or a Kubernetes—or atop other kinds of systems. He sees it as a single tool that can run any application on anything. But you may have to tweak Habitat so it will run on your infrastructure of choice. In packaging your app, Habitat must use a format that can speak to each type of system you want it to run on (the inputs and outputs for a virtual machine are different, say, from the inputs and outputs for Kubernetes), and at the moment, it only offers certain formats. If it doesn’t handle your format of choice, you’ll have to write a little extra code of your own.
Jacob says writing this code is “trivial.” And for seasoned developers, it may be. Habitat’s overarching mission is to bring the biological imperative to as many businesses as possible. But of course, the mission isn’t everything. The importance of Habitat will really come down to how well it works.

Promise Theory 
Whatever the case, the idea behind Habitat is enormously powerful. The biological ideal has driven the evolution of computing systems for decades—and will continue to drive their evolution. Jacob and Chef are taking a concept that computer coders are intimately familiar with, and they’re applying it to something new. 
“They’re trying to take away more of the complexity—and do this in a way that matches the cultural affiliation of developers,” says Mark Burgess, a computer scientist, physicist, and philosopher whose ideas helped spawn Chef and other DevOps projects.
Burgess compares this phenomenon to what he calls Promise Theory, where humans and autonomous agents work together to solve problems by striving to fulfill certain intentions, or promises. He sees computer automation not just as a cooperation of code, but of people and code. That’s what Jacob is striving for. You share your intentions with Habitat, and its autonomous agents work to realize them—a flesh-and-blood biological system combining with its idealized counterpart in code. 

Inside Vicarious, the Secretive AI Startup Bringing Imagination to Computers


By reinventing the neural network, the company hopes to help computers make the leap from processing words and symbols to comprehending the real world.
Life would be pretty dull without imagination. In fact, maybe the biggest problem for computers is that they don’t have any.
That’s the belief motivating the founders of Vicarious, an enigmatic AI company backed by some of the most famous and successful names in Silicon Valley. Vicarious is developing a new way of processing data, inspired by the way information seems to flow through the brain. The company’s leaders say this gives computers something akin to imagination, which they hope will help make the machines a lot smarter.
Vicarious is also, essentially, betting against the current boom in AI. Companies including Google, Facebook, Amazon, and Microsoft have made stunning progress in the past few years by feeding huge quantities of data into large neural networks in a process called “deep learning.” When trained on enough examples, for instance, deep-learning systems can learn to recognize a particular face or type of animal with very high accuracy (see “10 Breakthrough Technologies 2013: Deep Learning”). But those neural networks are only very crude approximations of what’s found inside a real brain.
Illustration by Sophia Foster-Dimino
Vicarious has introduced a new kind of neural-network algorithm designed to take into account more of the features that appear in biology. An important one is the ability to picture what the information it’s learned should look like in different scenarios—a kind of artificial imagination. The company’s founders believe a fundamentally different design will be essential if machines are to demonstrate more humanlike intelligence. Computers will have to be able to learn from less data, and to recognize stimuli or concepts more easily.
Despite generating plenty of early excitement, Vicarious has been quiet over the past couple of years. But this year, the company says, it will publish details of its research, and it promises some eye-popping demos that will show just how useful a computer with an imagination could be.
The company’s headquarters don’t exactly seem like the epicenter of a revolution in artificial intelligence. Located in Union City, a short drive across the San Francisco Bay from Palo Alto, the offices are plain—a stone’s throw from a McDonald’s and a couple of floors up from a dentist. Inside, though, are all the trappings of a vibrant high-tech startup. A dozen or so engineers were hard at work when I visited, several using impressive treadmill desks. Microsoft Kinect 3-D sensors sat on top of some of the engineers’ desks.
D. Scott Phoenix, the company’s 33-year-old CEO, speaks in suitably grandiose terms. “We are really rapidly approaching the amount of computational power we need to be able to do some interesting things in AI,” he told me shortly after I walked through the door. “In 15 years, the fastest computer will do more operations per second than all the neurons in all the brains of all the people who are alive. So we are really close.”
Vicarious is about more than just harnessing more computer power, though. Its mathematical innovations, Phoenix says, will more faithfully mimic the information processing found in the human brain. It’s true enough that the relationship between the neural networks currently used in AI and the neurons, dendrites, and synapses found in a real brain is tenuous at best.
One of the most glaring shortcomings of artificial neural networks, Phoenix says, is that information flows only one way. “If you look at the information flow in a classic neural network, it’s a feed-forward architecture,” he says. “There are actually more feedback connections in the brain than feed-forward connections—so you’re missing more than half of the information flow.”
It’s undeniably alluring to think that imagination—a capability so fundamentally human it sounds almost mystical in a computer—could be the key to the next big advance in AI.
Vicarious has so far shown that its approach can create a visual system capable of surprisingly deft interpretation. In 2013 it showed that the system could solve any captcha (the visual puzzles that are used to prevent spam-bots from signing up for e-mail accounts and the like). As Phoenix explains it, the feedback mechanism built into Vicarious’s system allows it to imagine what a character would look like if it weren’t distorted or partly obscured (see “AI Startup Says It Has Defeated Captchas”).
Phoenix sketched out some of the details of the system at the heart of this approach on a whiteboard. But he is keeping further details quiet until a scientific paper outlining the captcha approach is published later this year.
In principle, this visual system could be put to many other practical uses, like recognizing objects on shelves more accurately or interpreting real-world scenes more intelligently. The founders of Vicarious also say that their approach extends to other, much more complex areas of intelligence, including language and logical reasoning.
Phoenix says his company may give a demo later this year involving robots. And indeed, the job listings on the company’s website include several postings for robotics experts. Currently robots are bad at picking up unfamiliar, oddly arranged, or partly obscured objects, because they have trouble recognizing what they are. “If you look at people who are picking up objects in an Amazon facility, most of the time they aren’t even looking at what they’re doing,” he explains. “And they’re imagining—using their sensory motor simulator—where the object is, and they’re imagining at what point their finger will touch it.”
While Phoenix is the company’s leader, his cofounder, Dileep George, might be considered its technical visionary. George was born in India and received a PhD in electrical engineering from Stanford University, where he turned his attention to neuroscience toward the end of his doctoral studies. In 2005 he cofounded Numenta with Jeff Hawkins, the creator of Palm Computing. But in 2010 George left to pursue his own ideas about the mathematical principles behind information processing in the brain, founding Vicarious with Phoenix the same year.
I bumped into George in the elevator when I first arrived. He is unassuming and speaks quietly, with a thick accent. But he’s also quite matter-of-fact about what seem like very grand objectives.
George explained that imagination could help computers process language by tying words, or symbols, to low-level physical representations of real-world things. In theory, such a system might automatically understand the physical properties of something like water, for example, which would make it better able to discuss the weather. “When I utter a word, you know what it means because you can simulate the concept,” he says.
This ambitious vision for the future of AI has helped Vicarious raise an impressive $72 million so far. Its list of investors reads like a who’s who of the tech world. Early cash came from Facebook cofounder Dustin Moskovitz and Adam D’Angelo, Facebook’s former CTO and a cofounder of Quora. Further funding came from Peter Thiel, Mark Zuckerberg, Jeff Bezos, and Elon Musk.
Many people are itching to see what Vicarious has done beyond beating captchas. “I would love it if they showed us something new this year,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence in Seattle.
In contrast to the likes of Google, Facebook, or Baidu, Vicarious hasn’t published any papers or released any tools that researchers can play with. “The people [involved] are great, and the problems [they are working on] are great,” says Etzioni. “But it’s time to deliver.”
For those who’ve put their money behind Vicarious, the company’s remarkable goals should make the wait well worth it. Even if progress takes a while, the potential payoffs seem so huge that the bet makes sense, says Matt Ocko, a partner at Data Collective, a venture firm that has backed Vicarious. A better machine-learning approach could be applied in just about any industry that handles large amounts of data, he says. “Vicarious sat us down and demonstrated the most credible pathway to reasoning machines that I have ever seen.”
Ocko adds that Vicarious has demonstrated clear evidence it can commercialize what it’s working on. “We approached it with a crapload of intellectual rigor,” he says.
It will certainly be interesting to see if Vicarious can inspire this kind of confidence among other AI researchers and technologists with its papers and demos this year. If it does, then the company could quickly go from one of the hottest prospects in the Valley to one of its fastest-growing businesses.
That’s something the company’s founders would certainly like to imagine.
By Will Knight, Senior Editor, AI
May 19, 2016

The Rise of Artificial Intelligence and the End of Code

By Hugo Angel,

Soon We Won’t Program Computers. We’ll Train Them Like Dogs
Before the invention of the computer, most experimental psychologists thought the brain was an unknowable black box. You could analyze a subject’s behavior—ring bell, dog salivates—but thoughts, memories, emotions? That stuff was obscure and inscrutable, beyond the reach of science. So these behaviorists, as they called themselves, confined their work to the study of stimulus and response, feedback and reinforcement, bells and saliva. They gave up trying to understand the inner workings of the mind. They ruled their field for four decades.
Then, in the mid-1950s, a group of rebellious psychologists, linguists, information theorists, and early artificial-intelligence researchers came up with a different conception of the mind. People, they argued, were not just collections of conditioned responses. They absorbed information, processed it, and then acted upon it. They had systems for writing, storing, and recalling memories. They operated via a logical, formal syntax. The brain wasn’t a black box at all. It was more like a computer.
The so-called cognitive revolution started small, but as computers became standard equipment in psychology labs across the country, it gained broader acceptance. By the late 1970s, cognitive psychology had overthrown behaviorism, and with the new regime came a whole new language for talking about mental life. Psychologists began describing thoughts as programs, ordinary people talked about storing facts away in their memory banks, and business gurus fretted about the limits of mental bandwidth and processing power in the modern workplace. 
This story has repeated itself again and again. As the digital revolution wormed its way into every part of our lives, it also seeped into our language and our deep, basic theories about how things work. Technology always does this. During the Enlightenment, Newton and Descartes inspired people to think of the universe as an elaborate clock. In the industrial age, it was a machine with pistons. (Freud’s idea of psychodynamics borrowed from the thermodynamics of steam engines.) Now it’s a computer. Which is, when you think about it, a fundamentally empowering idea. Because if the world is a computer, then the world can be coded. 
Code is logical. Code is hackable. Code is destiny. These are the central tenets (and self-fulfilling prophecies) of life in the digital age. As software has eaten the world, to paraphrase venture capitalist Marc Andreessen, we have surrounded ourselves with machines that convert our actions, thoughts, and emotions into data—raw material for armies of code-wielding engineers to manipulate. We have come to see life itself as something ruled by a series of instructions that can be discovered, exploited, optimized, maybe even rewritten. Companies use code to understand our most intimate ties; Facebook’s Mark Zuckerberg has gone so far as to suggest there might be a “fundamental mathematical law underlying human relationships that governs the balance of who and what we all care about.” In 2013, Craig Venter announced that, a decade after the decoding of the human genome, he had begun to write code that would allow him to create synthetic organisms. “It is becoming clear,” he said, “that all living cells that we know of on this planet are DNA-software-driven biological machines.” Even self-help literature insists that you can hack your own source code, reprogramming your love life, your sleep routine, and your spending habits.
In this world, the ability to write code has become not just a desirable skill but a language that grants insider status to those who speak it. They have access to what in a more mechanical age would have been called the levers of power. “If you control the code, you control the world,” wrote futurist Marc Goodman. (In Bloomberg Businessweek, Paul Ford was slightly more circumspect: “If coders don’t run the world, they run the things that run the world.” Tomato, tomahto.)
But whether you like this state of affairs or hate it—whether you’re a member of the coding elite or someone who barely feels competent to futz with the settings on your phone—don’t get used to it. Our machines are starting to speak a different language now, one that even the best coders can’t fully understand. 
Over the past several years, the biggest tech companies in Silicon Valley have aggressively pursued an approach to computing called machine learning. In traditional programming, an engineer writes explicit, step-by-step instructions for the computer to follow. With machine learning, programmers don’t encode computers with instructions. They train them. If you want to teach a neural network to recognize a cat, for instance, you don’t tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands and thousands of photos of cats, and eventually it works things out. If it keeps misclassifying foxes as cats, you don’t rewrite the code. You just keep coaching it.
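The “train, don’t program” idea above can be sketched in a few lines. This toy example is ours, not from any system described in the article: it fits a logistic-regression “cat detector” on synthetic feature vectors, so the classification rule is never written down by hand, only inferred from labeled examples.

```python
import numpy as np

# Toy "cat vs. not-cat" setup: each example is a vector of
# pixel-like features; labels mark the positive class.
# Entirely synthetic, purely for illustration.
rng = np.random.default_rng(0)
n, d = 200, 20
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)  # hidden rule the learner must discover

# Logistic regression trained by gradient descent: we never state
# the rule; we only show the model labeled examples, and its
# weights are nudged toward whatever the data implies.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / n       # adjust weights toward the data

accuracy = np.mean(((1.0 / (1.0 + np.exp(-(X @ w)))) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

If the model keeps misclassifying, the remedy is exactly what the paragraph describes: more coaching, i.e., more examples and more training steps, not a rewrite of the rules.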
This approach is not new—it’s been around for decades—but it has recently become immensely more powerful, thanks in part to the rise of deep neural networks, massively distributed computational systems that mimic the multilayered connections of neurons in the brain. And already, whether you realize it or not, machine learning powers large swaths of our online activity. Facebook uses it to determine which stories show up in your News Feed, and Google Photos uses it to identify faces. Machine learning runs Microsoft’s Skype Translator, which converts speech to different languages in real time. Self-driving cars use machine learning to avoid accidents. Even Google’s search engine—for so many years a towering edifice of human-written rules—has begun to rely on these deep neural networks. In February the company replaced its longtime head of search with machine-learning expert John Giannandrea, and it has initiated a major program to retrain its engineers in these new techniques. “By building learning systems,” Giannandrea told reporters this fall, “we don’t have to write these rules anymore.”
But here’s the thing: With machine learning, the engineer never knows precisely how the computer accomplishes its tasks. The neural network’s operations are largely opaque and inscrutable. It is, in other words, a black box. And as these black boxes assume responsibility for more and more of our daily digital tasks, they are not only going to change our relationship to technology—they are going to change how we think about ourselves, our world, and our place within it.
If in the old view programmers were like gods, authoring the laws that govern computer systems, now they’re like parents or dog trainers. And as any parent or dog owner can tell you, that is a much more mysterious relationship to find yourself in.
Andy Rubin is an inveterate tinkerer and coder. The cocreator of the Android operating system, Rubin is notorious in Silicon Valley for filling his workplaces and home with robots. He programs them himself. “I got into computer science when I was very young, and I loved it because I could disappear in the world of the computer. It was a clean slate, a blank canvas, and I could create something from scratch,” he says. “It gave me full control of a world that I played in for many, many years.”
Now, he says, that world is coming to an end. Rubin is excited about the rise of machine learning—his new company, Playground Global, invests in machine-learning startups and is positioning itself to lead the spread of intelligent devices—but it saddens him a little too. Because machine learning changes what it means to be an engineer.
“People don’t linearly write the programs,” Rubin says. “After a neural network learns how to do speech recognition, a programmer can’t go in and look at it and see how that happened. It’s just like your brain. You can’t cut your head off and see what you’re thinking.” When engineers do peer into a deep neural network, what they see is an ocean of math: a massive, multilayer set of calculus problems that—by constantly deriving the relationship between billions of data points—generate guesses about the world.
Artificial intelligence wasn’t supposed to work this way. Until a few years ago, mainstream AI researchers assumed that to create intelligence, we just had to imbue a machine with the right logic. Write enough rules and eventually we’d create a system sophisticated enough to understand the world. They largely ignored, even vilified, early proponents of machine learning, who argued in favor of plying machines with data until they reached their own conclusions. For years computers weren’t powerful enough to really prove the merits of either approach, so the argument became a philosophical one. “Most of these debates were based on fixed beliefs about how the world had to be organized and how the brain worked,” says Sebastian Thrun, the former Stanford AI professor who created Google’s self-driving car. “Neural nets had no symbols or rules, just numbers. That alienated a lot of people.”
The implications of an unparsable machine language aren’t just philosophical. For the past two decades, learning to code has been one of the surest routes to reliable employment—a fact not lost on all those parents enrolling their kids in after-school code academies. But a world run by neurally networked deep-learning machines requires a different workforce. Analysts have already started worrying about the impact of AI on the job market, as machines render old skills irrelevant. Programmers might soon get a taste of what that feels like themselves.
“I was just having a conversation about that this morning,” says tech guru Tim O’Reilly when I ask him about this shift. “I was pointing out how different programming jobs would be by the time all these STEM-educated kids grow up.” Traditional coding won’t disappear completely—indeed, O’Reilly predicts that we’ll still need coders for a long time yet—but there will likely be less of it, and it will become a meta skill, a way of creating what Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, calls the “scaffolding” within which machine learning can operate. Just as Newtonian physics wasn’t obviated by the discovery of quantum mechanics, code will remain a powerful, if incomplete, tool set to explore the world. But when it comes to powering specific functions, machine learning will do the bulk of the work for us.
Of course, humans still have to train these systems. But for now, at least, that’s a rarefied skill. The job requires both a high-level grasp of mathematics and an intuition for pedagogical give-and-take. “It’s almost like an art form to get the best out of these systems,” says Demis Hassabis, who leads Google’s DeepMind AI team. “There’s only a few hundred people in the world that can do that really well.” But even that tiny number has been enough to transform the tech industry in just a couple of years.
Whatever the professional implications of this shift, the cultural consequences will be even bigger. If the rise of human-written software led to the cult of the engineer, and to the notion that human experience can ultimately be reduced to a series of comprehensible instructions, machine learning kicks the pendulum in the opposite direction. The code that runs the universe may defy human analysis. Right now Google, for example, is facing an antitrust investigation in Europe that accuses the company of exerting undue influence over its search results. Such a charge will be difficult to prove when even the company’s own engineers can’t say exactly how its search algorithms work in the first place.
This explosion of indeterminacy has been a long time coming. It’s not news that even simple algorithms can create unpredictable emergent behavior—an insight that goes back to chaos theory and random number generators. Over the past few years, as networks have grown more intertwined and their functions more complex, code has come to seem more like an alien force, the ghosts in the machine ever more elusive and ungovernable. Planes grounded for no reason. Seemingly unpreventable flash crashes in the stock market. Rolling blackouts.
These forces have led technologist Danny Hillis to declare the end of the age of Enlightenment, our centuries-long faith in logic, determinism, and control over nature. Hillis says we’re shifting to what he calls the age of Entanglement. “As our technological and institutional creations have become more complex, our relationship to them has changed,” he wrote in the Journal of Design and Science. “Instead of being masters of our creations, we have learned to bargain with them, cajoling and guiding them in the general direction of our goals. We have built our own jungle, and it has a life of its own.” The rise of machine learning is the latest—and perhaps the last—step in this journey.
This can all be pretty frightening. After all, coding was at least the kind of thing that a regular person could imagine picking up at a boot camp. Coders were at least human. Now the technological elite is even smaller, and their command over their creations has waned and become indirect. Already the companies that build this stuff find it behaving in ways that are hard to govern. Last summer, Google rushed to apologize when its photo recognition engine started tagging images of black people as gorillas. The company’s blunt first fix was to keep the system from labeling anything as a gorilla.

To nerds of a certain bent, this all suggests a coming era in which we forfeit authority over our machines. “One can imagine such technology 

  • outsmarting financial markets, 
  • out-inventing human researchers, 
  • out-manipulating human leaders, and 
  • developing weapons we cannot even understand,” 

wrote Stephen Hawking—sentiments echoed by Elon Musk and Bill Gates, among others. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” 

But don’t be too scared; this isn’t the dawn of Skynet. We’re just learning the rules of engagement with a new technology. Already, engineers are working out ways to visualize what’s going on under the hood of a deep-learning system. But even if we never fully understand how these new machines think, that doesn’t mean we’ll be powerless before them. In the future, we won’t concern ourselves as much with the underlying sources of their behavior; we’ll learn to focus on the behavior itself. The code will become less important than the data we use to train it.
If all this seems a little familiar, that’s because it looks a lot like good old 20th-century behaviorism. In fact, the process of training a machine-learning algorithm is often compared to the great behaviorist experiments of the early 1900s. Pavlov triggered his dog’s salivation not through a deep understanding of hunger but simply by repeating a sequence of events over and over. He provided data, again and again, until the code rewrote itself. And say what you will about the behaviorists, they did know how to control their subjects.
In the long run, Thrun says, machine learning will have a democratizing influence. In the same way that you don’t need to know HTML to build a website these days, you eventually won’t need a PhD to tap into the insane power of deep learning. Programming won’t be the sole domain of trained coders who have learned a series of arcane languages. It’ll be accessible to anyone who has ever taught a dog to roll over. “For me, it’s the coolest thing ever in programming,” Thrun says, “because now anyone can program.”
For much of computing history, we have taken an inside-out view of how machines work. First we write the code, then the machine expresses it. This worldview implied plasticity, but it also suggested a kind of rules-based determinism, a sense that things are the product of their underlying instructions. Machine learning suggests the opposite, an outside-in view in which code doesn’t just determine behavior, behavior also determines code. Machines are products of the world.
Ultimately we will come to appreciate both the power of handwritten linear code and the power of machine-learning algorithms to adjust it—the give-and-take of design and emergence. It’s possible that biologists have already started figuring this out. Gene-editing techniques like Crispr give them the kind of code-manipulating power that traditional software programmers have wielded. But discoveries in the field of epigenetics suggest that genetic material is not in fact an immutable set of instructions but rather a dynamic set of switches that adjusts depending on the environment and experiences of its host. Our code does not exist separate from the physical world; it is deeply influenced and transmogrified by it. Venter may believe cells are DNA-software-driven machines, but epigeneticist Steve Cole suggests a different formulation: “A cell is a machine for turning experience into biology.”
And now, 80 years after Alan Turing first sketched his designs for a problem-solving machine, computers are becoming devices for turning experience into technology. For decades we have sought the secret code that could explain and, with some adjustments, optimize our experience of the world. But our machines won’t work that way for much longer—and our world never really did. We’re about to have a more complicated but ultimately more rewarding relationship with technology. We will go from commanding our devices to parenting them.

Editor at large Jason Tanz (@jasontanz) wrote about Andy Rubin’s new company, Playground, in issue 24.03.
This article appears in the June issue.

First Human Tests of Memory Boosting Brain Implant—a Big Leap Forward

By Hugo Angel,

“You have to begin to lose your memory, if only bits and pieces, to realize that memory is what makes our lives. Life without memory is no life at all.” — Luis Buñuel Portolés, Filmmaker
Every year, hundreds of millions of people experience the pain of a failing memory.
The reasons are many:

  • traumatic brain injury, which haunts a disturbingly high number of veterans and football players; 
  • stroke or Alzheimer’s disease, which often plagues the elderly; or 
  • even normal brain aging, which inevitably touches us all.
Memory loss seems to be inescapable. But one maverick neuroscientist is working hard on an electronic cure. Funded by DARPA, Dr. Theodore Berger, a biomedical engineer at the University of Southern California, is testing a memory-boosting implant that mimics the kind of signal processing that occurs when neurons are laying down new long-term memories.
The revolutionary implant, already shown to help memory encoding in rats and monkeys, is now being tested in human patients with epilepsy — an exciting first that may blow the field of memory prosthetics wide open.
To get here, however, the team first had to crack the memory code.

Deciphering Memory
From the very onset, Berger knew he was facing a behemoth of a problem.
“We weren’t looking to match everything the brain does when it processes memory, but to at least come up with a decent mimic,” said Berger.
“Of course people asked: can you model it and put it into a device? Can you get that device to work in any brain? It’s those things that lead people to think I’m crazy. They think it’s too hard,” he said.
But the team had a solid place to start.
The hippocampus, a region buried deep within the folds and grooves of the brain, is the critical gatekeeper that transforms memories from short-lived to long-term. In dogged pursuit, Berger spent most of the last 35 years trying to understand how neurons in the hippocampus accomplish this complicated feat.
“At its heart, a memory is a series of electrical pulses that occur over time that are generated by a given number of neurons,” said Berger. “This is important — it suggests that we can reduce it to mathematical equations and put it into a computational framework,” he said.
Berger hasn’t been alone in his quest.
By listening to the chatter of neurons as an animal learns, teams of neuroscientists have begun to decipher the flow of information within the hippocampus that supports memory encoding. Key to this process is a strong electrical signal that travels from CA3, the “input” part of the hippocampus, to CA1, the “output” node.
“This signal is impaired in people with memory disabilities,” said Berger, “so of course we thought if we could recreate it using silicon, we might be able to restore — or even boost — memory.”

Bridging the Gap
Yet the brain’s memory code proved to be extremely tough to crack.
The problem lies in the non-linear nature of neural networks: signals are often noisy and constantly overlap in time, which leads to some inputs being suppressed or accentuated. In a network of hundreds and thousands of neurons, any small change could be greatly amplified and lead to vastly different outputs.
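The amplification described above can be illustrated with a toy numerical experiment (an invented sketch, not Berger’s model): push two nearly identical inputs through a deep, randomly weighted nonlinear network and measure how far apart they end up.

```python
import numpy as np

# Illustration of sensitivity in a deep nonlinear network: random
# tanh layers in a "chaotic" weight regime. All sizes and scales
# here are arbitrary choices for the demonstration.
rng = np.random.default_rng(1)
n_units, n_layers = 100, 30
# Weight std 2/sqrt(n_units) puts the network in a regime where
# small perturbations tend to grow layer by layer.
layers = [rng.normal(scale=2.0 / np.sqrt(n_units), size=(n_units, n_units))
          for _ in range(n_layers)]

def forward(x):
    for W in layers:
        x = np.tanh(W @ x)
    return x

x1 = rng.normal(size=n_units)
x2 = x1 + 1e-6 * rng.normal(size=n_units)   # nearly identical input

d_in = np.linalg.norm(x1 - x2)
d_out = np.linalg.norm(forward(x1) - forward(x2))
print(f"input distance {d_in:.2e} -> output distance {d_out:.2e}")
```

A microscopic difference at the input becomes a vastly larger difference at the output, which is the sense in which a network of thousands of interacting nonlinear units behaves like a chaotic black box.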
“It’s a chaotic black box,” laughed Berger.
With the help of modern computing techniques, however, Berger believes he may have a crude solution in hand. His proof?
Use his mathematical theorems to program a chip, and then see if the brain accepts the chip as a replacement — or additional — memory module.
Berger and his team began with a simple task using rats. They trained the animals to push one of two levers to get a tasty treat, and recorded the series of electrical pulses traveling from CA3 to CA1 in the hippocampus as the animals learned to pick the correct lever. The team carefully captured the way the signals were transformed as the session was laid down into long-term memory, and used that information — the electrical “essence” of the memory — to program an external memory chip.
They then injected the animals with a drug that temporarily disrupted their ability to form and access long-term memories, causing the animals to forget the reward-associated lever. Next, implanting microelectrodes into the hippocampus, the team pulsed CA1, the output region, with their memory code.
The results were striking — powered by an external memory module, the animals regained their ability to pick the right lever.
Encouraged by the results, Berger next tried his memory implant in monkeys, this time focusing on a brain region called the prefrontal cortex, which receives and modulates memories encoded by the hippocampus.
Placing electrodes into the monkeys’ brains, the team showed the animals a series of semi-repeated images, and captured the prefrontal cortex’s activity when the animals recognized an image they had seen earlier. Then, with a hefty dose of cocaine, the team inhibited that particular brain region, which disrupted the animals’ recall.
Next, using electrodes programmed with the “memory code,” the researchers guided the brain’s signal processing back on track — and the animals’ performance improved significantly.
A year later, the team further validated their memory implant by showing it could also rescue memory deficits due to hippocampal malfunction in the monkey brain.

A Human Memory Implant
Last year, the team cautiously began testing their memory implant prototype in human volunteers.
Because of the risks associated with brain surgery, the team recruited 12 patients with epilepsy, who already have electrodes implanted into their brain to track down the source of their seizures.
Repeated seizures steadily destroy critical parts of the hippocampus needed for long-term memory formation, explained Berger. So if the implant works, it could benefit these patients as well.
The team asked the volunteers to look through a series of pictures, and then recall which ones they had seen 90 seconds later. As the participants learned, the team recorded the firing patterns in both CA1 and CA3 — that is, the input and output nodes.
Using these data, the team extracted an algorithm — a specific human “memory code” — that could predict the pattern of activity in CA1 cells based on CA3 input. Compared to the brain’s actual firing patterns, the algorithm generated correct predictions roughly 80% of the time.
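The kind of input-to-output prediction described above can be caricatured in code. The sketch below is purely illustrative: it substitutes plain logistic regression for Berger’s nonlinear multi-input multi-output model, and all of the data, names (`ca3`, `ca1`), and parameters are invented for the example.

```python
import numpy as np

# Toy stand-in for fitting a "memory code": predict whether an
# output (CA1-like) unit fires from the firing pattern of input
# (CA3-like) units, then score predictions on held-out trials.
rng = np.random.default_rng(2)
trials, n_in = 500, 30
ca3 = rng.binomial(1, 0.3, size=(trials, n_in)).astype(float)

# Pretend the hidden CA3->CA1 transformation is a noisy weighted
# vote (a gross simplification of real hippocampal dynamics).
w_true = rng.normal(size=n_in)
ca1 = rng.binomial(1, 1.0 / (1.0 + np.exp(-(ca3 @ w_true)))).astype(float)

# Fit logistic regression on half the trials by gradient descent.
train, test = slice(0, 250), slice(250, 500)
w = np.zeros(n_in)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(ca3[train] @ w)))
    w -= 0.05 * ca3[train].T @ (p - ca1[train]) / 250

# Evaluate: how often does the fitted model predict the output
# cell's firing on trials it never saw?
pred = (1.0 / (1.0 + np.exp(-(ca3[test] @ w)))) > 0.5
acc = np.mean(pred == ca1[test].astype(bool))
print(f"held-out prediction accuracy: {acc:.0%}")
```

As in the team’s result, the fit is imperfect by construction: the output is noisy, so even the best possible predictor cannot reach 100%.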
“It’s not perfect,” said Berger, “but it’s a good start.”
Using this algorithm, the researchers have begun to stimulate the output cells with an approximation of the transformed input signal.
“We have already used the pattern to zap the brain of one woman with epilepsy,” said Dr. Dong Song, an associate professor working with Berger. But he remained coy about the result, saying only that although promising, it’s still too early to tell.
Song’s caution is warranted. Unlike the motor cortex, with its clear structured representation of different body parts, the hippocampus is not organized in any obvious way.
“It’s hard to understand why stimulating input locations can lead to predictable results,” said Dr. Thomas McHugh, a neuroscientist at the RIKEN Brain Science Institute. It’s also difficult to tell whether such an implant could save the memory of those who suffer from damage to the output node of the hippocampus.
“That said, the data is convincing,” McHugh acknowledged.
Berger, on the other hand, is ecstatic. “I never thought I’d see this go into humans,” he said.
But the work is far from done. Within the next few years, Berger wants to see whether the chip can help build long-term memories in a variety of different situations. After all, the algorithm was based on the team’s recordings of one specific task — what if the so-called memory code is not generalizable, instead varying based on the type of input that it receives?
Berger acknowledges that it’s a possibility, but he remains hopeful.
“I do think that we will find a model that’s a pretty good fit for most conditions,” he said. After all, the brain is restricted by its own biophysics — there are only so many ways that electrical signals in the hippocampus can be processed, he said.
“The goal is to improve the quality of life for somebody who has a severe memory deficit,” said Berger. “If I can give them the ability to form new long-term memories for half the conditions that most people live in, I’ll be happy as hell, and so will most patients.”
ORIGINAL: Singularity Hub

“AI & The Future Of Civilization” A Conversation With Stephen Wolfram

By Hugo Angel,

“AI & The Future Of Civilization” A Conversation With Stephen Wolfram [3.1.16]
Stephen Wolfram
What makes us different from all these things? What makes us different is the particulars of our history, which gives us our notions of purpose and goals. That’s a long way of saying when we have the box on the desk that thinks as well as any brain does, the thing it doesn’t have, intrinsically, is the goals and purposes that we have. Those are defined by our particulars—our particular biology, our particular psychology, our particular cultural history.

The thing we have to think about as we think about the future of these things is the goals. That’s what humans contribute, that’s what our civilization contributes—execution of those goals; that’s what we can increasingly automate. We’ve been automating it for thousands of years. We will succeed in having very good automation of those goals. I’ve spent some significant part of my life building technology to essentially go from a human concept of a goal to something that gets done in the world.

There are many questions that come from this. For example, now that we’ve got these great AIs that are able to execute goals, how do we tell them what to do?…

STEPHEN WOLFRAM, distinguished scientist, inventor, author, and business leader, is Founder & CEO, Wolfram Research; Creator, Mathematica, Wolfram|Alpha & the Wolfram Language; Author, A New Kind of Science. Stephen Wolfram’s EdgeBio Page


Some tough questions. One of them is about the future of the human condition. That’s a big question. I’ve spent some part of my life figuring out how to make machines automate stuff. It’s pretty obvious that we can automate many of the things that we humans have been proud of for a long time. What’s the future of the human condition in that situation?

More particularly, I see technology as taking human goals and making them able to be automatically executed by machines. The human goals that we’ve had in the past have been things like moving objects from here to there and using a forklift rather than our own hands. Now, the things that we can do automatically are more intellectual kinds of things that have traditionally been the professions’ work, so to speak. These are things that we are going to be able to do by machine. The machine is able to execute things, but something or someone has to define what its goals should be and what it’s trying to execute.

People talk about the future of intelligent machines, and whether intelligent machines are going to take over and decide what to do for themselves. While one can figure out, given a goal, how to execute it in a way that can meaningfully be automated, the actual inventing of the goal is not something that, in some sense, has a path to automation.

How do we figure out goals for ourselves? How are goals defined? They tend to be defined for a given human by their own personal history, their cultural environment, the history of our civilization. Goals are something uniquely human. Asked of a machine, the question almost doesn’t make sense: what’s the goal of our machine? We might have given it a goal when we built the machine.

The thing that makes this more poignant for me is that I’ve spent a lot of time studying basic science about computation, and I’ve realized something from that. It’s a little bit of a longer story, but basically, if we think about intelligence and things that might have goals and purposes, what kinds of things can have intelligence or purpose? Right now, we know one great example of things with intelligence and purpose, and that’s us, and our brains, and our own human intelligence. What else is like that? At first, I had assumed the answer was nothing: the systems of nature do what they do, but human intelligence is far beyond anything that exists naturally in the world. It’s the result of all of this elaborate process of evolution, a thing that stands apart from the rest of what exists in the universe. What I realized, as a result of a whole bunch of science that I did, was that this is not the case.

Research on largest network of cortical neurons to date published in Nature

By Hugo Angel

Robust network of connections between neurons performing similar tasks shows fundamentals of how brain circuits are wired
Even the simplest networks of neurons in the brain are composed of millions of connections, and examining these vast networks is critical to understanding how the brain works. An international team of researchers, led by R. Clay Reid, Wei Chung Allen Lee and Vincent Bonin from the Allen Institute for Brain Science, Harvard Medical School and Neuro-Electronics Research Flanders (NERF), respectively, has published the largest network to date of connections between neurons in the cortex, where high-level processing occurs, and has revealed several crucial elements of how networks in the brain are organized. The results are published this week in the journal Nature.
A network of cortical neurons whose connections were traced from a multi-terabyte 3D data set. The data were created by an electron microscope designed and built at Harvard Medical School to collect millions of images in nanoscopic detail, so that every one of the “wires” could be seen, along with the connections between them. Some of the neurons are color-coded according to their activity patterns in the living brain. This is the newest example of functional connectomics, which combines high-throughput functional imaging, at single-cell resolution, with terascale anatomy of the very same neurons. Image credit: Clay Reid, Allen Institute; Wei-Chung Lee, Harvard Medical School; Sam Ingersoll, graphic artist
“This is a culmination of a research program that began almost ten years ago. Brain networks are too large and complex to understand piecemeal, so we used high-throughput techniques to collect huge data sets of brain activity and brain wiring,” says R. Clay Reid, M.D., Ph.D., Senior Investigator at the Allen Institute for Brain Science. “But we are finding that the effort is absolutely worthwhile and that we are learning a tremendous amount about the structure of networks in the brain, and ultimately how the brain’s structure is linked to its function.”
“Although this study is a landmark moment in a substantial chapter of work, it is just the beginning,” says Wei-Chung Lee, Ph.D., Instructor in Neurobiology at Harvard Medical School and lead author on the paper. “We now have the tools to embark on reverse engineering the brain by discovering relationships between circuit wiring and neuronal and network computations.”
“For decades, researchers have studied brain activity and wiring in isolation, unable to link the two,” says Vincent Bonin, Principal Investigator at Neuro-Electronics Research Flanders. “What we have achieved is to bridge these two realms with unprecedented detail, linking electrical activity in neurons with the nanoscale synaptic connections they make with one another.”
“We have found some of the first anatomical evidence for modular architecture in a cortical network as well as the structural basis for functionally specific connectivity between neurons,” Lee adds. “The approaches we used allowed us to define the organizational principles of neural circuits. We are now poised to discover cortical connectivity motifs, which may act as building blocks for cerebral network function.”
Lee and Bonin began by identifying neurons in the mouse visual cortex that responded to particular visual stimuli, such as vertical or horizontal bars on a screen. Lee then made ultra-thin slices of brain and captured millions of detailed images of those targeted cells and synapses, which were then reconstructed in three dimensions. Teams of annotators on both coasts of the United States simultaneously traced individual neurons through the 3D stacks of images and located connections between individual neurons.
Analyzing this wealth of data yielded several results, including the first direct structural evidence to support the idea that neurons that do similar tasks are more likely to be connected to each other than neurons that carry out different tasks. Furthermore, those connections are larger, despite the fact that they are tangled with many other neurons that perform entirely different functions.
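The kind of comparison behind that finding can be illustrated with a small toy analysis. The sketch below is not the study's actual pipeline; the neuron count, tuning threshold, connection probabilities and synapse sizes are all invented. It builds a synthetic wiring diagram in which similarly tuned neurons connect more often and more strongly, then performs the same style of comparison the researchers report: connection rate and synapse size for similarly versus differently tuned pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the study's data (all numbers are invented): each neuron
# gets a preferred stimulus orientation, and the "wiring diagram" is built so
# that similarly tuned pairs connect more often and with larger synapses.
n = 60
pref = rng.uniform(0, 180, n)            # preferred orientation, in degrees

def tuning_diff(a, b):
    """Smallest angular difference between two orientation preferences."""
    d = abs(a - b) % 180
    return min(d, 180 - d)

similar_syn, different_syn = [], []      # synapse sizes (0 = no connection)
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        similar = tuning_diff(pref[i], pref[j]) < 30
        p = 0.3 if similar else 0.1      # like-to-like connection bias (assumed)
        size = rng.gamma(4 if similar else 2, 1.0) if rng.random() < p else 0.0
        (similar_syn if similar else different_syn).append(size)

sim = np.array(similar_syn)
diff = np.array(different_syn)

# The comparison the study reports: similarly tuned neurons are both more
# likely to be connected and, when connected, have larger synapses.
print(f"connection rate, similar tuning:   {(sim > 0).mean():.2f}")
print(f"connection rate, different tuning: {(diff > 0).mean():.2f}")
print(f"mean synapse size, similar pairs:   {sim[sim > 0].mean():.2f}")
print(f"mean synapse size, different pairs: {diff[diff > 0].mean():.2f}")
```

In the real study the "wiring diagram" came from tracing electron-microscopy images rather than a random generator, but the statistical question asked of it has this same shape.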
“Part of what makes this study unique is the combination of functional imaging and detailed microscopy,” says Reid. “The microscopic data is of unprecedented scale and detail. We gain some very powerful knowledge by first learning what function a particular neuron performs, and then seeing how it connects with neurons that do similar or dissimilar things.”
“It’s like a symphony orchestra with players sitting in random seats,” Reid adds. “If you listen to only a few nearby musicians, it won’t make sense. By listening to everyone, you will understand the music; it actually becomes simpler. If you then ask who each musician is listening to, you might even figure out how they make the music. There’s no conductor, so the orchestra needs to communicate.”
This combination of methods will also be employed in an IARPA-contracted project with the Allen Institute for Brain Science, Baylor College of Medicine, and Princeton University, which seeks to scale these methods to a larger segment of brain tissue. The data from the present study are being made available online for other researchers to investigate.
This work was supported by the National Institutes of Health (R01 EY10115, R01 NS075436 and R21 NS085320); through resources provided by the National Resource for Biomedical Supercomputing at the Pittsburgh Supercomputing Center (P41 RR06009) and the National Center for Multiscale Modeling of Biological Systems (P41 GM103712); the Harvard Medical School Vision Core Grant (P30 EY12196); the Bertarelli Foundation; the Edward R. and Anne G. Lefler Center; the Stanley and Theodora Feldberg Fund; Neuro-Electronics Research Flanders (NERF); and the Allen Institute for Brain Science.
About the Allen Institute for Brain Science
The Allen Institute for Brain Science, a division of the Allen Institute, is an independent, 501(c)(3) nonprofit medical research organization dedicated to accelerating the understanding of how the human brain works in health and disease. Using a big science approach, the Allen Institute generates useful public resources used by researchers and organizations around the globe, drives technological and analytical advances, and discovers fundamental brain properties through integration of experiments, modeling and theory. Launched in 2003 with a seed contribution from founder and philanthropist Paul G. Allen, the Allen Institute is supported by a diversity of government, foundation and private funds to enable its projects. Given the Institute’s achievements, Mr. Allen committed an additional $300 million in 2012 for the first four years of a ten-year plan to further propel and expand the Institute’s scientific programs, bringing his total commitment to date to $500 million. The Allen Institute’s data and tools are publicly available online.
About Harvard Medical School
HMS has more than 7,500 full-time faculty working in 10 academic departments located at the School’s Boston campus or in hospital-based clinical departments at 15 Harvard-affiliated teaching hospitals and research institutes: Beth Israel Deaconess Medical Center, Boston Children’s Hospital, Brigham and Women’s Hospital, Cambridge Health Alliance, Dana-Farber Cancer Institute, Harvard Pilgrim Health Care Institute, Hebrew SeniorLife, Joslin Diabetes Center, Judge Baker Children’s Center, Massachusetts Eye and Ear/Schepens Eye Research Institute, Massachusetts General Hospital, McLean Hospital, Mount Auburn Hospital, Spaulding Rehabilitation Hospital and VA Boston Healthcare System.
About NERF
Neuro-Electronics Research Flanders (NERF) is a neurotechnology research initiative headquartered in Leuven, Belgium, initiated by imec, KU Leuven and VIB to unravel how electrical activity in the brain gives rise to mental function and behaviour. Imec performs world-leading research in nanoelectronics and has offices in Belgium, the Netherlands, Taiwan, USA, China, India and Japan. Its staff of about 2,200 people includes almost 700 industrial residents and guest researchers. In 2014, imec’s revenue (P&L) totaled 363 million euro. VIB is a life sciences research institute in Flanders, Belgium. With more than 1,470 scientists from over 60 countries, VIB performs basic research into the molecular foundations of life. KU Leuven is one of the oldest and largest research universities in Europe, with over 10,000 employees and 55,000 students.
ORIGINAL: Allen Institute
March 28th, 2016

Brain waves may be spread by weak electrical field

By Hugo Angel

The research team says the electrical fields could be behind the spread of sleep and theta waves, along with epileptic seizure waves (Credit: Shutterstock)
Mechanism tied to waves associated with epilepsy
Researchers at Case Western Reserve University may have found a new way information is communicated throughout the brain.
Their discovery could help identify new targets for investigating brain waves associated with memory and epilepsy, and lead to a better understanding of healthy physiology.
They recorded neural spikes traveling at a speed too slow to be explained by known transmission mechanisms. The only explanation, the scientists say, is that the wave is spread by a mild electrical field they could detect. Computer modeling and in-vitro testing support their theory.
“Others have been working on such phenomena for decades, but no one has ever made these connections,” said Steven J. Schiff, director of the Center for Neural Engineering at Penn State University, who was not involved in the study. “The implications are that such directed fields can be used to modulate both pathological activities, such as seizures, and to interact with cognitive rhythms that help regulate a variety of processes in the brain.”
Scientists Dominique Durand, Elmer Lincoln Lindseth Professor in Biomedical Engineering at Case School of Engineering and leader of the research, former graduate student Chen Sui and current PhD students Rajat Shivacharan and Mingming Zhang, report their findings in The Journal of Neuroscience.
“Researchers have thought that the brain’s endogenous electrical fields are too weak to propagate wave transmission,” Durand said. “But it appears the brain may be using the fields to communicate without synaptic transmissions, gap junctions or diffusion.”
How the fields may work
Computer modeling and testing on mouse hippocampi (the central part of the brain associated with memory and spatial navigation) in the lab indicate the field begins in one cell or group of cells.
Although the electrical field is of low amplitude, the field excites and activates immediate neighbors, which, in turn, excite and activate immediate neighbors, and so on across the brain at a rate of about 0.1 meter per second.
Blocking the endogenous electrical field in the mouse hippocampus and increasing the distance between cells in the computer model and in-vitro both slowed the speed of the wave.
These results, the researchers say, confirm that the propagation mechanism for the activity is consistent with the electrical field.
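The logic of that test can be sketched as a toy model. This is not the researchers' actual simulation; it simply assumes, for illustration, that a cell's field falls off as 1/d² and that a neighboring cell fires after a delay inversely proportional to the field it receives. Under those assumptions, the wave's speed drops as the distance between cells grows, which is the qualitative behavior the experiments confirmed. All constants and units are arbitrary, chosen only so the output lands near the reported ~0.1 m/s.

```python
def wave_speed(spacing, amplitude=1e-6, threshold=1.0, tau=1.0):
    """Speed of a wave handed from cell to cell by a weak electric field.

    Assumptions (for illustration only): the field a cell produces falls
    off as 1/d**2, and its neighbor fires after a delay inversely
    proportional to the field it receives. Constants are arbitrary.
    """
    field = amplitude / spacing**2       # field felt by the next cell
    delay = tau * threshold / field      # weaker field -> longer delay
    return spacing / delay               # = amplitude / (tau * threshold * spacing)

d = 10e-6                                # assumed 10-micrometer cell spacing
v1 = wave_speed(d)
v2 = wave_speed(2 * d)                   # double the distance between cells
print(f"speed at spacing d:  {v1:.3f} m/s")   # 0.100 m/s with these constants
print(f"speed at spacing 2d: {v2:.3f} m/s")   # 0.050 m/s: wider spacing, slower wave
```

Because speed here works out to amplitude / (tau × threshold × spacing), doubling the cell spacing halves the speed, mirroring the slowdown seen when cells were moved apart in the model and in vitro.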
Because sleep waves and theta waves–which are associated with forming memories during sleep–and epileptic seizure waves travel at about 1 meter per second, the researchers are now investigating whether the electrical fields play a role in normal physiology and in epilepsy.
If so, they will try to discern what information the fields may be carrying. Durand’s lab is also investigating where the endogenous spikes come from.
ORIGINAL: EurekAlert

Bridging the Bio-Electronic Divide

By Hugo Angel

New effort aims for fully implantable devices able to connect with up to one million neurons
A new DARPA program aims to develop an implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. The interface would serve as a translator, converting between the electrochemical language used by neurons in the brain and the ones and zeros that constitute the language of information technology. The goal is to achieve this communications link in a biocompatible device no larger than one cubic centimeter in size, roughly the volume of two nickels stacked back to back.
The program, Neural Engineering System Design (NESD), stands to dramatically enhance research capabilities in neurotechnology and provide a foundation for new therapies.
“Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem,” said Phillip Alvelda, the NESD program manager. “Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics.”
Among the program’s potential applications are devices that could compensate for deficits in sight or hearing by feeding digital auditory or visual information into the brain at a resolution and experiential quality far higher than is possible with current technology.
Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that can communicate clearly and individually with any of up to one million neurons in a given region of the brain.
Achieving the program’s ambitious goals and ensuring that the envisioned devices will have the potential to be practical outside of a research setting will require integrated breakthroughs across numerous disciplines including 
  • neuroscience, 
  • synthetic biology, 
  • low-power electronics, 
  • photonics, 
  • medical device packaging and manufacturing, systems engineering, and 
  • clinical testing.
In addition to the program’s hardware challenges, NESD researchers will be required to develop advanced mathematical and neuro-computation techniques to first transcode high-definition sensory information between electronic and cortical neuron representations and then compress and represent those data with minimal loss of fidelity and functionality.
To accelerate that integrative process, the NESD program aims to recruit a diverse roster of leading industry stakeholders willing to offer state-of-the-art prototyping and manufacturing services and intellectual property to NESD researchers on a pre-competitive basis. In later phases of the program, these partners could help transition the resulting technologies into research and commercial application spaces.
To familiarize potential participants with the technical objectives of NESD, DARPA will host a Proposers Day meeting that runs Tuesday and Wednesday, February 2-3, 2016, in Arlington, Va. The Special Notice announcing the Proposers Day meeting and more details about the Industry Group that will support NESD are available on DARPA’s website; a Broad Agency Announcement describing the specific capabilities sought will be forthcoming.
NESD is part of a broader portfolio of programs within DARPA that support President Obama’s BRAIN Initiative. More information about DARPA’s work in that domain is available on DARPA’s website.

Scientists have discovered brain networks linked to intelligence for the first time

By Hugo Angel

And we may even be able to manipulate them.
For the first time ever, scientists have identified clusters of genes in the brain that are believed to be linked to human intelligence.
The two clusters, called M1 and M3, are networks each consisting of hundreds of individual genes, and are thought to influence our cognitive functions, including
  • memory, 
  • attention, 
  • processing speed, and 
  • reasoning.
Most provocatively, the researchers who identified M1 and M3 say that these clusters are probably under the control of master switches that regulate how the gene networks function. If this hypothesis is correct and scientists can indeed find these switches, we might even be able to manipulate our genetic intelligence and boost our cognitive capabilities.
“We know that genetics plays a major role in intelligence but until now haven’t known which genes are relevant,” said neurologist Michael Johnson, at Imperial College London in the UK. “This research highlights some of the genes involved in human intelligence, and how they interact with each other.”
The researchers made their discovery by examining the brains of patients who had undergone neurosurgery for the treatment of epilepsy. They analysed thousands of genes expressed in the brain and combined the findings with two sets of data: genetic information from healthy people who had performed IQ tests, and from people with neurological disorders and intellectual disability.
Comparing the results, the researchers discovered that some of the genes that influence human intelligence in healthy people can also cause significant neurological problems if they end up mutating.
“Traits such as intelligence are governed by large groups of genes working together – like a football team made up of players in different positions,” said Johnson. “We used computer analysis to identify the genes in the human brain that work together to influence our cognitive ability to make new memories or sensible decisions when faced with lots of complex information. We found that some of these genes overlap with those that cause severe childhood onset epilepsy or intellectual disability.”
The research, which is reported in Nature Neuroscience, is at an early stage, but the authors believe their analysis could have a significant impact – not only on how we understand and treat brain diseases, but one day perhaps altering brainpower itself.
“Eventually, we hope that this sort of analysis will provide new insights into better treatments for neurodevelopmental diseases such as epilepsy, and ameliorate or treat the cognitive impairments associated with these devastating diseases,” said Johnson. “Our research suggests that it might be possible to work with these genes to modify intelligence, but that is only a theoretical possibility at the moment – we have just taken a first step along that road.”
ORIGINAL: Science Alert
22 DEC 2015

Forward to the Future: Visions of 2045

By Hugo Angel

DARPA asked the world and our own researchers what technologies they expect to see 30 years from now—and received insightful, sometimes funny predictions
Today—October 21, 2015—is famous in popular culture as the date 30 years in the future when Marty McFly and Doc Brown arrive in their time-traveling DeLorean in the movie “Back to the Future Part II.” The film got some things right about 2015, including in-home videoconferencing and devices that recognize people by their voices and fingerprints. But it also predicted trunk-sized fusion reactors, hoverboards and flying cars—game-changing technologies that, despite the advances we’ve seen in so many fields over the past three decades, still exist only in our imaginations.
A big part of DARPA’s mission is to envision the future and make the impossible possible. So ten days ago, as the “Back to the Future” day approached, we turned to social media and asked the world to predict: What technologies might actually surround us 30 years from now? We pointed people to presentations from DARPA’s Future Technologies Forum, held last month in St. Louis, for inspiration and a reality check before submitting their predictions.
Well, you rose to the challenge and the results are in. So in honor of Marty and Doc (little known fact: he is a DARPA alum) and all of the world’s innovators past and future, we present here some highlights from your responses, in roughly descending order by number of mentions for each class of futuristic capability:
  • Space: Interplanetary and interstellar travel, including faster-than-light travel; missions and permanent settlements on the Moon, Mars and the asteroid belt; space elevators
  • Transportation & Energy: Self-driving and electric vehicles; improved mass transit systems and intercontinental travel; flying cars and hoverboards; high-efficiency solar and other sustainable energy sources
  • Medicine & Health: Neurological devices for memory augmentation, storage and transfer, and perhaps to read people’s thoughts; life extension, including virtual immortality via uploading brains into computers; artificial cells and organs; “Star Trek”-style tricorder for home diagnostics and treatment; wearable technology, such as exoskeletons and augmented-reality glasses and contact lenses
  • Materials & Robotics: Ubiquitous nanotechnology, 3-D printing and robotics; invisibility and cloaking devices; energy shields; anti-gravity devices
  • Cyber & Big Data: Improved artificial intelligence; optical and quantum computing; faster, more secure Internet; better use of data analytics to improve use of resources
A few predictions inspired us to respond directly:
  • “Pizza delivery via teleportation”—DARPA took a close look at this a few years ago and decided there is plenty of incentive for the private sector to handle this challenge.
  • “Time travel technology will be close, but will be closely guarded by the military as a matter of national security”—We already did this tomorrow.
  • “Systems for controlling the weather”—Meteorologists told us it would be a job killer and we didn’t want to rain on their parade.
  • “Space colonies…and unlimited cellular data plans that won’t be slowed by your carrier when you go over a limit”—We appreciate the idea that these are equally difficult, but they are not. We think likable cell-phone data plans are beyond even DARPA and a total non-starter.
So seriously, as an adjunct to this crowd-sourced view of the future, we asked three DARPA researchers from various fields to share their visions of 2045, and why getting there will require a group effort with players not only from academia and industry but from forward-looking government laboratories and agencies:

Pam Melroy, an aerospace engineer, former astronaut and current deputy director of DARPA’s Tactical Technologies Office (TTO), foresees technologies that would enable machines to collaborate with humans as partners on tasks far more complex than those we can tackle today.
Justin Sanchez, a neuroscientist and program manager in DARPA’s Biological Technologies Office (BTO), imagines a world where neurotechnologies could enable users to interact with their environment and other people by thought alone.
Stefanie Tompkins, a geologist and director of DARPA’s Defense Sciences Office, envisions building substances from the atomic or molecular level up to create “impossible” materials with previously unattainable capabilities.
Check back with us in 2045—or sooner, if that time machine stuff works out—for an assessment of how things really turned out in 30 years.
# # #
Associated images and video may be reused according to the terms of the DARPA User Agreement.