Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts

ORIGINAL: IEEE Spectrum
By Lee Gomes
20 Oct 2014
Big-data boondoggles and brain-inspired chips are just two of the things we’re really getting wrong
Photo-Illustration: Randi Klett
The overeager adoption of big data is likely to result in catastrophes of analysis comparable to a national epidemic of collapsing bridges. 

Hardware designers creating chips based on the human brain are engaged in a faith-based undertaking likely to prove a fool’s errand. 

Despite recent claims to the contrary, we are no further along with computer vision than we were with physics when Isaac Newton sat under his apple tree.

Those may sound like the Luddite ravings of a crackpot who breached security at an IEEE conference. In fact, the opinions belong to IEEE Fellow Michael I. Jordan, Pehong Chen Distinguished Professor at the University of California, Berkeley. Jordan is one of the world’s most respected authorities on machine learning and an astute observer of the field. His CV would require its own massive database, and his standing in the field is such that he was chosen to write the introduction to the 2013 National Research Council report “Frontiers in Massive Data Analysis.” San Francisco writer Lee Gomes interviewed him for IEEE Spectrum on 3 October 2014.

1- Why We Should Stop Using Brain Metaphors When We Talk About Computing
IEEE Spectrum: I infer from your writing that you believe there’s a lot of misinformation out there about deep learning, big data, computer vision, and the like.
Michael Jordan: Well, on all academic topics there is a lot of misinformation. The media is trying to do its best to find topics that people are going to read about. Sometimes those go beyond where the achievements actually are. Specifically on the topic of deep learning, it’s largely a rebranding of neural networks, which go back to the 1980s. They actually go back to the 1960s; it seems like every 20 years there is a new wave that involves them. In the current wave, the main success story is the convolutional neural network, but that idea was already present in the previous wave. And one of the problems with the previous wave, one that has unfortunately persisted in the current wave, is that people continue to infer that something involving neuroscience is behind it, and that deep learning is taking advantage of an understanding of how the brain processes information, learns, makes decisions, or copes with large amounts of data. And that is just patently false.
Spectrum: As a member of the media, I take exception to what you just said, because it’s very often the case that academics are desperate for people to write stories about them.
Michael Jordan: Yes, it’s a partnership.
Spectrum: It’s always been my impression that when people in computer science describe how the brain works, they are making horribly reductionist statements that you would never hear from neuroscientists. You called these “cartoon models” of the brain.
Michael Jordan: I wouldn’t want to put labels on people and say that all computer scientists work one way, or all neuroscientists work another way. But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea on

  • how neurons are storing information
  • how they are computing, 
  • what the rules are, 
  • what the algorithms are, 
  • what the representations are, and 
  • the like. 

So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.

Spectrum: In addition to criticizing cartoon models of the brain, you actually go further and criticize the whole idea of “neural realism”—the belief that just because a particular hardware or software system shares some putative characteristic of the brain, it’s going to be more intelligent. What do you think of computer scientists who say, for example, “My system is brainlike because it is massively parallel”?
Michael Jordan: Well, these are metaphors, which can be useful. Flows and pipelines are metaphors that come out of circuits of various kinds. I think in the early 1980s, computer science was dominated by sequential architectures, by the von Neumann paradigm of a stored program that was executed sequentially, and as a consequence, there was a need to try to break out of that. And so people looked for metaphors of the highly parallel brain. And that was a useful thing.
But as the topic evolved, it was not neural realism that led to most of the progress. The algorithm that has proved the most successful for deep learning is based on a technique called back propagation. You have these layers of processing units, and you get an output from the end of the layers, and you propagate a signal backwards through the layers to change all the parameters. It’s pretty clear the brain doesn’t do something like that. This was definitely a step away from neural realism, but it led to significant progress. But people tend to lump that particular success story together with all the other attempts to build brainlike systems that haven’t been nearly as successful.
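
The mechanism Jordan describes can be written down in a few lines. The toy Python sketch below trains a tiny two-layer network by pushing the output error backwards through the layers to adjust every weight; the data and the network are invented for illustration and make no claim about any production deep-learning system.

    import numpy as np

    # Toy data: 200 examples with 3 features, and a label that depends on them.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 3))
    y = (X[:, 0] - 2 * X[:, 1] > 0).astype(float).reshape(-1, 1)

    W1 = rng.standard_normal((3, 8)) * 0.1       # input -> hidden weights
    W2 = rng.standard_normal((8, 1)) * 0.1       # hidden -> output weights
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for step in range(2000):
        h = sigmoid(X @ W1)                      # forward pass, layer 1
        p = sigmoid(h @ W2)                      # forward pass, layer 2 (the output)
        grad_out = p - y                         # error signal at the output
        grad_W2 = h.T @ grad_out / len(X)        # propagated back to the output weights
        grad_h = (grad_out @ W2.T) * h * (1 - h) # propagated back through the hidden layer
        grad_W1 = X.T @ grad_h / len(X)
        W1 -= 1.0 * grad_W1                      # change all the parameters
        W2 -= 1.0 * grad_W2

    print("training accuracy:", ((p > 0.5) == y).mean())
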
Spectrum: Another point you’ve made regarding the failure of neural realism is that there is nothing very neural about neural networks.
Michael Jordan: There are no spikes in deep-learning systems. There are no dendrites. And they have bidirectional signals that the brain doesn’t have.
We don’t know how neurons learn. Is it actually just a small change in the synaptic weight that’s responsible for learning? That’s what these artificial neural networks are doing. In the brain, we have precious little idea how learning is actually taking place.
Spectrum: I read all the time about engineers describing their new chip designs in what seems to me to be an incredible abuse of language. They talk about the “neurons” or the “synapses” on their chips. But that can’t possibly be the case; a neuron is a living, breathing cell of unbelievable complexity. Aren’t engineers appropriating the language of biology to describe structures that have nothing remotely close to the complexity of biological systems?
Michael Jordan: Well, I want to be a little careful here. I think it’s important to distinguish two areas where the word neural is currently being used.
One of them is in deep learning. And there, each “neuron” is really a cartoon. It’s a linear-weighted sum that’s passed through a nonlinearity. Anyone in electrical engineering would recognize those kinds of nonlinear systems. Calling that a neuron is clearly, at best, a shorthand. It’s really a cartoon. There is a procedure called logistic regression in statistics that dates from the 1950s, which had nothing to do with neurons but which is exactly the same little piece of architecture.
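
That cartoon is small enough to write out. The sketch below (illustrative only; the inputs and weights are arbitrary) computes a single artificial “neuron”: a linear-weighted sum passed through a sigmoid nonlinearity, which is the same computation as a logistic-regression unit.

    import numpy as np

    def artificial_neuron(x, w, b):
        z = np.dot(w, x) + b                 # linear-weighted sum of the inputs
        return 1.0 / (1.0 + np.exp(-z))      # sigmoid nonlinearity

    # Arbitrary example inputs and weights, purely for illustration.
    print(artificial_neuron(np.array([0.5, -1.0]), np.array([2.0, 0.3]), 0.1))
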
A second area involves what you were describing and is aiming to get closer to a simulation of an actual brain, or at least to a simplified model of actual neural circuitry, if I understand correctly. But the problem I see is that the research is not coupled with any understanding of what algorithmically this system might do. It’s not coupled with a learning system that takes in data and solves problems, like in vision. It’s really just a piece of architecture with the hope that someday people will discover algorithms that are useful for it. And there’s no clear reason that hope should be borne out. It is based, I believe, on faith, that if you build something like the brain, that it will become clear what it can do.
Spectrum: If you could, would you declare a ban on using the biology of the brain as a model in computation?
Michael Jordan: No. You should get inspiration from wherever you can get it. As I alluded to before, back in the 1980s, it was actually helpful to say, “Let’s move out of the sequential, von Neumann paradigm and think more about highly parallel systems.” But in this current era, where it’s clear that the detailed processing the brain is doing is not informing algorithmic process, I think it’s inappropriate to use the brain to make claims about what we’ve achieved. We don’t know how the brain processes visual information.
2- Our Foggy Vision About Machine Vision
Spectrum: You’ve used the word hype in talking about vision system research. Lately there seems to be an epidemic of stories about how computers have tackled the vision problem, and that computers have become just as good as people at vision. Do you think that’s even close to being true?
Michael Jordan: Well, humans are able to deal with cluttered scenes. They are able to deal with huge numbers of categories. They can deal with inferences about the scene: “What if I sit down on that?” “What if I put something on top of something?” These are far beyond the capability of today’s machines. Deep learning is good at certain kinds of image classification. “What object is in this scene?”
But the computational vision problem is vast. It’s like saying when that apple fell out of the tree, we understood all of physics. Yeah, we understood something more about forces and acceleration. That was important. In vision, we now have a tool that solves a certain class of problems. But to say it solves all problems is foolish.
Spectrum: How big of a class of problems in vision are we able to solve now, compared with the totality of what humans can do?
Michael Jordan: With face recognition, it’s been clear for a while now that it can be solved. Beyond faces, you can also talk about other categories of objects: “There’s a cup in the scene.” “There’s a dog in the scene.” But it’s still a hard problem to talk about many kinds of different objects in the same scene and how they relate to each other, or how a person or a robot would interact with that scene. There are many, many hard problems that are far from solved.
Spectrum: Even in facial recognition, my impression is that it still only works if you’ve got pretty clean images to begin with.
Michael Jordan: Again, it’s an engineering problem to make it better. As you will see over time, it will get better. But this business about “revolutionary” is overwrought.
3- Why Big Data Could Be a Big Fail
Spectrum: If we could turn now to the subject of big data, a theme that runs through your remarks is that there is a certain fool’s gold element to our current obsession with it. For example, you’ve predicted that society is about to experience an epidemic of false positives coming out of big-data projects.
Michael Jordan: When you have large amounts of data, your appetite for hypotheses tends to get even larger. And if it’s growing faster than the statistical strength of the data, then many of your inferences are likely to be false. They are likely to be white noise.
Spectrum: How so?
Michael Jordan: In a classical database, you have maybe a few thousand people in it. You can think of those as the rows of the database. And the columns would be the features of those people: their age, height, weight, income, et cetera.
Now, the number of combinations of these columns grows exponentially with the number of columns. So if you have many, many columns—and we do in modern databases—you’ll get up into millions and millions of attributes for each person.
Now, if I start allowing myself to look at all of the combinations of these features—if you live in Beijing, and you ride a bike to work, and you work in a certain job, and are a certain age—what’s the probability you will have a certain disease or you will like my advertisement? Now I’m getting combinations of millions of attributes, and the number of such combinations is exponential; it gets to be the size of the number of atoms in the universe.
Those are the hypotheses that I’m willing to consider. And for any particular database, I will find some combination of columns that will predict perfectly any outcome, just by chance alone. If I just look at all the people who have a heart attack and compare them to all the people that don’t have a heart attack, and I’m looking for combinations of the columns that predict heart attacks, I will find all kinds of spurious combinations of columns, because there are huge numbers of them.
So it’s like having billions of monkeys typing. One of them will write Shakespeare.
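
The effect Jordan describes is easy to reproduce. The Python sketch below (all values are random; the outcome label is pure noise) searches combinations of binary attributes and finds rules that “perfectly” predict the outcome by chance alone; the number of candidate combinations grows exponentially with the number of columns.

    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_features = 40, 12
    X = rng.integers(0, 2, size=(n_people, n_features))    # random binary attributes
    outcome = rng.integers(0, 2, size=n_people)             # random outcome, no real signal

    spurious = 0
    for k in (1, 2, 3):                                     # combinations of 1, 2, or 3 columns
        for cols in itertools.combinations(range(n_features), k):
            rule = X[:, cols].all(axis=1)                   # "has all of these attributes"
            if rule.any() and (outcome[rule] == 1).all():   # everyone matching the rule has the outcome
                spurious += 1
    print("spurious 'perfect' rules found in pure noise:", spurious)
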
Spectrum: Do you think this aspect of big data is currently underappreciated?
Michael Jordan: Definitely.
Spectrum: What are some of the things that people are promising for big data that you don’t think they will be able to deliver?
Michael Jordan: I think data analysis can deliver inferences at certain levels of quality. But we have to be clear about what levels of quality. We have to have error bars around all our predictions. That is something that’s missing in much of the current machine learning literature.
Spectrum: What will happen if people working with data don’t heed your advice?
Michael Jordan: I like to use the analogy of building bridges. If I have no principles, and I build thousands of bridges without any actual science, lots of them will fall down, and great disasters will occur.
Similarly here, if people use data and inferences they can make with the data without any concern

  • about error bars, 
  • about heterogeneity, 
  • about noisy data, 
  • about the sampling pattern, 
  • about all the kinds of things that you have to be serious about if you’re an engineer and a statistician—

then you will make lots of predictions, and there’s a good chance that you will occasionally solve some really interesting problems. But you will also occasionally make some disastrously bad decisions. And you won’t know the difference a priori. You will just produce these outputs and hope for the best.

And so that’s where we are currently. A lot of people are building things hoping that they work, and sometimes they will. And in some sense, there’s nothing wrong with that; it’s exploratory. But society as a whole can’t tolerate that; we can’t just hope that these things work. Eventually, we have to give real guarantees. Civil engineers eventually learned to build bridges that were guaranteed to stand up. So with big data, it will take decades, I suspect, to get a real engineering approach, so that you can say with some assurance that you are giving out reasonable answers and are quantifying the likelihood of errors.
Spectrum: Do we currently have the tools to provide those error bars?
Michael Jordan: We are just getting this engineering science assembled. We have many ideas that come from hundreds of years of statistics and computer science. And we’re working on putting them together, making them scalable. A lot of the ideas for controlling what are called familywise errors, where I have many hypotheses and want to know my error rate, have emerged over the last 30 years. But many of them haven’t been studied computationally. It’s hard mathematics and engineering to work all this out, and it will take time.
It’s not a year or two. It will take decades to get right. We are still learning how to do big data well.
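
One of the classical ideas Jordan alludes to is controlling the familywise error rate when many hypotheses are tested at once. The sketch below (simulated data only, assuming independent tests) compares naive thresholding against the simple Bonferroni correction: with every null hypothesis true, the naive approach still reports dozens of “discoveries,” while the corrected threshold reports essentially none.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    m, alpha = 1000, 0.05                      # number of hypotheses and target error rate
    # Every null hypothesis is true: both groups come from the same distribution.
    pvals = np.array([
        stats.ttest_ind(rng.standard_normal(30), rng.standard_normal(30)).pvalue
        for _ in range(m)
    ])
    print("naive 'discoveries':     ", int((pvals < alpha).sum()))       # about alpha * m false positives
    print("Bonferroni 'discoveries':", int((pvals < alpha / m).sum()))   # familywise error held near alpha
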
Spectrum: When you read about big data and health care, every third story seems to be about all the amazing clinical insights we’ll get almost automatically, merely by collecting data from everyone, especially in the cloud.
Michael Jordan: You can’t be completely a skeptic or completely an optimist about this. It is somewhere in the middle. But if you list all the hypotheses that come out of some analysis of data, some fraction of them will be useful. You just won’t know which fraction. So if you just grab a few of them—say, if you eat oat bran you won’t have stomach cancer or something, because the data seem to suggest that—there’s some chance you will get lucky. The data will provide some support.
But unless you’re actually doing the full-scale engineering statistical analysis to provide some error bars and quantify the errors, it’s gambling. It’s better than just gambling without data. That’s pure roulette. This is kind of partial roulette.
Spectrum: What adverse consequences might await the big-data field if we remain on the trajectory you’re describing?
Michael Jordan: The main one will be a “big-data winter.” After a bubble, when people have invested and a lot of companies have overpromised without providing serious analysis, there will be a bust. And soon, in a two- to five-year span, people will say, “The whole big-data thing came and went. It died. It was wrong.” I am predicting that. It’s what happens in these cycles when there is too much hype, that is, assertions not based on an understanding of what the real problems are, or on an understanding that solving them will take decades and that progress will be steady rather than a sudden technical leap. And then there will be a period during which it will be very hard to get resources to do data analysis. The field will continue to go forward, because it’s real, and it’s needed. But the backlash will hurt a large number of important projects.
4- What He’d Do With $1 Billion
Spectrum: Considering the amount of money that is spent on it, the science behind serving up ads still seems incredibly primitive. I have a hobby of searching for information about silly Kickstarter projects, mostly to see how preposterous they are, and I end up getting served ads from the same companies for many months.
Michael Jordan: Well, again, it’s a spectrum. It depends on how a system has been engineered and what domain we’re talking about. In certain narrow domains, it can be very good, and in very broad domains, where the semantics are much murkier, it can be very poor. I personally find Amazon’s recommendation system for books and music to be very, very good. That’s because they have large amounts of data, and the domain is rather circumscribed. With domains like shirts or shoes, it’s murkier semantically, and they have less data, and so it’s much poorer.
There are still many problems, but the people who build these systems are hard at work on them. What we’re getting into at this point is semantics and human preferences. If I buy a refrigerator, that doesn’t show that I am interested in refrigerators in general. I’ve already bought my refrigerator, and I’m probably not likely to still be interested in them. Whereas if I buy a song by Taylor Swift, I’m more likely to buy more songs by her. That has to do with the specific semantics of singers and products and items. To get that right across the wide spectrum of human interests requires a large amount of data and a large amount of engineering.
Spectrum: You’ve said that if you had an unrestricted $1 billion grant, you would work on natural language processing. What would you do that Google isn’t doing with Google Translate?
Michael Jordan: I am sure that Google is doing everything I would do. But I don’t think Google Translate, which involves machine translation, is the only language problem. Another example of a good language problem is question answering, like “What’s the second-biggest city in California that is not near a river?” If I typed that sentence into Google currently, I’m not likely to get a useful response.
Spectrum: So are you saying that for a billion dollars, you could, at least as far as natural language is concerned, solve the problem of generalized knowledge and end up with the big enchilada of AI: machines that think like people?
Michael Jordan: So you’d want to carve off a smaller problem that is not about everything, but which nonetheless allows you to make progress. That’s what we do in research. I might take a specific domain. In fact, we worked on question-answering in geography. That would allow me to focus on certain kinds of relationships and certain kinds of data, but not everything in the world.
Spectrum: So to make advances in question answering, will you need to constrain them to a specific domain?
Michael Jordan: It’s an empirical question about how much progress you could make. It has to do with how much data is available in these domains. How much you could pay people to actually start to write down some of those things they knew about these domains. How many labels you have.
Spectrum: It seems disappointing that even with a billion dollars, we still might end up with a system that isn’t generalized, but that only works in just one domain.
Michael Jordan: That’s typically how each of these technologies has evolved. We talked about vision earlier. The earliest vision systems were face-recognition systems. That’s domain bound. But that’s where we started to see some early progress and had a sense that things might work. Similarly with speech, the earliest progress was on single detached words. And then slowly, it started to get to be where you could do whole sentences. It’s always that kind of progression, from something circumscribed to something less and less so.
Spectrum: Why do we even need better question-answering? Doesn’t Google work well enough as it is?
Michael Jordan: Google has a very strong natural language group working on exactly this, because they recognize that they are very poor at certain kinds of queries. For example, using the word not. Humans want to use the word not. For example, “Give me a city that is not near a river.” In the current Google search engine, that’s not treated very well.
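
A toy example shows why structured question answering handles “not” more naturally than keyword search. The sketch below answers “the second-biggest city that is not near a river” over a tiny table; the city names and attributes are hypothetical placeholders, not real geographic facts, and this is not a description of how Google or anyone else implements it.

    cities = [
        {"name": "Alpha",  "population": 900_000, "near_river": True},
        {"name": "Bravo",  "population": 750_000, "near_river": False},
        {"name": "Carson", "population": 600_000, "near_river": False},
        {"name": "Delta",  "population": 400_000, "near_river": True},
    ]

    candidates = [c for c in cities if not c["near_river"]]       # handle the "not"
    candidates.sort(key=lambda c: c["population"], reverse=True)  # rank by size
    print(candidates[1]["name"])                                  # second biggest -> "Carson"
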
5- How Not to Talk About the Singularity
Spectrum: Turning now to some other topics, if you were talking to someone in Silicon Valley, and they said to you, “You know, Professor Jordan, I’m a really big believer in the singularity,” would your opinion of them go up or down?
Michael Jordan: I luckily never run into such people.
Spectrum: Oh, come on.
Michael Jordan: I really don’t. I live in an intellectual shell of engineers and mathematicians.
Spectrum: But if you did encounter someone like that, what would you do?
Michael Jordan: I would take off my academic hat, and I would just act like a human being thinking about what’s going to happen in a few decades, and I would be entertained just like when I read science fiction. It doesn’t inform anything I do academically.
Spectrum: Okay, but knowing what you do academically, what do you think about it?
Michael Jordan: My understanding is that it’s not an academic discipline. Rather, it’s partly philosophy about how society changes, how individuals change, and it’s partly literature, like science fiction, thinking through the consequences of a technology change. But they don’t produce algorithmic ideas as far as I can tell, because I don’t ever see them, that inform us about how to make technological progress.
6- What He Cares About More Than Whether P = NP
Spectrum: Do you have a guess about whether P = NP? Do you care?
Michael Jordan: I tend not to be so worried about the difference between polynomial and exponential. I’m more interested in low-degree polynomial—linear time, linear space. P versus NP has to do with the categorization of algorithms as being polynomial, which means they are tractable, or exponential, which means they’re not.
I think most people would agree that P is probably not equal to NP. As a piece of mathematics, it’s very interesting to know. But it’s not a hard and sharp distinction. There are many exponential-time algorithms that, partly because of the growth of modern computers, are still viable in certain circumscribed domains. And moreover, for the largest problems, polynomial is not enough. Polynomial just means that the cost grows at a certain superlinear rate, like quadratic or cubic. But it really needs to grow linearly: if you get five more data points, you need only five more units of processing. Or even sublinearly, like logarithmically: as I get 100 new data points, the cost grows by two; if I get 1,000, it grows by three.
That’s the ideal. Those are the kinds of algorithms we have to focus on. And that is very far away from the P versus NP issue. It’s a very important and interesting intellectual question, but it doesn’t inform that much about what we work on.
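
The scaling distinction Jordan draws can be made concrete with a little arithmetic (illustrative numbers only): logarithmic cost barely moves as the data grows, linear cost tracks it exactly, and even a low-degree polynomial like quadratic blows up quickly.

    import math

    for n in (100, 1_000, 10_000, 100_000):
        print(f"n={n:>7}  log10(n)={math.log10(n):4.1f}  linear={n:>7}  quadratic={n**2:>12}")
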
Spectrum: Same question about quantum computing.

Michael Jordan: I am curious about all these things academically. It’s real. It’s interesting. It doesn’t really have an impact on my area of research.
7- What the Turing Test Really Means
Spectrum: Will a machine pass the Turing test in your lifetime?
Michael Jordan: I think you will get a slow accumulation of capabilities, including in domains like speech and vision and natural language. There will probably not ever be a single moment in which we would want to say, “There is now a new intelligent entity in the universe.” I think that systems like Google already provide a certain level of artificial intelligence.
Spectrum: They are definitely useful, but they would never be confused with being a human being.
Michael Jordan: No, they wouldn’t be. I don’t think most of us think the Turing test is a very clear demarcation. Rather, we all know intelligence when we see it, and it emerges slowly in all the devices around us. It doesn’t have to be embodied in a single entity. I can just notice that the infrastructure around me got more intelligent. All of us are noticing that all of the time.
Spectrum: When you say “intelligent,” are you just using it as a synonym for “useful”?
Michael Jordan: Yes. What our generation finds surprising—that a computer recognizes our needs and wants and desires, in some ways—our children find less surprising, and our children’s children will find even less surprising. It will just be assumed that the environment around us is adaptive; it’s predictive; it’s robust. That will include the ability to interact with your environment in natural language. At some point, you’ll be surprised by being able to have a natural conversation with your environment. Right now we can sort of do that, within very limited domains. We can access our bank accounts, for example. They are very, very primitive. But as time goes on, we will see those things get more subtle, more robust, more broad. At some point, we’ll say, “Wow, that’s very different from when I was a kid.” The Turing test has helped get the field started, but in the end, it will be sort of like Groundhog Day—a media event, but something that’s not really important.
About the Author

Lee Gomes, a former Wall Street Journal reporter, has been covering Silicon Valley for more than two decades.


Artificial Intelligence Planning Course at Coursera by the University of Edinburgh

ORIGINAL: Coursera

About the Course

The course aims to provide a foundation in artificial intelligence techniques for planning, with an overview of the wide spectrum of different problems and approaches, including their underlying theory and their applications. It will allow you to:

  • Understand different planning problems
  • Have the basic know-how to design and implement AI planning systems
  • Know how to use AI planning technology for projects in different application domains
  • Have the ability to make use of AI planning literature

Planning is a fundamental part of intelligent systems. In this course, for example, you will learn the basic algorithms that are used in robots to deliberate over a course of actions to take. Simpler, reactive robots don’t need this, but if a robot is to act intelligently, this type of reasoning about actions is vital.
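
As a concrete taste of the kind of representation the course builds on, here is a minimal STRIPS-style sketch in Python: an action has preconditions, an add list, and a delete list, and applying a plan transforms the initial state into one that satisfies the goal. The domain and predicate names are invented for illustration and are not taken from the course materials.

    initial_state = {"at(robot, roomA)", "door_open(roomA, roomB)"}
    goal = {"at(robot, roomB)"}

    move_A_to_B = {
        "pre": {"at(robot, roomA)", "door_open(roomA, roomB)"},
        "add": {"at(robot, roomB)"},
        "del": {"at(robot, roomA)"},
    }

    def apply(state, action):
        assert action["pre"] <= state, "preconditions not satisfied"
        return (state - action["del"]) | action["add"]

    state = apply(initial_state, move_A_to_B)
    print(goal <= state)   # True: this one-step plan achieves the goal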

Course Syllabus

Week 1: Introduction and Planning in Context
Week 2: State-Space Search: Heuristic Search and STRIPS
Week 3: Plan-Space Search and HTN Planning
One-week catch-up break
Week 4: Graphplan and Advanced Heuristics
Week 5: Plan Execution and Applications
Exam week

Recommended Background

The MOOC is based on a Masters level course at the University of Edinburgh but is designed to be accessible at several levels of engagement from an “Awareness Level”, through the core “Foundation Level” requiring a basic knowledge of logic and mathematical reasoning, to a more involved “Performance Level” requiring programming and other assignments.

Suggested Readings

The course follows a textbook, but it is not required for the course:
Automated Planning: Theory & Practice (The Morgan Kaufmann Series in Artificial Intelligence) by M. Ghallab, D. Nau, and P. Traverso (Elsevier, ISBN 1-55860-856-7) 2004.

Course Format

Five weeks of study comprising 10 hours of video lecture material and special features videos. Quizzes and assessments throughout the course will assist in learning. Some weeks will involve recommended readings. Discussion on the course forum and via other social media will be encouraged. A mid-course catch-up break week and a final week for exams and completion of assignments allow for flexibility in study.

You can engage with the course at a number of levels to suit your interests and the time you have available:

  • Awareness Level – gives an overview of the topic, along with introductory videos and application related features. This level is likely to require 2-3 hours of study per week.
  • Foundation Level – is the core taught material on the course and gives a grounding in AI planning technology and algorithms. This level is likely to require 5-6 hours of study per week.
  • Performance Level – is for those interested in carrying out additional programming assignments and engaging in creative challenges to understand the subject more deeply. This level is likely to require 8 hours or more of study per week.

FAQ

  • Will I get a certificate after completing this class? Students who complete the class will be offered a Statement of Accomplishment signed by the instructors.
  • Do I earn University of Edinburgh credits upon completion of this class? The Statement of Accomplishment is not part of a formal qualification from the University. However, it may be useful to demonstrate prior learning and interest in your subject to a higher education institution or potential employer.
  • What resources will I need for this class? Nothing is required, but if you want to try out implementing some of the algorithms described in the lectures you’ll need access to a programming environment. No specific programming language is required. Also, you may want to download existing planners and try those out. This may require you to compile them first.
  • Can I contact the course lecturers directly? You will appreciate that such direct contact would be difficult to manage. You are encouraged to use the course social network and discussion forum to raise questions and seek inputs. The tutors will participate in the forums, and will seek to answer frequently asked questions, in some cases by adding to the course FAQ area.
  • What Twitter hash tag should I use? Use the hash tag #aiplan for tweets about the course.
  • How come this is free? We are passionate about open on-line collaboration and education. Our taught AI planning course at Edinburgh has always published its course materials, readings and resources on-line for anyone to view. Our own on-campus students can access these materials at times when the course is not available if it is relevant to their interests and projects. We want to make the materials available in a more accessible form that can reach a broader audience who might be interested in AI planning technology. This achieves our primary objective of getting such technology into productive use. Another benefit for us is that more people get to know about courses in AI in the School of Informatics at the University of Edinburgh, or get interested in studying or collaborating with us.
  • When will the course run again? It is likely that the 2015 session will be the final time this course runs as a Coursera MOOC, but we intend to leave the course wiki open for further study and use across course instances.

How IBM Got Brainlike Efficiency From the TrueNorth Chip

ORIGINAL: IEEE Spectrum
By Jeremy Hsu
Posted 29 Sep 2014 | 19:01 GMT


TrueNorth takes a big step toward using the brain’s architecture to reduce computing’s power consumption

Photo: IBM

Neuromorphic computer chips meant to mimic the neural network architecture of biological brains have generally fallen short of their wetware counterparts in efficiency—a crucial factor that has limited practical applications for such chips. That could be changing. At a power density of just 20 milliwatts per square centimeter, IBM’s new brain-inspired chip comes tantalizingly close to such wetware efficiency. The hope is that it could bring brainlike intelligence to the sensors of smartphones, smart cars, and—if IBM has its way—everything else.

The latest IBM neurosynaptic computer chip, called TrueNorth, consists of 1 million programmable neurons and 256 million programmable synapses conveying signals between the digital neurons. Each of the chip’s 4,096 neurosynaptic cores includes the entire computing package:

  • memory, 
  • computation, and 
  • communication. 

Such architecture helps to bypass the bottleneck in traditional von Neumann computing, where program instructions and operation data cannot pass through the same route simultaneously.
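
Some back-of-the-envelope arithmetic on the figures quoted above (these are the article’s rounded totals, not IBM’s exact design numbers) gives a feel for how the resources are spread across the cores.

    neurons, synapses, cores = 1_000_000, 256_000_000, 4_096
    print(round(neurons / cores))    # roughly 244 neurons per core
    print(round(synapses / cores))   # roughly 62,500 synapses per core
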
“This is literally a supercomputer the size of a postage stamp, light like a feather, and low power like a hearing aid,” says Dharmendra Modha, IBM fellow and chief scientist for brain-inspired computing at IBM Research-Almaden, in San Jose, Calif.

Such chips can emulate the human brain’s ability to recognize different objects in real time; TrueNorth showed it could distinguish among pedestrians, bicyclists, cars, and trucks. IBM envisions its new chips working together with traditional computing devices as hybrid machines, providing a dose of brainlike intelligence. The chip’s architecture, developed together by IBM and Cornell University, was first detailed in August in the journal Science.


Meet Amelia: the computer that’s after your job

29 Sep 2014
A new artificially intelligent computer system called ‘Amelia’ – that can read and understand text, follow processes, solve problems and learn from experience – could replace humans in a wide range of low-level jobs

Amelia aims to answer the question, can machines think? Photo: IPsoft

In February 2011 an artificially intelligent computer system called IBM Watson astonished audiences worldwide by beating the two all-time greatest Jeopardy champions at their own game.

Thanks to its ability to apply

  • advanced natural language processing,
  • information retrieval,
  • knowledge representation,
  • automated reasoning, and
  • machine learning technologies, Watson consistently outperformed its human opponents on the American quiz show Jeopardy.

Watson represented an important milestone in the development of artificial intelligence, but the field has been progressing rapidly – particularly with regard to natural language processing and machine learning.

In 2012, Google used 16,000 computer processors to build a simulated brain that could correctly identify cats in YouTube videos; the Kinect, which provides a 3D body-motion interface for Microsoft’s Xbox, uses algorithms that emerged from artificial intelligence research, as does the iPhone’s Siri virtual personal assistant.

Today a new artificial intelligence computing system has been unveiled, which promises to transform the global workforce. Named ‘Amelia‘ after American aviator and pioneer Amelia Earhart, the system is able to shoulder the burden of often tedious and laborious tasks, allowing human co-workers to take on more creative roles.

“Watson is perhaps the best data analytics engine that exists on the planet; it is the best search engine that exists on the planet; but IBM did not set out to create a cognitive agent. It wanted to build a program that would win Jeopardy, and it did that,” said Chetan Dube, chief executive officer of IPsoft, the company behind Amelia.

“Amelia, on the other hand, started out not with the intention of winning Jeopardy, but with the pure intention of answering the question posed by Alan Turing in 1950 – can machines think?”


Amelia learns by following the same written instructions as her human colleagues, but is able to absorb information in a matter of seconds.
She understands the full meaning of what she reads rather than simply recognising individual words. This involves

  • understanding context,
  • applying logic and
  • inferring implications.

When exposed to the same information as any new employee in a company, Amelia can quickly apply her knowledge to solve queries in a wide range of business processes. Just like any smart worker she learns from her colleagues and, by observing their work, she continually builds her knowledge.

While most ‘smart machines’ require humans to adapt their behaviour in order to interact with them, Amelia is intelligent enough to interact like a human herself. She speaks more than 20 languages, and her core knowledge of a process needs only to be learned once for her to be able to communicate with customers in their language.

Independently, rather than through time-intensive programming, Amelia creates her own ‘process map’ of the information she is given so that she can work out for herself what actions to take depending on the problem she is solving.

“Intelligence is the ability to acquire and apply knowledge. If a system claims to be intelligent, it must be able to read and understand documents, and answer questions on the basis of that. It must be able to understand processes that it observes. It must be able to solve problems based on the knowledge it has acquired. And when it cannot solve a problem, it must be capable of learning the solution through noticing how a human did it,” said Dube.

IPsoft has been working on this technology for 15 years with the aim of developing a platform that does not simply mimic human thought processes but can comprehend the underlying meaning of what is communicated – just like a human.

Just as machines transformed agriculture and manufacturing, IPsoft believes that cognitive technologies will drive the next evolution of the global workforce, so that in the future companies will have digital workforces that comprise a mixture of human and virtual employees.

Amelia has already been trialled within a number of Fortune 1000 companies, in areas such as manning technology help desks, procurement processing, financial trading operations support and providing expert advice for field engineers.

In each of these environments, she has learnt not only from reading existing manuals and situational context but also by observing and working with her human colleagues and discerning for herself a map of the business processes being followed.

In a help desk situation, for example, Amelia can understand what a caller is looking for, ask questions to clarify the issue, find and access the required information and determine which steps to follow in order to solve the problem.

As a knowledge management advisor, she can help engineers working in remote locations who are unable to carry detailed manuals, by diagnosing the cause of failed machinery and guiding them towards the best steps to rectifying the problem.

During these trials, Amelia was able to go from solving very few queries independently to 42 per cent of the most common queries within one month. By the second month she could answer 64 per cent of those queries independently.

“That’s a true learning cognitive agent. Learning is the key to the kingdom, because humans learn from experience. A child may need to be told five times before they learn something, but Amelia needs to be told only once,” said Dube.

“Amelia is that Mensa kid, who personifies a major breakthrough in cognitive technologies.”

Analysts at Gartner predict that, by 2017, managed services offerings that make use of autonomics and cognitive platforms like Amelia will drive a 60 per cent reduction in the cost of services, enabling organisations to apply human talent to higher level tasks requiring creativity, curiosity and innovation.

IPsoft even has plans to start embedding Amelia into humanoid robots such as Softbank’s Pepper, Honda’s Asimo or Rethink Robotics’ Baxter, allowing her to take advantage of their mechanical functions.

“The robots have got a fair degree of sophistication in all the mechanical functions – the ability to climb up stairs, the ability to run, the ability to play ping pong. What they don’t have is the brain, and we’ll be supplementing that brain part with Amelia,” said Dube.

“I am convinced that in the next decade you’ll pass someone in the corridor and not be able to discern if it’s a human or an android.”

Given the premise of IPsoft’s artificial intelligence system, it seems logical that the ultimate measure of Amelia’s success would be passing the Turing Test – which sets out to see whether humans can discern whether they are interacting with a human or a machine.

Earlier this year, a chatbot named Eugene Goostman became the first machine to pass the Turing Test by convincingly imitating a 13-year-old boy. In a five-minute keyboard conversation with a panel of human judges, Eugene managed to convince 33 per cent that it was human.

Interestingly, however, IPsoft believes that the Turing Test needs reframing, to redefine what it means to ‘think’. While Eugene was able to imitate natural language, he was only mimicking understanding. He did not learn from the interaction, nor did he demonstrate problem-solving skills.

“Natural language understanding is a big step up from parsing. Parsing is syntactic, understanding is semantic, and there’s a big cavern between the two,” said Dube.

“The aim of Amelia is not just to get an accolade for managing to fool one in three people on a panel. The assertion is to create something that can answer to the fundamental need of human beings – particularly after a certain age – of companionship. That is our intent.”


AI For Everyone: Startups Democratize Deep Learning So Google And Facebook Don’t Own It All

ORIGINAL: Forbes
9/17/2014

When I arrived at a Stanford University auditorium Tuesday night for what I thought would be a pretty nerdy panel on deep learning, a fast-growing branch of artificial intelligence, I figured I must be in the wrong place–maybe a different event for all the new Stanford students and their parents visiting the campus. Nope. Despite the highly technical nature of deep learning, some 600 people had shown up for the sold-out AI event, presented by VLAB, a Stanford-based chapter of the MIT Enterprise Forum. The turnout was a stark sign of the rising popularity of deep learning, an approach to AI that tries to mimic the activity of the brain in so-called neural networks. In just the last couple of years, deep learning software from giants like Google, Facebook, and China’s Baidu, as well as a raft of startups, has led to big advances in image and speech recognition, medical diagnostics, stock trading, and more. “There’s quite a bit of excitement in this area,” panel moderator Steve Jurvetson, a partner with the venture firm DFJ, said with uncustomary understatement.

In the past year or two, big companies have been locked in a land grab for talent, paying big bucks for startups and even hiring away deep learning experts from each other. But this event, focused mostly on startups, including several that demonstrated their products before the panel, also revealed there’s still a lot of entrepreneurial activity. In particular, several companies aim to democratize deep learning by offering it as a service or coming up with cheaper hardware to make it more accessible to businesses.

Jurvetson explained why deep learning has pushed the boundaries of AI so much further recently.

  • For one, there’s a lot more data around because of the Internet, there’s metadata such as tags and translations, and there’s even services such as Amazon’s Mechanical Turk, which allows for cheap labeling or tagging.
  • There are also algorithmic advances, especially for using unlabeled data.
  • And computing has advanced enough to allow much larger neural networks with more synapses–in the case of Google Brain, for instance, 1 billion synapses (though that’s still a very long way from the 100 trillion synapses in the adult human brain).

Adam Berenzweig, cofounder and CTO of image recognition firm Clarifai and former engineer at Google for 10 years, made the case that deep learning is “adding a new primary sense to computing” in the form of useful computer vision. “Deep learning is forming that bridge between the physical world and the world of computing,” he said.

And it’s allowing that to happen in real time. “Now we’re getting into a world where we can take measurements of the physical world, like pixels in a picture, and turn them into symbols that we can sort,” he said. Clarifai has been working on taking an image, producing a meaningful description very quickly (in about 80 milliseconds), and showing very similar images.

One interesting application relevant to advertising and marketing, he noted: Once you can recognize key objects in images, you can target ads not just on keywords but on objects in an image.

DFJ’s Steve Jurvetson led a panel of AI experts at a Stanford event Sept. 16.

Even more sweeping, said Naveen Rao, cofounder and CEO of deep-learning hardware and software startup Nervana Systems and former researcher in neuromorphic computing at Qualcomm, deep learning is “that missing link between computing and what the brain does.” Instead of doing specific computations very fast, as conventional computers do, “we can start building new hardware to take computer processing in a whole new direction,” assessing probabilities, like the brain does. “Now there’s actually a business case for this kind of computing,” he said.

And not just for big businesses. Elliot Turner, founder and CEO of AlchemyAPI, a deep-learning platform in the cloud, said his company’s mission is to “democratize deep learning.” The company is working in 10 industries from advertising to business intelligence, helping companies apply it to their businesses. “I look forward to the day that people actually stop talking about deep learning, because that will be when it has really succeeded,” he added.

Despite the obvious advantages of large companies such as Google, which have untold amounts of both data and computer power that deep learning requires to be useful, startups can still have a big impact, a couple of the panelists said. “There’s data in a lot of places. There’s a lot of nooks and crannies that Google doesn’t have access to,” Berenzweig said hopefully. “Also, you can trade expertise for data. There’s also a question of how much data is enough.”

Turner agreed. “It’s not just a matter of stockpiling data,” he said. “Better algorithms can help an application perform better.” He noted that even Facebook, despite its wealth of personal data, found this in its work on image recognition.

Those algorithms may have broad applicability, too. Even if they’re initially developed for specific applications such as speech recognition, it looks like they can be used on a wide variety of applications. “These algorithms are extremely fungible,” said Rao. And he said companies such as Google aren’t keeping them as secret as expected, often publishing them in academic journals and at conferences–though Berenzweig noted that “it takes more than what they publish to do what they do well.”

For all that, it’s not yet clear how much deep-learning systems will actually emulate the brain, even if they are intelligent. But Ilya Sutskever, research scientist at Google Brain and a protégé of Geoffrey Hinton, the University of Toronto deep learning guru since the 1980s who’s now working part-time at Google, said it almost doesn’t matter. “You can still do useful predictions” using them. And while the learning principles for dealing with all the unlabeled data out there remain primitive, he said he and many others are working on this and likely will make even more progress.

Rao said he’s unworried that we’ll end up creating some kind of alien intelligence that could run amok, if only because advances will be driven by market needs. Besides, he said, “I think a lot of the similarities we’re seeing in computation and brain functions is coincidental. It’s driven that way because we constrain it that way.”

OK, so how are these companies planning to make money on this stuff? Jurvetson wondered. Of course, we’ve already seen improvements in speech and image recognition that make smartphones and apps more useful, leading more people to buy them. “Speech recognition is useful enough that I use it,” said Sutskever. “I’d be happy if I didn’t press a button ever again. And language translation could have a very large impact.”

Beyond that, Berenzweig said, “we’re looking for the low-hanging fruit,” common use cases such as visual search for shopping, organizing your personal photos, and various business niches such as security.



Danko Nikolic on Singularity 1 on 1: Practopoiesis Tells Us Machine Learning Is Not Enough!

If there’s ever been a case when I just wanted to jump on a plane and go interview someone in person, not because they are famous but because they have created a totally unique and arguably seminal theory, it has to be Danko Nikolic. I believe Danko’s theory of Practopoiesis is that good, and he should, and probably eventually will, become known around the world for it. Unfortunately, however, I don’t have a budget of thousands of dollars per interview which would allow me to pay for my audio and video team to travel to Germany and produce the quality that Nikolic deserves. So I’ve had to settle for Skype. And Skype refused to cooperate on that day even though both Danko and I have pretty much the fastest internet connections money can buy. Luckily, despite the poor video quality, our audio was very good, and I would urge that if there’s ever been an interview where you ought to disregard the video quality and focus on the content, it has to be this one.
During our 67-minute conversation with Danko we cover a variety of interesting topics.

Who is Danko Nikolic?
The main motive for my studies is the explanatory gap between the brain and the mind. My interest is in how the physical world of neuronal activity produces the mental world of perception and cognition. I am associated with

  • the Max-Planck Institute for Brain Research,
  • Ernst Strüngmann Institute,
  • Frankfurt Institute for Advanced Studies, and
  • the University of Zagreb.
I approach the problem of the explanatory gap from both sides, bottom-up and top-down. The bottom-up approach investigates brain physiology. The top-down approach investigates behavior and experiences. Each of the two approaches led me to develop a theory: the work on physiology resulted in the theory of practopoiesis; the work on behavior and experiences led to the phenomenon of ideasthesia.
The empirical work in the background of those theories involved

  • simultaneous recordings of activity of 100+ neurons in the visual cortex (extracellular recordings),
  • behavioral and imaging studies in visual cognition (attention, working memory, long-term memory), and
  • empirical investigations of phenomenal experiences (synesthesia).
The ultimate goal of my studies is twofold.

  • First, I would like to achieve conceptual understanding of how the dynamics of physical processes creates the mental ones. I believe that the work on practopoiesis presents an important step in this direction and that it will help us eventually address the hard problem of consciousness and the mind-body problem in general.
  • Second, I would like to use this theoretical knowledge to create artificial systems that are intelligent and adaptive in the way biological systems are. This would have implications for our technology.
A reason why one would be interested in studying the brain in the first place is described here: Why brain?

Neurons in human skin perform advanced calculations

[2014-09-01] Neurons in human skin perform advanced calculations that it was previously believed only the brain could perform. This is according to a study from Umeå University in Sweden, published in the journal Nature Neuroscience.

A fundamental characteristic of neurons that extend into the skin and record touch, so-called first-order neurons in the tactile system, is that they branch in the skin so that each neuron reports touch from many highly-sensitive zones on the skin.
According to researchers at the Department of Integrative Medical Biology, IMB, Umeå University, this branching allows first-order tactile neurons not only to send signals to the brain that something has touched the skin, but also process geometric data about the object touching the skin.
“Our work has shown that two types of first-order tactile neurons that supply the sensitive skin at our fingertips not only signal information about when and how intensely an object is touched, but also information about the touched object’s shape,” says Andrew Pruszynski, who is one of the researchers behind the study.
The study also shows that the sensitivity of individual neurons to the shape of an object depends on the layout of the neuron’s highly-sensitive zones in the skin.
“Perhaps the most surprising result of our study is that these peripheral neurons, which are engaged when a fingertip examines an object, perform the same type of calculations done by neurons in the cerebral cortex. Somewhat simplified, it means that our touch experiences are already processed by neurons in the skin before they reach the brain for further processing,” says Andrew Pruszynski.

This AI-Powered Calendar Is Designed to Give You Me-Time

ORIGINAL: Wired
09.02.14
Timeful intelligently schedules to-dos and habits on your calendar. Timeful
No one on their death bed wishes they’d taken a few more meetings. Instead, studies find people consistently say things like: 
  • I wish I’d spent more time with my friends and family;
  • I wish I’d focused more on my health; 
  • I wish I’d picked up more hobbies.
That’s what life’s all about, after all. So, question: Why don’t we ever put any of that stuff on our calendar?
That’s precisely what the folks behind Timeful want you to do. Their app (iPhone, free) is a calendar designed to handle it all. You don’t just put in the things you need to do—meeting on Thursday; submit expenses; take out the trash—but also the things you want to do, like going running more often or brushing up on your Spanish. Then, the app algorithmically generates a schedule to help you find time for it all. The more you use it, the smarter that schedule gets.
Even in the crowded categories of calendars and to-do lists, Timeful stands out. Not many iPhone calendar apps were built by renowned behavioral psychologists and machine learning experts, nor have many attracted investor attention to the tune of $7 million.
It was born as a research project at Stanford, where Jacob Bank, a computer science PhD candidate, and his advisor, AI expert Yoav Shoham, started exploring how machine learning could be applied to time management. To help with their research, they brought on Dan Ariely, the influential behavior psychologist and author of the book Predictably Irrational. It didn’t take long for the group to realize that there was an opportunity to bring time management more in step with the times. “It suddenly occurred to me that my calendar and my grandfather’s calendar are essentially the same,” Shoham recalls.
A Tough Problem and an Artifically Intelligent Solution
Like all of Timeful’s founders, Shoham sees time as our most valuable resource–far more valuable, even, than money. And yet he says the tools we have for managing money are far more sophisticated than the ones we have for managing time. In part, that’s because time poses a tricky problem. Simply put, it’s tough to figure out the best way to plan your day. On top of that, people are lazy, and prone to distraction. “We have a hard computational problem compounded by human mistakes,” Shoham says.
To address that lazy human bit, Timeful is designed around a simple fact: When you schedule something, you’re far more likely to get it done. Things you put in the app don’t just live in some list. Everything shows up on the calendar. Meetings and appointments get slotted at the times they take place, as you’d expect. But at the start of the day, the app also blocks off time for your to-dos and habits, rendering them as diagonally-slatted rectangles on your calendar which you can accept, dismiss, or move around as you desire.
Suggestions have diagonal slats. Timeful
In each case, Timeful takes note of how you respond and adjusts its “intention rank,” as the company calls its scheduling algorithm. This is the special sauce that elevates Timeful from dumb calendar to something like an assistant. As Bank sees it, the more nebulous lifestyle events we’d never think to put on our calendar are a perfect subject for some machine learning smarts. “Habits have the really nice property that they repeat over time with very natural patterns,” he says. “So if you put in, ‘run three times a week,’ we can quickly learn what times you like to run and when you’re most likely to do it.”
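Timeful has not published the internals of its “intention rank,” but the feedback loop Bank describes (learning from which suggested blocks you accept, dismiss, or move) can be sketched in a few lines. Everything below, from the slot names to the scoring rule, is a hypothetical illustration of that loop, not Timeful’s actual algorithm.

```python
# Hypothetical sketch: learn which time slots a user tends to accept for a habit.
# This illustrates the general idea only; it is not Timeful's "intention rank".

from collections import defaultdict

SLOTS = ["early_morning", "morning", "lunch", "afternoon", "evening"]

class HabitScheduler:
    def __init__(self, learning_rate=0.3):
        self.lr = learning_rate
        # Start with no preference: every slot is equally likely to be suggested.
        self.scores = defaultdict(lambda: 0.5)

    def suggest(self):
        # Suggest the slot the user has historically been most likely to accept.
        return max(SLOTS, key=lambda s: self.scores[s])

    def record_feedback(self, slot, accepted):
        # Move the slot's score toward 1 if the block was accepted, toward 0 if dismissed.
        target = 1.0 if accepted else 0.0
        self.scores[slot] += self.lr * (target - self.scores[slot])

scheduler = HabitScheduler()
# The user keeps dismissing morning runs and accepting evening ones...
for _ in range(5):
    scheduler.record_feedback("morning", accepted=False)
    scheduler.record_feedback("evening", accepted=True)
print(scheduler.suggest())  # -> "evening"
```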
The other machine learning challenge involved with Timeful is the problem of input. Where many other to-do apps try to make the input process as frictionless as possible, Timeful often needs to ask a few follow-up questions to schedule tasks properly, like how long you expect them to take, and if there’s a deadline for completion. As with all calendars and to-do apps, Timeful’s only as useful as the stuff you put on it, and here that interaction’s a fairly heavy one. For many, it could simply be too much work for the reward. Plus, isn’t it a little weird to block off sixty minutes to play with your kid three times a week?
Bank admits that it takes longer to put things into Timeful than some other apps, and the company’s computer scientists are actively trying to come up with new ways to offload the burden algorithmically. In future versions, Bank hopes to be able to automatically pull in data from other apps and services. A forthcoming web version could also make input easier (an Android version is on the way too). But as Bank sees it, there may be an upside to having a bit of friction here. By going through the trouble of putting something in the app, you’re showing that you truly want to get it done, and that could help keep Timeful from becoming a “list of shame” like other to-do apps. (And as far as the kid thing goes, it might feel weird, but if scheduling family time on your calendar results in more family time, then it’s kinda hard to knock, no?)
How Much Scheduling Is Too Much?
Perhaps the bigger question is how much day-to-day optimization people can really swallow. Having been conditioned to see the calendar as a source of responsibilities and obligations, opening up one’s preferred scheduling application and seeing a long white column stretching down for the day can be the source of an almost embarrassing degree of relief. Thank God, now I can finally get something done! With Timeful, that feeling becomes extinct. Every new dawn brings a whole bunch of new stuff to do.
Two of Timeful’s co-founders, Jacob Bank (top) and Yoav Shoham Timeful
Bank and Shoham are acutely aware of this thorny problem. “Sometimes there’s a tension between what’s best for a user and what the user wants to accept, and we need to be really delicate about that,” Bank says. In the app, you can fine-tune just how aggressive you want it to be in its planning, and a significant part of the design process was making sure the app’s suggestions felt like suggestions, not demands. Still, we might crave that structure more than we think. After some early user tests, the company actually cranked up the pushiness of Timeful’s default setting; the overwhelming response from beta testers was “give me more!”
The vision is for Timeful to become something akin to a polite assistant. Shoham likens it to Google Now for your schedule–a source of informed suggestions about what to do next. Whether you take those suggestions or leave them is entirely up to you. “This is not your paternalistic dad telling you, ‘thou shall do this!’” he says. “It’s not your guilt-abusing mom. Well, maybe there’s a little bit of that.”

Practopoiesis: How cybernetics of biology can help AI

To create any form of AI, we must copy from biology. The argument goes as follows. A brain is a biological product, and so are its products: perception, insight, inference, logic, mathematics, and so on. By creating AI we inevitably tap into something that biology has already invented on its own. It follows that the more we want an AI system to resemble a human—e.g., to get a better grade on the Turing test—the more we need to copy biology.
When describing living systems, we traditionally adopt different explanatory principles for different levels of system organization.

  1. One set of principles is used for “low-level” biology, such as the evolution of our genome through natural selection, which is a completely different set of principles from the one used to describe the expression of those genes. 
  2. A yet different type of story is used to explain what our neural networks do. 
  3. Needless to say, the descriptions at the very top of that organizational hierarchy—at the level of our behavior—are made with concepts that again live in their own world.
But what if it were possible to unify all these different aspects of biology and describe them all by a single set of principles? What if we could use the same fundamental rules to talk about the physiology of a kidney and the process of a conscious thought? What if we had concepts that could give us insights into the mental operations underlying logical inference on the one hand and the relation between phenotype and genotype on the other? This request is not so outrageous. After all, all those phenomena are biological.
One can argue that such an all-embracing theory of the living would be beneficial also for further developments of AI. The theory could guide us on what is possible and what is not. Given a certain technological approach, what are its limitations? Maybe it could answer the question of what the unitary components of intelligence are. And does my software have enough of them?
For more inspiration, let us look into the Shannon-Wiener theory of information and appreciate how helpful this theory is for dealing with various types of communication channels (including memory storage, which is also a communication channel, only over time rather than space). We can calculate how much channel capacity is needed to transmit (or store) certain contents. Also, we can easily compare two communication channels and determine which one has more capacity. This allows us to directly compare devices that are otherwise incomparable. For example, an interplanetary communication system based on satellites can be compared to DNA located within the nucleus of a human cell. Only thanks to information theory can we calculate whether a given satellite connection has enough capacity to transfer the DNA information describing a human being to a hypothetical recipient on another planet. (The answer is: yes, easily.) Thus, information theory is invaluable in making these kinds of engineering decisions.
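As a rough sanity check of that claim (the figures below are ballpark numbers of my own, not from the article): the human genome has about 3.2 billion base pairs at two bits each, and even a modest deep-space link of around one megabit per second could ship that in a couple of hours.

```python
# Back-of-the-envelope estimate (my own ballpark figures, not from the article):
# how long would an interplanetary link need to send one human genome?

base_pairs = 3.2e9          # approximate length of the human genome
bits_per_base = 2           # 4 possible bases -> log2(4) = 2 bits each (ignoring compression)
genome_bits = base_pairs * bits_per_base

link_rate_bps = 1e6         # assume a modest deep-space link of ~1 megabit per second

seconds = genome_bits / link_rate_bps
print(f"Genome size: {genome_bits / 8 / 1e6:.0f} MB")       # ~800 MB
print(f"Transfer time at 1 Mbit/s: {seconds / 3600:.1f} h")  # ~1.8 hours
```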
So, how about intelligence? Wouldn’t it be good to come into possession of a similar general theory for adaptive intelligent behavior? Maybe we could use quantities other than bits to tell us why the intelligence of plants lags behind that of primates, or to pin down the essential ingredients that distinguish human intelligence from that of a chimpanzee. Using the same theory we could compare

  • an abacus, 
  • a hand-held calculator, 
  • a supercomputer, and 
  • a human intellect.
The good news is that such an overarching biological theory now exists, and it is called practopoiesis. Derived from the Ancient Greek praxis + poiesis, practopoiesis means the creation of actions. The name reflects the theory’s fundamental presumption about the common property that can be found across all the different levels of organization of biological systems:

  • Gene expression mechanisms act; 
  • bacteria act; 
  • organs act; 
  • organisms as a whole act.
Due to this focus on biological action, practopoiesis has a strong cybernetic flavor, as it must deal with the need of acting systems to close feedback loops. Input is needed to trigger actions and to determine whether more actions are needed. For that reason, the theory is founded on the basic theorems of cybernetics, namely the law of requisite variety and the good regulator theorem.
The key novelty of practopoiesis is that it introduces mechanisms explaining how different levels of organization mutually interact. These mechanisms help explain how genes create the anatomy of the nervous system, or how anatomy creates behavior.
When practopoiesis is applied to the human mind and to AI algorithms, the results are quite revealing.
To understand those, we need to introduce the concept of a practopoietic traverse. Without going into details of what a traverse is, let us just say that it is a quantity with which one can compare the capabilities of different systems to adapt. A traverse is a kind of practopoietic equivalent to the bit of information in Shannon-Wiener theory. Just as we can compare two communication channels by the number of bits of information transferred, we can compare two adaptive systems by the number of traverses. Thus, a traverse is not a measure of how much knowledge a system has (for that, the good old bit does the job just fine). It is rather a measure of how much capability the system has to adjust its existing knowledge, for example when new circumstances emerge in the surrounding world.
To the best of my knowledge, no artificial intelligence algorithm in use today has more than two traverses. That means these algorithms interact with the surrounding world at a maximum of two levels of organization. For example, an AI algorithm may receive satellite images at one level of organization and, at another level, the categories into which it should learn to classify those images. We would say that this algorithm has two traverses of cybernetic knowledge. In contrast, biological behaving systems (animals, Homo sapiens) operate with three traverses.
This makes a whole lot of difference in adaptive intelligence. Two-traverse systems can be super-fast and omni-knowledgeable, and their tech specs may list peta-everything (which they sometimes already do), but they nevertheless remain comparatively dull next to three-traverse systems, such as a three-year-old girl or even a domestic cat.
To appreciate the difference between two and three traverses, let us go one step lower and consider systems with only one traverse. An example would be a PC without any advanced AI algorithm installed.
This computer is already far faster than I am at calculation, much better at memory storage, and beats me at spell checking without its processor even getting warm. And yet, paradoxically, I am still the smarter one around. Thus, computational capacity and adaptive intelligence are not the same.
Importantly, this same relationship of “me vs. the computer” holds for “me vs. a modern advanced AI algorithm”: I am still the more intelligent one, although the computer may have more computational power. The relationship also holds for “AI algorithm vs. non-AI computer”: even a small AI algorithm, implemented, say, on a single PC, is in many ways more intelligent than a petaflop supercomputer without AI. Thus, there is a certain hierarchy in adaptive intelligence that is determined not by memory size or the number of floating-point operations executed per second, but by the ability to learn and adapt to the environment.
A key requirement for adaptive intelligence is the capacity to observe how well one is doing towards a certain goal combined with the capacity to make changes and adjust in light of the feedback obtained. Practopoiesis tells us that there is not only one step possible from non-adaptive to adaptive, but that multiple adaptive steps are possible. Multiple traverses indicate a potential for adapting the ways in which we adapt.
We can go one step further down the adaptive hierarchy and consider the least adaptive systems, e.g., a book. Provided the book is large enough, it can contain all the knowledge about the world, and yet it is not adaptive: it cannot, for example, rewrite itself when something in that world changes. Typical computer software can do much more and administer many changes, but there is also a lot that cannot be adjusted without a programmer. A modern AI system is smarter still and can reorganize its knowledge to a much higher degree. Nevertheless, these systems are incapable of certain types of adjustments that a human person, or an animal, can make. Practopoiesis tells us that these systems fall into different adaptive categories, which are independent of the raw information-processing capabilities of the systems. Rather, these adaptive categories are defined by the number of levels of organization at which the system receives feedback from the environment — also referred to as traverses. (A toy sketch follows the list below.)
We can thus make the following hierarchical list of the best exemplars in each adaptive category:
  • A book: dumbest; zero traverses
  • A computer: somewhat smarter; one traverse
  • An AI system: much smarter; two traverses
  • A human: rules them all; three traverses
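As a toy caricature of these categories (my own illustration, written only to make “levels at which feedback can change the system” concrete; it is not code from practopoietic theory, and every class name is invented):

```python
# Toy caricature of the adaptive categories listed above (my own illustration,
# not code from practopoietic theory). Each level adds one more place where
# feedback from the environment can change the system.

def book(question):
    # Zero traverses: fixed content; nothing in the system ever changes.
    return "whatever was printed"

class Program:
    # One traverse: acts on inputs, but its rules are fixed by the programmer.
    def act(self, x):
        return 2 * x

class Learner(Program):
    # Two traverses: feedback also adjusts the rules themselves (like today's ML).
    def __init__(self):
        self.weight, self.lr = 0.0, 0.1
    def act(self, x):
        return self.weight * x
    def learn(self, x, target):
        self.weight += self.lr * (target - self.act(x)) * x

class MetaLearner(Learner):
    # Three traverses (very loosely): feedback can also reshape *how* learning happens,
    # e.g. adapting the learning rate when progress stalls.
    def adapt(self, recent_errors):
        self.lr *= 0.5 if recent_errors[-1] > recent_errors[0] else 1.1

m = MetaLearner()
for _ in range(100):
    m.learn(x=2.0, target=6.0)    # feedback at the "knowledge" level
print(round(m.act(2.0), 2))       # -> close to 6.0
```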
Most importantly for creation of strong AI, practopoiesis tells us in which direction the technological developments should be heading:
Engineering creativity should be geared towards empowering the machines with one more traverse. To match a human, a strong AI system has to have three traverses.
Practopoietic theory also explains what is so special about the third traverse. Systems with three traverses (referred to as T3-systems) are capable of storing their past experiences in an abstract, general form, which can be used much more efficiently than in two-traverse systems. This general knowledge can be applied to the interpretation of specific novel situations, such that quick and well-informed inferences are made about what is currently going on and what actions should be executed next. This process, unique to T3-systems, is referred to as anapoiesis, and can be described generally as the capability to reconstruct cybernetic knowledge that the system once had and to use this knowledge efficiently in a given novel situation.
If biology has invented T3-systems and anapoiesis and has made good use of them, there is no reason why we should not be able to do the same in machines.
Danko Nikolić is a brain and mind scientist, running an electrophysiology lab at the Max Planck Institute for Brain Research, and is the creator of the concept of ideasthesia. More about practopoiesis can be read here
ORIGINAL: Singularity Web

5 Robots Booking It to a Classroom Near You

IMAGE: ANDY BAKER/GETTY IMAGES

Robots are the new kids in school.
The technological creations are taking on serious roles in the classroom. With the accelerating rate of robotic technology, school administrators all over the world are plotting how to implement them in education, from elementary through high school.
In South Korea, robots are replacing English teachers entirely, entrusted with leading and teaching entire classrooms. In Alaska, some robots are removing the need for teachers to be physically present at all.
Robotics 101 is now in session. Here are five ways robots are being introduced into schools.
1. Nao Robot as math teacher
IMAGE: WIKIPEDIA
At PS 76, a school in Harlem, a Nao robot created in France and nicknamed Projo helps students improve their math skills. It is small, about the size of a stuffed animal, and sits by a computer to assist students working on math and science problems online.
Sandra Okita, a teacher at the school, told The Wall Street Journal the robot gauges how students interact with non-human teachers. The students have taken to the humanoid robotic peer, who can speak and react, saying it’s helpful and gives the right amount of hints to help them get their work done.
2. Aiding children with autism
The Nao Robot also helps improve social interaction and communication for children with autism. The robots were introduced in a classroom in Birmingham, England in 2012, to play with children in elementary school. Though the children were intimidated at first, they’ve taken to the robotic friend, according to The Telegraph.
3. VGo robot for ill children

Sick students will never have to miss class again if the VGo robot catches on. Created by VGo Communications, the rolling robot has a webcam and can be controlled and operated remotely via computer. About 30 students with special needs nationwide have been using the $6,000 robot to attend classes.
For example, a 12-year-old Texas student with leukemia kept up with classmates by using a VGo robot. At that price, the robots aren’t easily accessible, but they’re a promising sign of what’s to come.

4. Robots over teachers
In the South Korean town of Masan, robots are starting to replace teachers entirely. The government started using the robots to teach students English in 2010. The robots operate under supervision, but the plan is to have them lead a room exclusively in a few years, as robot technology develops.
5. Virtual teachers


IMAGE: FLICKR, SEAN MACENTEE
South Korea isn’t the only place getting virtual teachers. A school in Kodiak, Alaska has started using telepresence robots to beam teachers into the classroom. The tall, rolling robots have iPads attached to the top, which teachers will use to video chat with students.
The Kodiak Island Borough School District‘s superintendent, Stewart McDonald, told The Washington Times he was inspired to do this because of the show The Big Bang Theory, which stars a similar robot. Each robot costs about $2,000; the school bought 12 total in early 2014.


DARPA Project Starts Building Human Memory Prosthetics

ORIGINAL: IEEE Spectrum
By Eliza Strickland
Posted 27 Aug 2014
The first memory-enhancing devices could be implanted within four years
Photo: Lawrence Livermore National Laboratory. Remember this? Lawrence Livermore engineer Vanessa Tolosa holds up a silicon wafer containing micromachined implantable neural devices for use in experimental memory prostheses.
“They’re trying to do 20 years of research in 4 years,” says Michael Kahana in a tone that’s a mixture of excitement and disbelief. Kahana, director of the Computational Memory Lab at the University of Pennsylvania, is mulling over the tall order from the U.S. Defense Advanced Research Projects Agency (DARPA). In the next four years, he and other researchers are charged with understanding the neuroscience of memory and then building a prosthetic memory device that’s ready for implantation in a human brain.
DARPA’s first contracts under its Restoring Active Memory (RAM) program challenge two research groups to construct implants for veterans with traumatic brain injuries that have impaired their memories. Over 270,000 U.S. military service members have suffered such injuries since 2000, according to DARPA, and there are no truly effective drug treatments. This program builds on an earlier DARPA initiative focused on building a memory prosthesis, under which a different group of researchers had dramatic success in improving recall in mice and monkeys.
Kahana’s team will start by searching for biological markers of memory formation and retrieval. For this early research, the test subjects will be hospitalized epilepsy patients who have already had electrodes implanted to allow doctors to study their seizures. Kahana will record the electrical activity in these patients’ brains while they take memory tests.
“The memory is like a search engine,” Kahana says. “In the initial memory encoding, each event has to be tagged. Then in retrieval, you need to be able to search effectively using those tags.” He hopes to find the electric signals associated with these two operations.
Once they’ve found the signals, researchers will try amplifying them using sophisticated neural stimulation devices. Here Kahana is working with the medical device maker Medtronic, in Minneapolis, which has already developed one experimental implant that can both record neural activity and stimulate the brain. Researchers have long wanted such a “closed-loop” device, as it can use real-time signals from the brain to define the stimulation parameters.
Kahana notes that designing such closed-loop systems poses a major engineering challenge. Recording natural neural activity is difficult when stimulation introduces new electrical signals, so the device must have special circuitry that allows it to quickly switch between the two functions. What’s more, the recorded information must be interpreted with blistering speed so it can be translated into a stimulation command. “We need to take analyses that used to occupy a personal computer for several hours and boil them down to a 10-millisecond algorithm,” he says.
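The record-analyze-stimulate cycle Kahana describes can be sketched schematically. Every name below, from the mock implant class to the biomarker test, is a placeholder invented for illustration, not Medtronic’s or DARPA’s actual interfaces; the only number taken from the article is the roughly 10-millisecond analysis budget.

```python
# Schematic sketch of a closed-loop record-analyze-stimulate cycle.
# All class and function names are placeholders invented for illustration.

import time

LATENCY_BUDGET_S = 0.010   # the ~10 ms analysis window mentioned in the article

class MockImplant:
    """Stand-in for a combined recording/stimulation device."""
    def read_window(self):
        # Pretend we sampled a window of neural data and extracted two features.
        return {"theta_power": 0.4, "baseline": 0.6}
    def stimulate(self, amplitude_ma, duration_ms):
        print(f"stimulating at {amplitude_ma} mA for {duration_ms} ms")

def encoding_looks_weak(signal):
    # Placeholder biomarker test: is the putative "memory encoding" signature low?
    return signal["theta_power"] < signal["baseline"]

def closed_loop_step(device):
    start = time.monotonic()
    signal = device.read_window()      # record
    if encoding_looks_weak(signal):    # analyze
        device.stimulate(1.0, 500)     # stimulate to reinforce encoding
    elapsed = time.monotonic() - start
    assert elapsed < LATENCY_BUDGET_S, "analysis must fit the real-time budget"

closed_loop_step(MockImplant())
```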
In four years’ time, Kahana hopes his team can show that such systems reliably improve memory in patients who are already undergoing brain surgery for epilepsy or Parkinson’s. That, he says, will lay the groundwork for future experiments in which medical researchers can try out the hardware in people with traumatic brain injuries—people who would not normally receive invasive neurosurgery.
The second research team is led by Itzhak Fried, director of the Cognitive Neurophysiology Laboratory at the University of California, Los Angeles. Fried’s team will focus on a part of the brain called the entorhinal cortex, which is the gateway to the hippocampus, the primary brain region associated with memory formation and storage. “Our approach to the RAM program is homing in on this circuit, which is really the golden circuit of memory,” Fried says. In a 2012 experiment, he showed that stimulating the entorhinal regions of patients while they were learning memory tasks improved their performance.
Fried’s group is working with Lawrence Livermore National Laboratory, in California, to develop more closed-loop hardware. At Livermore’s Center for Bioengineering, researchers are leveraging semiconductor manufacturing techniques to make tiny implantable systems. They first print microelectrodes on a polymer that sits atop a silicon wafer, then peel the polymer off and mold it into flexible cylinders about 1 millimeter in diameter. The memory prosthesis will have two of these cylindrical arrays, each studded with up to 64 hair-thin electrodes, which will be capable of both recording the activity of individual neurons and stimulating them. Fried believes his team’s device will be ready for tryout in patients with traumatic brain injuries within the four-year span of the RAM program.
Outside observers say the program’s goals are remarkably ambitious. Yet Steven Hyman, director of psychiatric research at the Broad Institute of MIT and Harvard, applauds its reach. “The kind of hardware that DARPA is interested in developing would be an extraordinary advance for the whole field,” he says. Hyman says DARPA’s funding for device development fills a gap in existing research. Pharmaceutical companies have found few new approaches to treating psychiatric and neurodegenerative disorders in recent years, he notes, and have therefore scaled back drug discovery efforts. “I think that approaches that involve devices and neuromodulation have greater near-term promise,” he says.
This article originally appeared in print as “Making a Human Memory Chip.”

Everybody Relax: An MIT Economist Explains Why Robots Won’t Steal Our Jobs

Living together in harmony. Photo by Oli Scarff/Getty Images
If you’ve ever found yourself fretting about the possibility that software and robotics are on the verge of thieving away all our jobs, renowned MIT labor economist David Autor is out with a new paper that might ease your nerves. Presented Friday at the Federal Reserve Bank of Kansas City’s big annual conference in Jackson Hole, Wyoming, the paper argues that humanity still has two big points in its favor: People have “common sense,” and they’re “flexible.”
Neil Irwin already has a lovely writeup of the paper at the New York Times, but let’s run down the basics. There’s no question machines are getting smarter, and quickly acquiring the ability to perform work that once seemed uniquely human. Think self-driving cars that might one day threaten cabbies, or computer programs that can handle the basics of legal research.
But artificial intelligence is still just that: artificial. We haven’t untangled all the mysteries of human judgment, and programmers definitely can’t translate the way we think entirely into code. Instead, scientists at the forefront of AI have found workarounds like machine-learning algorithms. As Autor points out, a computer might not have any abstract concept of a chair, but show it enough Ikea catalogs, and it can eventually suss out the physical properties statistically associated with a seat. Fortunately for you and me, this approach still has its limits.
For example, both a toilet and a traffic cone look somewhat like a chair, but a bit of reasoning about their shapes vis-à-vis the human anatomy suggests that a traffic cone is unlikely to make a comfortable seat. Drawing this inference, however, requires reasoning about what an object is “for” not simply what it looks like. Contemporary object recognition programs do not, for the most part, take this reasoning-based approach to identifying objects, likely because the task of developing and generalizing the approach to a large set of objects would be extremely challenging.
That’s what Autor means when he says machines lack common sense. They don’t think. They just do math.
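A toy sketch of that “just doing math” point (mine, not Autor’s): a purely statistical “chair detector” that matches geometric features will happily accept anything shaped roughly like its training examples, because nothing in it represents what a chair is for. The feature values and threshold below are invented for illustration.

```python
# Toy sketch (mine, not Autor's): a purely statistical "chair detector" that matches
# shape features, with no notion of what an object is *for*.

import math

# Feature vector: (height in cm, width in cm, has a roughly horizontal surface: 1/0)
training_chairs = [(90, 45, 1), (85, 50, 1), (100, 40, 1)]

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

CHAIR_PROTOTYPE = centroid(training_chairs)

def looks_like_a_chair(obj, threshold=40.0):
    # Pure geometric similarity: no reasoning about sitting, comfort, or anatomy.
    return math.dist(obj, CHAIR_PROTOTYPE) < threshold

print(looks_like_a_chair((80, 40, 1)))  # toilet-like shape -> True: close enough geometrically
print(looks_like_a_chair((70, 35, 0)))  # traffic-cone-like shape -> also True, since only
                                        # geometry is consulted, not whether you could sit on it
```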
And that leaves lots of room for human workers in the future.
Over the past few decades, technology has already whittled away at middle-class jobs, from factory workers replaced by robotic arms to secretaries made redundant by Outlook. But Autor argues that plenty of today’s middle-skill occupations, such as construction trades and medical technicians, will stick around, because “many of the tasks currently bundled into these jobs cannot readily be unbundled … without a substantial drop in quality.”
These aren’t jobs that require performing a single task over and over again, but instead demand that employees handle some technical work while dealing with other human beings and improvising their way through unexpected problems. Machine learning algorithms can’t handle all of that. Human beings, Swiss-army knives that we are, can. We’re flexible.
Just like the dystopian arguments that machines are about to replace a vast swath of the workforce, Autor’s paper is very much speculative. It’s worth highlighting, though, because it cuts through the silly sense of inevitability that sometimes clouds this subject. Predictions about the future of technology and the economy are made to be dashed. And while Noah Smith makes a good point that we might want to be prepared for mass, technology-driven unemployment even if there’s just a slim chance of it happening, there’s also no reason to take it for granted.
Jordan Weissmann is Slate’s senior business and economics correspondent.
ORIGINAL: Slate

It’s Time to Take Artificial Intelligence Seriously

By CHRISTOPHER MIMS
Aug. 24, 2014
No Longer an Academic Curiosity, It Now Has Measurable Impact on Our Lives
A still from “2001: A Space Odyssey” with Keir Dullea reflected in the lens of HAL’s “eye.” MGM / POLARIS / STANLEY KUBRICK
 
The age of intelligent machines has arrived—only they don’t look at all like we expected. Forget what you’ve seen in movies; this is no HAL from “2001: A Space Odyssey,” and it’s certainly not Scarlett Johansson‘s disembodied voice in “Her.” It’s more akin to what insects, or even fungi, do when they “think.” (What, you didn’t know that slime molds can solve mazes?)
Artificial intelligence has lately been transformed from an academic curiosity to something that has measurable impact on our lives. Google Inc. used it to increase the accuracy of voice recognition in Android by 25%. The Associated Press is printing business stories written by it. Facebook Inc. is toying with it as a way to improve the relevance of the posts it shows you.
What is especially interesting about this point in the history of AI is that it’s no longer just for technology companies. Startups are beginning to adapt it to problems where, at least to me, its applicability is genuinely surprising.
Take advertising copywriting. Could the “Mad Men” of Don Draper‘s day have predicted that by the beginning of the next century, they would be replaced by machines? Yet a company called Persado aims to do just that.
Persado does one thing, and judging by its client list, which includes Citigroup Inc. and Motorola Mobility, it does it well. It writes advertising emails and “landing pages” (where you end up if you click on a link in one of those emails, or an ad).
Here’s an example: Persado’s engine is being used across all of the types of emails a top U.S. wireless carrier sends out when it wants to convince its customers to renew their contracts, upgrade to a better plan or otherwise spend money.
Traditionally, an advertising copywriter would pen these emails; perhaps the company would test a few variants on a subset of its customers, to see which is best.
But Persado’s software deconstructs advertisements into five components, including emotion words, characteristics of the product, the “call to action” and even the position of text and the images accompanying it. By recombining them in millions of ways and then distilling their essential characteristics into eight or more test emails that are sent to some customers, Persado says it can effectively determine the best possible come-on.
“A creative person is good but random,” says Lawrence Whittle, head of sales at Persado. “We’ve taken the randomness out by building an ontology of language.”
The results speak for themselves: In the case of emails intended to convince mobile subscribers to renew their plans, initial trials with Persado increased click-through rates by 195%, the company says.
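The recombine-then-test approach described above can be illustrated with a minimal sketch. The component lists, the sampling, and the selection rule are all invented for illustration; Persado’s actual ontology of language and its selection logic are proprietary.

```python
# Minimal sketch of the recombine-then-test idea described above.
# Component lists and selection logic are invented; Persado's ontology is proprietary.

import itertools
import random

components = {
    "emotion": ["Don't miss out:", "Good news:", "You've earned this:"],
    "product": ["an upgraded plan", "double the data", "a loyalty discount"],
    "call_to_action": ["Renew today.", "Tap to upgrade.", "Claim your offer."],
}

# Full combinatorial space of candidate emails (3 x 3 x 3 = 27 here; millions in practice
# once text position, imagery, and many more component values are included).
all_variants = [" ".join(parts) for parts in itertools.product(*components.values())]

# Distill a small, diverse test set to send to real customers...
random.seed(0)
test_emails = random.sample(all_variants, 8)

# ...then keep whichever variant earns the best measured click-through rate.
def best_variant(ctr_by_email):
    return max(ctr_by_email, key=ctr_by_email.get)
```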
Here’s another example of AI becoming genuinely useful: X.ai is a startup aimed, like Persado, at doing one thing exceptionally well. In this case, it’s scheduling meetings. X.ai’s virtual assistant, Amy, isn’t a website or an app; she’s simply a “person” whom you cc: on emails to anyone with whom you’d like to schedule a meeting. Her sole “interface” is emails she sends and receives—just like a real assistant. Thus, you don’t have to bother with back-and-forth emails trying to find a convenient time and available place for lunch. Amy can correspond fluidly with anyone, but only on the subject of his or her calendar. This sounds like a simple problem to crack, but it isn’t, because Amy must communicate with a human being who might not even know she’s an AI, and she must do it flawlessly, says X.ai founder Dennis Mortensen.
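The cc-an-assistant pattern is easier to see in a toy sketch than in prose. The code below is not X.ai’s system; it is a hypothetical illustration in which the “assistant” simply intersects proposed times with both parties’ busy calendars and drafts a reply.

```python
# Toy sketch of the cc-an-assistant pattern described above (not X.ai's actual system):
# the "assistant" lives entirely in email, proposing the first slot both calendars allow.

from datetime import datetime

def free_slots(busy, candidates):
    # busy: set of ISO datetime strings already booked on either calendar
    return [slot for slot in candidates if slot not in busy]

def draft_reply(organizer, guest_name, candidates, organizer_busy, guest_busy):
    mutual = free_slots(organizer_busy | guest_busy, candidates)
    if not mutual:
        return f"Hi {guest_name}, none of the proposed times work -- could you suggest others?"
    when = datetime.fromisoformat(mutual[0]).strftime("%A %H:%M")
    return f"Hi {guest_name}, {organizer} is free on {when}. Shall I send an invite?"

print(draft_reply(
    "Dennis", "Christopher",
    candidates=["2014-09-02T10:00", "2014-09-02T15:00"],
    organizer_busy={"2014-09-02T10:00"},
    guest_busy=set(),
))
```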
E-mail conversations with Amy are already quite smooth. Mr. Mortensen used her to schedule our meeting, naturally, and it worked even though I purposely threw in some ambiguous language about the times I was available. But that is in part because Amy is still in the “training” stage, where anything she doesn’t understand gets handed to humans employed by X.ai.
It sounds like cheating, but every artificially intelligent system needs a body of data on which to “train” initially. For Persado, that body of data was text messages sent to prepaid cellphone customers in Europe, urging them to re-up their minutes or opt into special plans. For Amy, it’s a race to get a body of 100,000 email meeting requests. Amusingly, engineers at X.ai thought about using one of the biggest public databases of emails available, the Enron emails, but there is too much scheming in them to be a good sample.
Both of these systems, and others like them, work precisely because their makers have decided to tackle problems that are as narrowly defined as possible. Amy doesn’t have to have a conversation about the weather—just when and where you’d like to schedule a meeting. And Persado’s system isn’t going to come up with the next “Just Do It” campaign.
This is where some might object that the commercialized vision for AI isn’t intelligent at all. But academics can’t even agree on where the cutoff for “intelligence” is in living things, so the fact that these first steps toward economically useful artificial intelligence lie somewhere near the bottom of the spectrum of things that think shouldn’t bother us.
We’re also at a time when it seems that advances in the sheer power of computers will lead to AI that becomes progressively smarter. So-called deep-learning algorithms allow machines to learn unsupervised, whereas both Persado and X.ai’s systems require training guided by humans.
Last year Google showed that its own deep-learning systems could learn to recognize a cat from millions of images scraped from the Internet, without ever being told what a cat was in the first place. It’s a parlor trick, but it isn’t hard to see where this is going—the enhancement of the effectiveness of knowledge workers. Mr. Mortensen estimates there are 87 million of them in the world already, and they schedule 10 billion meetings a year. As more tools tackling specific portions of their job become available, their days could be filled with the things that only humans can do, like creativity.
“I think the next Siri is not Siri; it’s 100 companies like ours mashed into one,” says Mr. Mortensen.
—Follow Christopher Mims on Twitter @Mims or write to him at christopher.mims@wsj.com.

Ray Kurzweil: Get ready for hybrid thinking

ORIGINAL: TED
Jun 2, 2014
Two hundred million years ago, our mammal ancestors developed a new brain feature: the neocortex. This stamp-sized piece of tissue (wrapped around a brain the size of a walnut) is the key to what humanity has become. Now, futurist Ray Kurzweil suggests, we should get ready for the next big leap in brain power, as we tap into the computing power in the cloud.

 


Why a deep-learning genius left Google & joined Chinese tech shop Baidu (interview)

ORIGINAL: VentureBeat
July 30, 2014 8:03 AM
Image Credit: Jordan Novet/VentureBeat
SUNNYVALE, California — Chinese tech company Baidu has yet to make its popular search engine and other web services available in English. But consider yourself warned: Baidu could someday wind up becoming a favorite among consumers.
The strength of Baidu lies not in youth-friendly marketing or an enterprise-focused sales team. It lives instead in Baidu’s data centers, where servers run complex algorithms on huge volumes of data and gradually make its applications smarter, including not just Web search but also Baidu’s tools for music, news, pictures, video, and speech recognition.
Despite lacking the visibility (in the U.S., at least) of Google and Microsoft, in recent years Baidu has done a lot of work on deep learning, one of the most promising areas of artificial intelligence (AI) research in recent years. This work involves training systems called artificial neural networks on lots of information derived from audio, images, and other inputs, and then presenting the systems with new information and receiving inferences about it in response.
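At toy scale, the train-then-infer loop described in that paragraph looks like the sketch below: adjust a small network’s weights on example inputs and outputs, then present inputs and read off its inferences. This is a generic textbook exercise, nothing like Baidu’s production systems.

```python
# Tiny illustration of the train-then-infer loop described above, at toy scale
# (nothing like Baidu's production systems): a small two-layer network learns XOR
# from examples, then is queried to read off its inferences.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # desired outputs

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                      # training: adjust weights from examples
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)     # backpropagate the squared error
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0)

# Inference: present the inputs and read the network's answers (they move toward 0, 1, 1, 0).
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2).ravel())
```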
Two months ago, Baidu hired Andrew Ng away from Google, where he started and led the so-called Google Brain project. Ng, whose move to Baidu follows Hugo Barra’s jump from Google to Chinese company Xiaomi last year, is one of the world’s handful of deep-learning rock stars.
Ng has taught classes on machine learning, robotics, and other topics at Stanford University. He also co-founded massively open online course startup Coursera.
He makes a strong argument for why a person like him would leave Google and join a company with a lower public profile. His argument can leave you feeling like you really ought to keep an eye on Baidu in the next few years.
“I thought the best place to advance the AI mission is at Baidu,” Ng said in an interview with VentureBeat.
Baidu’s search engine only runs in a few countries, including China, Brazil, Egypt, and Thailand. The Brazil service was announced just last week. Google’s search engine is far more popular than Baidu’s around the globe, although Baidu has already beaten out Yahoo and Microsoft’s Bing in global popularity, according to comScore figures.
And Baidu co-founder and chief executive Robin Li, a frequent speaker on Stanford’s campus, has said he wants Baidu to become a brand name in more than half of all the world’s countries. Presumably, then, Baidu will one day become something Americans can use.
Above: Baidu co-founder and chief executive Robin Li.
Image Credit: Baidu

 

Now that Ng leads Baidu’s research arm as the company’s chief scientist out of the company’s U.S. R&D Center here, it’s not hard to imagine that Baidu’s tools in English, if and when they become available, will be quite brainy — perhaps even eclipsing similar services from Apple and other tech giants. (Just think of how many people are less than happy with Siri.)

A stable full of AI talent

But this isn’t a story about the difference a single person will make. Baidu has a history in deep learning.
A couple of years ago, Baidu hired Kai Yu, an engineer skilled in artificial intelligence. Based in Beijing, he has kept busy.
“I think Kai ships deep learning to an incredible number of products across Baidu,” Ng said. Yu also developed a system for providing infrastructure that enables deep learning for different kinds of applications.
“That way, Kai personally didn’t have to work on every single application,” Ng said.
In a sense, then, Ng joined a company that had already built momentum in deep learning. He wasn’t starting from scratch.
Above: Baidu’s Kai Yu.
Image Credit: Kai Yu
Only a few companies could have appealed to Ng, given his desire to push artificial intelligence forward. It’s capital-intensive, as it requires lots of data and computation. Baidu, he said, can provide those things.
Baidu is nimble, too. Unlike Silicon Valley’s tech giants, which measure activity in terms of monthly active users, Chinese Internet companies prefer to track usage by the day, Ng said.
“It’s a symptom of cadence,” he said. “What are you doing today?” And product cycles in China are short; iteration happens very fast, Ng said.
Plus, Baidu is willing to get infrastructure ready to use on the spot.
“Frankly, Kai just made decisions, and it just happened without a lot of committee meetings,” Ng said. “The ability of individuals in the company to make decisions like that and move infrastructure quickly is something I really appreciate about this company.”
That might sound like a kind deference to Ng’s new employer, but he was alluding to a clear advantage Baidu has over Google.
“He ordered 1,000 GPUs [graphics processing units] and got them within 24 hours,” Adam Gibson, co-founder of deep-learning startup Skymind, told VentureBeat. “At Google, it would have taken him weeks or months to get that.”
Not that Baidu is buying this type of hardware for the first time. Baidu was the first company to build a GPU cluster for deep learning, Ng said — a few other companies, like Netflix, have found GPUs useful for deep learning — and Baidu also maintains a fleet of servers packing ARM-based chips.
Above: Baidu headquarters in Beijing.
Image Credit: Baidu
Now the Silicon Valley researchers are using the GPU cluster and also looking to add to it and thereby create still bigger artificial neural networks.
But the efforts have long since begun to weigh on Baidu’s books and shape its products. “We deepened our investment in advanced technologies like deep learning, which is already yielding near term enhancements in user experience and customer ROI and is expected to drive transformational change over the longer term,” Li said in a statement on the company’s earnings for the second quarter of 2014.
Next step: Improving accuracy
What will Ng do at Baidu? The answer will not be limited to any one of the company’s services. Baidu’s neural networks can work behind the scenes for a wide variety of applications, including those that handle text, spoken words, images, and videos. Surely core functions of Baidu like Web search and advertising will benefit, too.
“All of these are domains Baidu is looking at using deep learning, actually,” Ng said.
Ng’s focus now might best be summed up by one word: accuracy.
That makes sense from a corporate perspective. Google has the brain trust on image analysis, and Microsoft has the brain trust on speech, said Naveen Rao, co-founder and chief executive of deep-learning startup Nervana. Accuracy could potentially be the area where Ng and his colleagues will make the most substantive progress at Baidu, Rao said.
Matthew Zeiler, founder and chief executive of another deep learning startup, Clarifai, was more certain. “I think you’re going to see a huge boost in accuracy,” said Zeiler, who has worked with Hinton and LeCun and spent two summers on the Google Brain project.
One thing is for sure: Accuracy is on Ng’s mind.
Above: The lobby at Baidu’s office in Sunnyvale, Calif.
Image Credit: Jordan Novet/VentureBeat
“Here’s the thing. Sometimes changes in accuracy of a system will cause changes in the way you interact with the device,” Ng said. For instance, more accurate speech recognition could translate into people relying on it much more frequently. Think “Her”-level reliance, where you just talk to your computer as a matter of course rather than using speech recognition in special cases.
“Speech recognition today doesn’t really work in noisy environments,” Ng said. But that could change if Baidu’s neural networks become more accurate under Ng.
Ng picked up his smartphone, opened the Baidu Translate app, and told it that he needed a taxi. A female voice said that in Mandarin and displayed Chinese characters on screen. But it wasn’t a difficult test, in some ways: This was no crowded street in Beijing. This was a quiet conference room in a quiet office.
“There’s still work to do,” Ng said.
‘The future heroes of deep learning’
Meanwhile, researchers at companies and universities have been hard at work on deep learning for decades.
Google has built up a hefty reputation for applying deep learning to images from YouTube videos, data center energy use, and other areas, partly thanks to Ng’s contributions. And recently Microsoft made headlines for deep-learning advancements with its Project Adam work, although Li Deng of Microsoft Research has been working with neural networks for more than 20 years.
In academia, deep learning research groups are at work all over North America and Europe. Key figures in the past few years include Yoshua Bengio at the University of Montreal, Geoff Hinton of the University of Toronto (Google grabbed him last year through its DNNresearch acquisition), Yann LeCun of New York University (Facebook pulled him aboard late last year), and Ng.
But Ng’s strong points differ from those of his contemporaries. Whereas Bengio made strides in training neural networks, LeCun developed convolutional neural networks, and Hinton popularized restricted Boltzmann machines, Ng takes the best, implements it, and makes improvements.
“Andrew is neutral in that he’s just going to use what works,” Gibson said. “He’s very practical, and he’s neutral about the stamp on it.”
Not that Ng intends to go it alone. To create larger and more accurate neural networks, Ng needs to look around and find like-minded engineers.
“He’s going to be able to bring a lot of talent over,” Dave Sullivan, co-founder and chief executive of deep-learning startup Ersatz Labs, told VentureBeat. “This guy is not sitting down and writing mountains of code every day.”
And truth be told, Ng has had no trouble building his team.
“Hiring for Baidu has been easier than I’d expected,” he said.
“A lot of engineers have always wanted to work on AI. … My job is providing the team with the best possible environment for them to do AI, for them to be the future heroes of deep learning.”



How Watson Changed IBM

ORIGINAL: HBR
by Brad Power
August 22, 2014

Remember when IBM’s “Watson” computer competed on the TV game show “Jeopardy” and won? Most people probably thought “Wow, that’s cool,” or perhaps were briefly reminded of the legend of John Henry and the ongoing contest between man and machine. Beyond the media splash it caused, though, the event was viewed as a breakthrough on many fronts. Watson demonstrated that machines could understand and interact in a natural language, question-and-answer format and learn from their mistakes. This meant that machines could deal with the exploding growth of non-numeric information that is getting hard for humans to keep track of: to name two prominent and crucially important examples,

  • keeping up with all of the knowledge coming out of human genome research, or 
  • keeping track of all the medical information in patient records.
So IBM asked the question: How could the fullest potential of this breakthrough be realized, and how could IBM create and capture a significant portion of that value? They knew the answer was not by relying on traditional internal processes and practices for R&D and innovation. Advances in technology — especially digital technology and the increasing role of software in products and services — are demanding that large, successful organizations increase their pace of innovation and make greater use of resources outside their boundaries. This means internal R&D activities must increasingly shift towards becoming crowdsourced, taking advantage of the wider ecosystem of customers, suppliers, and entrepreneurs.
IBM, a company with a long and successful tradition of internally-focused R&D activities, is adapting to this new world of creating platforms and enabling open innovation. Case in point, rather than keep Watson locked up in their research labs, they decided to release it to the world as a platform, to run experiments with a variety of organizations to accelerate development of natural language applications and services. In January 2014 IBM announced they were spending $1 billion to launch the Watson Group, including a $100 million venture fund to support start-ups and businesses that are building Watson-powered apps using the “Watson Developers Cloud.” More than 2,500 developers and start-ups have reached out to the IBM Watson Group since the Watson Developers Cloud was launched in November 2013.

So how does it work? First, with multiple business models. Mike Rhodin, IBM’s senior vice president responsible for Watson, told me, “There are three core business models that we will run in parallel. 

  • The first is around industries that we think will go through a big change in “cognitive” [natural language] computing, such as financial services and healthcare. For example, in healthcare we’re working with The Cleveland Clinic on how medical knowledge is taught. 
  • The second is where we see similar patterns across industries, such as how people discover and engage with organizations and how organizations make different kinds of decisions. 
  • The third business model is creating an ecosystem of entrepreneurs. We’re always looking for companies with brilliant ideas that we can partner with or acquire. With the entrepreneur ecosystem, we are behaving more like a Silicon Valley startup. We can provide the entrepreneurs with access to early adopter customers in the 170 countries in which we operate. If entrepreneurs are successful, we keep a piece of the action.”
IBM also had to make some bold structural moves in order to create an organization that could both function as a platform and collaborate with outsiders for open innovation. They carved out The Watson Group as a new, semi-autonomous, vertically integrated unit, reporting to the CEO. They brought in 2000 people, a dozen projects, a couple of Big Data and content analytics tools, and a consulting unit (outside of IBM Global Services). IBM’s traditional annual budget cycle and business unit financial measures weren’t right for Watson’s fast pace, so, as Mike Rhodin told me, “I threw out the annual planning cycle and replaced it with a looser, more agile management system. In monthly meetings with CEO Ginni Rometty, we’ll talk one time about technology, and another time about customer innovations. I have to balance between strategic intent and tactical, short-term decision-making. Even though we’re able to take the long view, we still have to make tactical decisions.”

More and more, organizations will need to make choices in their R&D activities to either create platforms or take advantage of them. 

Those with deep technical and infrastructure skills, like IBM, can shift the focus of their internal R&D activities toward building platforms that can connect with ecosystems of outsiders to collaborate on innovation.

The second and more likely option for most companies is to use platforms like IBM’s or Amazon’s to create their own apps and offerings for customers and partners. In either case, new, semi-autonomous agile units, like IBM’s Watson Group, can help to create and capture huge value from these new customer and entrepreneur ecosystems.


“Brain” In A Dish Acts As Autopilot Living Computer

ORIGINAL: U of Florida
by Jennifer Viegas
Nov 27, 2012
A glass dish contains a “brain” — a living network of 25,000 rat brain cells connected to an array of 60 electrodes. University of Florida/Ray Carson


A University of Florida scientist has grown a living “brain” that can fly a simulated plane, giving scientists a novel way to observe how brain cells function as a network. The “brain” — a collection of 25,000 living neurons, or nerve cells, taken from a rat’s brain and cultured inside a glass dish — gives scientists a unique real-time window into the brain at the cellular level. By watching the brain cells interact, scientists hope to understand what causes neural disorders such as epilepsy and to determine noninvasive ways to intervene.


2012 U of Florida - Brain Test

Thomas DeMarse holds a glass dish containing a living network of 25,000 rat brain cells connected to an array of 60 electrodes that can interact with a computer to fly a simulated F-22 fighter plane.

“As living computers, they may someday be used to fly small unmanned airplanes or handle tasks that are dangerous for humans, such as search-and-rescue missions or bomb damage assessments.”

“We’re interested in studying how brains compute,” said Thomas DeMarse, the UF assistant professor of biomedical engineering who designed the study. “If you think about your brain, and learning and the memory process, I can ask you questions about when you were 5 years old and you can retrieve information. That’s a tremendous capacity for memory. In fact, you perform fairly simple tasks that you would think a computer would easily be able to accomplish, but in fact it can’t.”


Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask

Viv was named after the Latin root meaning “live.” Its San Jose, California, offices are decorated with tchotchkes bearing the numbers six and five (VI and V in Roman numerals). Ariel Zambelich

When Apple announced the iPhone 4S on October 4, 2011, the headlines were not about its speedy A5 chip or improved camera. Instead they focused on an unusual new feature: an intelligent assistant, dubbed Siri. At first Siri, endowed with a female voice, seemed almost human in the way she understood what you said to her and responded, an advance in artificial intelligence that seemed to place us on a fast track to the Singularity. She was brilliant at fulfilling certain requests, like “Can you set the alarm for 6:30?” or “Call Diane’s mobile phone.” And she had a personality: If you asked her if there was a God, she would demur with deft wisdom. “My policy is the separation of spirit and silicon,” she’d say.
Over the next few months, however, Siri’s limitations became apparent. Ask her to book a plane trip and she would point to travel websites—but she wouldn’t give flight options, let alone secure you a seat. Ask her to buy a copy of Lee Child’s new book and she would draw a blank, despite the fact that Apple sells it. Though Apple has since extended Siri’s powers—to make an OpenTable restaurant reservation, for example—she still can’t do something as simple as booking a table on the next available night in your schedule. She knows how to check your calendar and she knows how to use OpenTable. But putting those things together is, at the moment, beyond her.


Joi Ito: Want to innovate? Become a “now-ist”

“Remember before the internet?” asks Joi Ito. “Remember when people used to try to predict the future?” In this engaging talk, the head of the MIT Media Lab skips the future predictions and instead shares a new approach to creating in the moment: building quickly and improving constantly, without waiting for permission or for proof that you have the right idea. This kind of bottom-up innovation is seen in the most fascinating, futuristic projects emerging today, and it starts, he says, with being open and alert to what’s going on around you right now. Don’t be a futurist, he suggests: be a now-ist.

Preparing Your Students for the Challenges of Tomorrow

ORIGINAL: Edutopia
August 20, 2014

Right now, you have students. Eventually, those students will become the citizens — employers, employees, professionals, educators, and caretakers of our planet in the 21st century. Beyond mastery of standards, what can you do to help prepare them? What can you promote to be sure they are equipped with the skill sets they will need to take on challenges and opportunities that we can’t yet even imagine?

Following are six tips to guide you in preparing your students for what they’re likely to face in the years and decades to come.

1. Teach Collaboration as a Value and Skill Set
Students of today need new skills for the coming century that will make them ready to collaborate with others on a global level. Whatever they do, we can expect their work to include finding creative solutions to emerging challenges.
2. Evaluate Information Accuracy 
New information is being discovered and disseminated at a phenomenal rate. It is predicted that 50 percent of the facts students are memorizing today will no longer be accurate or complete in the near future. Students need to know
  • how to find accurate information, and
  • how to use critical analysis to assess the veracity or bias, and the current or potential uses, of new information.
These are the executive functions that they need to develop and practice in the home and at school today, because without them, students will be unprepared to find, analyze, and use the information of tomorrow.
3. Teach Tolerance 
In order for collaboration to happen within a global community, job applicants of the future will be evaluated by their ability for communication with, openness to, and tolerance for unfamiliar cultures and ideas. To foster these critical skills, today’s students will need open discussions and experiences that can help them learn about and feel comfortable communicating with people of other cultures.
4. Help Students Learn Through Their Strengths 
Children are born with brains that want to learn. They’re also born with different strengths — and they grow best through those strengths. One size does not fit all in assessment and instruction. The current testing system and the curriculum that it has spawned leave behind the majority of students who might not be doing their best with the linear, sequential instruction required for this kind of testing. Look ahead on the curriculum map and help promote each student’s interest in the topic beforehand. Use clever “front-loading” techniques that will pique their curiosity.
5. Use Learning Beyond the Classroom
New “learning” does not become permanent memory unless there is repeated stimulation of the new memory circuits in the brain pathways. This is the “practice makes permanent” aspect of neuroplasticity, where the neural networks that are most stimulated develop more dendrites, synapses, and thicker myelin for more efficient information transmission. These stronger networks are less susceptible to pruning, and they become long-term memory holders. Students need to use what they learn repeatedly and in different, personally meaningful ways for short-term memory to become permanent knowledge that can be retrieved and used in the future. Help your students make memories permanent by providing opportunities for them to “transfer” school learning to real-life situations.
6. Teach Students to Use Their Brain Owner’s Manual
The most important manual that you can share with your students is the owner’s manual to their own brains. When they understand how their brains take in and store information (PDF, 139KB), they hold the keys to successfully operating the most powerful tool they’ll ever own. When your students understand that, through neuroplasticity, they can change their own brains and intelligence, together you can build their resilience and willingness to persevere through the challenges that they will undoubtedly face in the future.

How are you preparing your students to thrive in the world they’ll inhabit as adults?


Brainstorming Doesn’t Work; Try This Technique Instead

ORIGINAL: FastCompany
Ever been in a meeting where one loudmouth’s mediocre idea dominates?
Then you know brainstorming needs an overhaul.

 

Brainstorming, in its current form and by many metrics, doesn’t work as well as the frequency of “team brainstorming meetings” would suggest it does.
Sharing ideas in groups isn’t the problem; it’s the “out-loud” part that, ironically, leads to groupthink instead of unique ideas. “As sexy as brainstorming is, with people popping like champagne with ideas, what actually happens is when one person is talking you’re not thinking of your own ideas,” Leigh Thompson, a management professor at the Kellogg School, told Fast Company. “Subconsciously you’re already assimilating to my ideas.”
That process is called “anchoring,” and it crushes originality. “Early ideas tend to have disproportionate influence over the rest of the conversation,” Loran Nordgren, also a professor at Kellogg, explained. “They establish the kinds of norms, or cement the idea of what are appropriate examples or potential solutions for the problem.”

Continue reading


A Thousand Kilobots Self-Assemble Into Complex Shapes

ORIGINAL: IEEE Spectrum
By Evan Ackerman
14 Aug 2014
Photo: Michael Rubenstein/Harvard University

When Harvard roboticists first introduced their Kilobots in 2011, they’d only made 25 of them. When we next saw the robots in 2013, they’d made 100. Now the researchers have built one thousand of them. That’s a whole kilo of Kilobots, and probably the most robots that have ever been in the same place at the same time, ever.

The researchers—Michael Rubenstein, Alejandro Cornejo, and Professor Radhika Nagpal of Harvard’s Self-Organizing Systems Research Group—describe their thousand-robot swarm in a paper published today in Science (they actually built 1024 robots, apparently following the computer science definition of “kilo”).

Despite their menacing name (KILL-O-BOTS!) and the robot swarm nightmares they may induce in some people, these little guys are harmless. Each Kilobot [pictured below] is a small, cheap-ish ($14) device that can move around by vibrating its legs and communicate with other robots via infrared transmitters and receivers.
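For a sense of how a thousand robots can coordinate using only local communication, here is a minimal simulated sketch of one ingredient of the Harvard team’s self-assembly approach: gradient formation, in which each robot learns its hop distance from a seed using only messages from neighbors within infrared range. The positions, range, and update loop below are simplified assumptions for illustration, not the authors’ code.

```python
# A minimal sketch of distributed "gradient" formation: seed robots hold 0,
# every other robot converges to 1 + the minimum gradient among its neighbors.
# This is a simplified simulation, not the Kilobot firmware.
import math

IR_RANGE = 3.0  # hypothetical communication radius, in robot-body units

def neighbors(robots, i):
    xi, yi = robots[i]["pos"]
    return [j for j, r in enumerate(robots)
            if j != i and math.dist((xi, yi), r["pos"]) <= IR_RANGE]

def form_gradient(robots, rounds=50):
    """Iterate local updates until each robot knows its hop distance from a seed."""
    for r in robots:
        r["grad"] = 0 if r.get("seed") else math.inf
    for _ in range(rounds):
        for i, r in enumerate(robots):
            if r.get("seed"):
                continue
            nbr = [robots[j]["grad"] for j in neighbors(robots, i)]
            if nbr:
                r["grad"] = min(nbr) + 1
    return [r["grad"] for r in robots]

# Example: a line of robots spaced 2 units apart, seed at the left end.
robots = [{"pos": (2.0 * k, 0.0), "seed": k == 0} for k in range(6)]
print(form_gradient(robots))  # -> [0, 1, 2, 3, 4, 5]
```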

Continue reading


IBM Chip Processes Data Similar to the Way Your Brain Does

A chip that uses a million digital neurons and 256 million synapses may signal the beginning of a new era of more intelligent computers.
WHY IT MATTERS

Computers that can comprehend messy data such as images could revolutionize what technology can do for us.

New thinking: IBM has built a processor designed using principles at work in your brain.
A new kind of computer chip, unveiled by IBM today, takes design cues from the wrinkled outer layer of the human brain. Though it is no match for a conventional microprocessor at crunching numbers, the chip consumes significantly less power, and is vastly better suited to processing images, sound, and other sensory data.
IBM’s SyNapse chip, as it is called, processes information using a network of just over one million “neurons,” which communicate with one another using electrical spikes—as actual neurons do. The chip uses the same basic components as today’s commercial chips—silicon transistors. But its transistors are configured to mimic the behavior of both neurons and the connections—synapses—between them.
The SyNapse chip breaks with a design known as the von Neumann architecture that has underpinned computer chips for decades. Although researchers have been experimenting with chips modeled on brains—known as neuromorphic chips—since the late 1980s, until now all have been many times less complex, and not powerful enough to be practical (see “Thinking in Silicon”). Details of the chip were published today in the journal Science.
The new chip is not yet a product, but it is powerful enough to work on real-world problems. In a demonstration at IBM’s Almaden research center, MIT Technology Review saw one recognize cars, people, and bicycles in video of a road intersection. A nearby laptop that had been programmed to do the same task processed the footage 100 times slower than real time, and it consumed 100,000 times as much power as the IBM chip. IBM researchers are now experimenting with connecting multiple SyNapse chips together, and they hope to build a supercomputer using thousands.
When data is fed into a SyNapse chip it causes a stream of spikes, and its neurons react with a storm of further spikes. The just over one million neurons on the chip are organized into 4,096 identical blocks of 250, an arrangement inspired by the structure of mammalian brains, which appear to be built out of repeating circuits of 100 to 250 neurons, says Dharmendra Modha, chief scientist for brain-inspired computing at IBM. Programming the chip involves choosing which neurons are connected, and how strongly they influence one another. To recognize cars in video, for example, a programmer would work out the necessary settings on a simulated version of the chip, which would then be transferred over to the real thing.
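To make that programming model concrete, here is a minimal sketch of a network of spiking neurons in which, as described above, “programming” amounts to choosing which neurons are connected and how strongly they influence one another. The leaky integrate-and-fire update, network size, and constants are illustrative assumptions; this is not IBM’s neuron model or toolchain.

```python
# A minimal sketch of spiking "neurons" joined by weighted "synapses":
# the program is the wiring (the weight matrix), not a sequence of instructions.
# Toy leaky integrate-and-fire dynamics, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
N = 32                                   # toy network size
weights = rng.normal(0.0, 0.4, (N, N))   # synapses: who influences whom, how strongly
threshold, leak = 1.0, 0.9

def step(potential, spikes_in):
    """One tick: leak, integrate incoming spikes, fire wherever the threshold is crossed."""
    potential = leak * potential + weights @ spikes_in
    fired = potential >= threshold
    potential[fired] = 0.0               # reset neurons that just spiked
    return potential, fired.astype(float)

potential = np.zeros(N)
spikes = (rng.random(N) < 0.2).astype(float)   # inject some input spikes
for _ in range(10):
    potential, spikes = step(potential, spikes)
    print(int(spikes.sum()), "neurons spiked")
```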
In recent years, major breakthroughs in image analysis and speech recognition have come from using large, simulated neural networks to work on data (see “Deep Learning”). But those networks require giant clusters of conventional computers. As an example, Google’s famous neural network capable of recognizing cat and human faces required 1,000 computers with 16 processors apiece (see “Self-Taught Software”).
Although the new SyNapse chip has over five billion transistors, more than most desktop processors or any chip IBM has ever made, it consumes strikingly little power. When running the traffic video recognition demo, it consumed just 63 milliwatts of power. Server chips with similar numbers of transistors consume tens of watts of power—around 10,000 times more.
The efficiency of conventional computers is limited because they store data and program instructions in a block of memory that’s separate from the processor that carries out instructions. As the processor works through its instructions in a linear sequence, it has to constantly shuttle information back and forth from the memory store—a bottleneck that slows things down and wastes energy.
IBM’s new chip doesn’t have separate memory and processing blocks, because its neurons and synapses intertwine the two functions. And it doesn’t work on data in a linear sequence of operations; individual neurons simply fire when the spikes they receive from other neurons cause them to.
Horst Simon, the deputy director of Lawrence Berkeley National Lab and an expert in supercomputing, says that until now the industry has focused on tinkering with the von Neumann approach rather than replacing it, for example by using multiple processors in parallel, or using graphics processors to speed up certain types of calculations. The new chip “may be a historic development,” he says. “The very low power consumption and scalability of this architecture are really unique.”
One downside is that IBM’s chip requires an entirely new approach to programming. Although the company announced a suite of tools geared toward writing code for its forthcoming chip last year (see “IBM Scientists Show Blueprints for Brainlike Computing”), even the best programmers find learning to work with the chip bruising, says Modha: “It’s almost always a frustrating experience.” His team is working to create a library of ready-made blocks of code to make the process easier.
Asking the industry to adopt an entirely new kind of chip and way of coding may seem audacious. But IBM may find a receptive audience because it is becoming clear that current computers won’t be able to deliver much more in the way of performance gains. “This chip is coming at the right time,” says Simon.
ORIGINAL: Tech Review
August 7, 2014

Google buys city guides app Jetpac, support to end on September 15

ORIGINAL: The Next Web
By Josh Ong

Google has acquired the team behind Jetpac, an iPhone app for crowdsourcing city guides from public Instagram photos. The app will be pulled from the App Store in coming days, and support for the service will be discontinued on September 15.

Jetpac’s deep learning software used a nifty trick of scanning our photos to evaluate businesses and venues around town. As MIT Technology Review notes, the app could tell whether visitors were tourists, whether a bar was dog-friendly, and how fancy a place was.

It even employed humans to find hipster spots by training the system to count the number of mustaches and plaid shirts.

Interestingly, Jetpac’s technology was inspired by Google researcher Geoffrey Hinton, so it makes perfect sense for Google to bring the startup into its fold. If this means that Google Now will gain the ability to automatically alert me when I’m entering a hipster-infested area, then I’m an instant fan.

Jetpac also built two iOS apps that tapped into its Deep Belief neural network to offer users object recognition.
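For readers curious what this kind of photo tagging looks like in practice, here is a minimal sketch using an off-the-shelf pretrained ImageNet classifier from torchvision. It is a generic stand-in, not Jetpac’s Deep Belief framework, and the file name in the commented example is made up.

```python
# A minimal sketch of object recognition in the spirit of Jetpac's apps,
# using a pretrained torchvision model (not Jetpac's Deep Belief SDK).
import torch
from torchvision import models
from torchvision.models import ResNet18_Weights
from PIL import Image

weights = ResNet18_Weights.IMAGENET1K_V1
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()          # resize, crop, normalize

def tag_photo(path, top_k=3):
    """Return the top-k ImageNet labels and scores for a photo."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(img).softmax(dim=1)[0]
    scores, idx = probs.topk(top_k)
    return [(weights.meta["categories"][int(i)], float(s)) for i, s in zip(idx, scores)]

# print(tag_photo("bar_photo.jpg"))  # hypothetical file; e.g. [("restaurant", 0.41), ...]
```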

“Imagine all photos tagged automatically, the ability to search the world by knowing what is in the world’s shared photos, and robots that can see like humans,” the App Store description for its Spotter app reads. If that’s not a Googly description, I don’t know what is.

Jetpac

(h/t Ouriel Ohayon)

Thumbnail image credit: GEORGES GOBET/AFP/Getty Images


Building Mind-Controlled Gadgets Just Got Easier

ORIGINAL: IEEE Spectrum
By Eliza Strickland
11 Aug 2014
A new brain-computer interface lets DIYers access their brain waves
Photo: Chip Audette. Engineer Chip Audette used the OpenBCI system to control a robot spider with his mind.
The guys who decided to make a mind-reading tool for the masses are not neuroscientists. In fact, they’re artists who met at Parsons the New School for Design, in New York City. In this day and age, you don’t have to be a neuroscientist to muck around with brain signals.
With Friday’s launch of an online store selling their brain-computer interface (BCI) gear, Joel Murphy and Conor Russomanno hope to unleash a wave of neurotech creativity. Their system enables DIYers to use brain waves to control anything they can hack—a video game, a robot, you name it. “It feels like there’s going to be a surge,” says Russomanno. “The floodgates are about to open.” And since their technology is open source, the creators hope hackers will also help improve the BCI itself.

Photo: OpenBCI. The OpenBCI board takes in data from up to eight electrodes.

Their OpenBCI system makes sense of an electroencephalography (EEG) signal, a general measure of electrical activity in the brain captured via electrodes on the scalp. The fundamental hardware component is a relatively new chip from Texas Instruments, which takes in analog data from up to eight electrodes and converts it to a digital signal. Russomanno and Murphy used the chip and an Arduino board to create OpenBCI, which essentially amplifies the brain signal and sends it via Bluetooth to a computer for processing. “The big issue is getting the data off the chip and making it accessible,” Murphy says. Once it’s accessible, Murphy expects makers to build things he hasn’t even imagined yet.
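Once the digitized samples reach a computer, the first step for most makers is simple signal conditioning. Here is a minimal sketch that band-pass filters an eight-channel recording to the usual EEG range; the sampling rate and the synthetic data are assumptions for illustration, and the real OpenBCI stream arrives in the board’s own packet format, which is not shown.

```python
# A minimal sketch of downstream EEG conditioning: band-pass each of eight
# channels to roughly 1-50 Hz. Synthetic data stands in for the real stream.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0                                   # assumed sampling rate, Hz
b, a = butter(4, [1.0, 50.0], btype="bandpass", fs=FS)

def clean(samples):
    """samples: array of shape (n_samples, 8); returns the band-passed signal."""
    return filtfilt(b, a, samples, axis=0)

# Stand-in for a few seconds of 8-channel data: a 10 Hz rhythm plus noise.
t = np.arange(0, 4.0, 1.0 / FS)
raw = 10e-6 * np.sin(2 * np.pi * 10 * t)[:, None] + 5e-6 * np.random.randn(t.size, 8)
filtered = clean(raw)
print(filtered.shape)   # (1000, 8)
```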
The project got its start in 2011, when Russomanno was a student in Murphy’s physical computing class at Parsons and told his professor he wanted to hack an EEG toy made by Mattel. The toy’s EEG-enabled headset supposedly registered the user’s concentrated attention (which in the game activated a fan that made a ball float upward). But the technology didn’t seem very reliable, and since it wasn’t open source, Russomanno couldn’t study the game’s method of collecting and analyzing the EEG data. He decided that an open-source alternative was necessary if he wanted to have any real fun.
Happily, Russomanno and his professor soon connected with engineer Chip Audette, of the New Hampshire R&D firm Creare, who already had a grant from the U.S. Defense Advanced Research Projects Agency (DARPA) to develop a low-cost, high-quality EEG system for “nontraditional users.” Once the team had cobbled together a prototype of their OpenBCI system, they decided to offer their gear to the world with a Kickstarter campaign, which ended in January and raised more than twice the goal of US $100,000.
Murphy and Russomanno soon found that production would be more difficult and take longer than expected (as is the case with so many Kickstarter projects), so they had to push back their shipping date by several months. Now, though, they’re in business—and Russomanno says that shipping a product is only the beginning. “We don’t just want to sell something; we want to teach people how to use it and also develop a community,” he says. OpenBCI wants to be an online portal where experimenters can swap tips and post research projects.
So once a person’s brain-wave data is streaming into a computer, what is to be done with it? OpenBCI will make some simple software available, but mostly Russomanno and Murphy plan to watch as inventors come up with new applications for BCIs.
Audette, the engineer from Creare, is already hacking robotic “battle spiders” that are typically steered by remote control. Audette used an OpenBCI prototype to identify three distinct brain-wave patterns that he can reproduce at will, and he sent those signals to a battle spider to command it to turn left or right or to walk straight ahead. “The first time you get something to move with your brain, the satisfaction is pretty amazing,” Audette says. “It’s like, ‘I am king of the world because I got this robot to move.’”
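As an illustration of how such patterns might be turned into commands, here is a minimal sketch that classifies a window of single-channel EEG into three commands using band-power ratios. The bands, thresholds, and sampling rate are invented for illustration; Audette’s actual signal processing is not described in this excerpt.

```python
# A minimal sketch of mapping brain-wave features to three robot commands.
# Features and thresholds are hypothetical, chosen only to show the idea.
import numpy as np
from scipy.signal import welch

FS = 250.0  # assumed sampling rate, Hz

def band_power(signal, lo, hi):
    """Total power in the [lo, hi) Hz band of a 1-D signal."""
    freqs, psd = welch(signal, fs=FS, nperseg=256)
    return psd[(freqs >= lo) & (freqs < hi)].sum()

def classify(window):
    """Map one window of single-channel EEG to a command via band-power ratios."""
    alpha = band_power(window, 8, 12)    # e.g. eyes-closed relaxation
    beta = band_power(window, 13, 30)    # e.g. focused concentration
    if alpha > 2 * beta:
        return "turn_left"
    if beta > 2 * alpha:
        return "turn_right"
    return "walk_forward"
```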
In Los Angeles, a group is using another prototype to give a paralyzed graffiti artist the ability to practice his craft again. The artist, Tempt One, was diagnosed with Lou Gehrig’s disease in 2003 and gradually progressed to the nightmarish “locked in” state. By 2010 he couldn’t move or speak and lay inert in a hospital bed—but with unimpaired consciousness, intellect, and creativity trapped inside his skull. Now his supporters are developing a system called the BrainWriter: They’re using OpenBCI to record the artist’s brain waves and are devising ways to use those brain waves to control the computer cursor so Tempt can sketch his designs on the screen.
Another early collaborator thinks that OpenBCI will be useful in mainstream medicine. David Putrino, director of telemedicine and virtual rehabilitation at the Burke Rehabilitation Center, in White Plains, N.Y., says he’s comparing the open-source system to the $60,000 clinic-grade EEG devices he typically works with. He calls the OpenBCI system robust and solid, saying, “There’s no reason why it shouldn’t be producing good signal.”
Putrino hopes to use OpenBCI to build a low-cost EEG system that patients can take home from the hospital, and he imagines a host of applications. Stroke patients, for example, could use it to determine when their brains are most receptive to physical therapy, and Parkinson’s patients could use it to find the optimal time to take their medications. “I’ve been playing around with these ideas for a decade,” Putrino says, “but they kept failing because the technology wasn’t quite there.” Now, he says, it’s time to start building.