How IBM Got Brainlike Efficiency From the TrueNorth Chip

By Jeremy Hsu
Posted 29 Sep 2014 | 19:01 GMT

TrueNorth takes a big step toward using the brain’s architecture to reduce computing’s power consumption


Neuromorphic computer chips meant to mimic the neural network architecture of biological brains have generally fallen short of their wetware counterparts in efficiency—a crucial factor that has limited practical applications for such chips. That could be changing. At a power density of just 20 milliwatts per square centimeter, IBM’s new brain-inspired chip comes tantalizingly close to such wetware efficiency. The hope is that it could bring brainlike intelligence to the sensors of smartphones, smart cars, and—if IBM has its way—everything else.

The latest IBM neurosynaptic computer chip, called TrueNorth, consists of 1 million programmable neurons and 256 million programmable synapses conveying signals between the digital neurons. Each of the chip’s 4,096 neurosynaptic cores includes the entire computing package:

  • memory, 
  • computation, and 
  • communication. 

Such architecture helps to bypass the bottleneck in traditional von Neumann computing, where program instructions and operation data cannot pass through the same route simultaneously.
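The published figures hang together arithmetically. A quick sanity check, assuming the per-core layout described in the Science paper (256 digital neurons per core, each core carrying a 256×256 synaptic crossbar):

```python
# Back-of-the-envelope check of TrueNorth's published figures.
cores = 4096
neurons_per_core = 256            # per IBM's Science paper
synapses_per_core = 256 * 256     # crossbar: every axon can reach every neuron

total_neurons = cores * neurons_per_core    # 1,048,576 ≈ "1 million"
total_synapses = cores * synapses_per_core  # 268,435,456 ≈ "256 million"

print(total_neurons, total_synapses)
```

The round numbers in the press materials are the power-of-two totals (2^20 neurons, 2^28 synapses) rounded down.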

“This is literally a supercomputer the size of a postage stamp, light like a feather, and low power like a hearing aid,” says Dharmendra Modha, IBM fellow and chief scientist for brain-inspired computing at IBM Research-Almaden, in San Jose, Calif.

Such chips can emulate the human brain’s ability to recognize different objects in real time; TrueNorth showed it could distinguish among pedestrians, bicyclists, cars, and trucks. IBM envisions its new chips working together with traditional computing devices as hybrid machines, providing a dose of brainlike intelligence. The chip’s architecture, developed together by IBM and Cornell University, was first detailed in August in the journal Science.


Meet Amelia: the computer that’s after your job

29 Sep 2014
A new artificially intelligent computer system called ‘Amelia’ – that can read and understand text, follow processes, solve problems and learn from experience – could replace humans in a wide range of low-level jobs

Amelia aims to answer the question, can machines think? Photo: IPsoft

In February 2011 an artificially intelligent computer system called IBM Watson astonished audiences worldwide by beating the two all-time greatest Jeopardy champions at their own game.

Thanks to its ability to apply

  • advanced natural language processing,
  • information retrieval,
  • knowledge representation,
  • automated reasoning, and
  • machine learning technologies,

Watson consistently outperformed its human opponents on the American quiz show Jeopardy.

Watson represented an important milestone in the development of artificial intelligence, but the field has been progressing rapidly – particularly with regard to natural language processing and machine learning.

In 2012, Google used 16,000 computer processors to build a simulated brain that could correctly identify cats in YouTube videos; the Kinect, which provides a 3D body-motion interface for Microsoft’s Xbox, uses algorithms that emerged from artificial intelligence research, as does the iPhone’s Siri virtual personal assistant.

Today a new artificial intelligence computing system has been unveiled, which promises to transform the global workforce. Named ‘Amelia’ after American aviator and pioneer Amelia Earhart, the system is able to shoulder the burden of often tedious and laborious tasks, allowing human co-workers to take on more creative roles.

“Watson is perhaps the best data analytics engine that exists on the planet; it is the best search engine that exists on the planet; but IBM did not set out to create a cognitive agent. It wanted to build a program that would win Jeopardy, and it did that,” said Chetan Dube, chief executive officer of IPsoft, the company behind Amelia.

“Amelia, on the other hand, started out not with the intention of winning Jeopardy, but with the pure intention of answering the question posed by Alan Turing in 1950 – can machines think?”

Amelia learns by following the same written instructions as her human colleagues, but is able to absorb information in a matter of seconds.
She understands the full meaning of what she reads rather than simply recognising individual words. This involves

  • understanding context,
  • applying logic and
  • inferring implications.

When exposed to the same information as any new employee in a company, Amelia can quickly apply her knowledge to solve queries in a wide range of business processes. Just like any smart worker she learns from her colleagues and, by observing their work, she continually builds her knowledge.

While most ‘smart machines’ require humans to adapt their behaviour in order to interact with them, Amelia is intelligent enough to interact like a human herself. She speaks more than 20 languages, and her core knowledge of a process needs only to be learned once for her to be able to communicate with customers in their language.

Independently, rather than through time-intensive programming, Amelia creates her own ‘process map’ of the information she is given so that she can work out for herself what actions to take depending on the problem she is solving.

“Intelligence is the ability to acquire and apply knowledge. If a system claims to be intelligent, it must be able to read and understand documents, and answer questions on the basis of that. It must be able to understand processes that it observes. It must be able to solve problems based on the knowledge it has acquired. And when it cannot solve a problem, it must be capable of learning the solution through noticing how a human did it,” said Dube.

IPsoft has been working on this technology for 15 years with the aim of developing a platform that does not simply mimic human thought processes but can comprehend the underlying meaning of what is communicated – just like a human.

Just as machines transformed agriculture and manufacturing, IPsoft believes that cognitive technologies will drive the next evolution of the global workforce, so that in the future companies will have digital workforces that comprise a mixture of human and virtual employees.

Amelia has already been trialled within a number of Fortune 1000 companies, in areas such as manning technology help desks, procurement processing, financial trading operations support and providing expert advice for field engineers.

In each of these environments, she has learnt not only from reading existing manuals and situational context but also by observing and working with her human colleagues and discerning for herself a map of the business processes being followed.

In a help desk situation, for example, Amelia can understand what a caller is looking for, ask questions to clarify the issue, find and access the required information and determine which steps to follow in order to solve the problem.

As a knowledge management advisor, she can help engineers working in remote locations who are unable to carry detailed manuals, by diagnosing the cause of failed machinery and guiding them towards the best steps to rectifying the problem.

During these trials, Amelia was able to go from solving very few queries independently to 42 per cent of the most common queries within one month. By the second month she could answer 64 per cent of those queries independently.

“That’s a true learning cognitive agent. Learning is the key to the kingdom, because humans learn from experience. A child may need to be told five times before they learn something, but Amelia needs to be told only once,” said Dube.

“Amelia is that Mensa kid, who personifies a major breakthrough in cognitive technologies.”

Analysts at Gartner predict that, by 2017, managed services offerings that make use of autonomics and cognitive platforms like Amelia will drive a 60 per cent reduction in the cost of services, enabling organisations to apply human talent to higher level tasks requiring creativity, curiosity and innovation.

IPsoft even has plans to start embedding Amelia into humanoid robots such as Softbank’s Pepper, Honda’s Asimo or Rethink Robotics’ Baxter, allowing her to take advantage of their mechanical functions.

“The robots have got a fair degree of sophistication in all the mechanical functions – the ability to climb up stairs, the ability to run, the ability to play ping pong. What they don’t have is the brain, and we’ll be supplementing that brain part with Amelia,” said Dube.

“I am convinced that in the next decade you’ll pass someone in the corridor and not be able to discern if it’s a human or an android.”

Given the premise of IPsoft’s artificial intelligence system, it seems logical that the ultimate measure of Amelia’s success would be passing the Turing Test – which sets out to see whether humans can discern whether they are interacting with a human or a machine.

Earlier this year, a chatbot named Eugene Goostman became the first machine to pass the Turing Test by convincingly imitating a 13-year-old boy. In a five-minute keyboard conversation with a panel of human judges, Eugene managed to convince 33 per cent that it was human.

Interestingly, however, IPsoft believes that the Turing Test needs reframing, to redefine what it means to ‘think’. While Eugene was able to imitate natural language, he was only mimicking understanding. He did not learn from the interaction, nor did he demonstrate problem solving skills.

“Natural language understanding is a big step up from parsing. Parsing is syntactic, understanding is semantic, and there’s a big cavern between the two,” said Dube.

“The aim of Amelia is not just to get an accolade for managing to fool one in three people on a panel. The assertion is to create something that can answer to the fundamental need of human beings – particularly after a certain age – of companionship. That is our intent.”


AI For Everyone: Startups Democratize Deep Learning So Google And Facebook Don’t Own It All

9/17/2014

When I arrived at a Stanford University auditorium Tuesday night for what I thought would be a pretty nerdy panel on deep learning, a fast-growing branch of artificial intelligence, I figured I must be in the wrong place–maybe a different event for all the new Stanford students and their parents visiting the campus. Nope. Despite the highly technical nature of deep learning, some 600 people had shown up for the sold-out AI event, presented by VLAB, a Stanford-based chapter of the MIT Enterprise Forum. The turnout was a stark sign of the rising popularity of deep learning, an approach to AI that tries to mimic the activity of the brain in so-called neural networks. In just the last couple of years, deep learning software from giants like Google, Facebook, and China’s Baidu, as well as a raft of startups, has led to big advances in image and speech recognition, medical diagnostics, stock trading, and more. “There’s quite a bit of excitement in this area,” panel moderator Steve Jurvetson, a partner with the venture firm DFJ, said with uncustomary understatement.

In the past year or two, big companies have been locked in a land grab for talent, paying big bucks for startups and even hiring away deep learning experts from each other. But this event, focused mostly on startups, including several that demonstrated their products before the panel, also revealed there’s still a lot of entrepreneurial activity. In particular, several companies aim to democratize deep learning by offering it as a service or coming up with cheaper hardware to make it more accessible to businesses.

Jurvetson explained why deep learning has pushed the boundaries of AI so much further recently.

  • For one, there’s a lot more data around because of the Internet, there’s metadata such as tags and translations, and there’s even services such as Amazon’s Mechanical Turk, which allows for cheap labeling or tagging.
  • There are also algorithmic advances, especially for using unlabeled data.
  • And computing has advanced enough to allow much larger neural networks with more synapses–in the case of Google Brain, for instance, 1 billion synapses (though that’s still a very long way from the 100 trillion synapses in the adult human brain).
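The gap in that last point is worth making concrete. Using the two figures quoted above:

```python
# Scale comparison from the panel: Google Brain's ~1 billion synapses
# versus the ~100 trillion synapses in an adult human brain.
google_brain_synapses = 1e9
human_brain_synapses = 100e12

ratio = human_brain_synapses / google_brain_synapses
print(f"The human brain has ~{ratio:,.0f}x more synapses")
```

Even the largest artificial network of the day was roughly five orders of magnitude short of the brain.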

Adam Berenzweig, cofounder and CTO of image recognition firm Clarifai and former engineer at Google for 10 years, made the case that deep learning is “adding a new primary sense to computing” in the form of useful computer vision. “Deep learning is forming that bridge between the physical world and the world of computing,” he said.

And it’s allowing that to happen in real time. “Now we’re getting into a world where we can take measurements of the physical world, like pixels in a picture, and turn them into symbols that we can sort,” he said. Clarifai has been working on taking an image, producing a meaningful description very quickly (around 80 milliseconds), and showing very similar images.

One interesting application relevant to advertising and marketing, he noted: Once you can recognize key objects in images, you can target ads not just on keywords but on objects in an image.

DFJ’s Steve Jurvetson led a panel of AI experts at a Stanford event Sept. 16.

Even more sweeping, said Naveen Rao, cofounder and CEO of deep-learning hardware and software startup Nervana Systems and former researcher in neuromorphic computing at Qualcomm, deep learning is “that missing link between computing and what the brain does.” Instead of doing specific computations very fast, as conventional computers do, “we can start building new hardware to take computer processing in a whole new direction,” assessing probabilities, like the brain does. “Now there’s actually a business case for this kind of computing,” he said.

And not just for big businesses. Elliot Turner, founder and CEO of AlchemyAPI, a deep-learning platform in the cloud, said his company’s mission is to “democratize deep learning.” The company is working in 10 industries from advertising to business intelligence, helping companies apply it to their businesses. “I look forward to the day that people actually stop talking about deep learning, because that will be when it has really succeeded,” he added.

Despite the obvious advantages of large companies such as Google, which have untold amounts of both data and computer power that deep learning requires to be useful, startups can still have a big impact, a couple of the panelists said. “There’s data in a lot of places. There’s a lot of nooks and crannies that Google doesn’t have access to,” Berenzweig said hopefully. “Also, you can trade expertise for data. There’s also a question of how much data is enough.”

Turner agreed. “It’s not just a matter of stockpiling data,” he said. “Better algorithms can help an application perform better.” He noted that even Facebook, despite its wealth of personal data, found this in its work on image recognition.

Those algorithms may have broad applicability, too. Even if they’re initially developed for specific applications such as speech recognition, it looks like they can be used on a wide variety of applications. “These algorithms are extremely fungible,” said Rao. And he said companies such as Google aren’t keeping them as secret as expected, often publishing them in academic journals and at conferences–though Berenzweig noted that “it takes more than what they publish to do what they do well.”

For all that, it’s not yet clear how closely deep-learning systems will actually emulate the brain, even if they are intelligent. But Ilya Sutskever, research scientist at Google Brain and a protege of Geoffrey Hinton, the University of Toronto deep learning guru since the 1980s who’s now working part-time at Google, said it almost doesn’t matter. “You can still do useful predictions” using them. And while the learning principles for dealing with all the unlabeled data out there remain primitive, he said he and many others are working on this and likely will make even more progress.

Rao said he’s unworried that we’ll end up creating some kind of alien intelligence that could run amok, if only because advances will be driven by market needs. Besides, he said, “I think a lot of the similarities we’re seeing in computation and brain functions is coincidental. It’s driven that way because we constrain it that way.”

OK, so how are these companies planning to make money on this stuff? Jurvetson wondered. Of course, we’ve already seen improvements in speech and image recognition that make smartphones and apps more useful, leading more people to buy them. “Speech recognition is useful enough that I use it,” said Sutskever. “I’d be happy if I didn’t press a button ever again. And language translation could have a very large impact.”

Beyond that, Berenzweig said, “we’re looking for the low-hanging fruit,” common use cases such as visual search for shopping, organizing your personal photos, and various business niches such as security.



Danko Nikolic on Singularity 1 on 1: Practopoiesis Tells Us Machine Learning Is Not Enough!

If there’s ever been a case when I just wanted to jump on a plane and go interview someone in person, not because they are famous but because they have created a totally unique and arguably seminal theory, it has to be Danko Nikolic. I believe Danko’s theory of practopoiesis is that good, and he probably will eventually become known around the world for it. Unfortunately, however, I don’t have a budget of thousands of dollars per interview that would allow me to pay for my audio and video team to travel to Germany and produce the quality that Nikolic deserves. So I’ve had to settle for Skype. And Skype refused to cooperate on that day, even though both Danko and I have pretty much the fastest internet connections money can buy. Luckily, despite the poor video quality, our audio was very good, and I would urge that if there’s ever been an interview where you ought to disregard the video quality and focus on the content – it has to be this one.
During our 67 min conversation with Danko we cover a variety of interesting topics such as:

As always you can listen to or download the audio file above or scroll down and watch the video interview in full.
To show your support you can write a review on iTunes or make a donation.
Who is Danko Nikolic?
The main motive for my studies is the explanatory gap between the brain and the mind. My interest is in how the physical world of neuronal activity produces the mental world of perception and cognition. I am associated with

  • the Max-Planck Institute for Brain Research,
  • Ernst Strüngmann Institute,
  • Frankfurt Institute for Advanced Studies, and
  • the University of Zagreb.
I approach the problem of the explanatory gap from both sides, bottom-up and top-down. The bottom-up approach investigates brain physiology. The top-down approach investigates behavior and experiences. Each of the two approaches led me to develop a theory: the work on physiology resulted in the theory of practopoiesis; the work on behavior and experiences led to the phenomenon of ideasthesia.
The empirical work in the background of those theories involved

  • simultaneous recordings of activity of 100+ neurons in the visual cortex (extracellular recordings),
  • behavioral and imaging studies in visual cognition (attention, working memory, long-term memory), and
  • empirical investigations of phenomenal experiences (synesthesia).
The ultimate goal of my studies is twofold.

  • First, I would like to achieve conceptual understanding of how the dynamics of physical processes creates the mental ones. I believe that the work on practopoiesis presents an important step in this direction and that it will help us eventually address the hard problem of consciousness and the mind-body problem in general.
  • Second, I would like to use this theoretical knowledge to create artificial systems that are biologically-like intelligent and adaptive. This would have implications for our technology.
A reason why one would be interested in studying the brain in the first place is described here: Why brain?

Neurons in human skin perform advanced calculations

[2014-09-01] Neurons in human skin perform advanced calculations that were previously believed to be possible only for the brain. This is according to a study from Umeå University in Sweden published in the journal Nature Neuroscience.

A fundamental characteristic of neurons that extend into the skin and record touch, so-called first-order neurons in the tactile system, is that they branch in the skin so that each neuron reports touch from many highly-sensitive zones on the skin.
According to researchers at the Department of Integrative Medical Biology, IMB, Umeå University, this branching allows first-order tactile neurons not only to send signals to the brain that something has touched the skin, but also process geometric data about the object touching the skin.
“Our work has shown that two types of first-order tactile neurons that supply the sensitive skin at our fingertips not only signal information about when and how intensely an object is touched, but also information about the touched object’s shape,” says Andrew Pruszynski, who is one of the researchers behind the study.
The study also shows that the sensitivity of individual neurons to the shape of an object depends on the layout of the neuron’s highly-sensitive zones in the skin.
“Perhaps the most surprising result of our study is that these peripheral neurons, which are engaged when a fingertip examines an object, perform the same type of calculations done by neurons in the cerebral cortex. Somewhat simplified, it means that our touch experiences are already processed by neurons in the skin before they reach the brain for further processing,” says Andrew Pruszynski.
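The mechanism in the paragraph above can be caricatured in a few lines. The idea: because one neuron's receptive field is split into several sensitive zones, its summed response depends on *which* zones an edge covers, so the firing rate already encodes something about shape. Zone positions, weights, and the simple summation rule below are illustrative assumptions, not the study's actual model.

```python
# Toy illustration: a first-order tactile neuron with several highly
# sensitive zones responds differently depending on which zones an
# edge stimulates, so shape information is present before the brain.

def response(zones, edge_positions):
    """Sum the sensitivity of every zone the edge touches."""
    return sum(w for (pos, w) in zones if pos in edge_positions)

zones = [(0, 1.0), (1, 0.5), (3, 2.0)]  # (skin position, sensitivity)
edge_a = {0, 1}                          # an edge covering positions 0 and 1
edge_b = {1, 3}                          # a differently oriented edge

print(response(zones, edge_a))  # 1.5
print(response(zones, edge_b))  # 2.5
```

Two stimuli of equal overall contact produce different firing, purely because of the zone layout; that is the sense in which the computation happens "in the skin."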
For more information about the study, please contact Andrew Pruszynski, post doc at the Department of Integrative Medical Biology, IMB, Umeå University. He is English-speaking and can be reached at:
Phone: +46 90 786 51 09; Mobile: +46 70 610 80 96

This AI-Powered Calendar Is Designed to Give You Me-Time

09.02.14 |
Timeful intelligently schedules to-dos and habits on your calendar. Timeful
No one on their death bed wishes they’d taken a few more meetings. Instead, studies find people consistently say things like: 
  • I wish I’d spent more time with my friends and family;
  • I wish I’d focused more on my health; 
  • I wish I’d picked up more hobbies.
That’s what life’s all about, after all. So, question: Why don’t we ever put any of that stuff on our calendar?
That’s precisely what the folks behind Timeful want you to do. Their app (iPhone, free) is a calendar designed to handle it all. You don’t just put in the things you need to do—meeting on Thursday; submit expenses; take out the trash—but also the things you want to do, like going running more often or brushing up on your Spanish. Then, the app algorithmically generates a schedule to help you find time for it all. The more you use it, the smarter that schedule gets.
Even in the crowded categories of calendars and to-do lists, Timeful stands out. Not many iPhone calendar apps were built by renowned behavioral psychologists and machine learning experts, nor have many attracted investor attention to the tune of $7 million.
It was born as a research project at Stanford, where Jacob Bank, a computer science PhD candidate, and his advisor, AI expert Yoav Shoham, started exploring how machine learning could be applied to time management. To help with their research, they brought on Dan Ariely, the influential behavior psychologist and author of the book Predictably Irrational. It didn’t take long for the group to realize that there was an opportunity to bring time management more in step with the times. “It suddenly occurred to me that my calendar and my grandfather’s calendar are essentially the same,” Shoham recalls.
A Tough Problem and an Artificially Intelligent Solution
Like all of Timeful’s founders, Shoham sees time as our most valuable resource–far more valuable, even, than money. And yet he says the tools we have for managing money are far more sophisticated than the ones we have for managing time. In part, that’s because time poses a tricky problem. Simply put, it’s tough to figure out the best way to plan your day. On top of that, people are lazy, and prone to distraction. “We have a hard computational problem compounded by human mistakes,” Shoham says.
To address that lazy human bit, Timeful is designed around a simple fact: When you schedule something, you’re far more likely to get it done. Things you put in the app don’t just live in some list. Everything shows up on the calendar. Meetings and appointments get slotted at the times they take place, as you’d expect. But at the start of the day, the app also blocks off time for your to-dos and habits, rendering them as diagonally-slatted rectangles on your calendar which you can accept, dismiss, or move around as you desire.
Suggestions have diagonal slats. Timeful
In each case, Timeful takes note of how you respond and adjusts its “intention rank,” as the company calls its scheduling algorithm. This is the special sauce that elevates Timeful from dumb calendar to something like an assistant. As Bank sees it, the more nebulous lifestyle events we’d never think to put on our calendar are a perfect subject for some machine learning smarts. “Habits have the really nice property that they repeat over time with very natural patterns,” he says. “So if you put in, ‘run three times a week,’ we can quickly learn what times you like to run and when you’re most likely to do it.”
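The core mechanic, stripped of the learning, is gap-filling: find the free intervals between fixed appointments and slot flexible items into them. Here is a minimal sketch under an assumed greedy first-fit strategy; the real intention-rank algorithm is proprietary and certainly more sophisticated.

```python
# Toy gap-filling scheduler in the spirit of Timeful: fixed meetings
# stay where they are, flexible to-dos get slotted into the free gaps.
# The greedy first-fit strategy here is an illustrative assumption.

def schedule(busy, todos, day_start=9, day_end=18):
    """busy: list of (start, end) hours; todos: list of (name, hours)."""
    # Compute the free gaps left between sorted appointments.
    free, cursor = [], day_start
    for start, end in sorted(busy):
        if start > cursor:
            free.append((cursor, start))
        cursor = max(cursor, end)
    if cursor < day_end:
        free.append((cursor, day_end))

    # Place each to-do in the first gap big enough to hold it.
    placed = []
    for name, hours in todos:
        for i, (s, e) in enumerate(free):
            if e - s >= hours:
                placed.append((name, s, s + hours))
                free[i] = (s + hours, e)  # shrink the used gap
                break
    return placed

print(schedule(busy=[(10, 11), (13, 14)],
               todos=[("run", 1), ("Spanish practice", 0.5)]))
```

What Timeful layers on top of this skeleton is exactly the part the article describes: learning from accept/dismiss/move feedback which gaps a given habit should prefer.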
The other machine learning challenge involved with Timeful is the problem of input. Where many other to-do apps try to make the input process as frictionless as possible, Timeful often needs to ask a few follow-up questions to schedule tasks properly, like how long you expect them to take, and if there’s a deadline for completion. As with all calendars and to-do apps, Timeful’s only as useful as the stuff you put on it, and here that interaction’s a fairly heavy one. For many, it could simply be too much work for the reward. Plus, isn’t it a little weird to block off sixty minutes to play with your kid three times a week?
Bank admits that it takes longer to put things into Timeful than some other apps, and the company’s computer scientists are actively trying to come up with new ways to offload the burden algorithmically. In future versions, Bank hopes to be able to automatically pull in data from other apps and services. A forthcoming web version could also make input easier (an Android version is on the way too). But as Bank sees it, there may be an upside to having a bit of friction here. By going through the trouble of putting something in the app, you’re showing that you truly want to get it done, and that could help keep Timeful from becoming a “list of shame” like other to-do apps. (And as far as the kid thing goes, it might feel weird, but if scheduling family time on your calendar results in more family time, then it’s kinda hard to knock, no?)
How Much Scheduling Is Too Much?
Perhaps the bigger question is how much day-to-day optimization people can really swallow. Having been conditioned to see the calendar as a source of responsibilities and obligations, opening up one’s preferred scheduling application and seeing a long white column stretching down for the day can be the source of an almost embarrassing degree of relief. Thank God, now I can finally get something done! With Timeful, that feeling becomes extinct. Every new dawn brings a whole bunch of new stuff to do.
Two of Timeful’s co-founders, Jacob Bank (top) and Yoav Shoham Timeful
Bank and Shoham are acutely aware of this thorny problem. “Sometimes there’s a tension between what’s best for a user and what the user wants to accept, and we need to be really delicate about that,” Bank says. In the app, you can fine-tune just how aggressive you want it to be in its planning, and a significant part of the design process was making sure the app’s suggestions felt like suggestions, not demands. Still, we might crave that structure more than we think. After some early user tests, the company actually cranked up the pushiness of Timeful’s default setting; the overwhelming response from beta testers was “give me more!”
The vision is for Timeful to become something akin to a polite assistant. Shoham likens it to Google Now for your schedule–a source of informed suggestions about what to do next. Whether you take those suggestions or leave them is entirely up to you. “This is not your paternalistic dad telling you, ‘thou shall do this!’” he says. “It’s not your guilt-abusing mom. Well, maybe there’s a little bit of that.”

Practopoiesis: How cybernetics of biology can help AI

To create any form of AI we must copy from biology. The argument goes as follows. A brain is a biological product, and so must be its products, such as perception, insight, inference, logic, mathematics, etc. By creating AI we inevitably tap into something that biology has already invented on its own. It thus follows that the more we want an AI system to be similar to a human—e.g., to get a better grade on the Turing test—the more we need to copy the biology.
When it comes to describing living systems, traditionally, we assume the approach of different explanatory principles for different levels of system organization.

  1. One set of principles is used for “low-level” biology, such as the evolution of our genome through natural selection, which is a completely different set of principles than the one used for describing the expression of those genes.
  2. A yet different type of story is used to explain what our neural networks do.
  3. Needless to say, the descriptions at the very top of that organizational hierarchy—at the level of our behavior—are made by concepts that again live in their own world.
But what if it was possible to unify all these different aspects of biology and describe them all by a single set of principles? What if we could use the same fundamental rules to talk about the physiology of a kidney and the process of a conscious thought? What if we had concepts that could give us insights into mental operations underlying logical inferences on one hand and the relation between the phenotype and genotype on the other hand? This request is not so outrageous. After all, all those phenomena are biological.
One can argue that such an all-embracing theory of the living would be beneficial also for further developments of AI. The theory could guide us on what is possible and what is not. Given a certain technological approach, what are its limitations? Maybe it could answer the question of what the unitary components of intelligence are. And does my software have enough of them?
For more inspiration, let us look into the Shannon–Wiener theory of information and appreciate how helpful this theory is for dealing with various types of communication channels (including memory storage, which is also a communication channel, only over time rather than space). We can calculate how much channel capacity is needed to transmit (store) certain contents. Also, we can easily compare two communication channels and determine which one has more capacity. This allows us to directly compare devices that are otherwise incomparable. For example, an interplanetary communication system based on satellites can be compared to DNA located within a nucleus of a human cell. Only thanks to information theory can we calculate whether a given satellite connection has enough capacity to transfer the DNA information about a human person to a hypothetical recipient on another planet. (The answer is: yes, easily.) Thus, information theory is invaluable in making these kinds of engineering decisions.
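The genome-over-a-satellite-link claim is easy to check. A sketch with stated assumptions: a human genome of roughly 3.2 billion base pairs, each base carrying 2 bits (four possible letters), sent over a hypothetical 1 Mbit/s deep-space link (the link rate is purely illustrative):

```python
# Rough capacity check for the DNA-to-another-planet example.
# Assumptions: ~3.2e9 base pairs, 2 bits per base, a 1 Mbit/s link.
genome_bits = 3.2e9 * 2     # ~6.4 gigabits of raw sequence
link_bps = 1e6              # assumed satellite link capacity, bits/s

seconds = genome_bits / link_bps
print(f"~{seconds / 3600:.1f} hours to transmit one genome")
```

So even a modest link moves a raw genome in well under a day, which is the sense in which the answer is "yes, easily."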
So, how about intelligence? Wouldn’t it be good to come into possession of a similar general theory of adaptive intelligent behavior? Maybe we could use quantities other than bits that could tell us why the intelligence of plants lags behind that of primates. We might also better understand which essential ingredients distinguish human intelligence from that of a chimpanzee. Using the same theory we could compare

  • an abacus, 
  • a hand-held calculator, 
  • a supercomputer, and 
  • a human intellect.
The good news is that such an overarching biological theory now exists: it is called practopoiesis. Derived from the Ancient Greek praxis + poiesis, practopoiesis means creation of actions. The name reflects the theory’s fundamental presumption about the common property that can be found across all the different levels of organization of biological systems:

  • Gene expression mechanisms act; 
  • bacteria act; 
  • organs act; 
  • organisms as a whole act.
Due to this focus on biological action, practopoiesis has a strong cybernetic flavor, as it has to deal with the need of acting systems to close feedback loops. Input is needed to trigger actions and to determine whether more actions are needed. For that reason, the theory is founded on the basic theorems of cybernetics, namely the law of requisite variety and the good regulator theorem.
The key novelty of practopoiesis is that it introduces mechanisms explaining how different levels of organization mutually interact. These mechanisms help explain how genes create the anatomy of the nervous system, or how anatomy gives rise to behavior.
When practopoiesis is applied to the human mind and to AI algorithms, the results are quite revealing.
To understand them, we need to introduce the concept of the practopoietic traverse. Without going into detail about what a traverse is, let us just say that it is a quantity with which one can compare the capabilities of different systems to adapt. A traverse is a kind of practopoietic equivalent to the bit of information in the Shannon-Wiener theory. Just as we can compare two communication channels by the number of bits of information transferred, we can compare two adaptive systems by the number of traverses. Thus, a traverse is not a measure of how much knowledge a system has (for that, the good old bit does the job just fine). It is rather a measure of how much capability the system has to adjust its existing knowledge, for example when new circumstances emerge in the surrounding world.
To the best of my knowledge, no artificial intelligence algorithm in use today has more than two traverses. That means that these algorithms interact with the surrounding world at a maximum of two levels of organization. For example, an AI algorithm may receive satellite images at one level of organization and, at another level of organization, the categories into which it should learn to classify those images. We would say that this algorithm has two traverses of cybernetic knowledge. In contrast, biological behaving systems (that is, animals, including Homo sapiens) operate with three traverses.
This makes a world of difference in adaptive intelligence. Two-traversal systems can be super-fast and omni-knowledgeable, and their tech specs may list peta-everything (as some already do), but these systems nevertheless remain comparatively dull next to three-traversal systems, such as a three-year-old girl, or even a domestic cat.
To appreciate the difference between two and three traverses, let us go one step lower and consider systems with only one traverse. An example would be a PC without any advanced AI algorithms installed.
This computer is already vastly faster than I am at calculation, far better at storing information, and beats me at spell checking without its processor even getting warm. And yet, paradoxically, I am still the smarter one around. Thus, computational capacity and adaptive intelligence are not the same.
Importantly, the same relationship that holds for “me vs. the computer” holds for “me vs. a modern advanced AI algorithm”: I am still the more intelligent one, although the computer may have more computational power. Likewise, the relationship holds for “AI algorithm vs. non-AI computer”: even a small AI algorithm, implemented, say, on a single PC, is in many ways more intelligent than a petaflop supercomputer without AI. Thus, there is a certain hierarchy of adaptive intelligence that is determined not by memory size or the number of floating-point operations executed per second, but by the ability to learn and adapt to the environment.
A key requirement for adaptive intelligence is the capacity to observe how well one is doing towards a certain goal combined with the capacity to make changes and adjust in light of the feedback obtained. Practopoiesis tells us that there is not only one step possible from non-adaptive to adaptive, but that multiple adaptive steps are possible. Multiple traverses indicate a potential for adapting the ways in which we adapt.
We can go even one step further down the adaptive hierarchy and consider the least adaptive systems, e.g., a book. Provided that the book is large enough, it can contain all of the knowledge about the world, and yet it is not adaptive: it cannot, for example, rewrite itself when something in that world changes. Typical computer software can do much more and administer many changes, but much is still left that cannot be adjusted without a programmer. A modern AI system is smarter still and can reorganize its knowledge to a much higher degree. Nevertheless, these systems are incapable of certain types of adjustments that a human person, or an animal, can make. Practopoiesis tells us that these systems fall into different adaptive categories, which are independent of the raw information-processing capabilities of the systems. Rather, these adaptive categories are defined by the number of levels of organization at which the system receives feedback from the environment, also referred to as traverses.
We can thus make the following hierarchical list of the best exemplars in each adaptive category:
  • A book: dumbest; zero traverses
  • A computer: somewhat smarter; one traverse
  • An AI system: much smarter; two traverses
  • A human: rules them all; three traverses
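The hierarchy above can be caricatured in code. The sketch below is purely my own toy analogy, not anything defined by practopoietic theory: think of each extra traverse as one more feedback loop, so a system that also adjusts *how* it learns (here, its learning rate) sits one adaptive level above a system that only adjusts *what* it knows.

```python
# Toy analogy (not part of practopoietic theory): nested feedback loops
# as a stand-in for additional "traverses".

def fixed_rule(x):
    # One loop: the system acts on input but never changes itself,
    # like ordinary software following a fixed program.
    return 2 * x

class Learner:
    # Two loops: the system acts AND adjusts its knowledge (weights)
    # from feedback, like a standard machine-learning algorithm.
    def __init__(self, lr=0.1):
        self.w, self.lr = 0.0, lr

    def step(self, x, target):
        err = target - self.w * x
        self.w += self.lr * err * x   # inner loop: adapt what is known
        return err

class MetaLearner(Learner):
    # Three loops: the system also adapts the way it adapts,
    # here by tuning its own learning rate from the same feedback.
    def step(self, x, target):
        err = super().step(x, target)
        self.lr *= 1.05 if abs(err) > 1 else 0.95  # outer loop
        return err
```

The point of the caricature is only that the levels differ in *kind*, not in speed: making `fixed_rule` faster never turns it into a `Learner`.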
Most importantly for the creation of strong AI, practopoiesis tells us in which direction technological development should be heading:
Engineering creativity should be geared towards empowering the machines with one more traverse. To match a human, a strong AI system has to have three traverses.
Practopoietic theory also explains what is so special about the third traverse. Systems with three traverses (referred to as T3-systems) are capable of storing their past experiences in an abstract, general form, which can be used much more efficiently than in two-traversal systems. This general knowledge can be applied to the interpretation of specific novel situations, such that quick and well-informed inferences are made about what is currently going on and what actions should be executed next. This process, unique to T3-systems, is referred to as anapoiesis: the capability to reconstruct cybernetic knowledge that the system once had and to use that knowledge efficiently in a given novel situation.
If biology has invented T3-systems and anapoiesis and made good use of them, there is no reason why we should not be able to do the same in machines.
Danko Nikolić is a brain and mind scientist running an electrophysiology lab at the Max Planck Institute for Brain Research, and the creator of the concept of ideasthesia. More about practopoiesis can be read here.
ORIGINAL: Singularity Web

5 Robots Booking It to a Classroom Near You


Robots are the new kids in school.
The technological creations are taking on serious roles in the classroom. With the accelerating pace of robotic technology, school administrators all over the world are planning how to implement robots in education, from elementary through high school.
In South Korea, robots are replacing English teachers entirely, entrusted with leading and teaching whole classrooms. In Alaska, some robots are eliminating the need for teachers to be physically present at all.
Robotics 101 is now in session. Here are five ways robots are being introduced into schools.
1. Nao Robot as math teacher
At PS 76, a school in Harlem, a French-made Nao robot nicknamed Projo helps students improve their math skills. It’s small, about the size of a stuffed animal, and sits by a computer to assist students working on math and science problems online.
Sandra Okita, a teacher at the school, told The Wall Street Journal the robot gauges how students interact with non-human teachers. The students have taken to the humanoid robotic peer, which can speak and react, saying it’s helpful and gives the right amount of hints to help them get their work done.
2. Aiding children with autism
The Nao robot also helps improve social interaction and communication for children with autism. The robots were introduced in a classroom in Birmingham, England, in 2012 to play with children in elementary school. Though the children were intimidated at first, they’ve taken to their robotic friend, according to The Telegraph.
3. VGo robot for ill children

Sick students will never have to miss class again if the VGo robot catches on. Created by VGo Communications, the rolling robot has a webcam and can be controlled and operated remotely via computer. About 30 students with special needs nationwide have been using the robot to attend classes.
For example, a 12-year-old Texas student with leukemia kept up with classmates by using a VGo robot. With a price tag of about $6,000, the robots aren’t easily accessible, but they’re a promising sign of what’s to come.

4. Robots over teachers
In the South Korean town of Masan, robots are starting to replace teachers entirely. The government started using the robots to teach students English in 2010. The robots operate under supervision, but the plan is to have them lead classrooms on their own within a few years, as robot technology develops.
5. Virtual teachers

South Korea isn’t the only place getting virtual teachers. A school in Kodiak, Alaska, has started using telepresence robots to beam teachers into the classroom. The tall, rolling robots have iPads attached to the top, which teachers use to video-chat with students.
The Kodiak Island Borough School District’s superintendent, Stewart McDonald, told The Washington Times he was inspired by the TV show The Big Bang Theory, which features a similar robot. Each robot costs about $2,000; the school bought 12 in early 2014.


DARPA Project Starts Building Human Memory Prosthetics

By Eliza Strickland
Posted 27 Aug 2014
The first memory-enhancing devices could be implanted within four years
Photo: Lawrence Livermore National Laboratory
Remember This? Lawrence Livermore engineer Vanessa Tolosa holds up a silicon wafer containing micromachined implantable neural devices for use in experimental memory prostheses.
“They’re trying to do 20 years of research in 4 years,” says Michael Kahana in a tone that’s a mixture of excitement and disbelief. Kahana, director of the Computational Memory Lab at the University of Pennsylvania, is mulling over the tall order from the U.S. Defense Advanced Research Projects Agency (DARPA). In the next four years, he and other researchers are charged with understanding the neuroscience of memory and then building a prosthetic memory device that’s ready for implantation in a human brain.
DARPA’s first contracts under its Restoring Active Memory (RAM) program challenge two research groups to construct implants for veterans with traumatic brain injuries that have impaired their memories. Over 270,000 U.S. military service members have suffered such injuries since 2000, according to DARPA, and there are no truly effective drug treatments. This program builds on an earlier DARPA initiative focused on building a memory prosthesis, under which a different group of researchers had dramatic success in improving recall in mice and monkeys.
Kahana’s team will start by searching for biological markers of memory formation and retrieval. For this early research, the test subjects will be hospitalized epilepsy patients who have already had electrodes implanted to allow doctors to study their seizures. Kahana will record the electrical activity in these patients’ brains while they take memory tests.
“The memory is like a search engine,” Kahana says. “In the initial memory encoding, each event has to be tagged. Then in retrieval, you need to be able to search effectively using those tags.” He hopes to find the electric signals associated with these two operations.
Once they’ve found the signals, researchers will try amplifying them using sophisticated neural stimulation devices. Here Kahana is working with the medical device maker Medtronic, in Minneapolis, which has already developed one experimental implant that can both record neural activity and stimulate the brain. Researchers have long wanted such a “closed-loop” device, as it can use real-time signals from the brain to define the stimulation parameters.
Kahana notes that designing such closed-loop systems poses a major engineering challenge. Recording natural neural activity is difficult when stimulation introduces new electrical signals, so the device must have special circuitry that allows it to quickly switch between the two functions. What’s more, the recorded information must be interpreted with blistering speed so it can be translated into a stimulation command. “We need to take analyses that used to occupy a personal computer for several hours and boil them down to a 10-millisecond algorithm,” he says.
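The constraint Kahana describes can be sketched as a hard timing budget around the decoding step. Everything in this sketch is illustrative: the function names, the trivial threshold "decoder," and the sample format are assumptions, not Medtronic's actual device API.

```python
# Hedged sketch of a closed-loop constraint: each cycle must record,
# decode, and decide on stimulation within a ~10 ms budget.
import time

BUDGET_S = 0.010   # 10 ms per closed-loop cycle, per the article

def decode(samples):
    # Placeholder decoder: a real system would classify the neural
    # signature of successful vs. failing memory encoding here.
    # We flag "weak encoding" when the mean signal is low.
    return sum(samples) / len(samples) < 0.5

def closed_loop_cycle(samples):
    # Time the decode step and refuse to run if it blows the budget;
    # a real implant would instead be engineered so it never does.
    t0 = time.perf_counter()
    stimulate = decode(samples)
    elapsed = time.perf_counter() - t0
    if elapsed > BUDGET_S:
        raise RuntimeError("decoder too slow for the control loop")
    return stimulate
```

The design pressure is the same one Kahana names: any analysis in `decode` that takes hours on a PC is useless here, because the stimulation decision must land inside the budget on every cycle.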
In four years’ time, Kahana hopes his team can show that such systems reliably improve memory in patients who are already undergoing brain surgery for epilepsy or Parkinson’s. That, he says, will lay the groundwork for future experiments in which medical researchers can try out the hardware in people with traumatic brain injuries—people who would not normally receive invasive neurosurgery.
The second research team is led by Itzhak Fried, director of the Cognitive Neurophysiology Laboratory at the University of California, Los Angeles. Fried’s team will focus on a part of the brain called the entorhinal cortex, which is the gateway to the hippocampus, the primary brain region associated with memory formation and storage. “Our approach to the RAM program is homing in on this circuit, which is really the golden circuit of memory,” Fried says. In a 2012 experiment, he showed that stimulating the entorhinal regions of patients while they were learning memory tasks improved their performance.
Fried’s group is working with Lawrence Livermore National Laboratory, in California, to develop more closed-loop hardware. At Livermore’s Center for Bioengineering, researchers are leveraging semiconductor manufacturing techniques to make tiny implantable systems. They first print microelectrodes on a polymer that sits atop a silicon wafer, then peel the polymer off and mold it into flexible cylinders about 1 millimeter in diameter. The memory prosthesis will have two of these cylindrical arrays, each studded with up to 64 hair-thin electrodes, which will be capable of both recording the activity of individual neurons and stimulating them. Fried believes his team’s device will be ready for tryout in patients with traumatic brain injuries within the four-year span of the RAM program.
Outside observers say the program’s goals are remarkably ambitious. Yet Steven Hyman, director of psychiatric research at the Broad Institute of MIT and Harvard, applauds its reach. “The kind of hardware that DARPA is interested in developing would be an extraordinary advance for the whole field,” he says. Hyman says DARPA’s funding for device development fills a gap in existing research. Pharmaceutical companies have found few new approaches to treating psychiatric and neurodegenerative disorders in recent years, he notes, and have therefore scaled back drug discovery efforts. “I think that approaches that involve devices and neuromodulation have greater near-term promise,” he says.
This article originally appeared in print as “Making a Human Memory Chip.”

Everybody Relax: An MIT Economist Explains Why Robots Won’t Steal Our Jobs

Living together in harmony. Photo by Oli Scarff/Getty Images
If you’ve ever found yourself fretting about the possibility that software and robotics are on the verge of thieving away all our jobs, renowned MIT labor economist David Autor is out with a new paper that might ease your nerves. Presented Friday at the Federal Reserve Bank of Kansas City’s big annual conference in Jackson Hole, Wyoming, the paper argues that humanity still has two big points in its favor: People have “common sense,” and they’re “flexible.”
Neil Irwin already has a lovely writeup of the paper at the New York Times, but let’s run down the basics. There’s no question machines are getting smarter, and quickly acquiring the ability to perform work that once seemed uniquely human. Think self-driving cars that might one day threaten cabbies, or computer programs that can handle the basics of legal research.
But artificial intelligence is still just that: artificial. We haven’t untangled all the mysteries of human judgment, and programmers definitely can’t translate the way we think entirely into code. Instead, scientists at the forefront of AI have found workarounds like machine-learning algorithms. As Autor points out, a computer might not have any abstract concept of a chair, but show it enough Ikea catalogs, and it can eventually suss out the physical properties statistically associated with a seat. Fortunately for you and me, this approach still has its limits.
For example, both a toilet and a traffic cone look somewhat like a chair, but a bit of reasoning about their shapes vis-à-vis the human anatomy suggests that a traffic cone is unlikely to make a comfortable seat. Drawing this inference, however, requires reasoning about what an object is “for” not simply what it looks like. Contemporary object recognition programs do not, for the most part, take this reasoning-based approach to identifying objects, likely because the task of developing and generalizing the approach to a large set of objects would be extremely challenging.
That’s what Autor means when he says machines lack for common sense. They don’t think. They just do math.
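Autor's point can be caricatured in a few lines. The features, objects, and numbers below are entirely hypothetical, and the classifier is a deliberately crude nearest-centroid rule; the sketch only shows that a model trained on surface statistics will accept anything chair-shaped, since nothing in its features encodes what a chair is for.

```python
# Caricature of purely statistical recognition: a "chair detector"
# that knows only two hypothetical shape features and nothing about
# sitting. Features: (height_m, base_width_m).
import math

chairs = [(0.9, 0.45), (1.0, 0.5), (0.85, 0.4)]    # training "chairs"
not_chairs = [(0.05, 0.25), (1.9, 0.25)]           # a book, a floor lamp

def centroid(rows):
    # Mean of each feature column: the class's statistical prototype.
    return tuple(sum(col) / len(rows) for col in zip(*rows))

def is_chair(x):
    # Label by whichever class prototype is nearer in feature space.
    return math.dist(x, centroid(chairs)) < math.dist(x, centroid(not_chairs))

cone = (0.7, 0.35)   # a traffic cone: roughly chair-sized
print(is_chair(cone))  # True: the statistics say "chair"
```

The detector gets the cone wrong precisely because, as Autor argues, it reasons about what objects look like, never about what they are for.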
And that leaves lots of room for human workers in the future.
Technology has already whittled away at middle-class jobs, from factory workers replaced by robotic arms to secretaries made redundant by Outlook, over the past few decades. But Autor argues that plenty of today’s middle-skill occupations, such as construction trades and medical technicians, will stick around, because “many of the tasks currently bundled into these jobs cannot readily be unbundled … without a substantial drop in quality.”
These aren’t jobs that require performing a single task over and over again, but instead demand that employees handle some technical work while dealing with other human beings and improvising their way through unexpected problems. Machine-learning algorithms can’t handle all of that. Human beings, Swiss Army knives that we are, can. We’re flexible.
Just like the dystopian arguments that machines are about to replace a vast swath of the workforce, Autor’s paper is very much speculative. It’s worth highlighting, though, because it cuts through the silly sense of inevitability that sometimes clouds this subject. Predictions about the future of technology and the economy are made to be dashed. And while Noah Smith makes a good point that we might want to be prepared for mass, technology-driven unemployment even if there’s just a slim chance of it happening, there’s also no reason to take it for granted.
Jordan Weissmann is Slate’s senior business and economics correspondent.