Are Telepathy Experiments Stunts, or Science?

November 21, 2014
Scientists have established direct communication between two human brains, but is it more than a stunt?
WHY IT MATTERS
Communicating directly with the brain could help scientists better understand how it encodes information.
Two scientific teams this year patched together some well-known technologies to directly exchange information between human brains.
The projects, in the U.S. and Europe, appear to represent the first occasions in history that any two people have transmitted information without either of them speaking or moving any muscle. For now, however, the “telepathy” technology remains so crude that it’s unlikely to have any practical impact.
In a paper published last week in the journal PLOS One, neuroscientists and computer engineers at the University of Washington in Seattle described a brain-to-brain interface they built that lets two people coöperatively play a simple video game. Earlier this year, a company in Barcelona called Starlab described transmitting short words like “ciao,” encoded as binary digits, between the brains of individuals on different continents.
Both studies used a similar setup: the sender of the message wore an EEG (electroencephalography) cap that captured electrical signals generated by his cortex while he thought about moving his hands or feet. These signals were then sent over the Internet to a computer that translated them into jolts delivered to a recipient’s brain using a magnetic coil. In Starlab’s case, the recipient perceived a flash of light. In the University of Washington’s case, the magnetic pulse caused an involuntary twitch of the wrist over a touchpad, to shoot a rocket in a computer game.
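Conceptually, the pipeline in both experiments reduces to three steps: classify an EEG signal on the sender’s side, ship a yes/no decision over the network, and trigger a stimulator on the receiver’s side. The sketch below illustrates that flow; the channel, frequency band, threshold, addresses, and the stim_tms() callback are hypothetical placeholders, not code from either group.

```python
import socket
import numpy as np

MU_BAND = (8, 12)                 # assumed mu-band (Hz) over motor cortex; illustrative only
THRESHOLD = 0.5                   # arbitrary band-power threshold
RECEIVER = ("192.0.2.10", 9000)   # example address, not a real endpoint

def band_power(window, fs, band):
    """Average spectral power of one EEG channel within a frequency band."""
    freqs = np.fft.rfftfreq(len(window), 1.0 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def sender_loop(read_eeg_window, fs=256):
    """Send a '1' whenever imagined movement suppresses mu-band power."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        window = read_eeg_window()                    # hypothetical EEG acquisition callback
        if band_power(window, fs, MU_BAND) < THRESHOLD:
            sock.sendto(b"1", RECEIVER)

def receiver_loop(stim_tms):
    """Trigger the TMS coil (phosphene flash or wrist twitch) on each received '1'."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", RECEIVER[1]))
    while True:
        data, _ = sock.recvfrom(16)
        if data == b"1":
            stim_tms()                                # hypothetical stimulator trigger
```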
Neither EEG recording nor this kind of brain stimulation (called transcranial magnetic stimulation, or TMS) is a new technology. What is novel is bringing the two together for the purposes of simple communication. The Starlab researchers suggested that such “hyperinteraction technologies” could “eventually have a profound impact on the social structure of our civilization.”
For now, however, the technology remains extremely limited. Neither experiment transmitted emotions, thoughts, or ideas. Instead they used human brains essentially as relays to convey a simple signal between two computers. The rate at which information was transmitted was also glacial.
Safety guidelines limit the use of TMS devices to a single pulse every 20 seconds. But even without that restriction, a person can only transmit a few bits of information per minute wearing an EEG cap, because willfully changing the shape of their brain wave takes deliberate concentration.
By comparison, human speech conveys information at roughly 3,000 bits per minute, according to one estimate. That means the information content of a 90-second conversation would take a day or more to transmit mentally.
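That estimate is easy to reproduce. Assuming the EEG-to-TMS channel carries roughly one bit per 20-second pulse (about 3 bits per minute, consistent with the safety limit above), the arithmetic runs as follows:

```python
speech_bits_per_min = 3000                          # the estimate cited above
conversation_bits = speech_bits_per_min * 1.5       # a 90-second conversation: 4,500 bits

brain_bits_per_min = 3                              # assumed: ~1 bit per 20-second TMS pulse
transfer_minutes = conversation_bits / brain_bits_per_min   # 1,500 minutes
print(transfer_minutes / 60)                        # -> 25.0 hours, i.e. roughly a day
```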
Researchers intend to explore more precise, and faster, ways of conveying information. Andreas Stocco, one of the University of Washington researchers, says his team has a $1 million grant from the WM Keck Foundation to upgrade its equipment and to carry out experiments with different ways of exchanging information between minds, including with focused ultrasound waves that can stimulate nerves through the skull.
Stocco says an important use of the technology would be to help scientists test their ideas about how neurons in the brain represent information, especially about abstract concepts. For instance, if a researcher believed she could identify the neuronal pattern reflecting, say, the idea of a yellow airplane, one way to prove it would be to transmit that pattern to another person and ask what she was thinking.
“You can see this interface as two different things,” says Stocco. “One is a super-cool toy that we have developed because it’s futuristic and an engineering feat but that doesn’t produce science. The other is, in the future, the ultimate way to test hypotheses about how the brain encodes information.”

Pathway Genomics: Bringing Watson’s Smarts to Personal Health and Fitness

ORIGINAL: A Smarter Planet
November, 12th 2014
By Michael Nova M.D.
Michael Nova, Chief Medical Officer, Pathway Genomics
To describe me as a health nut would be a gross understatement. I run five days a week, bench press 275 pounds, do 120 pushups at a time, and surf the really big waves in Indonesia. I don’t eat red meat, I typically have berries for breakfast and salad for dinner, and I consume an immense amount of kale—even though I don’t like the way it tastes. My daily vitamin/supplement regimen includes Alpha-lipoic acid, Coenzyme Q and Resveratrol. And, yes, I wear one of those fitness gizmos around my neck to count how many steps I take in a day.
I have been following this regimen for years, and it’s an essential part of my life.
For anybody concerned about health, diet and fitness, these are truly amazing times. There’s a superabundance of health and fitness information published online. We’re able to tap into our electronic health records, we can measure just about everything we do physically, and, thanks to the plummeting price of gene sequencing, we can map our complete genomes for as little as $3000 and get readings on smaller chunks of genomic data for less than $100.
Think of it as your own personal health big-data tsunami.
The problem is we’re confronted with way too much of a good thing. There’s no way an individual like me or you can process all of the raw information that’s available to us—much less make sense out of it. That’s why I’m looking forward to being one of the first customers for a new mobile app that my company, Pathway Genomics, is developing with help from IBM Watson Group.
Surfing in Indonesia
Called Pathway Panorama, the smartphone app will make it possible for individuals to ask questions in everyday language and get answers in less than three seconds that take into consideration their personal health, diet and fitness scenarios combined with more general information. The result is recommendations that fit each of us like a surfer’s wet suit. Say you’ve just flown from your house on the coast to a city that’s 10,000 feet above sea level. You might want to ask how far you could safely run on your first day after getting off the plane—and at what pulse rate should you slow your jogging pace.
Or say you’re diabetic and you’re in a city you have never visited before. You had a pastry for breakfast and you want to know when you should take your next shot of insulin. In an emergency, you’ll be able to find specialized healthcare providers near where you are who can take care of you.
Whether you’re totally healthy and want to maximize your physical performance or you have health issues and want to reduce risks, this service will give you the advice you need. It’s like a guardian angel sitting on your shoulder who will also pre-emptively offer you help even if you don’t ask for it.
We use Watson’s language processing and cognitive abilities and combine them with information from a host of sources. The critical data comes from individual DNA and biomarker analysis that Pathway Genomics performs using a variety of devices and software tools.
Pathway Genomics, which launched 6 years ago in San Diego, already has a growing business of providing individual health reports delivered primarily through individuals’ personal physicians. With our Pathway Panorama app, we’ll reach out directly to consumers in a big way.
We’re in the middle of raising a new round of venture financing to pay for the expansion of our business. This brings to $80 million the amount of venture capital we have raised in the past six years—which makes us one of the best capitalized healthcare startups.
IBM is investing in Pathway Genomics as part of its commitment of $100 million to companies that are bringing to market a new generation of apps and services infused with Watson’s cognitive computing intelligence. This is the third such investment IBM has made this year.
We expect the app to be available in mid-2015. We have not yet set pricing, but we expect to charge a small monthly fee. We also are creating a version for physicians.
To me, the real beauty of the Panorama app is that it will make it possible for us to safeguard our health and improve our fitness without obsessing all the time. We’ll just live our lives, and, when we need help, we’ll get it.
——-
To learn more about the new era of computing, read Smart Machines: IBM’s Watson and the Era of Cognitive Computing.

A Worm’s Mind In A Lego Body

ORIGINAL: i-Programmer
Written by Lucy Black
16 November 2014
Take the connectome of a worm and transplant it as software in a Lego Mindstorms EV3 robot – what happens next?
It is a deep and long-standing philosophical question. Are we just the sum of our neural networks? Of course, if you work in AI you take the answer mostly for granted, but until someone builds a human brain and switches it on we really don’t have a concrete example of the principle in action.
The nematode worm Caenorhabditis elegans (C. elegans) is tiny and only has 302 neurons. These have been completely mapped and the OpenWorm project is working to build a complete simulation of the worm in software. One of the founders of the OpenWorm project, Timothy Busbice, has taken the connectome and implemented an object oriented neuron program.
The model is accurate in its connections and makes use of UDP packets to fire neurons. If two neurons have three synaptic connections then when the first neuron fires a UDP packet is sent to the second neuron with the payload “3”. The neurons are addressed by IP and port number. The system uses an integrate and fire algorithm. Each neuron sums the weights and fires if it exceeds a threshold. The accumulator is zeroed if no message arrives in a 200ms window or if the neuron fires. This is similar to what happens in the real neural network, but not exact.
The software works with sensors and effectors provided by a simple LEGO robot. The sensors are sampled every 100ms. For example, the sonar sensor on the robot is wired as the worm’s nose. If anything comes within 20cm of the “nose” then UDP packets are sent to the sensory neurons in the network.
The same idea is applied to the 95 motor neurons but these are mapped from the two rows of muscles on the left and right to the left and right motors on the robot. The motor signals are accumulated and applied to control the speed of each motor. The motor neurons can be excitatory or inhibitory and positive and negative weights are used.
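A minimal sketch of that scheme is given below: each neuron is an independent process addressed by IP and port, sums incoming synaptic weights, and fires when its accumulator crosses a threshold, resetting after firing or after a 200 ms quiet window. The port numbers, threshold, and sensor callback here are illustrative stand-ins, not the actual OpenWorm or robot code.

```python
import socket
import time

THRESHOLD = 30        # illustrative firing threshold, not the robot's actual value
WINDOW_S = 0.2        # the 200 ms integration window described above

def neuron(my_port, downstream):
    """One integrate-and-fire neuron; `downstream` maps (host, port) -> synaptic weight."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", my_port))
    sock.settimeout(WINDOW_S)
    accumulator = 0
    while True:
        try:
            data, _ = sock.recvfrom(16)
            accumulator += int(data)          # payload is the connection weight, e.g. b"3"
        except socket.timeout:
            accumulator = 0                   # nothing arrived for 200 ms: reset
            continue
        if accumulator >= THRESHOLD:
            for addr, weight in downstream.items():
                sock.sendto(str(weight).encode(), addr)   # fire: one packet per connection
            accumulator = 0                   # reset after firing

def nose_loop(sonar_distance_cm, sensory_addrs):
    """Sampled every 100 ms: anything within 20 cm of the 'nose' stimulates sensory neurons."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        if sonar_distance_cm() < 20:
            for addr in sensory_addrs:
                sock.sendto(b"1", addr)
        time.sleep(0.1)
```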
And the result?
It is claimed that the robot behaved in ways that are similar to observed C. elegans. Stimulation of the nose stopped forward motion. Touching the anterior and posterior touch sensors made the robot move forward and back accordingly. Stimulating the food sensor made the robot move forward.
Watch the video to see it in action. 
The key point is that there was no programming or learning involved to create the behaviors. The connectome of the worm was mapped and implemented as a software system and the behaviors emerge.
The connectome may only consist of 302 neurons, but it is self-stimulating, and it is difficult to understand how it works – but it does.
Currently the connectome model is being transferred to a Raspberry Pi and a self-contained Pi robot is being constructed. It is suggested that it might have practical application as some sort of mobile sensor – exploring its environment and reporting back results. Given its limited range of behaviors, it seems unlikely to be of practical value, but given more neurons this might change.
  • Is the robot a C. elegans in a different body or is it something quite new? 
  • Is it alive?
These are questions for philosophers, but it does suggest that the ghost in the machine is just the machine.
For us AI researchers, the open question is whether the principle of implementing a connectome scales.

IBM’s new email app learns your habits to help get things done

Email can be overwhelming, especially at work; it can take a while to get back to an important conversation or project. IBM clearly knows how bad that deluge can be, though, since its new Verse email client is built to eliminate as much clutter as possible. The app learns your habits and puts the highest-priority people and tasks at the top level. You’ll know if a key team member emailed you during lunch, or that you have a meeting in 10 minutes. Verse also puts a much heavier emphasis on collaboration and search. It’s easier to find a particular file, message or topic, and there will even be a future option to get answers from a Watson thinking supercomputer — you may get insights without having to speak to a colleague across the hall.
It’s quite clever at first glance, although you may have to wait a while to give it a spin; a Verse beta on the desktop will be available this month, but only to a handful of IBM’s customers and partners. You’ll have to wait until the first quarter of 2015 to get a version built for individual use. It’ll be “freemium” (free with paid add-ons) when it does reach the public, however, and there are promises of apps for Android and iOS to make sure you’re productive while on the road.
SOURCE: IBM (1), (2)
ORIGINAL: Engadget
November 18th 2014

Machine Learning Algorithm Ranks the World’s Most Notable Authors

ORIGINAL: Tech Review
November 17, 2014
Deciding which books to digitise when they enter the public domain is tricky, unless you have an independent ranking of the most notable authors.
Public Domain Day, 1 January, is the day on which previously copyrighted works become freely available to print, digitise, modify or re-use in more or less any way. In most countries, this happens 50 or 70 years after the death of the author.
There is even a website that celebrates this event, announcing all the most notable authors whose works become freely available on that day. This allows organisations such as Project Gutenberg to prepare digital editions and LibriVox to create audio versions, and so on.
But here’s an interesting question. While the works of thousands of authors enter the public domain each year, only a small percentage of these end up being widely available. So how to choose the ones to focus on?
Today, Allen Riddell at Dartmouth College in New Hampshire, says he has the answer. Riddell has developed an algorithm that automatically generates an independent ranking of notable authors for a given year. It is then a simple task to pick the works to focus on or to spot notable omissions from the past.
Riddell’s approach is to look at what kind of public domain content the world has focused on in the past and then use this as a guide to find content that people are likely to focus on in the future. For this he uses a machine learning algorithm to mine two databases. The first is a list of over a million online books in the public domain maintained by the University of Pennsylvania. The second is Wikipedia.
Riddell begins with the Wikipedia entries of all authors in the English language edition—more than a million of them. His algorithm extracts information such as the article length, article age, estimated views per day, time elapsed since last revision and so on.
The algorithm then takes the list of all authors on the online book database and looks for a correlation between the biographical details on Wikipedia and the existence of a digital edition in the public domain.
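In outline, the method amounts to fitting a model that predicts, from an author’s Wikipedia features, whether a public-domain digital edition of that author’s work already exists, and then ranking every author by the model’s score. The sketch below illustrates the idea with a logistic regression on made-up features; it is not Riddell’s actual code, feature set, or model.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Hypothetical feature matrix: one row per Wikipedia author, with columns such as
# article length, article age (days), estimated views per day, days since last revision.
X = np.array([
    [52000, 4900, 1200.0,   3],   # e.g. a very prominent author
    [ 1400, 2100,    2.5, 400],   # e.g. an obscure one
    [ 9800, 3600,   40.0,  60],
    [  300,  900,    0.8, 700],
])
# Label: 1 if a public-domain digital edition of the author's work already exists.
y = np.array([1, 0, 1, 0])

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# The "public domain ranking": order every author by the model's score, highest first.
scores = model.predict_proba(X)[:, 1]
ranking = np.argsort(-scores)
print(ranking)   # indices of authors from most to least "notable" under this toy model
```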
That produces a “public domain ranking” of all the authors that appear on Wikipedia. For example, the author Virginia Woolf has a ranking of 1,081 out of 1,011,304 while the Italian painter Giuseppe Amisani, who died in the same year as Woolf, has a ranking of 580,363. So Riddell’s new ranking clearly suggests that organisations like Project Gutenberg should focus more on digitising Woolf’s work than Amisani’s.
The beauty of this approach is that it is entirely independent. That’s in stark contrast to the committees that are often set up to rank works subjectively.
Of the individuals who died in 1965 and whose work will enter the public domain next January in many parts of the world, the new algorithm picks out T S Eliot as the most highly ranked individual. Others highly ranked include Somerset Maugham, Winston Churchill and Malcolm X.
As well as by year of death, it’s possible to rank authors according to categories of interest. For example, the top-ranked Mexican poet is Homero Aridjis, the top-ranked French philosopher is Jean-Paul Sartre, and the top-ranked female American writer is Terri Windling.
Riddell says his ranking system compares well with existing rankings compiled by human experts, such as one compiled by the editorial board of the Modern Library. “The Public Domain Rank of the authors selected by the Modern Library editorial board are consistently high,” he says.
It is not perfect, however. Riddell acknowledges that his new Public Domain Ranking is likely to reflect the biases inherent in Wikipedia, which is well known for having few female editors, for example.
But with that in mind, the ranking is still likely to be useful. It should be handy for finding notable authors in the public domain whose works are not yet available electronically because they have somehow been overlooked. “Flannery O’Connor and Sylvia Plath stand out as significant examples of authors whose works might be made available today on Project Gutenberg Canada,” says Riddell. (Canada follows the 50-year rule rather than 70).
It may even change the nature of Public Domain Day. “Public Domain Rank promises to facilitate—and even automate—Public Domain Day,” says Riddell.
Handy!
Ref: arxiv.org/abs/1411.2180 Public Domain Rank: Identifying Notable Individuals with the Wisdom of the Crowd

Robot Brains Catch Humans in 25 Years, Then Speed Right On By

ORIGINAL: Bloomberg
By Tom Randall
Nov 10, 2014

 

An android Repliee S1, produced by Japan’s Osaka University professor Hiroshi Ishiguro, performing during a dress rehearsal of Franz Kafka‘s “The Metamorphosis.” Photographer: Yoshikazu Tsuno/AFP via Getty Images
We’ve been wrong about these robots before.
Soon after modern computers evolved in the 1940s, futurists started predicting that in just a few decades machines would be as smart as humans. Every year, the prediction seems to get pushed back another year. The consensus now is that it’s going to happen in … you guessed it, just a few more decades.
There’s more reason to believe the predictions today. After research that’s produced everything from self-driving cars to Jeopardy!-winning supercomputers, scientists have a much better understanding of what they’re up against. And, perhaps, what we’re up against.
Nick Bostrom, director of the Future of Humanity Institute at Oxford University, lays out the best predictions of the artificial intelligence (AI) research community in his new book, “Superintelligence: Paths, Dangers, Strategies.” Here are the combined results of four surveys of AI researchers, including a poll of the most-cited scientists in the field, totalling 170 respondents.
Human-level machine intelligence is defined here as “one that can carry out most human professions at least as well as a typical human.”
By that definition, maybe we shouldn’t be so surprised about these predictions. Robots and algorithms are already squeezing the edges of our global workforce. Jobs with routine tasks are getting digitized: farmers, telemarketers, stock traders, loan officers, lawyers, journalists — all of these professions have already felt the cold steel nudge of our new automated colleagues.
Replication of routine isn’t the kind of intelligence Bostrom is interested in. He’s talking about an intelligence with intuition and logic, one that can learn, deal with uncertainty and sense the world around it. The most interesting thing about reaching human-level intelligence isn’t the achievement itself, says Bostrom; it’s what comes next. Once machines can reason and improve themselves, the skynet is the limit.
Computers are improving at an exponential rate. In many areas — chess, for example — machine skill is already superhuman. In others — reason, emotional intelligence — there’s still a long way to go. Whether human-level general intelligence is reached in 15 years or 150, it’s likely to be a little-observed mile marker on the road toward superintelligence.
Superintelligence: one that “greatly exceeds the cognitive performance of humans in virtually all domains of interest.”
Inventor and Tesla CEO Elon Musk warns that superintelligent machines are possibly the greatest existential threat to humanity. He says the investments he’s made in artificial-intelligence companies are primarily to keep an eye on where the field is headed.
“Hope we’re not just the biological boot loader for digital superintelligence,” Musk tweeted in August. “Unfortunately, that is increasingly probable.”
There are lots of caveats before we prepare to hand the keys to our earthly kingdom over to robot offspring.

  • First, humans have a terrible track record of predicting the future. 
  • Second, people are notoriously optimistic when forecasting the future of their own industries. 
  • Third, it’s not a given that technology will continue to advance along its current trajectory, or even with its current aims.
Still, the brightest minds devoted to this evolving technology are predicting the end of human intellectual supremacy by midcentury. That should be enough to give everyone pause. The direction of technology may be inevitable, but the care with which we approach it is not.
“Success in creating AI would be the biggest event in human history,” wrote theoretical physicist Stephen Hawking, in an Independent column in May. “It might also be the last.”

The Myth Of AI. A Conversation with Jaron Lanier

ORIGINAL: EDGE

11.14.14

The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even from before. There’s always been a question about whether a program is something alive or not since it intrinsically has some kind of autonomy at the very least, or it wouldn’t be a program. There has been a domineering subculture—that’s been the most wealthy, prolific, and influential subculture in the technical world—that for a long time has not only promoted the idea that there’s an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we’re inevitably making computers that will be smarter and better than us and will take over from us. 



That mythology, in turn, has spurred a reactionary, perpetual spasm from people who are horrified by what they hear. You’ll have a figure say, “The computers will take over the Earth, but that’s a good thing, because people had their chance and now we should give it to the machines.” Then you’ll have other people say, “Oh, that’s horrible, we must stop these computers.” Most recently, some of the most beloved and respected figures in the tech and science world, including Stephen Hawking and Elon Musk, have taken that position of: “Oh my God, these things are an existential threat. They must be stopped.”

In the history of organized religion, it’s often been the case that people have been disempowered precisely to serve what was perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity. … That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allows the data schemes to operate, contributing to the fortunes of whoever runs the computers. You’re saying, “Well, but they’re helping the AI, it’s not us, they’re helping the AI.” It reminds me of somebody saying, “Oh, build these pyramids, it’s in the service of this deity,” and, on the ground, it’s in the service of an elite. It’s an economic effect of the new idea. The new religious idea of AI is a lot like the economic effect of the old idea, religion.

 

JARON LANIER is a Computer Scientist; Musician; Author of Who Owns the Future?
INTRODUCTION
This past weekend, during a trip to San Francisco, Jaron Lanier stopped by to talk to me for an Edge feature. He had something on his mind: news reports about comments by Elon Musk and Stephen Hawking, two of the most highly respected and distinguished members of the science and technology community, on the dangers of AI. (“Elon Musk, Stephen Hawking and fearing the machine” by Alan Wastler, CNBC 6.21.14). He then talked, uninterrupted, for an hour.
As Lanier was about to depart, John Markoff, the Pulitzer Prize-winning technology correspondent for THE NEW YORK TIMES, arrived. Informed of the topic of the previous hour’s conversation, he said, “I have a piece in the paper next week. Read it.” A few days later, his article, “Fearing Bombs That Can Pick Whom to Kill” (11.12.14), appeared on the front page. It’s one of a continuing series of articles by Markoff pointing to the darker side of the digital revolution.
This is hardly new territory. Cambridge cosmologist Martin Rees, the former Astronomer Royal and President of the Royal Society, addressed similar topics in his 2004 book, Our Final Hour: A Scientist’s Warning, as did computer scientist, Bill Joy, co-founder of Sun Microsystems, in his highly influential 2000 article in Wired, “Why The Future Doesn’t Need Us: Our most powerful 21st-century technologies — robotics, genetic engineering, and nanotech — are threatening to make humans an endangered species”.
But these topics are back on the table again, and informing the conversation in part is Superintelligence: Paths, Dangers, Strategies, the recently published book by Nick Bostrom, founding director of Oxford University’s Future of Humanity Institute. In his book, Bostrom asks questions such as “What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?”
I am encouraging, and hope to publish, a Reality Club conversation, with comments (up to 500 words) on, but not limited to, Lanier’s piece. This is a very broad topic that involves many different scientific fields and I am sure the Edgies will have lots of interesting things to say.
—JB
THE MYTH OF AI (Transcript)
A lot of us were appalled a few years ago when the American Supreme Court decided, out of the blue, to decide a question it hadn’t been asked to decide, and declare that corporations are people. That’s a cover for making it easier for big money to have an influence in politics. But there’s another angle to it, which I don’t think has been considered as much: the tech companies, which are becoming the most profitable, the fastest rising, the richest companies, with the most cash on hand, are essentially people for a different reason than that. They might be people because the Supreme Court said so, but they’re essentially algorithms.



10 IBM Watson-Powered Apps That Are Changing Our World

ORIGINAL: CIO
Nov 6, 2014
By IBM 

 

IBM is investing $1 billion in its IBM Watson Group with the aim of creating an ecosystem of startups and businesses building cognitive computing applications with Watson. Here are 10 examples that are making an impact.
IBM considers Watson to represent a new era of computing — a step forward to cognitive computing, where apps and systems interact with humans via natural language and help us augment our own understanding of the world with big data insights.
Big Blue isn’t playing small ball with that claim. It has opened a new IBM Watson Global Headquarters in the heart of New York City’s Silicon Alley and is investing $1 billion into the Watson Group, focusing on development and research as well as bringing cloud-delivered cognitive applications and services to the market. That includes $100 million available for venture investments to support IBM’s ecosystem of start-ups and businesses building cognitive apps with Watson.
Here are 10 examples of Watson-powered cognitive apps that are already starting to shake things up.
USAA and Watson Help Military Members Transition to Civilian Life
USAA, a financial services firm dedicated to those who serve or have served in the military, has turned to IBM’s Watson Engagement Advisor in a pilot program to help military men and women transition to civilian life.
According to the U.S. Bureau of Labor Statistics, about 155,000 active military members transition to civilian life each year. This process can raise many questions, like “Can I be in the reserve and collect veteran’s compensation benefits?” or “How do I make the most of the Post-9/11 GI Bill?” Watson has analyzed and understands more than 3,000 documents on topics exclusive to military transitions, allowing members to ask it questions and receive answers specific to their needs.

LifeLearn Sofie is an intelligent treatment support tool for veterinarians of all backgrounds and levels of experience. Sofie is powered by IBM Watson™, the world’s leading cognitive computing system. She can understand and process natural language, enabling interactions that are more aligned with how humans think and interact.

Healthcare

Helping doctors identify treatment options
The challenge
According to one expert, only 20 percent of the knowledge physicians use to diagnose and treat patients today is evidence based, which means that one in five diagnoses is incorrect or incomplete.



fMRI Data Reveals the Number of Parallel Processes Running in the Brain

ORIGINAL: Tech Review
November 5, 2014
The human brain carries out many tasks at the same time, but how many? Now fMRI data has revealed just how parallel gray matter is.
The human brain is often described as a massively parallel computing machine. That raises an interesting question: just how parallel is it?
Today, we get an answer thanks to the work of Harris Georgiou at the National and Kapodistrian University of Athens in Greece, who has counted the number of “CPU cores” at work in the brain as it performs simple tasks in a functional magnetic resonance imaging (fMRI) machine. The answer could help lead to computers that better match the performance of the human brain.
The brain itself consists of around 100 billion neurons that each make up to 10,000 connections with their neighbors. All of this is packed into a structure the size of a party cake and operates at a peak power of only 20 watts, a level of performance that computer scientists observe with unconcealed envy.
fMRI machines reveal this activity by measuring changes in the levels of oxygen in the blood passing through the brain. The thinking is that more active areas use more oxygen so oxygen depletion is a sign of brain activity.
Typically, fMRI machines divide the brain into three-dimensional pixels called voxels, each about five cubic millimeters in size. The complete activity of the brain at any instant can be recorded using a three-dimensional grid of 60 x 60 x 30 voxels. These measurements are repeated every second or so, usually for tasks lasting two or three minutes. The result is a dataset of around 30 million data points.
Georgiou’s work is in determining the number of independent processes at work within this vast data set. “This is not much different than trying to recover the (minimum) number of actual ‘cpu cores’ required to ‘run’ all the active cognitive tasks that are registered in the entire 3-D brain volume,” he says.
This is a difficult task given the size of the dataset. To test his signal processing technique, Georgiou began by creating a synthetic fMRI dataset made up of eight different signals with statistical characteristics similar to those at work in the brain. He then used a standard signal processing technique, called independent component analysis, to work out how many different signals were present, finding that there are indeed eight, as expected.
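A toy version of that synthetic test is easy to reproduce: mix a known number of independent sources into many “voxel” time courses, then count how many components stand out from the noise. The sketch below uses a PCA eigenvalue spectrum as a simple stand-in for the ICA/fractal machinery used in the actual paper:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_sources, n_samples, n_voxels = 8, 200, 1000

# Eight independent source time courses, mixed into 1,000 synthetic "voxel" signals.
sources = rng.laplace(size=(n_samples, n_sources))
mixing = rng.normal(size=(n_sources, n_voxels))
data = sources @ mixing + 0.01 * rng.normal(size=(n_samples, n_voxels))

# Count the components that stand clearly above the noise floor.
pca = PCA().fit(data)
n_found = int((pca.explained_variance_ratio_ > 0.01).sum())
print(n_found)   # -> 8 for this toy data set
```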
Next, he applied the same independent component analysis technique to real fMRI data gathered from human subjects performing two simple tasks. The first was a simple visuo-motor task in which a subject watches a screen and then has to perform a simple task depending on what appears.
In this case, the screen displays either a red or green box on the left or right side. If the box is red, the subject must indicate this with their right index finger, and if the box is green, the subject indicates this with their left index finger. This is easier when the red box appears on the right and the green box appears on the left but is more difficult when the positions are swapped. The data consisted of almost 100 trials carried out on nine healthy adults.
The second task was easier. Subjects were shown a series of images that fall into categories such as faces, houses, chairs, and so on. The task was to spot when the same object appears twice, albeit from a different angle or under different lighting conditions. This is a classic visual recognition task.
The results make for interesting reading. Although the analysis is complex, the outcome is simple to state. Georgiou says that independent component analysis reveals that about 50 independent processes are at work in human brains performing the complex visuo-motor tasks of indicating the presence of green and red boxes. However, the brain uses fewer processes when carrying out simple tasks, like visual recognition.
That’s a fascinating result that has important implications for the way computer scientists should design chips intended to mimic human performance. It implies that parallelism in the brain does not occur on the level of individual neurons but on a much higher structural and functional level, and that there are about 50 of these.
Georgiou points out that a typical voxel corresponds to roughly three million neurons, each with several thousand connections with its neighbors. However, the current state-of-the-art neuromorphic chips contain a million artificial neurons each with only 256 connections. What is clear from this work is that the parallelism that Georgiou has measured occurs on a much larger scale than this.
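The scale gap is easy to put in numbers. Taking the figures above at face value, and assuming an illustrative 5,000 connections per neuron (the article says only “several thousand”):

```python
neurons_per_voxel = 3_000_000
connections_per_neuron = 5_000          # assumed mid-range of "several thousand"
synapses_per_voxel = neurons_per_voxel * connections_per_neuron   # 1.5e10 connections

chip_neurons = 1_000_000
chip_connections_per_neuron = 256
chip_synapses = chip_neurons * chip_connections_per_neuron        # 2.56e8 connections

print(synapses_per_voxel / chip_synapses)   # ~59: one voxel dwarfs the whole chip
```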
“This means that, in theory, an artificial equivalent of a brain-like cognitive structure may not require a massively parallel architecture at the level of single neurons, but rather a properly designed set of limited processes that run in parallel on a much lower scale,” he concludes.
Anybody thinking of designing brain-like chips might find this a useful tip.
Ref: arxiv.org/abs/1410.7100 Estimating The Intrinsic Dimension In fMRI Space Via Dataset Fractal Analysis

The Next Big Programming Language You’ve Never Heard Of

ORIGINAL: Wired
07.07.14
Andrei Alexandrescu didn’t stand much of a chance. And neither did Walter Bright.
When the two men met for beers at a Seattle bar in 2005, each was in the midst of building a new programming language, trying to remake the way the world creates and runs its computer software. That’s something pretty close to a hopeless task, as Bright knew all too well. “Most languages never go anywhere,” he told Alexandrescu that night. “Your language may have interesting ideas. But it’s never going to succeed.”
Alexandrescu, a graduate student at the time, could’ve said the same thing to Bright, an engineer who had left the venerable software maker Symantec a few years earlier. People are constantly creating new programming languages, but because the software world is already saturated with so many of them, the new ones rarely get used by more than a handful of coders—especially if they’re built by an ex-Symantec engineer without the backing of a big-name outfit. But Bright’s new language, known as D, was much further along than the one Alexandrescu was working on, dubbed Enki, and Bright said they’d both be better off if Alexandrescu dumped Enki and rolled his ideas into D. Alexandrescu didn’t much like D, but he agreed. “I think it was the beer,” he now says.
Andrei Alexandrescu.
Photo: Ariel Zambelich/WIRED
The result is a programming language that just might defy the odds. Nine years after that night in Seattle, a $200-million startup has used D to build its entire online operation, and thanks to Alexandrescu, one of the biggest names on the internet is now exploring the new language as well. Today, Alexandrescu is a research scientist at Facebook, where he and a team of coders are using D to refashion small parts of the company’s massive operation. Bright, too, has collaborated with Facebook on this experimental software, as an outside contractor. The tech giant isn’t an official sponsor of the language—something Alexandrescu is quick to tell you—but Facebook believes in D enough to keep him working on it full-time, and the company is at least considering the possibility of using D in lieu of C++, the venerable language that drives the systems at the heart of so many leading web services.
C++ is an extremely fast language—meaning software built with it runs at high speed—and it provides great control over your code. But it’s not as easy to use as languages like Python, Ruby, and PHP. In other words, it doesn’t let coders build software as quickly. D seeks to bridge that gap, offering the performance of C++ while making things more convenient for programmers.

Among the giants of tech, this is an increasingly common goal. Google’s Go programming language aims for a similar balance of power and simplicity, as does the Swift language that Apple recently unveiled. In the past, the programming world was split in two:

  • the fast languages and 
  • the simpler modern languages

But now, these two worlds are coming together. “D is similar to C++, but better,” says Brad Anderson, a longtime C++ programmer from Utah who has been using D as well. “It’s high performance, but it’s expressive. You can get a lot done without very much code.”

In fact, Facebook is working to bridge this gap with not one but two languages. As it tinkers with D, the company has already revamped much of its online empire with a new language called Hack, which, in its own way, combines speed with simplicity. While using Hack to build the front-end of its service—the webpages you see when you open the service in your web browser—Facebook is experimenting with D on the back-end, the systems that serve as the engine of its social network.
But Alexandrescu will also tell you that programmers can use D to build anything, including the front-end of a web service. The language is so simple, he says, you can even use it for quick-and-dirty programming scripts. “You want to write a 50-line script? Sure, go for it.” This is what Bright strove for—a language suitable for all situations. Today, he says, people so often build their online services with multiple languages—a simpler language for the front and a more powerful language for the back. The goal should be a single language that does it all. “Having a single language suitable for both the front and the back would be a lot more productive for programmers,” Bright says. “D aims to be that language.”
The Cape of a Superhero
When Alexandrescu discusses his years of work on D, he talks about wearing the “cape of a superhero”—being part of a swashbuckling effort to make the software world better. That’s not said with arrogance. Alexandrescu, whose conversations reveal a wonderfully self-deprecating sense of humor, will also tell you he “wasn’t a very good” programming language researcher at the University of Washington—so bad he switched his graduate studies to machine learning. The superhero bit is just a product of his rather contagious enthusiasm for the D project.
For years, he worked on the language only on the side. “It was sort of a free-time activity, in however much free-time a person in grad school can have, which is like negative,” says Alexandrescu, a Romanian who immigrated to the States in the late ’90s. Bright says the two of them would meet in coffee shops across Seattle to argue the ins and outs of the language. The collaboration was fruitful, he explains, because they were so different. Alexandrescu was an academic, and Bright was an engineer. “We came at the same problems from opposite directions. That’s what made the language great–the yin and the yang of these two different viewpoints of how the language should be put together.”
For Alexandrescu, D is unique. It’s not just that it combines speed and simplicity. It also has what he calls “modeling power.” It lets coders more easily create models of stuff we deal with in the real world, including everything from bank accounts and stock exchanges to automotive sensors and spark plugs. D, he says, doesn’t espouse a particular approach to modeling. It allows the programmer “to mix and match a variety of techniques to best fit the problem.”
He ended up writing the book on D. But when he joined Facebook in 2009, it remained a side project. His primary research involved machine learning. Then, somewhere along the way, the company agreed to put him on the language full-time. “It was better,” he says, “to do the caped-superhero-at-night thing during the daytime.”
 
For Facebook, this is still a research project. But the company has hosted the past two D conferences—most recently in May—and together with various Facebook colleagues, Alexandrescu has used D to rebuild two select pieces of Facebook software. They rebuilt the Facebook “linter,” known as Flint, a means of identifying errors in other Facebook software, and they fashioned a new Facebook “preprocessor,” dubbed Warp, which helps generate the company’s core code.
In both cases, D replaced C++. That, at least for the moment, is where the language shines the most. When Bright first started the language, he called it Mars, but the community that sprung up around the language called it D, because they saw it as the successor to C++. “D became the nickname,” Bright says. “And the nickname stuck.”

The Interpreted Language That Isn’t

Facebook is the most high-profile D user. But it’s not alone. Sociomantic—a German online advertising outfit recently purchased by British grocery giant Tesco for a reported $200 million—has built its operation in D. About 10,000 people download the D platform each month. “I’m assuming it’s not the same 10,000 every month,” Alexandrescu quips. And judging from D activity on various online developer services—from GitHub to Stack Overflow—the language is now among the 20 to 30 most popular in the world.
For coder Brad Anderson, the main appeal is that D feels like interpreted languages such as Ruby and PHP. “It results in code that’s more compact,” he says. “You’re not writing boilerplate as much. You’re not writing as much stuff you’re obligated to write in other languages.” It’s less “verbose” than C++ and Java.
Yes, like C++ and Java, D is a compiled language, meaning that you must take time to transform it into executable software before running it. Unlike with interpreted languages, you can’t run your code as soon as you write it. But it compiles unusually quickly. Bright—who worked on C++, Java, and Javascript compilers at Symantec and Sun Microsystems—says this was a primary goal. “When your compiler runs fast,” he says, “it transforms the way you write code. It lets you see the results much faster.” For Anderson, this is another reason that D feels more like an interpreted language. “It’s usually very, very fast to compile–fast enough that the edit [and] run cycle usually feels just like an interpreted language.” He adds, however, that this begins to change if your program gets very large.
What’s more, Anderson explains, a D program has this unusual ability to generate additional D code and weave this into itself at compile time. That may sound odd, but the end result is a program more finely tuned to the task at hand. Essentially, a program can optimize itself as it compiles. “It makes for some amazing code generation capabilities,” Anderson says.
The trouble with the language, according to Alexandrescu, is that it still needs a big-name backer. “Corporate support would be vital right now,” he says. This shows you that Facebook’s involvement only goes so far, and it provides some insight into why new languages have such trouble succeeding. In addition to backing Hack, Facebook employs some of the world’s leading experts in Haskell, another powerful but relatively underused language. What D needs, Alexandrescu says, is someone willing to pump big money into promoting it. The Java programming language succeeded, he says, because Sun Microsystems put so much money behind it back in the ’90s.
Certainly, D still faces a long road to success. But this new language has already come further than most.

DARPA funds $11 million tool that will make coding a lot easier

ORIGINAL: Engadget
November 9, 2014
DARPA is funding a new project by Rice University called PLINY, and it’s neither a killer robot nor a high-tech weapon. PLINY, named after Pliny the Elder who wrote one of the earliest encyclopedias ever, will actually be a tool that can automatically complete a programmer’s draft — and yes, it will work somewhat like the autocomplete on your smartphones. Its developers describe it as a repository of terabytes upon terabytes of all the open-source code they’ll find, which people will be able to query in order to easily create complex software or quickly finish a simple one. Rice University assistant professor Swarat Chaudhuri says he and his co-developers “envision a system where the programmer writes a few lines of code, hits a button and the rest of the code appears.” Also, the parts PLINY conjures up “should work seamlessly with the code that’s already been written.”
In the video below, Chaudhuri used a sheet of paper with a hole in the middle to represent a programmer’s incomplete work. If he uses PLINY to fill that hole, the tool will look through the billions of lines of code in its collection to find possible solutions (represented by different shapes in the video). Once it finds the nearest fit, the tool will clip any unnecessary parts, polish the code further to come up with the best solution it can, and make sure the final product has no security flaws. More than a dozen Rice University researchers will be working on PLINY for the next four years, fueled by the $11 million funding from the Pentagon’s mad science division.

Amazon Echo: An Intelligent Speaker That Listens to Your Commands

ORIGINAL: Gizmodo

By Mario Aguilar


Amazon Echo is a speaker that has a voice assistant built in. If you ask it a question, it’s got an answer. If you tell it to do stuff, it complies. Well, this is different.

Echo is an always-on speaker that you plop into a corner of your house, turning it into the futuristic home we’ve been dreaming about. It’s like Jarvis, or the assistant computer from Her.

When you say the wake word “Alexa,” it starts listening and you can ask it for information or to perform any of a number of tasks. For example, you can ask it for the weather, to play a particular style of music, or to add something to your calendar.

Of course voice assistants aren’t an entirely new concept, but building the technology into a home appliance rather than into a smartphone makes a lot of sense and gives the technology a more conversational and natural feel. To that end, it’s got what Amazon calls “far-field recognition” that allows you to talk to it from across the room. It eliminates the clumsiness of assistants like Siri and Google Now that you have to be right on top of.

Besides being an assistant, Echo is also a little Bluetooth speaker with 360-degree sound. It stands 9 inches tall and has a 2-inch tweeter and a 2.5-inch woofer.

If you’re not near the speaker, you can also access it using an app for Android and Fire OS as well as through web browsers on iOS.

Right now, Echo is available by invitation only. It costs $200 for regular people and $100 for people who have an Amazon Prime account. [Amazon]


Robotic Micro-Scallops Can Swim Through Your Eyeballs

ORIGINAL: IEEE Spectrum
By Evan Ackerman
Posted 4 Nov 2014

Image: Alejandro Posada/MPI-IS
An engineered scallop that is only a fraction of a millimeter in size and that is capable of swimming in biomedically relevant fluids has been developed by researchers at the Max Planck Institute for Intelligent Systems in Stuttgart.
Designing robots on the micro or nano scale (like, small enough to fit inside your body) is all about simplicity. There just isn’t room for complex motors or actuation systems. There’s barely room for any electronics whatsoever, not to mention batteries, which is why robots that can swim inside your bloodstream or zip around your eyeballs are often driven by magnetic fields. However, magnetic fields drag around anything and everything that happens to be magnetic, so in general, they’re best for controlling just one single microrobot at a time. Ideally, you’d want robots that can swim all by themselves, and a robotic micro-scallop, announced today in Nature Communications, could be the answer.
When we’re thinking about how robotic microswimmers move, the place to start is with understanding how fluids (specifically, biological fluids) work at very small scales. Blood doesn’t behave like water does, in that blood is what’s called a non-Newtonian fluid. All that this means is that blood behaves differently (it changes viscosity, becoming thicker or thinner) depending on how much force you’re exerting on it. The classic example of a non-Newtonian fluid is oobleck, which you can make yourself by mixing one part water with two parts corn starch. Oobleck acts like a liquid until you exert a bunch of force on it (say, by rapidly trying to push your hand into it), at which point its viscosity increases to the point where it’s nearly solid.
These non-Newtonian fluids represent most of the liquid stuff that you have going on in your body (blood, joint fluid, eyeball goo, etc), which, while it sounds like it would be more complicated to swim through, is actually an opportunity for robots. Here’s why:
At very small scales, robotic actuators tend to be simplistic and reciprocal. That is, they move back and forth, as opposed to around and around, like you’d see with a traditional motor. In water (or another Newtonian fluid), it’s hard to make a simple swimming robot out of reciprocal motions, because the back and forth motion exerts the same amount of force in both directions, and the robot just moves forward a little, and backward a little, over and over. Biological microorganisms generally do not use reciprocal motions to get around in fluids for this exact reason, instead relying on nonreciprocal motions of flagella and cilia.
However, if we’re dealing with a non-Newtonian fluid, this rule (it’s actually a theorem called the Scallop theorem) doesn’t apply anymore, meaning that it should be possible to use reciprocal movements to get around. A team of researchers led by Prof. Peer Fischer at the Max Planck Institute for Intelligent Systems, in Germany, have figured out how, and appropriately enough, it’s a microscopic robot that’s based on the scallop:
As we discussed above, these robots are true swimmers. This particular version is powered by an external magnetic field, but it’s just providing energy input, not dragging the robot around directly as other microbots do. And there are plenty of kinds of micro-scale reciprocal actuators that could be used, like piezoelectrics, bimetal strips, shape memory alloys, or heat or light-actuated polymers. There are lots of design optimizations that can be made as well, like making the micro-scallop more streamlined or “optimizing its surface morphology,” whatever that means.
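A toy calculation makes the Scallop theorem argument concrete. Model the resistance to each half-stroke as a power-law drag, with force proportional to speed raised to the power n (n = 1 for a Newtonian fluid, n < 1 for a shear-thinning one), and compare a slow opening stroke with a fast closing stroke over the same angle. This is a cartoon of the argument, not the low-Reynolds-number hydrodynamics worked out in the actual paper:

```python
def stroke_impulse(k, speed, angle, n):
    """Force * duration for one half-stroke in a power-law fluid (toy model)."""
    force = k * speed ** n          # drag-like resistance grows sublinearly when n < 1
    duration = angle / speed
    return force * duration

k, angle = 1.0, 1.0
slow, fast = 0.1, 10.0              # opening slowly, snapping shut quickly

for n, label in [(1.0, "Newtonian"), (0.5, "shear-thinning")]:
    net = stroke_impulse(k, slow, angle, n) - stroke_impulse(k, fast, angle, n)
    print(label, round(net, 3))
# Newtonian: 0.0       -> the two half-strokes cancel exactly, so no net motion
# shear-thinning: 2.846 -> the asymmetry survives, so reciprocal flapping can swim
```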
The researchers say that the micro-scallop is more of a “general scheme” for micro-robots rather than a specific micro-robot that’s intended to do anything in particular. It’ll be interesting to see how this design evolves, hopefully to something that you can inject into yourself to fix everything that could ever be wrong with you. Ever.

Google CEO: Computers Are Going To Take Our Jobs, And ‘There’s No Way Around That’

ORIGINAL: BusinessInsider
OCT. 31, 2014
Google CEO Larry Page. Image: Google+/Larry Page
When Google co-founders Larry Page and Sergey Brin formed the company in 1998, they sought to package all the information on the internet into an index that’s simple to use.
Today, Google is much more than a search engine. The company appears to be involved in every type of new technology ranging from self-driving cars to contact lenses that can test for disease.
In a recent interview with the Financial Times, CEO Larry Page provided some insight as to why the company has decided to take on so many different tasks.
Part of the reason is because Page believes there’s this inevitable shift coming in which computers will be much better-suited to take on most jobs.
“You can’t wish away these things from happening, they are going to happen,” he told the Financial Times on the subject of artificial intelligence infringing on the job market. “You’re going to have some very amazing capabilities in the economy. When we have computers that can do more and more jobs, it’s going to change how we think about work. There’s no way around that. You can’t wish it away.”
But people shouldn’t fear computers taking over their occupations, according to Page, who says it “doesn’t make sense” for people to work so much.
“The idea that everyone should slavishly work so they do something inefficiently so they keep their job — that just doesn’t make any sense to me,” he told the Financial Times. “That can’t be the right answer.”
Based on Page’s quotes in the Financial Times, it sounds as if he feels like Google has an obligation to invest in forward-thinking technologies.
“…We have all these billions we should be investing to make people’s lives better,” Page said to the Financial Times. “If we just do the same thing we did before and don’t do something new, it seems like a crime to me.”

Almost human: Xerox brings higher level of AI to its virtual customer support agents

 Above: A brain depicting Xerox WDS Virtual Agent
Image Credit: Xerox

The WDS Virtual Agent taps into intelligence gleaned from terabytes of data that the company keeps about real customer interactions. Armed with this info, the virtual agent can more reliably solve problems itself, as it learns through experience. The more customer care data it is exposed to, the more effective it becomes in delivering relevant responses to real customer questions.

Of course, AI proponents have been saying this for decades, so the proof will be in how well it works.

It may be a long time before we get virtual AI companions like in the movie Her, where actor Joaquin Phoenix’s character falls in love with Siri-like AI. But virtual assistants are becoming popular because, Xerox says, they cost about a fiftieth of what a human being costs.

Xerox has applied its research from its PARC (formerly Palo Alto Research Center) and Xerox Research Centre Europe in AI, machine learning, and natural language processing. The AI can understand, diagnose, and solve customer problems — without being specifically programmed to give rote responses. It analyzes and learns from human agents.

“Because many first-generation virtual agents rely on basic keyword searches, they aren’t able to understand the context of a customer’s question like a human agent can,” said WDS’ Nick Gyles, chief technology officer, in a statement. “The WDS Virtual Agent has the confidence to solve problems itself because it learns just like we do, through experience. The more care data it’s exposed to, the more effective it becomes in delivering relevant and proven responses.”

Xerox captures data such as customer sentiment, described symptoms, problem types, root causes, and the techniques agents use to resolve customer problems. The data has been there for a while; what was missing was AI smart enough to absorb it all.

“We’ve found a way for organizations to unlock that data potential to deliver benefit across their wider care channels,” Gyles said. “No other virtual agent technology is able to deliver this consistency and connect intelligence from multiple sources to ensure that the digital experience is as reliable and authentic as a human one.”

Xerox is delivering the WDS Virtual Agent as a cloud-based solution. It will be available in the fourth quarter.

“Our technology helps overcome one of the key barriers brands face in trying to deliver a truly omni-channel care experience: the ability to be consistent. Digital care tools often lag behind the intelligence that resides in the contact center, with outdated content or no awareness of new problems. Our research in artificial intelligence is changing this,” said Jean-Michel Renders, senior scientist at XRCE, in a statement. “With our machine learning technology, the WDS Virtual Agent has the ability to learn how to solve new problems as they arise across a company’s wider care channels.”


Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts

ORIGINAL: IEEE Spectrum
By Lee Gomes
20 Oct 2014
Big-data boondoggles and brain-inspired chips are just two of the things we’re really getting wrong
Photo-Illustration: Randi Klett

The overeager adoption of big data is likely to result in catastrophes of analysis comparable to a national epidemic of collapsing bridges. Hardware designers creating chips based on the human brain are engaged in a faith-based undertaking likely to prove a fool’s errand. 

Despite recent claims to the contrary, we are no further along with computer vision than we were with physics when Isaac Newton sat under his apple tree.

Those may sound like the Luddite ravings of a crackpot who breached security at an IEEE conference. In fact, the opinions belong to IEEE Fellow Michael I. Jordan, Pehong Chen Distinguished Professor at the University of California, Berkeley. Jordan is one of the world’s most respected authorities on machine learning and an astute observer of the field. His CV would require its own massive database, and his standing in the field is such that he was chosen to write the introduction to the 2013 National Research Council report “Frontiers in Massive Data Analysis.” San Francisco writer Lee Gomes interviewed him for IEEE Spectrum on 3 October 2014.
Michael Jordan on…

 

1- Why We Should Stop Using Brain Metaphors When We Talk About Computing



Artificial Intelligence Planning Course at Coursera by the University of Edinburgh

ORIGINAL: Coursera

About the Course

The course aims to provide a foundation in artificial intelligence techniques for planning, with an overview of the wide spectrum of different problems and approaches, including their underlying theory and their applications. It will allow you to:

  • Understand different planning problems
  • Have the basic know-how to design and implement AI planning systems
  • Know how to use AI planning technology for projects in different application domains
  • Have the ability to make use of AI planning literature

Planning is a fundamental part of intelligent systems. In this course, for example, you will learn the basic algorithms that are used in robots to deliberate over a course of actions to take. Simpler, reactive robots don’t need this, but if a robot is to act intelligently, this type of reasoning about actions is vital.
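To give a flavor of the algorithms covered in the early weeks (state-space search over STRIPS-style actions), the sketch below implements a tiny forward-search planner. The block-stacking domain is invented for illustration and is not taken from the course materials:

```python
# Minimal STRIPS-style forward-search planner (illustrative only).
from collections import deque

# Each action: (name, preconditions, facts added, facts deleted).
ACTIONS = [
    ("pick-up",  {"hand-empty", "block-on-table"}, {"holding-block"}, {"hand-empty", "block-on-table"}),
    ("put-down", {"holding-block"}, {"hand-empty", "block-on-table"}, {"holding-block"}),
    ("stack",    {"holding-block"}, {"hand-empty", "block-on-tower"}, {"holding-block"}),
]

def plan(initial, goal):
    """Breadth-first search over states represented as frozensets of facts."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:          # all goal facts hold
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:       # action is applicable in this state
                new_state = frozenset((state - delete) | add)
                if new_state not in seen:
                    seen.add(new_state)
                    frontier.append((new_state, steps + [name]))
    return None

print(plan({"hand-empty", "block-on-table"}, {"block-on-tower"}))  # ['pick-up', 'stack']
```

Real planners replace the blind breadth-first search with the heuristics and plan-space techniques covered later in the course, but the state and action representation is the same.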

Course Syllabus

Week 1: Introduction and Planning in Context
Week 2: State-Space Search: Heuristic Search and STRIPS
Week 3: Plan-Space Search and HTN Planning
One-week catch-up break
Week 4: Graphplan and Advanced Heuristics
Week 5: Plan Execution and Applications

Exam week

Recommended Background

The MOOC is based on a Masters level course at the University of Edinburgh but is designed to be accessible at several levels of engagement from an “Awareness Level”, through the core “Foundation Level” requiring a basic knowledge of logic and mathematical reasoning, to a more involved “Performance Level” requiring programming and other assignments.

Suggested Readings

The course follows a textbook, but it is not required for the course:
Automated Planning: Theory & Practice (The Morgan Kaufmann Series in Artificial Intelligence) by M. Ghallab, D. Nau, and P. Traverso (Elsevier, ISBN 1-55860-856-7) 2004.

Course Format

Five weeks of study comprising 10 hours of video lecture material and special-feature videos. Quizzes and assessments throughout the course will assist in learning. Some weeks will involve recommended readings. Discussion on the course forum and via other social media will be encouraged. A mid-course catch-up break week and a final week for exams and completion of assignments allow for flexibility in study.

You can engage with the course at a number of levels to suit your interests and the time you have available:

  • Awareness Level – gives an overview of the topic, along with introductory videos and application related features. This level is likely to require 2-3 hours of study per week.
  • Foundation Level – is the core taught material on the course and gives a grounding in AI planning technology and algorithms. This level is likely to require 5-6 hours of study per week.
  • Performance Level – is for those interested in carrying out additional programming assignments and engaging in creative challenges to understand the subject more deeply. This level is likely to require 8 hours or more of study per week.

FAQ

  • Will I get a certificate after completing this class? Students who complete the class will be offered a Statement of Accomplishment signed by the instructors.
  • Do I earn University of Edinburgh credits upon completion of this class? The Statement of Accomplishment is not part of a formal qualification from the University. However, it may be useful to demonstrate prior learning and interest in your subject to a higher education institution or potential employer.
  • What resources will I need for this class? Nothing is required, but if you want to try out implementing some of the algorithms described in the lectures you’ll need access to a programming environment. No specific programming language is required. Also, you may want to download existing planners and try those out. This may require you to compile them first.
  • Can I contact the course lecturers directly? You will appreciate that such direct contact would be difficult to manage. You are encouraged to use the course social network and discussion forum to raise questions and seek inputs. The tutors will participate in the forums, and will seek to answer frequently asked questions, in some cases by adding to the course FAQ area.
  • What Twitter hash tag should I use? Use the hash tag #aiplan for tweets about the course.
  • How come this is free? We are passionate about open on-line collaboration and education. Our taught AI planning course at Edinburgh has always published its course materials, readings and resources on-line for anyone to view. Our own on-campus students can access these materials at times when the course is not available if it is relevant to their interests and projects. We want to make the materials available in a more accessible form that can reach a broader audience who might be interested in AI planning technology. This achieves our primary objective of getting such technology into productive use. Another benefit for us is that more people get to know about courses in AI in the School of Informatics at the University of Edinburgh, or get interested in studying or collaborating with us.
  • When will the course run again? It is likely that the 2015 session will be the final time this course runs as a Coursera MOOC, but we intend to leave the course wiki open for further study and use across course instances.

How IBM Got Brainlike Efficiency From the TrueNorth Chip

ORIGINAL: IEEE Spectrum
By Jeremy Hsu
Posted 29 Sep 2014 | 19:01 GMT


TrueNorth takes a big step toward using the brain’s architecture to reduce computing’s power consumption

Photo: IBM

Neuromorphic computer chips meant to mimic the neural network architecture of biological brains have generally fallen short of their wetware counterparts in efficiency—a crucial factor that has limited practical applications for such chips. That could be changing. At a power density of just 20 milliwatts per square centimeter, IBM’s new brain-inspired chip comes tantalizingly close to such wetware efficiency. The hope is that it could bring brainlike intelligence to the sensors of smartphones, smart cars, and—if IBM has its way—everything else.

The latest IBM neurosynaptic computer chip, called TrueNorth, consists of 1 million programmable neurons and 256 million programmable synapses conveying signals between the digital neurons. Each of the chip’s 4,096 neurosynaptic cores includes the entire computing package:

  • memory, 
  • computation, and 
  • communication. 

Such architecture helps to bypass the bottleneck in traditional von Neumann computing, where program instructions and operation data cannot pass through the same route simultaneously.
“This is literally a supercomputer the size of a postage stamp, light like a feather, and low power like a hearing aid,” says Dharmendra Modha, IBM fellow and chief scientist for brain-inspired computing at IBM Research-Almaden, in San Jose, Calif.

Such chips can emulate the human brain’s ability to recognize different objects in real time; TrueNorth showed it could distinguish among pedestrians, bicyclists, cars, and trucks. IBM envisions its new chips working together with traditional computing devices as hybrid machines, providing a dose of brainlike intelligence. The chip’s architecture, developed together by IBM and Cornell University, was first detailed in August in the journal Science.
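The article stays at the architecture level, but the event-driven, spiking style it describes can be illustrated with a toy simulation in which only neurons that fired on the previous step contribute any work. Everything below (sizes, weights, thresholds) is an assumption for illustration, not TrueNorth's design:

```python
# Toy event-driven spiking-neuron loop, loosely in the spirit of a
# neuromorphic core (not IBM's TrueNorth implementation).
import numpy as np

rng = np.random.default_rng(0)
n = 16                                   # neurons in one toy "core"
weights = rng.normal(0.0, 0.3, (n, n))   # synaptic weights between neurons
potential = np.zeros(n)                  # membrane potentials
threshold, leak = 1.0, 0.9

spikes = rng.random(n) < 0.2             # a few initial input spikes
for step in range(5):
    # Only neurons that spiked contribute input; silent neurons cost nothing,
    # which is where the event-driven power savings come from.
    potential = leak * potential + weights @ spikes
    spikes = potential > threshold
    potential[spikes] = 0.0              # reset neurons that fired
    print(f"step {step}: {int(spikes.sum())} neurons fired")
```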


Meet Amelia: the computer that’s after your job

29 Sep 2014
A new artificially intelligent computer system called ‘Amelia’ – that can read and understand text, follow processes, solve problems and learn from experience – could replace humans in a wide range of low-level jobs

Amelia aims to answer the question, can machines think? Photo: IPsoft

In February 2011 an artificially intelligent computer system called IBM Watson astonished audiences worldwide by beating the two all-time greatest Jeopardy champions at their own game.

Thanks to its ability to apply

  • advanced natural language processing,
  • information retrieval,
  • knowledge representation,
  • automated reasoning, and
  • machine learning technologies,

Watson consistently outperformed its human opponents on the American quiz show Jeopardy.

Watson represented an important milestone in the development of artificial intelligence, but the field has been progressing rapidly – particularly with regard to natural language processing and machine learning.

In 2012, Google used 16,000 computer processors to build a simulated brain that could correctly identify cats in YouTube videos; the Kinect, which provides a 3D body-motion interface for Microsoft’s Xbox, uses algorithms that emerged from artificial intelligence research, as does the iPhone’s Siri virtual personal assistant.

Today a new artificial intelligence computing system has been unveiled, which promises to transform the global workforce. Named ‘Amelia‘ after American aviator and pioneer Amelia Earhart, the system is able to shoulder the burden of often tedious and laborious tasks, allowing human co-workers to take on more creative roles.

“Watson is perhaps the best data analytics engine that exists on the planet; it is the best search engine that exists on the planet; but IBM did not set out to create a cognitive agent. It wanted to build a program that would win Jeopardy, and it did that,” said Chetan Dube, chief executive officer of IPsoft, the company behind Amelia.

“Amelia, on the other hand, started out not with the intention of winning Jeopardy, but with the pure intention of answering the question posed by Alan Turing in 1950 – can machines think?”


Amelia learns by following the same written instructions as her human colleagues, but is able to absorb information in a matter of seconds.
She understands the full meaning of what she reads rather than simply recognising individual words. This involves

  • understanding context,
  • applying logic and
  • inferring implications.

When exposed to the same information as any new employee in a company, Amelia can quickly apply her knowledge to solve queries in a wide range of business processes. Just like any smart worker she learns from her colleagues and, by observing their work, she continually builds her knowledge.

While most ‘smart machines’ require humans to adapt their behaviour in order to interact with them, Amelia is intelligent enough to interact like a human herself. She speaks more than 20 languages, and her core knowledge of a process needs only to be learned once for her to be able to communicate with customers in their language.

Independently, rather than through time-intensive programming, Amelia creates her own ‘process map’ of the information she is given so that she can work out for herself what actions to take depending on the problem she is solving.
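IPsoft does not say how the "process map" is built, but one simple way to derive such a map from observed workflows is to mine a transition graph from session logs and suggest the most common next step. A sketch under that assumption, with hypothetical log data:

```python
# Sketch: deriving a simple "process map" from observed agent workflows
# (hypothetical logs; not IPsoft's actual technique).
from collections import defaultdict, Counter

observed_sessions = [
    ["greet", "identify-issue", "check-account", "reset-password", "confirm-fix"],
    ["greet", "identify-issue", "reset-password", "confirm-fix"],
    ["greet", "identify-issue", "check-account", "escalate"],
]

# Count how often each step follows another across sessions.
transitions = defaultdict(Counter)
for session in observed_sessions:
    for current_step, next_step in zip(session, session[1:]):
        transitions[current_step][next_step] += 1

def most_likely_next(step: str) -> str:
    """Suggest the most frequently observed next step."""
    return transitions[step].most_common(1)[0][0]

print(most_likely_next("identify-issue"))  # -> 'check-account'
```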

“Intelligence is the ability to acquire and apply knowledge. If a system claims to be intelligent, it must be able to read and understand documents, and answer questions on the basis of that. It must be able to understand processes that it observes. It must be able to solve problems based on the knowledge it has acquired. And when it cannot solve a problem, it must be capable of learning the solution through noticing how a human did it,” said Dube.

IPsoft has been working on this technology for 15 years with the aim of developing a platform that does not simply mimic human thought processes but can comprehend the underlying meaning of what is communicated – just like a human.

Just as machines transformed agriculture and manufacturing, IPsoft believes that cognitive technologies will drive the next evolution of the global workforce, so that in the future companies will have digital workforces that comprise a mixture of human and virtual employees.

Amelia has already been trialled within a number of Fortune 1000 companies, in areas such as manning technology help desks, procurement processing, financial trading operations support and providing expert advice for field engineers.

In each of these environments, she has learnt not only by reading existing manuals and situational context but also by observing and working with her human colleagues, discerning for herself a map of the business processes being followed.

In a help desk situation, for example, Amelia can understand what a caller is looking for, ask questions to clarify the issue, find and access the required information and determine which steps to follow in order to solve the problem.

As a knowledge management advisor, she can help engineers working in remote locations who are unable to carry detailed manuals, by diagnosing the cause of failed machinery and guiding them towards the best steps to rectifying the problem.

During these trials, Amelia was able to go from solving very few queries independently to 42 per cent of the most common queries within one month. By the second month she could answer 64 per cent of those queries independently.

“That’s a true learning cognitive agent. Learning is the key to the kingdom, because humans learn from experience. A child may need to be told five times before they learn something, but Amelia needs to be told only once,” said Dube.

“Amelia is that Mensa kid, who personifies a major breakthrough in cognitive technologies.”

Analysts at Gartner predict that, by 2017, managed services offerings that make use of autonomics and cognitive platforms like Amelia will drive a 60 per cent reduction in the cost of services, enabling organisations to apply human talent to higher level tasks requiring creativity, curiosity and innovation.

IPsoft even has plans to start embedding Amelia into humanoid robots such as Softbank’s Pepper, Honda’s Asimo or Rethink Robotics’ Baxter, allowing her to take advantage of their mechanical functions.

“The robots have got a fair degree of sophistication in all the mechanical functions – the ability to climb up stairs, the ability to run, the ability to play ping pong. What they don’t have is the brain, and we’ll be supplementing that brain part with Amelia,” said Dube.

“I am convinced that in the next decade you’ll pass someone in the corridor and not be able to discern if it’s a human or an android.”

Given the premise of IPsoft’s artificial intelligence system, it seems logical that the ultimate measure of Amelia’s success would be passing the Turing Test – which sets out to see whether humans can discern whether they are interacting with a human or a machine.

Earlier this year, a chatbot named Eugene Goostman became the first machine to pass the Turing Test by convincingly imitating a 13-year-old boy. In a five-minute keyboard conversation with a panel of human judges, Eugene managed to convince 33 per cent that it was human.

Interestingly, however, IPsoft believes that the Turing Test needs reframing, to redefine what it means to ‘think’. While Eugene was able to imitate natural language, he was only mimicking understanding. He did not learn from the interaction, nor did he demonstrate problem-solving skills.

“Natural language understanding is a big step up from parsing. Parsing is syntactic, understanding is semantic, and there’s a big cavern between the two,” said Dube.

“The aim of Amelia is not just to get an accolade for managing to fool one in three people on a panel. The assertion is to create something that can answer to the fundamental need of human beings – particularly after a certain age – of companionship. That is our intent.”


AI For Everyone: Startups Democratize Deep Learning So Google And Facebook Don’t Own It All

ORIGINAL: Forbes
9/17/2014
When I arrived at a Stanford University auditorium Tuesday night for what I thought would be a pretty nerdy panel on deep learning, a fast-growing branch of artificial intelligence, I figured I must be in the wrong place–maybe a different event for all the new Stanford students and their parents visiting the campus. Nope. Despite the highly technical nature of deep learning, some 600 people had shown up for the sold-out AI event, presented by VLAB, a Stanford-based chapter of the MIT Enterprise Forum. The turnout was a stark sign of the rising popularity of deep learning, an approach to AI that tries to mimic the activity of the brain in so-called neural networks. In just the last couple of years, deep learning software from giants like Google, Facebook, and China’s Baidu, as well as a raft of startups, has led to big advances in image and speech recognition, medical diagnostics, stock trading, and more. “There’s quite a bit of excitement in this area,” panel moderator Steve Jurvetson, a partner with the venture firm DFJ, said with uncustomary understatement.

In the past year or two, big companies have been locked in a land grab for talent, paying big bucks for startups and even hiring away deep learning experts from each other. But this event, focused mostly on startups, including several that demonstrated their products before the panel, also revealed there’s still a lot of entrepreneurial activity. In particular, several companies aim to democratize deep learning by offering it as a service or coming up with cheaper hardware to make it more accessible to businesses.

Jurvetson explained why deep learning has pushed the boundaries of AI so much further recently.

  • For one, there’s a lot more data around because of the Internet; there’s metadata such as tags and translations; and there are even services such as Amazon’s Mechanical Turk, which allows for cheap labeling or tagging.
  • There are also algorithmic advances, especially for using unlabeled data.
  • And computing has advanced enough to allow much larger neural networks with more synapses–in the case of Google Brain, for instance, 1 billion synapses (though that’s still a very long way from the 100 trillion synapses in the adult human brain).

Adam Berenzweig, cofounder and CTO of image recognition firm Clarifai and former engineer at Google for 10 years, made the case that deep learning is “adding a new primary sense to computing” in the form of useful computer vision. “Deep learning is forming that bridge between the physical world and the world of computing,” he said.

And it’s allowing that to happen in real time. “Now we’re getting into a world where we can take measurements of the physical world, like pixels in a picture, and turn them into symbols that we can sort,” he said. Clarifai has been working on taking an image, producing a meaningful description very quickly (in about 80 milliseconds), and showing very similar images.

One interesting application relevant to advertising and marketing, he noted: Once you can recognize key objects in images, you can target ads not just on keywords but on objects in an image.

DFJ’s Steve Jurvetson led a panel of AI experts at a Stanford event on Sept. 16.

Even more sweeping, said Naveen Rao, cofounder and CEO of deep-learning hardware and software startup Nervana Systems and former researcher in neuromorphic computing at Qualcomm, deep learning is “that missing link between computing and what the brain does.” Instead of doing specific computations very fast, as conventional computers do, “we can start building new hardware to take computer processing in a whole new direction,” assessing probabilities, like the brain does. “Now there’s actually a business case for this kind of computing,” he said.

And not just for big businesses. Elliot Turner, founder and CEO of AlchemyAPI, a deep-learning platform in the cloud, said his company’s mission is to “democratize deep learning.” The company is working in 10 industries from advertising to business intelligence, helping companies apply it to their businesses. “I look forward to the day that people actually stop talking about deep learning, because that will be when it has really succeeded,” he added.

Despite the obvious advantages of large companies such as Google, which have untold amounts of both data and computer power that deep learning requires to be useful, startups can still have a big impact, a couple of the panelists said. “There’s data in a lot of places. There’s a lot of nooks and crannies that Google doesn’t have access to,” Berenzweig said hopefully. “Also, you can trade expertise for data. There’s also a question of how much data is enough.”

Turner agreed. “It’s not just a matter of stockpiling data,” he said. “Better algorithms can help an application perform better.” He noted that even Facebook, despite its wealth of personal data, found this in its work on image recognition.

Those algorithms may have broad applicability, too. Even if they’re initially developed for specific applications such as speech recognition, it looks like they can be used on a wide variety of applications. “These algorithms are extremely fungible,” said Rao. And he said companies such as Google aren’t keeping them as secret as expected, often publishing them in academic journals and at conferences–though Berenzweig noted that “it takes more than what they publish to do what they do well.”

For all that, it’s not yet clear how closely deep-learning systems will ever emulate the brain, even if they behave intelligently. But Ilya Sutskever, research scientist at Google Brain and a protege of Geoffrey Hinton, the University of Toronto deep learning guru since the 1980s who’s now working part-time at Google, said it almost doesn’t matter. “You can still do useful predictions” using them. And while the learning principles for dealing with all the unlabeled data out there remain primitive, he said he and many others are working on this and likely will make even more progress.

Rao said he’s unworried that we’ll end up creating some kind of alien intelligence that could run amok if only because advances will be driven by market needs. Besides, he said, “I think a lot of the similarities we’re seeing in computation and brain functions is coincidental. It’s driven that way because we constrain it that way.”

OK, so how are these companies planning to make money on this stuff? Jurvetson wondered. Of course, we’ve already seen improvements in speech and image recognition that make smartphones and apps more useful, leading more people to buy them. “Speech recognition is useful enough that I use it,” said Sutskever. “I’d be happy if I didn’t press a button ever again. And language translation could have a very large impact.”

Beyond that, Berenzweig said, “we’re looking for the low-hanging fruit,” common use cases such as visual search for shopping, organizing your personal photos, and various business niches such as security.



Danko Nikolic on Singularity 1 on 1: Practopoiesis Tells Us Machine Learning Is Not Enough!

If there has ever been a case when I just wanted to jump on a plane and go interview someone in person, not because they are famous but because they have created a totally unique and arguably seminal theory, it has to be Danko Nikolic. I believe Danko’s theory of practopoiesis is that good, and that he probably will eventually become known around the world for it. Unfortunately, I don’t have a budget of thousands of dollars per interview that would let me fly my audio and video team to Germany and produce the quality Nikolic deserves, so I had to settle for Skype. And Skype refused to cooperate that day, even though both Danko and I have pretty much the fastest internet connections money can buy. Luckily, despite the poor video quality, our audio was very good, and if there has ever been an interview where you ought to disregard the video and focus on the content, it is this one.
During our 67 min conversation with Danko we cover a variety of interesting topics such as:

As always you can listen to or download the audio file above or scroll down and watch the video interview in full.
To show your support you can write a review on iTunes or make a donation.
Who is Danko Nikolic?
The main motive for my studies is the explanatory gap between the brain and the mind. My interest is in how the physical world of neuronal activity produces the mental world of perception and cognition. I am associated with

  • the Max-Planck Institute for Brain Research,
  • Ernst Strüngmann Institute,
  • Frankfurt Institute for Advanced Studies, and
  • the University of Zagreb.
I approach the problem of the explanatory gap from both sides, bottom-up and top-down. The bottom-up approach investigates brain physiology. The top-down approach investigates behavior and experiences. Each of the two approaches led me to develop a theory: the work on physiology resulted in the theory of practopoiesis, while the work on behavior and experiences led to the phenomenon of ideasthesia.
The empirical work in the background of those theories involved

  • simultaneous recordings of activity of 100+ neurons in the visual cortex (extracellular recordings),
  • behavioral and imaging studies in visual cognition (attention, working memory, long-term memory), and
  • empirical investigations of phenomenal experiences (synesthesia).
The ultimate goal of my studies is twofold.

  • First, I would like to achieve conceptual understanding of how the dynamics of physical processes creates the mental ones. I believe that the work on practopoiesis presents an important step in this direction and that it will help us eventually address the hard problem of consciousness and the mind-body problem in general.
  • Second, I would like to use this theoretical knowledge to create artificial systems that are biologically-like intelligent and adaptive. This would have implications for our technology.
A reason why one would be interested in studying the brain in the first place is described here: Why brain?

Neurons in human skin perform advanced calculations

[2014-09-01] Neurons in human skin perform advanced calculations that it was previously believed only the brain could perform, according to a study from Umeå University in Sweden published in the journal Nature Neuroscience.

A fundamental characteristic of neurons that extend into the skin and record touch, so-called first-order neurons in the tactile system, is that they branch in the skin so that each neuron reports touch from many highly-sensitive zones on the skin.
According to researchers at the Department of Integrative Medical Biology, IMB, Umeå University, this branching allows first-order tactile neurons not only to send signals to the brain that something has touched the skin, but also process geometric data about the object touching the skin.
“Our work has shown that two types of first-order tactile neurons that supply the sensitive skin at our fingertips not only signal information about when and how intensely an object is touched, but also information about the touched object’s shape,” says Andrew Pruszynski, one of the researchers behind the study.
The study also shows that the sensitivity of individual neurons to the shape of an object depends on the layout of the neuron’s highly-sensitive zones in the skin.
“Perhaps the most surprising result of our study is that these peripheral neurons, which are engaged when a fingertip examines an object, perform the same type of calculations done by neurons in the cerebral cortex. Somewhat simplified, it means that our touch experiences are already processed by neurons in the skin before they reach the brain for further processing,” says Andrew Pruszynski.
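A toy model makes the branching argument concrete: if a single neuron simply sums the responses of several spatially offset sensitive zones, its output already depends on the orientation of an edge pressed against the skin. The zone positions and response function below are illustrative assumptions, not the model used in the study:

```python
# Toy illustration of how one branching tactile neuron with several
# sensitive zones can carry geometric information (not the Umeå model).
import numpy as np

# Three sensitive zones of a single neuron, at fixed (x, y) skin positions.
zones = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.0]])

def neuron_response(edge_angle_deg: float) -> float:
    """Summed response to a straight edge through the origin: each zone
    contributes more the closer the edge passes to it."""
    theta = np.radians(edge_angle_deg)
    normal = np.array([-np.sin(theta), np.cos(theta)])  # unit normal to the edge
    distances = np.abs(zones @ normal)                  # zone-to-edge distances
    return float(np.sum(np.exp(-distances ** 2)))       # Gaussian sensitivity per zone

for angle in (0, 45, 90):
    print(f"edge at {angle:3d} deg -> response {neuron_response(angle):.2f}")
```

Because the summed response differs with edge orientation, a downstream reader of this single neuron already receives some shape information, which is the gist of the finding described above.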

This AI-Powered Calendar Is Designed to Give You Me-Time

ORIGINAL: Wired
09.02.14
Timeful intelligently schedules to-dos and habits on your calendar. Timeful
No one on their death bed wishes they’d taken a few more meetings. Instead, studies find people consistently say things like: 
  • I wish I’d spent more time with my friends and family;
  • I wish I’d focused more on my health; 
  • I wish I’d picked up more hobbies.
That’s what life’s all about, after all. So, question: Why don’t we ever put any of that stuff on our calendar?
That’s precisely what the folks behind Timeful want you to do. Their app (iPhone, free) is a calendar designed to handle it all. You don’t just put in the things you need to do—meeting on Thursday; submit expenses; take out the trash—but also the things you want to do, like going running more often or brushing up on your Spanish. Then, the app algorithmically generates a schedule to help you find time for it all. The more you use it, the smarter that schedule gets.
Even in the crowded categories of calendars and to-do lists, Timeful stands out. Not many iPhone calendar apps were built by renowned behavioral psychologists and machine learning experts, nor have many attracted investor attention to the tune of $7 million.
It was born as a research project at Stanford, where Jacob Bank, a computer science PhD candidate, and his advisor, AI expert Yoav Shoham, started exploring how machine learning could be applied to time management. To help with their research, they brought on Dan Ariely, the influential behavior psychologist and author of the book Predictably Irrational. It didn’t take long for the group to realize that there was an opportunity to bring time management more in step with the times. “It suddenly occurred to me that my calendar and my grandfather’s calendar are essentially the same,” Shoham recalls.
A Tough Problem and an Artificially Intelligent Solution
Like all of Timeful’s founders, Shoham sees time as our most valuable resource–far more valuable, even, than money. And yet he says the tools we have for managing money are far more sophisticated than the ones we have for managing time. In part, that’s because time poses a tricky problem. Simply put, it’s tough to figure out the best way to plan your day. On top of that, people are lazy, and prone to distraction. “We have a hard computational problem compounded by human mistakes,” Shoham says.
To address that lazy human bit, Timeful is designed around a simple fact: When you schedule something, you’re far more likely to get it done. Things you put in the app don’t just live in some list. Everything shows up on the calendar. Meetings and appointments get slotted at the times they take place, as you’d expect. But at the start of the day, the app also blocks off time for your to-dos and habits, rendering them as diagonally-slatted rectangles on your calendar which you can accept, dismiss, or move around as you desire.
Suggestions have diagonal slats. Timeful
In each case, Timeful takes note of how you respond and adjusts its “intention rank,” as the company calls its scheduling algorithm. This is the special sauce that elevates Timeful from dumb calendar to something like an assistant. As Bank sees it, the more nebulous lifestyle events we’d never think to put on our calendar are a perfect subject for some machine learning smarts. “Habits have the really nice property that they repeat over time with very natural patterns,” he says. “So if you put in, ‘run three times a week,’ we can quickly learn what times you like to run and when you’re most likely to do it.”
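Learning which times a user accepts, dismisses, or moves a suggested habit can be sketched as a simple preference-update loop. The following is hypothetical and is not Timeful's actual "intention rank" algorithm:

```python
# Minimal sketch of learning preferred time slots for a habit from
# accept/dismiss feedback (hypothetical; not Timeful's "intention rank").
import random

slots = ["morning", "lunch", "evening"]
scores = {slot: 1.0 for slot in slots}        # prior preference for each slot

def suggest() -> str:
    """Pick a slot with probability proportional to its learned score."""
    return random.choices(slots, weights=[scores[s] for s in slots])[0]

def feedback(slot: str, accepted: bool) -> None:
    """Nudge the slot's score up when accepted, down when dismissed."""
    scores[slot] *= 1.3 if accepted else 0.7

# Simulated user who only ever accepts evening runs.
for _ in range(30):
    s = suggest()
    feedback(s, accepted=(s == "evening"))

print(max(scores, key=scores.get))            # usually 'evening'
```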
The other machine learning challenge involved with Timeful is the problem of input. Where many other to-do apps try to make the input process as frictionless as possible, Timeful often needs to ask a few follow-up questions to schedule tasks properly, like how long you expect them to take, and if there’s a deadline for completion. As with all calendars and to-do apps, Timeful’s only as useful as the stuff you put on it, and here that interaction’s a fairly heavy one. For many, it could simply be too much work for the reward. Plus, isn’t it a little weird to block off sixty minutes to play with your kid three times a week?
Bank admits that it takes longer to put things into Timeful than some other apps, and the company’s computer scientists are actively trying to come up with new ways to offload the burden algorithmically. In future versions, Bank hopes to be able to automatically pull in data from other apps and services. A forthcoming web version could also make input easier (an Android version is on the way too). But as Bank sees it, there may be an upside to having a bit of friction here. By going through the trouble of putting something in the app, you’re showing that you truly want to get it done, and that could help keep Timeful from becoming a “list of shame” like other to-do apps. (And as far as the kid thing goes, it might feel weird, but if scheduling family time on your calendar results in more family time, then it’s kinda hard to knock, no?)
How Much Scheduling Is Too Much?
Perhaps the bigger question is how much day-to-day optimization people can really swallow. Having been conditioned to see the calendar as a source of responsibilities and obligations, opening up one’s preferred scheduling application and seeing a long white column stretching down for the day can be the source of an almost embarrassing degree of relief. Thank God, now I can finally get something done! With Timeful, that feeling becomes extinct. Every new dawn brings a whole bunch of new stuff to do.
Two of Timeful’s co-founders, Jacob Bank (top) and Yoav Shoham Timeful
Bank and Shoham are acutely aware of this thorny problem. “Sometimes there’s a tension between what’s best for a user and what the user wants to accept, and we need to be really delicate about that,” Bank says. In the app, you can fine-tune just how aggressive you want it to be in its planning, and a significant part of the design process was making sure the app’s suggestions felt like suggestions, not demands. Still, we might crave that structure more than we think. After some early user tests, the company actually cranked up the pushiness of Timeful’s default setting; the overwhelming response from beta testers was “give me more!”
The vision is for Timeful to become something akin to a polite assistant. Shoham likens it to Google Now for your schedule–a source of informed suggestions about what to do next. Whether you take those suggestions or leave them is entirely up to you. “This is not your paternalistic dad telling you, ‘thou shall do this!’” he says. “It’s not your guilt-abusing mom. Well, maybe there’s a little bit of that.”

Practopoiesis: How cybernetics of biology can help AI

To create any form of AI, we must copy from biology. The argument goes as follows. A brain is a biological product, and so must be its products, such as perception, insight, inference, logic, mathematics, etc. By creating AI we inevitably tap into something that biology has already invented on its own. It thus follows that the more we want an AI system to be similar to a human—e.g., to get a better grade on the Turing test—the more we need to copy the biology.
When it comes to describing living systems, we traditionally adopt different explanatory principles for different levels of system organization.

  1. One set of principles is used for “low-level” biology, such as the evolution of our genome through natural selection, which is a completely different set of principles from the one used to describe the expression of those genes.
  2. A yet different type of story is used to explain what our neural networks do.
  3. Needless to say, the descriptions at the very top of that organizational hierarchy—at the level of our behavior—are made with concepts that again live in their own world.
But what if it was possible to unify all these different aspects of biology and describe them all by a single set of principles? What if we could use the same fundamental rules to talk about the physiology of a kidney and the process of a conscious thought? What if we had concepts that could give us insights into the mental operations underlying logical inferences on one hand and the relation between the phenotype and genotype on the other? This request is not so outrageous. After all, all those phenomena are biological.
One can argue that such an all-embracing theory of the living would also be beneficial for the further development of AI. The theory could guide us on what is possible and what is not. Given a certain technological approach, what are its limitations? Maybe it could answer the question of what the unitary components of intelligence are. And does my software have enough of them?
For more inspiration, let us look at the Shannon-Wiener theory of information and appreciate how helpful this theory is for dealing with various types of communication channels (including memory storage, which is also a communication channel, only over time rather than space). We can calculate how much channel capacity is needed to transmit (store) certain contents. Also, we can easily compare two communication channels and determine which one has more capacity. This allows us to directly compare devices that are otherwise incomparable. For example, an interplanetary communication system based on satellites can be compared to DNA located within the nucleus of a human cell. Only thanks to information theory can we calculate whether a given satellite connection has enough capacity to transfer the DNA information about a human person to a hypothetical recipient on another planet. (The answer is: yes, easily.) Thus, information theory is invaluable in making these kinds of engineering decisions.
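The "yes, easily" can be checked with round numbers: a haploid human genome of roughly 3.2 billion base pairs at 2 bits per base is on the order of 800 megabytes, which even a modest link moves in a matter of hours. The link rate below is an assumption picked for illustration:

```python
# Back-of-envelope check of the DNA-over-a-satellite-link example
# (round figures; 2 bits per base pair, compression and overhead ignored).
base_pairs = 3.2e9                 # approximate human genome length
bits_needed = base_pairs * 2       # A/C/G/T -> 2 bits per base
link_rate_bps = 500e3              # assume a modest 500 kbit/s deep-space link

seconds = bits_needed / link_rate_bps
print(f"{bits_needed / 8 / 1e6:.0f} MB to send")        # ~800 MB
print(f"~{seconds / 3600:.1f} hours at 500 kbit/s")     # ~3.6 hours
```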
So, how about intelligence? Wouldn’t it be good to come into possession of a similar general theory for adaptive intelligent behavior? Maybe we could use certain quantities other than bits that could tell us why the intelligence of plants lags behind that of primates. Also, we might better understand the essential ingredients that distinguish human intelligence from that of a chimpanzee. Using the same theory we could compare

  • an abacus, 
  • a hand-held calculator, 
  • a supercomputer, and 
  • a human intellect.
The good news is that such an overarching biological theory has recently been developed, and it is called practopoiesis. Derived from Ancient Greek praxis + poiesis, practopoiesis means creation of actions. The name reflects the fundamental presumption about the common property that can be found across all the different levels of organization of biological systems:

  • Gene expression mechanisms act; 
  • bacteria act; 
  • organs act; 
  • organisms as a whole act.
Due to this focus on biological action, practopoiesis has a strong cybernetic flavor, as it has to deal with the need of acting systems to close feedback loops. Input is needed to trigger actions and to determine whether more actions are needed. For that reason, the theory is founded on the basic theorems of cybernetics, namely the law of requisite variety and the good regulator theorem.
The key novelty of practopoiesis is that it introduces the mechanisms explaining how different levels of organization mutually interact. These mechanisms help explain how genes create the anatomy of the nervous system, or how anatomy creates behavior.
When practopoiesis is applied to human mind and to AI algorithms, the results are quite revealing.
To understand those, we need to introduce the concept of the practopoietic traverse. Without going into details of what a traverse is, let us just say that it is a quantity with which one can compare the different capabilities of systems to adapt. A traverse is a kind of practopoietic equivalent to the bit of information in Shannon-Wiener theory. If we can compare two communication channels according to the number of bits of information transferred, we can compare two adaptive systems according to the number of traverses. Thus, a traverse is not a measure of how much knowledge a system has (for that the good old bit does the job just fine). It is rather a measure of how much capability the system has to adjust its existing knowledge, for example, when new circumstances emerge in the surrounding world.
To the best of my knowledge no artificial intelligence algorithm that is being used today has more than two traverses. That means that these algorithms interact with the surrounding world at a maximum of two levels of organization. For example, an AI algorithm may receive satellite images at one level of organization and the categories to which to learn to classify those images at another level of organization. We would say that this algorithm has two traverses of cybernetic knowledge. In contrast, biological behaving systems (that is, animals, homo sapiens) operate with three traverses.
This makes a whole lot of difference in adaptive intelligence. Two-traversal systems can be super-fast and omni-knowledgeable, and their tech specs may list peta-everything, as they sometimes already do, but these systems nevertheless remain comparatively dull next to three-traversal systems, such as a three-year-old girl, or even a domestic cat.
To appreciate the difference between two and three traverses, let us go one step lower and consider systems with only one traverse. An example would be a PC computer without any advanced AI algorithm installed.
This computer is already far faster than I am at calculations, much better at memory storage, and beats me at spell checking without the processor even getting warm. And, paradoxically, I am still the smarter one around. Thus, computational capacity and adaptive intelligence are not the same.
Importantly, this same “me vs. the computer” relationship holds for “me vs. a modern advanced AI algorithm”. I am still the more intelligent one, although the computer may have more computational power. The relationship also holds for “AI algorithm vs. non-AI computer”. Even a small AI algorithm, implemented, say, on a single PC, is in many ways more intelligent than a petaflop supercomputer without AI. Thus, there is a certain hierarchy in adaptive intelligence that is not determined by memory size or the number of floating-point operations executed per second but by the ability to learn and adapt to the environment.
A key requirement for adaptive intelligence is the capacity to observe how well one is doing towards a certain goal combined with the capacity to make changes and adjust in light of the feedback obtained. Practopoiesis tells us that there is not only one step possible from non-adaptive to adaptive, but that multiple adaptive steps are possible. Multiple traverses indicate a potential for adapting the ways in which we adapt.
We can go even one step further down the adaptive hierarchy and consider the least adaptive systems, e.g., a book: provided that the book is large enough, it can contain all of the knowledge about the world, and yet it is not adaptive, as it cannot, for example, rewrite itself when something changes in that world. Typical computer software can do much more and administer many changes, but there is also a lot left that cannot be adjusted without a programmer. A modern AI system is even smarter and can reorganize its knowledge to a much higher degree. Still, these systems are incapable of certain types of adjustments that a human person, or an animal, can make. Practopoiesis tells us that these systems fall into different adaptive categories, which are independent of the raw information-processing capabilities of the systems. Rather, these adaptive categories are defined by the number of levels of organization at which the system receives feedback from the environment — also referred to as traverses.
We can thus make the following hierarchical list of the best exemplars in each adaptive category:
  • A book: dumbest; zero traverses
  • A computer: somewhat smarter; one traverse
  • An AI system: much smarter; two traverses
  • A human: rules them all; three traverses
Most importantly for creation of strong AI, practopoiesis tells us in which direction the technological developments should be heading:
Engineering creativity should be geared towards empowering the machines with one more traverse. To match a human, a strong AI system has to have three traverses.
Practopoietic theory explains also what is so special about the third traverse. Systems with three traverses (referred to as T3-systems) are capable of storing their past experiences in an abstract, general form, which can be used in a much more efficient way than in two-traversal systems. This general knowledge can be applied to interpretation of specific novel situations such that quick and well-informed inferences are made about what is currently going on and what actions should be executed next. This process, unique for T3-systems, is referred to as anapoiesis, and can be generally described as a capability to reconstruct cybernetic knowledge that the system once had and use this knowledge efficiently in a given novel situation.
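As a loose programming analogy (mine, not Nikolic's formal definition), each additional traverse adds another level at which the system receives feedback: a fixed rule gets none, a learner adapts its parameters from feedback on its predictions, and a third level adapts how the learning itself is done using a further source of feedback:

```python
# Loose analogy for "more traverses = more levels of feedback"
# (an illustration only, not a formal rendering of practopoiesis).

def one_level(x):
    # Fixed knowledge: applies a rule, never changes it.
    return 2 * x

def two_levels(data, lr=0.05, steps=100):
    # Adapts a parameter w using feedback on its own predictions.
    w = 0.0
    for x, y in data * steps:
        w -= lr * (w * x - y) * x
    return w

def three_levels(data, held_out):
    # Additionally adapts *how it adapts*: chooses the learning rate that
    # gives the smallest error on held-out feedback, a loop over the rule itself.
    best_lr = min((0.01, 0.05, 0.2),
                  key=lambda lr: abs(two_levels(data, lr) * held_out[0] - held_out[1]))
    return two_levels(data, best_lr)

data = [(1.0, 3.0), (2.0, 6.0)]
print(two_levels(data))                   # learns w close to 3
print(three_levels(data, (4.0, 12.0)))    # tunes its own learning rule first
```

The analogy is crude (machine-learning practitioners would call the third loop hyperparameter tuning or meta-learning), but it conveys the point that adding a level of feedback changes what kind of adaptation is possible, not just how fast it runs.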
If biology has invented T3-systems and anapoiesis and has made a good use of them, there is no reason why we should not be able to do the same in machines.
Danko Nikolić is a brain and mind scientist, running an electrophysiology lab at the Max Planck Institute for Brain Research, and is the creator of the concept of ideasthesia. More about practopoiesis can be read here
ORIGINAL: Singularity Web

5 Robots Booking It to a Classroom Near You

IMAGE: ANDY BAKER/GETTY IMAGES

Robots are the new kids in school.
The technological creations are taking on serious roles in the classroom. As robotic technology accelerates, school administrators all over the world are working out how to put robots to use in education, from elementary through high school.
In South Korea, robots are replacing English teachers entirely, entrusted with leading and teaching entire classrooms. In Alaska, some robots are replacing the need for teachers to physically be present at all.
Robotics 101 is now in session. Here are five ways robots are being introduced into schools.
1. Nao Robot as math teacher
IMAGE: WIKIPEDIA
At Harlem school PS 76, a Nao robot created in France and nicknamed Projo helps students improve their math skills. It’s small, about the size of a stuffed animal, and sits by a computer to assist students working on math and science problems online.
Sandra Okita, a teacher at the school, told The Wall Street Journal the robot gauges how students interact with non-human teachers. The students have taken to the humanoid robotic peer, who can speak and react, saying it’s helpful and gives the right amount of hints to help them get their work done.
2. Aiding children with autism
The Nao Robot also helps improve social interaction and communication for children with autism. The robots were introduced in a classroom in Birmingham, England in 2012, to play with children in elementary school. Though the children were intimidated at first, they’ve taken to the robotic friend, according to The Telegraph.
3. VGo robot for ill children

Sick students will never have to miss class again if the VGo robot catches on. Created by VGo Communications, the rolling robot has a webcam and can be controlled and operated remotely via computer. About 30 students with special needs nationwide have been using the robot to attend classes.
For example, a 12-year-old Texas student with leukemia kept up with classmates by using a VGo robot. With a price tag of about $6,000, the robots aren’t easily accessible, but they’re a promising sign of what’s to come.

4. Robots over teachers
In the South Korean town of Masan, robots are starting to replace teachers entirely. The government started using the robots to teach students English in 2010. The robots operate under supervision, but the plan is to have them lead a room exclusively in a few years, as robot technology develops.
5. Virtual teachers


IMAGE: FLICKR, SEAN MACENTEE
South Korea isn’t the only place getting virtual teachers. A school in Kodiak, Alaska has started using telepresence robots to beam teachers into the classroom. The tall, rolling robots have iPads attached to the top, which teachers will use to video chat with students.
The Kodiak Island Borough School District‘s superintendent, Stewart McDonald, told The Washington Times he was inspired to do this by the show The Big Bang Theory, which features a similar robot. Each robot costs about $2,000; the school bought 12 in early 2014.
