Artificial Intelligence Planning Course at Coursera by the University of Edinburgh

ORIGINAL: Coursera

About the Course

The course aims to provide a foundation in artificial intelligence techniques for planning, with an overview of the wide spectrum of different problems and approaches, including their underlying theory and their applications. It will allow you to:

  • Understand different planning problems
  • Have the basic know-how to design and implement AI planning systems
  • Know how to use AI planning technology for projects in different application domains
  • Have the ability to make use of AI planning literature

Planning is a fundamental part of intelligent systems. In this course, for example, you will learn the basic algorithms that are used in robots to deliberate over a course of actions to take. Simpler, reactive robots don’t need this, but if a robot is to act intelligently, this type of reasoning about actions is vital.
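
To make “deliberating over a course of actions” concrete, here is a minimal sketch in Python of the kind of forward state-space search over STRIPS-style operators that the syllabus below touches on. The toy delivery domain and all fact names are invented for illustration; this is not course code.

    from collections import deque

    def make_action(name, pre, add, delete):
        # A STRIPS-style action: preconditions, add list, delete list.
        return {"name": name, "pre": frozenset(pre),
                "add": frozenset(add), "del": frozenset(delete)}

    def plan(initial, goal, actions):
        """Breadth-first forward search from the initial state to a goal state."""
        start = frozenset(initial)
        frontier = deque([(start, [])])
        visited = {start}
        while frontier:
            state, steps = frontier.popleft()
            if frozenset(goal) <= state:          # all goal facts hold
                return steps
            for a in actions:
                if a["pre"] <= state:             # action is applicable
                    nxt = (state - a["del"]) | a["add"]
                    if nxt not in visited:
                        visited.add(nxt)
                        frontier.append((nxt, steps + [a["name"]]))
        return None                               # no plan exists

    # Toy domain: carry a package from room A to room B.
    acts = [
        make_action("pick_up",  {"robot_A", "pkg_A", "hand_empty"}, {"holding"}, {"pkg_A", "hand_empty"}),
        make_action("move_A_B", {"robot_A"}, {"robot_B"}, {"robot_A"}),
        make_action("drop",     {"robot_B", "holding"}, {"pkg_B", "hand_empty"}, {"holding"}),
    ]
    print(plan({"robot_A", "pkg_A", "hand_empty"}, {"pkg_B"}, acts))   # ['pick_up', 'move_A_B', 'drop']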

Course Syllabus

Week 1: Introduction and Planning in Context
Week 2: State-Space Search: Heuristic Search and STRIPS
Week 3: Plan-Space Search and HTN Planning
One-week catch-up break
Week 4: Graphplan and Advanced Heuristics
Week 5: Plan Execution and Applications
Exam week

Recommended Background

The MOOC is based on a Masters level course at the University of Edinburgh but is designed to be accessible at several levels of engagement from an “Awareness Level”, through the core “Foundation Level” requiring a basic knowledge of logic and mathematical reasoning, to a more involved “Performance Level” requiring programming and other assignments.

Suggested Readings

The course follows a textbook, but it is not required for the course:
Automated Planning: Theory & Practice (The Morgan Kaufmann Series in Artificial Intelligence) by M. Ghallab, D. Nau, and P. Traverso (Elsevier, ISBN 1-55860-856-7) 2004.

Course Format

Five weeks of study comprising 10 hours of video lecture material and special feature videos. Quizzes and assessments throughout the course will assist in learning. Some weeks will involve recommended readings. Discussion on the course forum and via other social media will be encouraged. A mid-course catch-up break week and a final week for exams and completion of assignments allow for flexibility in study.

You can engage with the course at a number of levels to suit your interests and the time you have available:

  • Awareness Level – gives an overview of the topic, along with introductory videos and application related features. This level is likely to require 2-3 hours of study per week.
  • Foundation Level – is the core taught material on the course and gives a grounding in AI planning technology and algorithms. This level is likely to require 5-6 hours of study per week.
  • Performance Level – is for those interested in carrying out additional programming assignments and engaging in creative challenges to understand the subject more deeply. This level is likely to require 8 hours or more of study per week.

FAQ

  • Will I get a certificate after completing this class? Students who complete the class will be offered a Statement of Accomplishment signed by the instructors.
  • Do I earn University of Edinburgh credits upon completion of this class? The Statement of Accomplishment is not part of a formal qualification from the University. However, it may be useful to demonstrate prior learning and interest in your subject to a higher education institution or potential employer.
  • What resources will I need for this class? Nothing is required, but if you want to try out implementing some of the algorithms described in the lectures you’ll need access to a programming environment. No specific programming language is required. Also, you may want to download existing planners and try those out. This may require you to compile them first.
  • Can I contact the course lecturers directly? You will appreciate that such direct contact would be difficult to manage. You are encouraged to use the course social network and discussion forum to raise questions and seek inputs. The tutors will participate in the forums, and will seek to answer frequently asked questions, in some cases by adding to the course FAQ area.
  • What Twitter hash tag should I use? Use the hash tag #aiplan for tweets about the course.
  • How come this is free? We are passionate about open on-line collaboration and education. Our taught AI planning course at Edinburgh has always published its course materials, readings and resources on-line for anyone to view. Our own on-campus students can access these materials at times when the course is not available if it is relevant to their interests and projects. We want to make the materials available in a more accessible form that can reach a broader audience who might be interested in AI planning technology. This achieves our primary objective of getting such technology into productive use. Another benefit for us is that more people get to know about courses in AI in the School of Informatics at the University of Edinburgh, or get interested in studying or collaborating with us.
  • When will the course run again? It is likely that the 2015 session will be the final time this course runs as a Coursera MOOC, but we intend to leave the course wiki open for further study and use across course instances.

How IBM Got Brainlike Efficiency From the TrueNorth Chip

ORIGINAL: IEEE Spectrum
By Jeremy Hsu
Posted 29 Sep 2014 | 19:01 GMT


TrueNorth takes a big step toward using the brain’s architecture to reduce computing’s power consumption

Photo: IBM

Neuromorphic computer chips meant to mimic the neural network architecture of biological brains have generally fallen short of their wetware counterparts in efficiency—a crucial factor that has limited practical applications for such chips. That could be changing. At a power density of just 20 milliwatts per square centimeter, IBM’s new brain-inspired chip comes tantalizingly close to such wetware efficiency. The hope is that it could bring brainlike intelligence to the sensors of smartphones, smart cars, and—if IBM has its way—everything else.

The latest IBM neurosynaptic computer chip, called TrueNorth, consists of 1 million programmable neurons and 256 million programmable synapses conveying signals between the digital neurons. Each of the chip’s 4,096 neurosynaptic cores includes the entire computing package:

  • memory, 
  • computation, and 
  • communication. 

Such architecture helps to bypass the bottleneck in traditional von Neumann computing, where program instructions and operation data cannot pass through the same route simultaneously.
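
A quick back-of-the-envelope breakdown of those figures shows how the resources divide across cores. The assumption that the headline totals are rounded from powers of two is mine, not the article’s:

    # Per-core budget implied by the quoted figures: 4,096 cores,
    # ~1 million neurons, ~256 million synapses.
    cores = 4096
    neurons = 2**20        # reported as "1 million"
    synapses = 2**28       # reported as "256 million"

    print(neurons // cores)    # 256 neurons per core
    print(synapses // cores)   # 65,536 synapses per core, i.e. a 256 x 256 crossbar
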
“This is literally a supercomputer the size of a postage stamp, light like a feather, and low power like a hearing aid,” says Dharmendra Modha, IBM fellow and chief scientist for brain-inspired computing at IBM Research-Almaden, in San Jose, Calif.

Such chips can emulate the human brain’s ability to recognize different objects in real time; TrueNorth showed it could distinguish among pedestrians, bicyclists, cars, and trucks. IBM envisions its new chips working together with traditional computing devices as hybrid machines, providing a dose of brainlike intelligence. The chip’s architecture, developed together by IBM and Cornell University, was first detailed in August in the journal Science.


Meet Amelia: the computer that’s after your job

29 Sep 2014
A new artificially intelligent computer system called ‘Amelia’ – that can read and understand text, follow processes, solve problems and learn from experience – could replace humans in a wide range of low-level jobs

Amelia aims to answer the question, can machines think? Photo: IPsoft

In February 2011 an artificially intelligent computer system called IBM Watson astonished audiences worldwide by beating the two all-time greatest Jeopardy champions at their own game.

Thanks to its ability to apply

  • advanced natural language processing,
  • information retrieval,
  • knowledge representation,
  • automated reasoning, and
  • machine learning technologies,

Watson consistently outperformed its human opponents on the American quiz show Jeopardy.

Watson represented an important milestone in the development of artificial intelligence, but the field has been progressing rapidly – particularly with regard to natural language processing and machine learning.

In 2012, Google used 16,000 computer processors to build a simulated brain that could correctly identify cats in YouTube videos; the Kinect, which provides a 3D body-motion interface for Microsoft’s Xbox, uses algorithms that emerged from artificial intelligence research, as does the iPhone’s Siri virtual personal assistant.

Today a new artificial intelligence computing system has been unveiled, which promises to transform the global workforce. Named ‘Amelia‘ after American aviator and pioneer Amelia Earhart, the system is able to shoulder the burden of often tedious and laborious tasks, allowing human co-workers to take on more creative roles.

“Watson is perhaps the best data analytics engine that exists on the planet; it is the best search engine that exists on the planet; but IBM did not set out to create a cognitive agent. It wanted to build a program that would win Jeopardy, and it did that,” said Chetan Dube, chief executive officer of IPsoft, the company behind Amelia.

“Amelia, on the other hand, started out not with the intention of winning Jeopardy, but with the pure intention of answering the question posed by Alan Turing in 1950 – can machines think?”


Amelia learns by following the same written instructions as her human colleagues, but is able to absorb information in a matter of seconds.
She understands the full meaning of what she reads rather than simply recognising individual words. This involves

  • understanding context,
  • applying logic and
  • inferring implications.

When exposed to the same information as any new employee in a company, Amelia can quickly apply her knowledge to solve queries in a wide range of business processes. Just like any smart worker she learns from her colleagues and, by observing their work, she continually builds her knowledge.

While most ‘smart machines’ require humans to adapt their behaviour in order to interact with them, Amelia is intelligent enough to interact like a human herself. She speaks more than 20 languages, and her core knowledge of a process needs only to be learned once for her to be able to communicate with customers in their language.

Independently, rather than through time-intensive programming, Amelia creates her own ‘process map’ of the information she is given so that she can work out for herself what actions to take depending on the problem she is solving.

“Intelligence is the ability to acquire and apply knowledge. If a system claims to be intelligent, it must be able to read and understand documents, and answer questions on the basis of that. It must be able to understand processes that it observes. It must be able to solve problems based on the knowledge it has acquired. And when it cannot solve a problem, it must be capable of learning the solution through noticing how a human did it,” said Dube.

IPsoft has been working on this technology for 15 years with the aim of developing a platform that does not simply mimic human thought processes but can comprehend the underlying meaning of what is communicated – just like a human.

Just as machines transformed agriculture and manufacturing, IPsoft believes that cognitive technologies will drive the next evolution of the global workforce, so that in the future companies will have digital workforces that comprise a mixture of human and virtual employees.

Amelia has already been trialled within a number of Fortune 1000 companies, in areas such as manning technology help desks, procurement processing, financial trading operations support and providing expert advice for field engineers.

In each of these environments, she has learnt not only from reading existing manuals and situational context but also by observing and working with her human colleagues and discerning for herself a map of the business processes being followed.

In a help desk situation, for example, Amelia can understand what a caller is looking for, ask questions to clarify the issue, find and access the required information and determine which steps to follow in order to solve the problem.

As a knowledge management advisor, she can help engineers working in remote locations who are unable to carry detailed manuals, by diagnosing the cause of failed machinery and guiding them towards the best steps to rectifying the problem.

During these trials, Amelia was able to go from solving very few queries independently to 42 per cent of the most common queries within one month. By the second month she could answer 64 per cent of those queries independently.

“That’s a true learning cognitive agent. Learning is the key to the kingdom, because humans learn from experience. A child may need to be told five times before they learn something, but Amelia needs to be told only once,” said Dube.

“Amelia is that Mensa kid, who personifies a major breakthrough in cognitive technologies.”

Analysts at Gartner predict that, by 2017, managed services offerings that make use of autonomics and cognitive platforms like Amelia will drive a 60 per cent reduction in the cost of services, enabling organisations to apply human talent to higher level tasks requiring creativity, curiosity and innovation.

IPsoft even has plans to start embedding Amelia into humanoid robots such as Softbank’s Pepper, Honda’s Asimo or Rethink Robotics’ Baxter, allowing her to take advantage of their mechanical functions.

“The robots have got a fair degree of sophistication in all the mechanical functions – the ability to climb up stairs, the ability to run, the ability to play ping pong. What they don’t have is the brain, and we’ll be supplementing that brain part with Amelia,” said Dube.

“I am convinced that in the next decade you’ll pass someone in the corridor and not be able to discern if it’s a human or an android.”

Given the premise of IPsoft’s artificial intelligence system, it seems logical that the ultimate measure of Amelia’s success would be passing the Turing Test – which sets out to see whether humans can discern whether they are interacting with a human or a machine.

Earlier this year, a chatbot named Eugene Goostman became the first machine to pass the Turing Test by convincingly imitating a 13-year-old boy. In a five-minute keyboard conversation with a panel of human judges, Eugene managed to convince 33 per cent that it was human.

Interestingly, however, IPsoft believes that the Turing Test needs reframing, to redefine what it means to ‘think’. While Eugene was able to imitate natural language, he was only mimicking understanding. He did not learn from the interaction, nor did he demonstrate problem solving skills.

“Natural language understanding is a big step up from parsing. Parsing is syntactic, understanding is semantic, and there’s a big cavern between the two,” said Dube.

“The aim of Amelia is not just to get an accolade for managing to fool one in three people on a panel. The assertion is to create something that can answer to the fundamental need of human beings – particularly after a certain age – of companionship. That is our intent.”


AI For Everyone: Startups Democratize Deep Learning So Google And Facebook Don’t Own It All

ORIGINAL: Forbes
9/17/2014

When I arrived at a Stanford University auditorium Tuesday night for what I thought would be a pretty nerdy panel on deep learning, a fast-growing branch of artificial intelligence, I figured I must be in the wrong place–maybe a different event for all the new Stanford students and their parents visiting the campus. Nope. Despite the highly technical nature of deep learning, some 600 people had shown up for the sold-out AI event, presented by VLAB, a Stanford-based chapter of the MIT Enterprise Forum. The turnout was a stark sign of the rising popularity of deep learning, an approach to AI that tries to mimic the activity of the brain in so-called neural networks. In just the last couple of years, deep learning software from giants like Google, Facebook, and China’s Baidu, as well as a raft of startups, has led to big advances in image and speech recognition, medical diagnostics, stock trading, and more. “There’s quite a bit of excitement in this area,” panel moderator Steve Jurvetson, a partner with the venture firm DFJ, said with uncustomary understatement.

In the past year or two, big companies have been locked in a land grab for talent, paying big bucks for startups and even hiring away deep learning experts from each other. But this event, focused mostly on startups, including several that demonstrated their products before the panel, also revealed there’s still a lot of entrepreneurial activity. In particular, several companies aim to democratize deep learning by offering it as a service or coming up with cheaper hardware to make it more accessible to businesses.

Jurvetson explained why deep learning has pushed the boundaries of AI so much further recently.

  • For one, there’s a lot more data around because of the Internet, there’s metadata such as tags and translations, and there’s even services such as Amazon’s Mechanical Turk, which allows for cheap labeling or tagging.
  • There are also algorithmic advances, especially for using unlabeled data.
  • And computing has advanced enough to allow much larger neural networks with more synapses–in the case of Google Brain, for instance, 1 billion synapses (though that’s still a very long way from the 100 trillion synapses in the adult human brain).

Adam Berenzweig, cofounder and CTO of image recognition firm Clarifai and former engineer at Google for 10 years, made the case that deep learning is “adding a new primary sense to computing” in the form of useful computer vision. “Deep learning is forming that bridge between the physical world and the world of computing,” he said.

And it’s allowing that to happen in real time. “Now we’re getting into a world where we can take measurements of the physical world, like pixels in a picture, and turn them into symbols that we can sort,” he said. Clarifai has been working on taking an image, producing a meaningful description very quickly, in about 80 milliseconds, and showing very similar images.

One interesting application relevant to advertising and marketing, he noted: Once you can recognize key objects in images, you can target ads not just on keywords but on objects in an image.

DFJ’s Steve Jurvetson led a panel of AI experts at a Stanford event Sept. 16.

Even more sweeping, said Naveen Rao, cofounder and CEO of deep-learning hardware and software startup Nervana Systems and former researcher in neuromorphic computing at Qualcomm, deep learning is “that missing link between computing and what the brain does.” Instead of doing specific computations very fast, as conventional computers do, “we can start building new hardware to take computer processing in a whole new direction,” assessing probabilities, like the brain does. “Now there’s actually a business case for this kind of computing,” he said.

And not just for big businesses. Elliot Turner, founder and CEO of AlchemyAPI, a deep-learning platform in the cloud, said his company’s mission is to “democratize deep learning.” The company is working in 10 industries from advertising to business intelligence, helping companies apply it to their businesses. “I look forward to the day that people actually stop talking about deep learning, because that will be when it has really succeeded,” he added.

Despite the obvious advantages of large companies such as Google, which have untold amounts of both data and computer power that deep learning requires to be useful, startups can still have a big impact, a couple of the panelists said. “There’s data in a lot of places. There’s a lot of nooks and crannies that Google doesn’t have access to,” Berenzweig said hopefully. “Also, you can trade expertise for data. There’s also a question of how much data is enough.”

Turner agreed. “It’s not just a matter of stockpiling data,” he said. “Better algorithms can help an application perform better.” He noted that even Facebook, despite its wealth of personal data, found this in its work on image recognition.

Those algorithms may have broad applicability, too. Even if they’re initially developed for specific applications such as speech recognition, it looks like they can be used on a wide variety of applications. “These algorithms are extremely fungible,” said Rao. And he said companies such as Google aren’t keeping them as secret as expected, often publishing them in academic journals and at conferences–though Berenzweig noted that “it takes more than what they publish to do what they do well.”

For all that, it’s not yet clear how much deep learning systems will actually emulate the brain, even if they prove intelligent. But Ilya Sutskever, research scientist at Google Brain and a protégé of Geoffrey Hinton, the University of Toronto deep learning guru since the 1980s who’s now working part-time at Google, said it almost doesn’t matter. “You can still do useful predictions” using them. And while the learning principles for dealing with all the unlabeled data out there remain primitive, he said he and many others are working on this and likely will make even more progress.

Rao said he’s unworried that we’ll end up creating some kind of alien intelligence that could run amok, if only because advances will be driven by market needs. Besides, he said, “I think a lot of the similarities we’re seeing in computation and brain functions is coincidental. It’s driven that way because we constrain it that way.”

OK, so how are these companies planning to make money on this stuff? Jurvetson wondered. Of course, we’ve already seen improvements in speech and image recognition that make smartphones and apps more useful, leading more people to buy them. “Speech recognition is useful enough that I use it,” said Sutskever. “I’d be happy if I didn’t press a button ever again. And language translation could have a very large impact.”

Beyond that, Berenzweig said, “we’re looking for the low-hanging fruit,” common use cases such as visual search for shopping, organizing your personal photos, and various business niches such as security.



Danko Nikolic on Singularity 1 on 1: Practopoiesis Tells Us Machine Learning Is Not Enough!

If there’s ever been a case when I just wanted to jump on a plane and go interview someone in person, not because they are famous but because they have created a totally unique and arguably seminal theory, it has to be Danko Nikolic. I believe Danko’s theory of practopoiesis is that good, and that he should, and probably eventually will, become known around the world for it. Unfortunately, however, I don’t have a budget of thousands of dollars per interview that would allow me to pay for my audio and video team to travel to Germany and produce the quality that Nikolic deserves. So I’ve had to settle for Skype. And Skype refused to cooperate on that day, even though both Danko and I have pretty much the fastest internet connections money can buy. Luckily, despite the poor video quality, our audio was very good, and if there’s ever been an interview where you ought to disregard the video quality and focus on the content, it is this one.
During our 67 min conversation with Danko we cover a variety of interesting topics such as:

As always you can listen to or download the audio file above or scroll down and watch the video interview in full.
To show your support you can write a review on iTunes or make a donation.
Who is Danko Nikolic?
The main motive for my studies is the explanatory gap between the brain and the mind. My interest is in how the physical world of neuronal activity produces the mental world of perception and cognition. I am associated with

  • the Max-Planck Institute for Brain Research,
  • Ernst Strüngmann Institute,
  • Frankfurt Institute for Advanced Studies, and
  • the University of Zagreb.
I approach the problem of the explanatory gap from both sides, bottom-up and top-down. The bottom-up approach investigates brain physiology. The top-down approach investigates behavior and experiences. Each of the two approaches led me to develop a theory: the work on physiology resulted in the theory of practopoiesis; the work on behavior and experiences led to the phenomenon of ideasthesia.
The empirical work in the background of those theories involved

  • simultaneous recordings of activity of 100+ neurons in the visual cortex (extracellular recordings),
  • behavioral and imaging studies in visual cognition (attention, working memory, long-term memory), and
  • empirical investigations of phenomenal experiences (synesthesia).
The ultimate goal of my studies is twofold.

  • First, I would like to achieve conceptual understanding of how the dynamics of physical processes creates the mental ones. I believe that the work on practopoiesis presents an important step in this direction and that it will help us eventually address the hard problem of consciousness and the mind-body problem in general.
  • Second, I would like to use this theoretical knowledge to create artificial systems that are intelligent and adaptive in the way biological systems are. This would have implications for our technology.
A reason why one would be interested in studying the brain in the first place is described here: Why brain?

Neurons in human skin perform advanced calculations

[2014-09-01] Neurons in human skin perform advanced calculations previously believed to be possible only for the brain. This is according to a study from Umeå University in Sweden published in the journal Nature Neuroscience.

A fundamental characteristic of neurons that extend into the skin and record touch, so-called first-order neurons in the tactile system, is that they branch in the skin so that each neuron reports touch from many highly-sensitive zones on the skin.
According to researchers at the Department of Integrative Medical Biology, IMB, Umeå University, this branching allows first-order tactile neurons not only to send signals to the brain that something has touched the skin, but also process geometric data about the object touching the skin.
“Our work has shown that two types of first-order tactile neurons that supply the sensitive skin at our fingertips not only signal information about when and how intensely an object is touched, but also information about the touched object’s shape,” says Andrew Pruszynski, one of the researchers behind the study.
The study also shows that the sensitivity of individual neurons to the shape of an object depends on the layout of the neuron’s highly-sensitive zones in the skin.
“Perhaps the most surprising result of our study is that these peripheral neurons, which are engaged when a fingertip examines an object, perform the same type of calculations done by neurons in the cerebral cortex. Somewhat simplified, it means that our touch experiences are already processed by neurons in the skin before they reach the brain for further processing,” says Andrew Pruszynski.
For more information about the study, please contact Andrew Pruszynski, post doc at the Department of Integrative Medical Biology, IMB, Umeå University. He is English-speaking and can be reached at:
Phone: +46 90 786 51 09; Mobile: +46 70 610 80 96

This AI-Powered Calendar Is Designed to Give You Me-Time

ORIGINAL: Wired
09.02.14
Timeful intelligently schedules to-dos and habits on your calendar. Timeful
No one on their death bed wishes they’d taken a few more meetings. Instead, studies find people consistently say things like: 
  • I wish I’d spent more time with my friends and family;
  • I wish I’d focused more on my health; 
  • I wish I’d picked up more hobbies.
That’s what life’s all about, after all. So, question: Why don’t we ever put any of that stuff on our calendar?
That’s precisely what the folks behind Timeful want you to do. Their app (iPhone, free) is a calendar designed to handle it all. You don’t just put in the things you need to do—meeting on Thursday; submit expenses; take out the trash—but also the things you want to do, like going running more often or brushing up on your Spanish. Then, the app algorithmically generates a schedule to help you find time for it all. The more you use it, the smarter that schedule gets.
Even in the crowded categories of calendars and to-do lists, Timeful stands out. Not many iPhone calendar apps were built by renowned behavioral psychologists and machine learning experts, nor have many attracted investor attention to the tune of $7 million.
It was born as a research project at Stanford, where Jacob Bank, a computer science PhD candidate, and his advisor, AI expert Yoav Shoham, started exploring how machine learning could be applied to time management. To help with their research, they brought on Dan Ariely, the influential behavior psychologist and author of the book Predictably Irrational. It didn’t take long for the group to realize that there was an opportunity to bring time management more in step with the times. “It suddenly occurred to me that my calendar and my grandfather’s calendar are essentially the same,” Shoham recalls.
A Tough Problem and an Artificially Intelligent Solution
Like all of Timeful’s founders, Shoham sees time as our most valuable resource–far more valuable, even, than money. And yet he says the tools we have for managing money are far more sophisticated than the ones we have for managing time. In part, that’s because time poses a tricky problem. Simply put, it’s tough to figure out the best way to plan your day. On top of that, people are lazy, and prone to distraction. “We have a hard computational problem compounded by human mistakes,” Shoham says.
To address that lazy human bit, Timeful is designed around a simple fact: When you schedule something, you’re far more likely to get it done. Things you put in the app don’t just live in some list. Everything shows up on the calendar. Meetings and appointments get slotted at the times they take place, as you’d expect. But at the start of the day, the app also blocks off time for your to-dos and habits, rendering them as diagonally-slatted rectangles on your calendar which you can accept, dismiss, or move around as you desire.
Suggestions have diagonal slats. Timeful
In each case, Timeful takes note of how you respond and adjusts its “intention rank,” as the company calls its scheduling algorithm. This is the special sauce that elevates Timeful from dumb calendar to something like an assistant. As Bank sees it, the more nebulous lifestyle events we’d never think to put on our calendar are a perfect subject for some machine learning smarts. “Habits have the really nice property that they repeat over time with very natural patterns,” he says. “So if you put in, ‘run three times a week,’ we can quickly learn what times you like to run and when you’re most likely to do it.”
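
Timeful hasn’t published how its “intention rank” actually works, so the following is only a toy sketch of the kind of preference learning Bank describes: count how often suggested slots for a habit are accepted or dismissed at each hour of the day, and favor the hour with the best acceptance rate. All names and numbers here are invented.

    # Toy illustration only -- not Timeful's actual "intention rank" algorithm.
    from collections import defaultdict

    class HabitScheduler:
        def __init__(self):
            # Laplace-smoothed accept/offer counts per hour of day (0-23).
            self.accepted = defaultdict(lambda: 1)
            self.offered = defaultdict(lambda: 2)

        def record(self, hour, accepted):
            """Update counts after the user accepts or dismisses a suggested slot."""
            self.offered[hour] += 1
            if accepted:
                self.accepted[hour] += 1

        def suggest(self, candidate_hours):
            """Pick the free hour with the highest estimated acceptance rate."""
            return max(candidate_hours, key=lambda h: self.accepted[h] / self.offered[h])

    sched = HabitScheduler()
    for hour, ok in [(7, True), (7, True), (19, False), (12, False), (7, True)]:
        sched.record(hour, ok)
    print(sched.suggest([7, 12, 19]))   # -> 7: early-morning runs were accepted most often
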
The other machine learning challenge involved with Timeful is the problem of input. Where many other to-do apps try to make the input process as frictionless as possible, Timeful often needs to ask a few follow-up questions to schedule tasks properly, like how long you expect them to take, and if there’s a deadline for completion. As with all calendars and to-do apps, Timeful’s only as useful as the stuff you put on it, and here that interaction’s a fairly heavy one. For many, it could simply be too much work for the reward. Plus, isn’t it a little weird to block off sixty minutes to play with your kid three times a week?
Bank admits that it takes longer to put things into Timeful than some other apps, and the company’s computer scientists are actively trying to come up with new ways to offload the burden algorithmically. In future versions, Bank hopes to be able to automatically pull in data from other apps and services. A forthcoming web version could also make input easier (an Android version is on the way too). But as Bank sees it, there may be an upside to having a bit of friction here. By going through the trouble of putting something in the app, you’re showing that you truly want to get it done, and that could help keep Timeful from becoming a “list of shame” like other to-do apps. (And as far as the kid thing goes, it might feel weird, but if scheduling family time on your calendar results in more family time, then it’s kinda hard to knock, no?)
How Much Scheduling Is Too Much?
Perhaps the bigger question is how much day-to-day optimization people can really swallow. Having been conditioned to see the calendar as a source of responsibilities and obligations, opening up one’s preferred scheduling application and seeing a long white column stretching down for the day can be the source of an almost embarrassing degree of relief. Thank God, now I can finally get something done! With Timeful, that feeling becomes extinct. Every new dawn brings a whole bunch of new stuff to do.
Two of Timeful’s co-founders, Jacob Bank (top) and Yoav Shoham Timeful
Bank and Shoham are acutely aware of this thorny problem. “Sometimes there’s a tension between what’s best for a user and what the user wants to accept, and we need to be really delicate about that,” Bank says. In the app, you can fine-tune just how aggressive you want it to be in its planning, and a significant part of the design process was making sure the app’s suggestions felt like suggestions, not demands. Still, we might crave that structure more than we think. After some early user tests, the company actually cranked up the pushiness of Timeful’s default setting; the overwhelming response from beta testers was “give me more!”
The vision is for Timeful to become something akin to a polite assistant. Shoham likens it to Google Now for your schedule–a source of informed suggestions about what to do next. Whether you take those suggestions or leave them is entirely up to you. “This is not your paternalistic dad telling you, ‘thou shall do this!’” he says. “It’s not your guilt-abusing mom. Well, maybe there’s a little bit of that.”

Practopoiesis: How cybernetics of biology can help AI

To create any form of AI, we must copy from biology. The argument goes as follows. A brain is a biological product, and so, then, must be its products, such as perception, insight, inference, logic, mathematics, etc. By creating AI we inevitably tap into something that biology has already invented on its own. It thus follows that the more we want an AI system to be similar to a human—e.g., to get a better grade on the Turing test—the more we need to copy the biology.
When it comes to describing living systems, we traditionally adopt different explanatory principles for different levels of system organization.

  1. One set of principles is used for "low-level" biology, such as the evolution of our genome through natural selection; a completely different set is used for describing the expression of those genes.
  2. A yet different type of story is used to explain what our neural networks do.
  3. Needless to say, the descriptions at the very top of that organizational hierarchy—at the level of our behavior—are made with concepts that again live in their own world.
But what if it were possible to unify all these different aspects of biology and describe them all by a single set of principles? What if we could use the same fundamental rules to talk about the physiology of a kidney and the process of a conscious thought? What if we had concepts that could give us insights into the mental operations underlying logical inferences on the one hand, and into the relation between phenotype and genotype on the other? This request is not so outrageous. After all, all those phenomena are biological.
One can argue that such an all-embracing theory of the living would be beneficial also for further developments of AI. The theory could guide us on what is possible and what is not. Given a certain technological approach, what are its limitations? Maybe it could answer the question of what the unitary components of intelligence are. And does my software have enough of them?
For more inspiration, let us look into the Shannon-Wiener theory of information and appreciate how helpful this theory is for dealing with various types of communication channels (including memory storage, which is also a communication channel, only over time rather than space). We can calculate how much channel capacity is needed to transmit (store) certain contents. Also, we can easily compare two communication channels and determine which one has more capacity. This allows us to directly compare devices that are otherwise incomparable. For example, an interplanetary communication system based on satellites can be compared to DNA located within the nucleus of a human cell. Only thanks to information theory can we calculate whether a given satellite connection has enough capacity to transfer the DNA information about a human person to a hypothetical recipient on another planet. (The answer is: yes, easily.) Thus, information theory is invaluable in making these kinds of engineering decisions.
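
As a worked example of that kind of calculation (the link rate below is an assumed figure, chosen only for illustration; the genome size is the standard estimate of roughly 3.2 billion base pairs):

    # Can a satellite link carry the information in a human genome? A rough check.
    base_pairs = 3.2e9                 # approximate size of the human genome
    bits = base_pairs * 2              # four possible bases -> log2(4) = 2 bits per base
    gigabytes = bits / 8 / 1e9         # ~0.8 GB

    link_bps = 2e6                     # assume a 2 Mbit/s interplanetary link
    hours = bits / link_bps / 3600     # ~0.9 hours
    print(f"~{gigabytes:.1f} GB, ~{hours:.1f} hours at 2 Mbit/s")
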
So, how about intelligence? Wouldn’t it be good to come into possession of a similar general theory for adaptive intelligent behavior? Maybe we could use certain quantities other than bits to tell us why the intelligence of plants lags behind that of primates, or to pin down the essential ingredients that distinguish human intelligence from that of a chimpanzee. Using the same theory we could compare

  • an abacus, 
  • a hand-held calculator, 
  • a supercomputer, and 
  • a human intellect.
The good news is that, as of recently, such an overarching biological theory exists, and it is called practopoiesis. Derived from the Ancient Greek praxis + poiesis, practopoiesis means creation of actions. The name reflects the fundamental presumption about the common property that can be found across all the different levels of organization of biological systems:

  • Gene expression mechanisms act; 
  • bacteria act; 
  • organs act; 
  • organisms as a whole act.
Due to this focus on biological action, practopoiesis has a strong cybernetic flavor, as it has to deal with the need of acting systems to close feedback loops. Input is needed to trigger actions and to determine whether more actions are needed. For that reason, the theory is founded on the basic theorems of cybernetics, namely the law of requisite variety and the good regulator theorem.
The key novelty of practopoiesis is that it introduces the mechanisms explaining how different levels of organization mutually interact. These mechanisms help explain how genes create the anatomy of the nervous system, or how anatomy creates behavior.
When practopoiesis is applied to the human mind and to AI algorithms, the results are quite revealing.
To understand those, we need to introduce the concept of the practopoietic traverse. Without going into details on what a traverse is, let us just say that it is a quantity with which one can compare the different capabilities of systems to adapt. A traverse is a kind of practopoietic equivalent to the bit of information in the Shannon-Wiener theory. If we can compare two communication channels according to the number of bits of information transferred, we can compare two adaptive systems according to the number of traverses. Thus, a traverse is not a measure of how much knowledge a system has (for that, the good old bit does the job just fine). It is rather a measure of how much capability the system has to adjust its existing knowledge, for example, when new circumstances emerge in the surrounding world.
To the best of my knowledge, no artificial intelligence algorithm that is being used today has more than two traverses. That means that these algorithms interact with the surrounding world at a maximum of two levels of organization. For example, an AI algorithm may receive satellite images at one level of organization and the categories into which to learn to classify those images at another level of organization. We would say that this algorithm has two traverses of cybernetic knowledge. In contrast, biological behaving systems (that is, animals, Homo sapiens) operate with three traverses.
This makes a whole lot of difference in adaptive intelligence. Two-traversal systems can be super-fast and omni-knowledgeable, and their tech specs may list peta-everything, which they sometimes already do, but these systems nevertheless remain comparatively dull when set against three-traversal systems, such as a three-year-old girl, or even a domestic cat.
To appreciate the difference between two and three traverses, let us go one step lower and consider systems with only one traverse. An example would be a PC computer without any advanced AI algorithm installed.
This computer is already light-years faster than I am at calculations, far better at memory storage, and beats me at spell checking without the processor even getting warm. And, paradoxically, I am still the smarter one around. Thus, computational capacity and adaptive intelligence are not the same.
Importantly, this same relationship of "me vs. the computer" holds for "me vs. a modern advanced AI algorithm". I am still the more intelligent one, although the computer may have more computational power. And the same relationship holds for "AI algorithm vs. non-AI computer". Even a small AI algorithm, implemented say on a single PC, is in many ways more intelligent than a petaflop supercomputer without AI. Thus, there is a certain hierarchy in adaptive intelligence that is not determined by memory size or the number of floating point operations executed per second, but by the ability to learn and adapt to the environment.
A key requirement for adaptive intelligence is the capacity to observe how well one is doing towards a certain goal combined with the capacity to make changes and adjust in light of the feedback obtained. Practopoiesis tells us that there is not only one step possible from non-adaptive to adaptive, but that multiple adaptive steps are possible. Multiple traverses indicate a potential for adapting the ways in which we adapt.
We can go even one step further down the adaptive hierarchy and consider the least adaptive systems, e.g., a book: provided that the book is large enough, it can contain all of the knowledge about the world, and yet it is not adaptive, as it cannot, for example, rewrite itself when something changes in that world. Typical computer software can do much more and administer many changes, but there is also a lot left that cannot be adjusted without a programmer. A modern AI system is even smarter and can reorganize its knowledge to a much higher degree. Still, these systems are incapable of making certain types of adjustments that a human person, or an animal, can make. Practopoiesis tells us that these systems fall into different adaptive categories, which are independent of the raw information processing capabilities of the systems. Rather, these adaptive categories are defined by the number of levels of organization at which the system receives feedback from the environment — also referred to as traverses.
We can thus make the following hierarchical list of the best exemplars in each adaptive category:
  • A book: dumbest; zero traverses
  • A computer: somewhat smarter; one traverse
  • An AI system: much smarter; two traverses
  • A human: rules them all; three traverses
Most importantly for creation of strong AI, practopoiesis tells us in which direction the technological developments should be heading:
Engineering creativity should be geared towards empowering the machines with one more traverse. To match a human, a strong AI system has to have three traverses.
Practopoietic theory explains also what is so special about the third traverse. Systems with three traverses (referred to as T3-systems) are capable of storing their past experiences in an abstract, general form, which can be used in a much more efficient way than in two-traversal systems. This general knowledge can be applied to interpretation of specific novel situations such that quick and well-informed inferences are made about what is currently going on and what actions should be executed next. This process, unique for T3-systems, is referred to as anapoiesis, and can be generally described as a capability to reconstruct cybernetic knowledge that the system once had and use this knowledge efficiently in a given novel situation.
If biology has invented T3-systems and anapoiesis and has made good use of them, there is no reason why we should not be able to do the same in machines.
Danko Nikolić is a brain and mind scientist, running an electrophysiology lab at the Max Planck Institute for Brain Research, and is the creator of the concept of ideasthesia. More about practopoiesis can be read here
ORIGINAL: Singularity Web

5 Robots Booking It to a Classroom Near You

IMAGE: ANDY BAKER/GETTY IMAGES

Robots are the new kids in school.
The technological creations are taking on serious roles in the classroom. With the accelerating pace of robotics technology, school administrators all over the world are plotting how to implement robots in education, from elementary through high school.
In South Korea, robots are replacing English teachers entirely, entrusted with leading and teaching entire classrooms. In Alaska, some robots are replacing the need for teachers to physically be present at all.
Robotics 101 is now in session. Here are five ways robots are being introduced into schools.
1. Nao Robot as math teacher
IMAGE: WIKIPEDIA
In the Harlem school PS 76, a Nao robot created in France and nicknamed Projo helps students improve their math skills. It’s small, about the size of a stuffed animal, and sits by a computer to assist students working on math and science problems online.
Sandra Okita, a teacher at the school, told The Wall Street Journal the robot gauges how students interact with non-human teachers. The students have taken to the humanoid robotic peer, who can speak and react, saying it’s helpful and gives the right amount of hints to help them get their work done.
2. Aiding children with autism
The Nao Robot also helps improve social interaction and communication for children with autism. The robots were introduced in a classroom in Birmingham, England in 2012, to play with children in elementary school. Though the children were intimidated at first, they’ve taken to the robotic friend, according to The Telegraph.
3. VGo robot for ill children

Sick students will never have to miss class again if the VGo robot catches on. Created by VGo Communications, the rolling robot has a webcam and can be controlled and operated remotely via computer. About 30 students with special needs nationwide have been using the $6,000 robot to attend classes.
For example, a 12-year-old Texas student with leukemia kept up with classmates by using a VGo robot. With a price tag of about $6,000, the robots aren’t easily accessible, but they’re a promising sign of what’s to come.

4. Robots over teachers
In the South Korean town of Masan, robots are starting to replace teachers entirely. The government started using the robots to teach students English in 2010. The robots operate under supervision, but the plan is to have them lead a room exclusively in a few years, as robot technology develops.
5. Virtual teachers


IMAGE: FLICKR, SEAN MACENTEE
South Korea isn’t the only place getting virtual teachers. A school in Kodiak, Alaska has started using telepresence robots to beam teachers into the classroom. The tall, rolling robots have iPads attached to the top, which teachers will use to video chat with students.
The Kodiak Island Borough School District‘s superintendent, Stewart McDonald, told The Washington Times he was inspired to do this because of the show The Big Bang Theory, which stars a similar robot. Each robot costs about $2,000; the school bought 12 total in early 2014.


DARPA Project Starts Building Human Memory Prosthetics

ORIGINAL: IEEE Spectrum
By Eliza Strickland
Posted 27 Aug 2014
The first memory-enhancing devices could be implanted within four years
Photo: Lawrence Livermore National Laboratory
Remember This? Lawrence Livermore engineer Vanessa Tolosa holds up a silicon wafer containing micromachined implantable neural devices for use in experimental memory prostheses.
“They’re trying to do 20 years of research in 4 years,” says Michael Kahana in a tone that’s a mixture of excitement and disbelief. Kahana, director of the Computational Memory Lab at the University of Pennsylvania, is mulling over the tall order from the U.S. Defense Advanced Research Projects Agency (DARPA). In the next four years, he and other researchers are charged with understanding the neuroscience of memory and then building a prosthetic memory device that’s ready for implantation in a human brain.
DARPA’s first contracts under its Restoring Active Memory (RAM) program challenge two research groups to construct implants for veterans with traumatic brain injuries that have impaired their memories. Over 270,000 U.S. military service members have suffered such injuries since 2000, according to DARPA, and there are no truly effective drug treatments. This program builds on an earlier DARPA initiative focused on building a memory prosthesis, under which a different group of researchers had dramatic success in improving recall in mice and monkeys.
Kahana’s team will start by searching for biological markers of memory formation and retrieval. For this early research, the test subjects will be hospitalized epilepsy patients who have already had electrodes implanted to allow doctors to study their seizures. Kahana will record the electrical activity in these patients’ brains while they take memory tests.
“The memory is like a search engine,” Kahana says. “In the initial memory encoding, each event has to be tagged. Then in retrieval, you need to be able to search effectively using those tags.” He hopes to find the electric signals associated with these two operations.
Once they’ve found the signals, researchers will try amplifying them using sophisticated neural stimulation devices. Here Kahana is working with the medical device maker Medtronic, in Minneapolis, which has already developed one experimental implant that can both record neural activity and stimulate the brain. Researchers have long wanted such a “closed-loop” device, as it can use real-time signals from the brain to define the stimulation parameters.
Kahana notes that designing such closed-loop systems poses a major engineering challenge. Recording natural neural activity is difficult when stimulation introduces new electrical signals, so the device must have special circuitry that allows it to quickly switch between the two functions. What’s more, the recorded information must be interpreted with blistering speed so it can be translated into a stimulation command. “We need to take analyses that used to occupy a personal computer for several hours and boil them down to a 10-millisecond algorithm,” he says.
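
To make the closed-loop constraint concrete, here is a rough sketch of a record, decode, stimulate cycle running against a 10-millisecond budget. The decoding rule and the I/O callbacks are placeholders invented for illustration, not any real device API or Kahana’s actual analysis:

    import time

    BUDGET_S = 0.010   # 10 milliseconds per cycle, per the figure quoted above

    def decode_memory_state(samples):
        # Placeholder for the boiled-down analysis: flag "poor encoding" when a
        # crude average feature falls below an arbitrary threshold.
        return sum(samples) / len(samples) < 0.5

    def closed_loop_step(read_neural_samples, trigger_stimulation):
        start = time.perf_counter()
        samples = read_neural_samples()        # recording phase
        if decode_memory_state(samples):       # fast interpretation
            trigger_stimulation()              # switch to the stimulation phase
        elapsed = time.perf_counter() - start
        assert elapsed < BUDGET_S, f"missed the {BUDGET_S * 1000:.0f} ms budget"

    # Example with stubbed I/O:
    closed_loop_step(lambda: [0.2, 0.4, 0.3], lambda: print("stimulate"))
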
In four years’ time, Kahana hopes his team can show that such systems reliably improve memory in patients who are already undergoing brain surgery for epilepsy or Parkinson’s. That, he says, will lay the groundwork for future experiments in which medical researchers can try out the hardware in people with traumatic brain injuries—people who would not normally receive invasive neurosurgery.
The second research team is led by Itzhak Fried, director of the Cognitive Neurophysiology Laboratory at the University of California, Los Angeles. Fried’s team will focus on a part of the brain called the entorhinal cortex, which is the gateway to the hippocampus, the primary brain region associated with memory formation and storage. “Our approach to the RAM program is homing in on this circuit, which is really the golden circuit of memory,” Fried says. In a 2012 experiment, he showed that stimulating the entorhinal regions of patients while they were learning memory tasks improved their performance.
Fried’s group is working with Lawrence Livermore National Laboratory, in California, to develop more closed-loop hardware. At Livermore’s Center for Bioengineering, researchers are leveraging semiconductor manufacturing techniques to make tiny implantable systems. They first print microelectrodes on a polymer that sits atop a silicon wafer, then peel the polymer off and mold it into flexible cylinders about 1 millimeter in diameter. The memory prosthesis will have two of these cylindrical arrays, each studded with up to 64 hair-thin electrodes, which will be capable of both recording the activity of individual neurons and stimulating them. Fried believes his team’s device will be ready for tryout in patients with traumatic brain injuries within the four-year span of the RAM program.
Outside observers say the program’s goals are remarkably ambitious. Yet Steven Hyman, director of psychiatric research at the Broad Institute of MIT and Harvard, applauds its reach. “The kind of hardware that DARPA is interested in developing would be an extraordinary advance for the whole field,” he says. Hyman says DARPA’s funding for device development fills a gap in existing research. Pharmaceutical companies have found few new approaches to treating psychiatric and neurodegenerative disorders in recent years, he notes, and have therefore scaled back drug discovery efforts. “I think that approaches that involve devices and neuromodulation have greater near-term promise,” he says.
This article originally appeared in print as “Making a Human Memory Chip.”

Everybody Relax: An MIT Economist Explains Why Robots Won’t Steal Our Jobs

Living together in harmony. Photo by Oli Scarff/Getty Images
If you’ve ever found yourself fretting about the possibility that software and robotics are on the verge of thieving away all our jobs, renowned MIT labor economist David Autor is out with a new paper that might ease your nerves. Presented Friday at the Federal Reserve Bank of Kansas City’s big annual conference in Jackson Hole, Wyoming, the paper argues that humanity still has two big points in its favor: People have "common sense," and they’re "flexible."
Neil Irwin already has a lovely writeup of the paper at the New York Times, but let’s run down the basics. There’s no question machines are getting smarter, and quickly acquiring the ability to perform work that once seemed uniquely human. Think self-driving cars that might one day threaten cabbies, or computer programs that can handle the basics of legal research.
But artificial intelligence is still just that: artificial. We haven’t untangled all the mysteries of human judgment, and programmers definitely can’t translate the way we think entirely into code. Instead, scientists at the forefront of AI have found workarounds like machine-learning algorithms. As Autor points out, a computer might not have any abstract concept of a chair, but show it enough Ikea catalogs, and it can eventually suss out the physical properties statistically associated with a seat. Fortunately for you and me, this approach still has its limits.
For example, both a toilet and a traffic cone look somewhat like a chair, but a bit of reasoning about their shapes vis-à-vis the human anatomy suggests that a traffic cone is unlikely to make a comfortable seat. Drawing this inference, however, requires reasoning about what an object is “for” not simply what it looks like. Contemporary object recognition programs do not, for the most part, take this reasoning-based approach to identifying objects, likely because the task of developing and generalizing the approach to a large set of objects would be extremely challenging.
That’s what Autor means when he says machines lack for common sense. They don’t think. They just do math.
And that leaves lots of room for human workers in the future.
Technology has already whittled away at middle class jobs, from factory workers replaced by robotic arms to secretaries made redundant by Outlook, over the past few decades. But Autor argues that plenty of today’s middle-skill occupations, such as construction trades and medical technicians, will stick around, because “many of the tasks currently bundled into these jobs cannot readily be unbundled … without a substantial drop in quality.”
These aren’t jobs that require performing a single task over and over again, but instead demand that employees handle some technical work while dealing with other human beings and improvising their way through unexpected problems. Machine learning algorithms can’t handle all of that. Human beings, Swiss-army knives that we are, can. We’re flexible.
Just like the dystopian arguments that machines are about to replace a vast swath of the workforce, Autor’s paper is very much speculative. It’s worth highlighting, though, because it cuts through the silly sense of inevitability that sometimes clouds this subject. Predictions about the future of technology and the economy are made to be dashed. And while Noah Smith makes a good point that we might want to be prepared for mass, technology-driven unemployment even if there’s just a slim chance of it happening, there’s also no reason to take it for granted.
Jordan Weissmann is Slate’s senior business and economics correspondent.
ORIGINAL: Slate

It’s Time to Take Artificial Intelligence Seriously

By CHRISTOPHER MIMS
Aug. 24, 2014
No Longer an Academic Curiosity, It Now Has Measurable Impact on Our Lives
A still from “2001: A Space Odyssey” with Keir Dullea reflected in the lens of HAL’s “eye.” MGM / POLARIS / STANLEY KUBRICK
 
The age of intelligent machines has arrived—only they don’t look at all like we expected. Forget what you’ve seen in movies; this is no HAL from “2001: A Space Odyssey,” and it’s certainly not Scarlett Johansson‘s disembodied voice in “Her.” It’s more akin to what insects, or even fungi, do when they “think.” (What, you didn’t know that slime molds can solve mazes?)
Artificial intelligence has lately been transformed from an academic curiosity to something that has measurable impact on our lives. Google Inc. used it to increase the accuracy of voice recognition in Android by 25%. The Associated Press is printing business stories written by it. Facebook Inc. is toying with it as a way to improve the relevance of the posts it shows you.
What is especially interesting about this point in the history of AI is that it’s no longer just for technology companies. Startups are beginning to adapt it to problems where, at least to me, its applicability is genuinely surprising.
Take advertising copywriting. Could the “Mad Men” of Don Draper‘s day have predicted that by the beginning of the next century, they would be replaced by machines? Yet a company called Persado aims to do just that.
Persado does one thing, and judging by its client list, which includes Citigroup Inc. and Motorola Mobility, it does it well. It writes advertising emails and “landing pages” (where you end up if you click on a link in one of those emails, or an ad).
Here’s an example: Persado’s engine is being used across all of the types of emails a top U.S. wireless carrier sends out when it wants to convince its customers to renew their contracts, upgrade to a better plan or otherwise spend money.
Traditionally, an advertising copywriter would pen these emails; perhaps the company would test a few variants on a subset of its customers, to see which is best.
But Persado’s software deconstructs advertisements into five components, including emotion words, characteristics of the product, the “call to action” and even the position of text and the images accompanying it. By recombining them in millions of ways and then distilling their essential characteristics into eight or more test emails that are sent to some customers, Persado says it can effectively determine the best possible come-on.
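As a rough illustration of that recombine-and-test loop (a hypothetical sketch, not Persado’s engine; the components, the sample of eight, and the click-through numbers are all invented):

import itertools
import random

emotion_words = ["Don't miss out:", "Good news:", "You've earned it:"]
product_bits  = ["unlimited data", "a faster plan", "a loyalty discount"]
calls_to_act  = ["Upgrade today.", "Renew in one tap.", "See your offer."]

# Every recombination of the three components (a stand-in for "millions of ways").
variants = [f"{e} {p} {c}" for e, p, c in itertools.product(emotion_words, product_bits, calls_to_act)]

# Distill the space into eight test emails; a random sample stands in here for
# Persado's statistical selection.
random.seed(0)
test_emails = random.sample(variants, 8)

# Pretend click-through rates come back from the test audience, then keep the winner.
observed_ctr = {msg: random.uniform(0.01, 0.05) for msg in test_emails}
best = max(observed_ctr, key=observed_ctr.get)
print("Winning variant:", best)

What the sketch leaves out is the part Whittle describes next: the ontology of language that decides which components are worth recombining in the first place.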
“A creative person is good but random,” says Lawrence Whittle, head of sales at Persado. “We’ve taken the randomness out by building an ontology of language.”
The results speak for themselves: In the case of emails intended to convince mobile subscribers to renew their plans, initial trials with Persado increased click-through rates by 195%, the company says.
Here’s another example of AI becoming genuinely useful: X.ai is a startup aimed, like Persado, at doing one thing exceptionally well. In this case, it’s scheduling meetings. X.ai’s virtual assistant, Amy, isn’t a website or an app; she’s simply a “person” whom you cc: on emails to anyone with whom you’d like to schedule a meeting. Her sole “interface” is emails she sends and receives—just like a real assistant. Thus, you don’t have to bother with back-and-forth emails trying to find a convenient time and available place for lunch. Amy can correspond fluidly with anyone, but only on the subject of his or her calendar. This sounds like a simple problem to crack, but it isn’t, because Amy must communicate with a human being who might not even know she’s an AI, and she must do it flawlessly, says X.ai founder Dennis Mortensen.
E-mail conversations with Amy are already quite smooth. Mr. Mortensen used her to schedule our meeting, naturally, and it worked even though I purposely threw in some ambiguous language about the times I was available. But that is in part because Amy is still in the “training” stage, where anything she doesn’t understand gets handed to humans employed by X.ai.
It sounds like cheating, but every artificially intelligent system needs a body of data on which to “train” initially. For Persado, that body of data was text messages sent to prepaid cellphone customers in Europe, urging them to re-up their minutes or opt into special plans. For Amy, it’s a race to get a body of 100,000 email meeting requests. Amusingly, engineers at X.ai thought about using one of the biggest public databases of emails available, the Enron emails, but there is too much scheming in them to be a good sample.
Both of these systems, and others like them, work precisely because their makers have decided to tackle problems that are as narrowly defined as possible. Amy doesn’t have to have a conversation about the weather—just when and where you’d like to schedule a meeting. And Persado’s system isn’t going to come up with the next “Just Do It” campaign.
This is where some might object that the commercialized vision for AI isn’t intelligent at all. But academics can’t even agree on where the cutoff for “intelligence” is in living things, so the fact that these first steps toward economically useful artificial intelligence lie somewhere near the bottom of the spectrum of things that think shouldn’t bother us.
We’re also at a time when it seems that advances in the sheer power of computers will lead to AI that becomes progressively smarter. So-called deep-learning algorithms allow machines to learn unsupervised, whereas both Persado and X.ai’s systems require training guided by humans.
Last year Google showed that its own deep-learning systems could learn to recognize a cat from millions of images scraped from the Internet, without ever being told what a cat was in the first place. It’s a parlor trick, but it isn’t hard to see where this is going—the enhancement of the effectiveness of knowledge workers. Mr. Mortensen estimates there are 87 million of them in the world already, and they schedule 10 billion meetings a year. As more tools tackling specific portions of their job become available, their days could be filled with the things that only humans can do, like creativity.
“I think the next Siri is not Siri; it’s 100 companies like ours mashed into one,” says Mr. Mortensen.
—Follow Christopher Mims on Twitter @Mims or write to him at christopher.mims@wsj.com.

Ray Kurzweil: Get ready for hybrid thinking

ORIGINAL: TED
Jun 2, 2014
Two hundred million years ago, our mammal ancestors developed a new brain feature: the neocortex. This stamp-sized piece of tissue (wrapped around a brain the size of a walnut) is the key to what humanity has become. Now, futurist Ray Kurzweil suggests, we should get ready for the next big leap in brain power, as we tap into the computing power in the cloud.

 


Why a deep-learning genius left Google & joined Chinese tech shop Baidu (interview)

ORIGINAL: VentureBeat
July 30, 2014 8:03 AM
Image Credit: Jordan Novet/VentureBeat
SUNNYVALE, California — Chinese tech company Baidu has yet to make its popular search engine and other web services available in English. But consider yourself warned: Baidu could someday wind up becoming a favorite among consumers.
The strength of Baidu lies not in youth-friendly marketing or an enterprise-focused sales team. It lives instead in Baidu’s data centers, where servers run complex algorithms on huge volumes of data and gradually make its applications smarter, including not just Web search but also Baidu’s tools for music, news, pictures, video, and speech recognition.
Despite lacking the visibility (in the U.S., at least) of Google and Microsoft, in recent years Baidu has done a lot of work on deep learning, one of the most promising areas of artificial intelligence (AI) research. This work involves training systems called artificial neural networks on lots of information derived from audio, images, and other inputs, and then presenting the systems with new information and receiving inferences about it in response.
Two months ago, Baidu hired Andrew Ng away from Google, where he started and led the so-called Google Brain project. Ng, whose move to Baidu follows Hugo Barra’s jump from Google to Chinese company Xiaomi last year, is one of the world’s handful of deep-learning rock stars.
Ng has taught classes on machine learning, robotics, and other topics at Stanford University. He also co-founded the massive open online course startup Coursera.
He makes a strong argument for why a person like him would leave Google and join a company with a lower public profile. His argument can leave you feeling like you really ought to keep an eye on Baidu in the next few years.
“I thought the best place to advance the AI mission is at Baidu,” Ng said in an interview with VentureBeat.
Baidu’s search engine only runs in a few countries, including China, Brazil, Egypt, and Thailand. The Brazil service was announced just last week. Google’s search engine is far more popular than Baidu’s around the globe, although Baidu has already beaten out Yahoo and Microsoft’s Bing in global popularity, according to comScore figures.
And Baidu co-founder and chief executive Robin Li, a frequent speaker on Stanford’s campus, has said he wants Baidu to become a brand name in more than half of all the world’s countries. Presumably, then, Baidu will one day become something Americans can use.
Above: Baidu co-founder and chief executive Robin Li.
Image Credit: Baidu

 

Now that Ng leads Baidu’s research arm as the company’s chief scientist out of the company’s U.S. R&D Center here, it’s not hard to imagine that Baidu’s tools in English, if and when they become available, will be quite brainy — perhaps even eclipsing similar services from Apple and other tech giants. (Just think of how many people are less than happy with Siri.)

A stable full of AI talent

But this isn’t a story about the difference a single person will make. Baidu has a history in deep learning.
A couple of years ago, Baidu hired Kai Yu, an engineer skilled in artificial intelligence. Based in Beijing, he has kept busy.
“I think Kai ships deep learning to an incredible number of products across Baidu,” Ng said. Yu also developed a system for providing infrastructure that enables deep learning for different kinds of applications.
“That way, Kai personally didn’t have to work on every single application,” Ng said.
In a sense, then, Ng joined a company that had already built momentum in deep learning. He wasn’t starting from scratch.
Above: Baidu’s Kai Yu.
Image Credit: Kai Yu
Only a few companies could have appealed to Ng, given his desire to push artificial intelligence forward. It’s capital-intensive, as it requires lots of data and computation. Baidu, he said, can provide those things.
Baidu is nimble, too. Unlike Silicon Valley’s tech giants, which measure activity in terms of monthly active users, Chinese Internet companies prefer to track usage by the day, Ng said.
“It’s a symptom of cadence,” he said. “What are you doing today?” And product cycles in China are short; iteration happens very fast, Ng said.
Plus, Baidu is willing to get infrastructure ready to use on the spot.
“Frankly, Kai just made decisions, and it just happened without a lot of committee meetings,” Ng said. “The ability of individuals in the company to make decisions like that and move infrastructure quickly is something I really appreciate about this company.”
That might sound like a kind deference to Ng’s new employer, but he was alluding to a clear advantage Baidu has over Google.
“He ordered 1,000 GPUs [graphics processing units] and got them within 24 hours,” Adam Gibson, co-founder of deep-learning startup Skymind, told VentureBeat. “At Google, it would have taken him weeks or months to get that.”
Not that Baidu is buying this type of hardware for the first time. Baidu was the first company to build a GPU cluster for deep learning, Ng said — a few other companies, like Netflix, have found GPUs useful for deep learning — and Baidu also maintains a fleet of servers packing ARM-based chips.
Above: Baidu headquarters in Beijing.
Image Credit: Baidu
Now the Silicon Valley researchers are using the GPU cluster and also looking to add to it and thereby create still bigger artificial neural networks.
But the efforts have long since begun to weigh on Baidu’s books and impact products. “We deepened our investment in advanced technologies like deep learning, which is already yielding near term enhancements in user experience and customer ROI and is expected to drive transformational change over the longer term,” Li said in a statement on the company’s earnings for the second quarter of 2014.
Next step: Improving accuracy
What will Ng do at Baidu? The answer will not be limited to any one of the company’s services. Baidu’s neural networks can work behind the scenes for a wide variety of applications, including those that handle text, spoken words, images, and videos. Surely core functions of Baidu like Web search and advertising will benefit, too.
“All of these are domains Baidu is looking at using deep learning, actually,” Ng said.
Ng’s focus now might best be summed up by one word: accuracy.
That makes sense from a corporate perspective. Google has the brain trust on image analysis, and Microsoft has the brain trust on speech, said Naveen Rao, co-founder and chief executive of deep-learning startup Nervana. Accuracy could potentially be the area where Ng and his colleagues will make the most substantive progress at Baidu, Rao said.
Matthew Zeiler, founder and chief executive of another deep learning startup, Clarifai, was more certain. “I think you’re going to see a huge boost in accuracy,” said Zeiler, who has worked with Hinton and LeCun and spent two summers on the Google Brain project.
One thing is for sure: Accuracy is on Ng’s mind.
Above: The lobby at Baidu’s office in Sunnyvale, Calif.
Image Credit: Jordan Novet/VentureBeat
“Here’s the thing. Sometimes changes in accuracy of a system will cause changes in the way you interact with the device,” Ng said. For instance, more accurate speech recognition could translate into people relying on it much more frequently. Think “Her”-level reliance, where you just talk to your computer as a matter of course rather than using speech recognition in special cases.
“Speech recognition today doesn’t really work in noisy environments,” Ng said. But that could change if Baidu’s neural networks become more accurate under Ng.
Ng picked up his smartphone, opened the Baidu Translate app, and told it that he needed a taxi. A female voice said that in Mandarin and displayed Chinese characters on screen. But it wasn’t a difficult test, in some ways: This was no crowded street in Beijing. This was a quiet conference room in a quiet office.
“There’s still work to do,” Ng said.
‘The future heroes of deep learning’
Meanwhile, researchers at companies and universities have been hard at work on deep learning for decades.
Google has built up a hefty reputation for applying deep learning to images from YouTube videos, data center energy use, and other areas, partly thanks to Ng’s contributions. And recently Microsoft made headlines for deep-learning advancements with its Project Adam work, although Li Deng of Microsoft Research has been working with neural networks for more than 20 years.
In academia, deep learning research groups are at work all over North America and Europe. Key figures in the past few years include Yoshua Bengio at the University of Montreal, Geoff Hinton of the University of Toronto (Google grabbed him last year through its DNNresearch acquisition), Yann LeCun from New York University (Facebook pulled him aboard late last year), and Ng.
But Ng’s strong points differ from those of his contemporaries. Whereas Bengio made strides in training neural networks, LeCun developed convolutional neural networks, and Hinton popularized restricted Boltzmann machines, Ng takes the best, implements it, and makes improvements.
“Andrew is neutral in that he’s just going to use what works,” Gibson said. “He’s very practical, and he’s neutral about the stamp on it.”
Not that Ng intends to go it alone. To create larger and more accurate neural networks, Ng needs to look around and find like-minded engineers.
“He’s going to be able to bring a lot of talent over,” Dave Sullivan, co-founder and chief executive of deep-learning startup Ersatz Labs, told VentureBeat. “This guy is not sitting down and writing mountains of code every day.”
And truth be told, Ng has had no trouble building his team.
“Hiring for Baidu has been easier than I’d expected,” he said.
“A lot of engineers have always wanted to work on AI. … My job is providing the team with the best possible environment for them to do AI, for them to be the future heroes of deep learning.”



How Watson Changed IBM

ORIGINAL: HBR
by Brad Power
August 22, 2014

Remember when IBM’s “Watson” computer competed on the TV game show “Jeopardy” and won? Most people probably thought “Wow, that’s cool,” or perhaps were briefly reminded of the legend of John Henry and the ongoing contest between man and machine. Beyond the media splash it caused, though, the event was viewed as a breakthrough on many fronts. Watson demonstrated that machines could understand and interact in a natural language, question-and-answer format and learn from their mistakes. This meant that machines could deal with the exploding growth of non-numeric information that is getting hard for humans to keep track of: to name two prominent and crucially important examples,

  • keeping up with all of the knowledge coming out of human genome research, or 
  • keeping track of all the medical information in patient records.
So IBM asked the question: How could the fullest potential of this breakthrough be realized, and how could IBM create and capture a significant portion of that value? They knew the answer was not by relying on traditional internal processes and practices for R&D and innovation. Advances in technology — especially digital technology and the increasing role of software in products and services — are demanding that large, successful organizations increase their pace of innovation and make greater use of resources outside their boundaries. This means internal R&D activities must increasingly shift towards becoming crowdsourced, taking advantage of the wider ecosystem of customers, suppliers, and entrepreneurs.
IBM, a company with a long and successful tradition of internally-focused R&D activities, is adapting to this new world of creating platforms and enabling open innovation. Case in point, rather than keep Watson locked up in their research labs, they decided to release it to the world as a platform, to run experiments with a variety of organizations to accelerate development of natural language applications and services. In January 2014 IBM announced they were spending $1 billion to launch the Watson Group, including a $100 million venture fund to support start-ups and businesses that are building Watson-powered apps using the “Watson Developers Cloud.” More than 2,500 developers and start-ups have reached out to the IBM Watson Group since the Watson Developers Cloud was launched in November 2013.

So how does it work? First, with multiple business models. Mike Rhodin, IBM’s senior vice president responsible for Watson, told me, “There are three core business models that we will run in parallel. 

  • The first is around industries that we think will go through a big change in “cognitive” [natural language] computing, such as financial services and healthcare. For example, in healthcare we’re working with The Cleveland Clinic on how medical knowledge is taught. 
  • The second is where we see similar patterns across industries, such as how people discover and engage with organizations and how organizations make different kinds of decisions. 
  • The third business model is creating an ecosystem of entrepreneurs. We’re always looking for companies with brilliant ideas that we can partner with or acquire. With the entrepreneur ecosystem, we are behaving more like a Silicon Valley startup. We can provide the entrepreneurs with access to early adopter customers in the 170 countries in which we operate. If entrepreneurs are successful, we keep a piece of the action.”
IBM also had to make some bold structural moves in order to create an organization that could both function as a platform and collaborate with outsiders for open innovation. They carved out The Watson Group as a new, semi-autonomous, vertically integrated unit, reporting to the CEO. They brought in 2000 people, a dozen projects, a couple of Big Data and content analytics tools, and a consulting unit (outside of IBM Global Services). IBM’s traditional annual budget cycle and business unit financial measures weren’t right for Watson’s fast pace, so, as Mike Rhodin told me, “I threw out the annual planning cycle and replaced it with a looser, more agile management system. In monthly meetings with CEO Ginni Rometty, we’ll talk one time about technology, and another time about customer innovations. I have to balance between strategic intent and tactical, short-term decision-making. Even though we’re able to take the long view, we still have to make tactical decisions.”

More and more, organizations will need to make choices in their R&D activities to either create platforms or take advantage of them. 

Those with deep technical and infrastructure skills, like IBM, can shift the focus of their internal R&D activities toward building platforms that can connect with ecosystems of outsiders to collaborate on innovation.

The second and more likely option for most companies is to use platforms like IBM’s or Amazon’s to create their own apps and offerings for customers and partners. In either case, new, semi-autonomous agile units, like IBM’s Watson Group, can help to create and capture huge value from these new customer and entrepreneur ecosystems.


“Brain” In A Dish Acts As Autopilot Living Computer

ORIGINAL: U of Florida
by Jennifer Viegas
Nov 27, 2012
A glass dish contains a “brain” — a living network of 25,000 rat brain cells connected to an array of 60 electrodes. University of Florida/Ray Carson


A University of Florida scientist has grown a living “brain” that can fly a simulated plane, giving scientists a novel way to observe how brain cells function as a network. The “brain” — a collection of 25,000 living neurons, or nerve cells, taken from a rat’s brain and cultured inside a glass dish — gives scientists a unique real-time window into the brain at the cellular level. By watching the brain cells interact, scientists hope to understand what causes neural disorders such as epilepsy and to determine noninvasive ways to intervene.


2012 U of Florida - Brain Test

Thomas DeMarse holds a glass dish containing a living network of 25,000 rat brain cells connected to an array of 60 electrodes that can interact with a computer to fly a simulated F-22 fighter plane.

“As living computers, they may someday be used to fly small unmanned airplanes or handle tasks that are dangerous for humans, such as search-and-rescue missions or bomb damage assessments.”

“We’re interested in studying how brains compute,” said Thomas DeMarse, the UF assistant professor of biomedical engineering who designed the study. “If you think about your brain, and learning and the memory process, I can ask you questions about when you were 5 years old and you can retrieve information. That’s a tremendous capacity for memory. In fact, you perform fairly simple tasks that you would think a computer would easily be able to accomplish, but in fact it can’t.”


Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask

Viv was named after the Latin root meaning live. Its San Jose, California, offices are decorated with tchotchkes bearing the numbers six and five (VI and V in Roman numerals). Ariel Zambelich

When Apple announced the iPhone 4S on October 4, 2011, the headlines were not about its speedy A5 chip or improved camera. Instead they focused on an unusual new feature: an intelligent assistant, dubbed Siri. At first Siri, endowed with a female voice, seemed almost human in the way she understood what you said to her and responded, an advance in artificial intelligence that seemed to place us on a fast track to the Singularity. She was brilliant at fulfilling certain requests, like “Can you set the alarm for 6:30?” or “Call Diane’s mobile phone.” And she had a personality: If you asked her if there was a God, she would demur with deft wisdom. “My policy is the separation of spirit and silicon,” she’d say.
Over the next few months, however, Siri’s limitations became apparent. Ask her to book a plane trip and she would point to travel websites—but she wouldn’t give flight options, let alone secure you a seat. Ask her to buy a copy of Lee Child’s new book and she would draw a blank, despite the fact that Apple sells it. Though Apple has since extended Siri’s powers—to make an OpenTable restaurant reservation, for example—she still can’t do something as simple as booking a table on the next available night in your schedule. She knows how to check your calendar and she knows how to use OpenTable. But putting those things together is, at the moment, beyond her.


Joi Ito: Want to innovate? Become a “now-ist”

“Remember before the internet?” asks Joi Ito. “Remember when people used to try to predict the future?” In this engaging talk, the head of the MIT Media Lab skips the future predictions and instead shares a new approach to creating in the moment: building quickly and improving constantly, without waiting for permission or for proof that you have the right idea. This kind of bottom-up innovation is seen in the most fascinating, futuristic projects emerging today, and it starts, he says, with being open and alert to what’s going on around you right now. Don’t be a futurist, he suggests: be a now-ist.

Preparing Your Students for the Challenges of Tomorrow

ORIGINAL: Edutopia
August 20, 2014

Right now, you have students. Eventually, those students will become the citizens — employers, employees, professionals, educators, and caretakers of our planet in the 21st century. Beyond mastery of standards, what can you do to help prepare them? What can you promote to be sure they are equipped with the skill sets they will need to take on challenges and opportunities that we can’t yet even imagine?

Following are six tips to guide you in preparing your students for what they’re likely to face in the years and decades to come.

1. Teach Collaboration as a Value and Skill Set
Students of today need new skills for the coming century that will make them ready to collaborate with others on a global level. Whatever they do, we can expect their work to include finding creative solutions to emerging challenges.
2. Evaluate Information Accuracy 
New information is being discovered and disseminated at a phenomenal rate. It is predicted that 50 percent of the facts students are memorizing today will no longer be accurate or complete in the near future. Students need to know
  • how to find accurate information, and
  • how to use critical analysis for assessing the veracity or bias and the current or potential uses of new information.
These are the executive functions that they need to develop and practice in the home and at school today, because without them, students will be unprepared to find, analyze, and use the information of tomorrow.
3. Teach Tolerance 
In order for collaboration to happen within a global community, job applicants of the future will be evaluated by their ability for communication with, openness to, and tolerance for unfamiliar cultures and ideas. To foster these critical skills, today’s students will need open discussions and experiences that can help them learn about and feel comfortable communicating with people of other cultures.
4. Help Students Learn Through Their Strengths 
Children are born with brains that want to learn. They’re also born with different strengths — and they grow best through those strengths. One size does not fit all in assessment and instruction. The current testing system and the curriculum that it has spawned leave behind the majority of students who might not be doing their best with the linear, sequential instruction required for this kind of testing. Look ahead on the curriculum map and help promote each student’s interest in the topic beforehand. Use clever “front-loading” techniques that will pique their curiosity.
5. Use Learning Beyond the Classroom
New “learning” does not become permanent memory unless there is repeated stimulation of the new memory circuits in the brain pathways. This is the “practice makes permanent” aspect of neuroplasticity where neural networks that are the most stimulated develop more dendrites, synapses, and thicker myelin for more efficient information transmission. These stronger networks are less susceptible to pruning, and they become long-term memory holders. Students need to use what they learn repeatedly and in different, personally meaningful ways for short-term memory to become permanent knowledge that can be retrieved and used in the future. Help your students make memories permanent by providing opportunities for them to “transfer” school learning to real-life situations.
6. Teach Students to Use Their Brain Owner’s Manual
The most important manual that you can share with your students is the owner’s manual to their own brains. When they understand how their brains take in and store information (PDF, 139KB), they hold the keys to successfully operating the most powerful tool they’ll ever own. When your students understand that, through neuroplasticity, they can change their own brains and intelligence, together you can build their resilience and willingness to persevere through the challenges that they will undoubtedly face in the future.
How are you preparing your students to thrive in the world they’ll inhabit as adults?


Brainstorming Doesn’t Work; Try This Technique Instead

ORIGINAL: FastCompany
Ever been in a meeting where one loudmouth’s mediocre idea dominates?
Then you know brainstorming needs an overhaul.

 

Brainstorming, in its current form and by many metrics, doesn’t work as well as the frequency of “team brainstorming meetings” would suggest it does. Early ideas tend to have disproportionate influence over the rest of the conversation.
Sharing ideas in groups isn’t the problem, it’s the “out-loud” part that, ironically, leads to groupthink, instead of unique ideas. “As sexy as brainstorming is, with people popping like champagne with ideas, what actually happens is when one person is talking you’re not thinking of your own ideas,” Leigh Thompson, a management professor at the Kellogg School, told Fast Company. “Subconsciously you’re already assimilating to my ideas.”
That process is called “anchoring,” and it crushes originality. “Early ideas tend to have disproportionate influence over the rest of the conversation,” Loran Nordgren, also a professor at Kellogg, explained. “They establish the kinds of norms, or cement the idea of what are appropriate examples or potential solutions for the problem.”



A Thousand Kilobots Self-Assemble Into Complex Shapes

ORIGINAL: IEEE Spectrum
By Evan Ackerman
14 Aug 2014
Photo: Michael Rubenstein/Harvard University

When Harvard roboticists first introduced their Kilobots in 2011, they’d only made 25 of them. When we next saw the robots in 2013, they’d made 100. Now the researchers have built one thousand of them. That’s a whole kilo of Kilobots, and probably the most robots that have ever been in the same place at the same time, ever.

The researchers—Michael Rubenstein, Alejandro Cornejo, and Professor Radhika Nagpal of Harvard’s Self-Organizing Systems Research Group—describe their thousand-robot swarm in a paper published today in Science (they actually built 1024 robots, apparently following the computer science definition of “kilo”).

Despite their menacing name (KILL-O-BOTS!) and the robot swarm nightmares they may induce in some people, these little guys are harmless. Each Kilobot [pictured below] is a small, cheap-ish ($14) device that can move around by vibrating its legs and can communicate with other robots via infrared transmitters and receivers.
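One building block commonly used for this kind of collective shape formation is a hop-count gradient that each robot computes purely from what its infrared neighbors report. The sketch below is an illustration of that idea, not the Harvard group’s firmware; the neighbor table and robot count are made up.

def update_gradients(neighbors, seeds, n_robots, rounds=50):
    """neighbors[i] lists the robots within infrared range of robot i."""
    INF = float("inf")
    grad = [0 if i in seeds else INF for i in range(n_robots)]
    for _ in range(rounds):  # repeat until the values settle
        for i in range(n_robots):
            if i in seeds:
                continue
            nearby = [grad[j] for j in neighbors[i]]
            if nearby:
                grad[i] = min(grad[i], min(nearby) + 1)
    return grad

# Five robots in a line, robot 0 is the seed: the gradient comes out 0, 1, 2, 3, 4,
# giving every robot a rough sense of how far it sits from the seed.
line = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(update_gradients(line, seeds={0}, n_robots=5))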



IBM Chip Processes Data Similar to the Way Your Brain Does

A chip that uses a million digital neurons and 256 million synapses may signal the beginning of a new era of more intelligent computers.
WHY IT MATTERS

Computers that can comprehend messy data such as images could revolutionize what technology can do for us.

New thinking: IBM has built a processor designed using principles at work in your brain.
A new kind of computer chip, unveiled by IBM today, takes design cues from the wrinkled outer layer of the human brain. Though it is no match for a conventional microprocessor at crunching numbers, the chip consumes significantly less power, and is vastly better suited to processing images, sound, and other sensory data.
IBM’s SyNapse chip, as it is called, processes information using a network of just over one million “neurons,” which communicate with one another using electrical spikes—as actual neurons do. The chip uses the same basic components as today’s commercial chips—silicon transistors. But its transistors are configured to mimic the behavior of both neurons and the connections—synapses—between them.
The SyNapse chip breaks with a design known as the von Neumann architecture that has underpinned computer chips for decades. Although researchers have been experimenting with chips modeled on brains—known as neuromorphic chips—since the late 1980s, until now all have been many times less complex, and not powerful enough to be practical (see “Thinking in Silicon”). Details of the chip were published today in the journal Science.
The new chip is not yet a product, but it is powerful enough to work on real-world problems. In a demonstration at IBM’s Almaden research center, MIT Technology Review saw one recognize cars, people, and bicycles in video of a road intersection. A nearby laptop that had been programed to do the same task processed the footage 100 times slower than real time, and it consumed 100,000 times as much power as the IBM chip. IBM researchers are now experimenting with connecting multiple SyNapse chips together, and they hope to build a supercomputer using thousands.
When data is fed into a SyNapse chip it causes a stream of spikes, and its neurons react with a storm of further spikes. The just over one million neurons on the chip are organized into 4,096 identical blocks of 250, an arrangement inspired by the structure of mammalian brains, which appear to be built out of repeating circuits of 100 to 250 neurons, says Dharmendra Modha, chief scientist for brain-inspired computing at IBM. Programming the chip involves choosing which neurons are connected, and how strongly they influence one another. To recognize cars in video, for example, a programmer would work out the necessary settings on a simulated version of the chip, which would then be transferred over to the real thing.
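A toy software version of that spiking style of computation, where the connection weights are the “program” and neurons fire only when incoming spikes push them over a threshold, might look like the following. This is a sketch only, not IBM’s SyNapse design or its programming tools; the population size, weights, leak factor, and input rate are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
n = 250                                   # one block-sized population of "neurons"
W = rng.normal(0.0, 0.1, size=(n, n))     # programmable connection strengths ("synapses")
v = np.zeros(n)                           # membrane potentials
spikes = np.zeros(n, dtype=bool)
threshold, leak = 1.0, 0.9

for step in range(60):
    sensory = (rng.random(n) < 0.05) * 1.1             # data arriving as a stream of input spikes
    v = leak * v + W @ spikes.astype(float) + sensory  # each spike nudges downstream neurons
    spikes = v > threshold                             # a neuron fires only when pushed past threshold
    v[spikes] = 0.0                                    # reset after firing
    if step % 10 == 0:
        print(f"step {step}: {int(spikes.sum())} neurons spiked")

Programming the real chip, as described above, amounts to choosing the equivalent of W: which neurons are connected and how strongly they influence one another.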
In recent years, major breakthroughs in image analysis and speech recognition have come from using large, simulated neural networks to work on data (see “Deep Learning”). But those networks require giant clusters of conventional computers. As an example, Google’s famous neural network capable of recognizing cat and human faces required 1,000 computers with 16 processors apiece (see “Self-Taught Software”).
Although the new SyNapse chip has more transistors than most desktop processors, or any chip IBM has ever made, with over five billion, it consumes strikingly little power. When running the traffic video recognition demo, it consumed just 63 milliwatts of power. Server chips with similar numbers of transistors consume tens of watts of power—around 10,000 times more.
The efficiency of conventional computers is limited because they store data and program instructions in a block of memory that’s separate from the processor that carries out instructions. As the processor works through its instructions in a linear sequence, it has to constantly shuttle information back and forth from the memory store—a bottleneck that slows things down and wastes energy.
IBM’s new chip doesn’t have separate memory and processing blocks, because its neurons and synapses intertwine the two functions. And it doesn’t work on data in a linear sequence of operations; individual neurons simply fire when the spikes they receive from other neurons cause them to.
Horst Simon, the deputy director of Lawrence Berkeley National Lab and an expert in supercomputing, says that until now the industry has focused on tinkering with the von Neumann approach rather than replacing it, for example by using multiple processors in parallel, or using graphics processors to speed up certain types of calculations. The new chip “may be a historic development,” he says. “The very low power consumption and scalability of this architecture are really unique.”
One downside is that IBM’s chip requires an entirely new approach to programming. Although the company announced a suite of tools geared toward writing code for its forthcoming chip last year (see “IBM Scientists Show Blueprints for Brainlike Computing”), even the best programmers find learning to work with the chip bruising, says Modha: “It’s almost always a frustrating experience.” His team is working to create a library of ready-made blocks of code to make the process easier.
Asking the industry to adopt an entirely new kind of chip and way of coding may seem audacious. But IBM may find a receptive audience because it is becoming clear that current computers won’t be able to deliver much more in the way of performance gains. “This chip is coming at the right time,” says Simon.
ORIGINAL: Tech Review
August 7, 2014

Google buys city guides app Jetpac, support to end on September 15

ORIGINAL: The Next Web
By Josh Ong

Google has acquired the team behind Jetpac, an iPhone app for crowdsourcing city guides from public Instagram photos. The app will be pulled from the App Store in coming days, and support for the service will be discontinued on September 15.

Jetpac’s deep learning software used a nifty trick of scanning our photos to evaluate businesses and venues around town. As MIT Technology Review notes, the app could tell whether visitors were tourists, whether a bar is dog-friendly and how fancy a place was.

It even employed humans to find hipster spots by training the system to count the number of mustaches and plaid shirts.

Interestingly, Jetpac’s technology was inspired by Google researcher Geoffrey Hinton, so it makes perfect sense for Google to bring the startup into its fold. If this means that Google Now will gain the ability to automatically alert me when I’m entering a hipster-infested area, then I’m an instant fan.

Jetpac also built two iOS apps that tapped into its Deep Belief neural network to offer users object recognition.

“Imagine all photos tagged automatically, the ability to search the world by knowing what is in the world’s shared photos, and robots that can see like humans,” the App Store description for its Spotter app reads. If that’s not a Googly description, I don’t know what is.


(h/t Ouriel Ohayon)

Thumbnail image credit: GEORGES GOBET/AFP/Getty Images


Building Mind-Controlled Gadgets Just Got Easier

ORIGINAL: IEEE Spectrum
By Eliza Strickland
11 Aug 2014
A new brain-computer interface lets DIYers access their brain waves
Photo: Chip Audette. Engineer Chip Audette used the OpenBCI system to control a robot spider with his mind.
The guys who decided to make a mind-reading tool for the masses are not neuroscientists. In fact, they’re artists who met at Parsons the New School for Design, in New York City. In this day and age, you don’t have to be a neuroscientist to muck around with brain signals.
With Friday’s launch of an online store selling their brain-computer interface (BCI) gear, Joel Murphy and Conor Russomanno hope to unleash a wave of neurotech creativity. Their system enables DIYers to use brain waves to control anything they can hack—a video game, a robot, you name it. “It feels like there’s going to be a surge,” says Russomanno. “The floodgates are about to open.” And since their technology is open source, the creators hope hackers will also help improve the BCI itself.

Photo: OpenBCI The OpenBCI board takes in data from up to eight electrodes.

Their OpenBCI system makes sense of an electroencephalograph (EEG) signal, a general measure of electrical activity in the brain captured via electrodes on the scalp. The fundamental hardware component is a relatively new chip from Texas Instruments, which takes in analog data from up to eight electrodes and converts it to a digital signal. Russomanno and Murphy used the chip and an Arduino board to create OpenBCI, which essentially amplifies the brain signal and sends it via Bluetooth to a computer for processing. “The big issue is getting the data off the chip and making it accessible,” Murphy says. Once it’s accessible, Murphy expects makers to build things he hasn’t even imagined yet.
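On the computer side, “getting the data off the chip and making it accessible” is, at its simplest, just reading the serial stream the board forwards. The sketch below uses pyserial; the port name, the comma-separated line format, and the sample count are assumptions for illustration, not the documented OpenBCI protocol.

import serial  # pyserial

PORT = "/dev/ttyUSB0"   # assumption: whatever port your Bluetooth dongle shows up on
N_CHANNELS = 8          # the board digitizes up to eight electrodes

with serial.Serial(PORT, baudrate=115200, timeout=1) as board:
    for _ in range(250):  # grab roughly a second of data at an assumed 250 Hz
        raw = board.readline().decode("ascii", errors="ignore").strip()
        fields = raw.split(",")
        if len(fields) != N_CHANNELS:
            continue                             # skip partial or malformed lines
        sample = [float(x) for x in fields]      # one value per electrode
        print(sample)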
The project got its start in 2011, when Russomanno was a student in Murphy’s physical computing class at Parsons and told his professor he wanted to hack an EEG toy made by Mattel. The toy’s EEG-enabled headset supposedly registered the user’s concentrated attention (which in the game activated a fan that made a ball float upward). But the technology didn’t seem very reliable, and since it wasn’t open source, Russomanno couldn’t study the game’s method of collecting and analyzing the EEG data. He decided that an open-source alternative was necessary if he wanted to have any real fun.
Happily, Russomanno and his professor soon connected with engineer Chip Audette, of the New Hampshire R&D firm Creare, who already had a grant from the U.S. Defense Advanced Research Projects Agency (DARPA) to develop a low-cost, high-quality EEG system for “nontraditional users.” Once the team had cobbled together a prototype of their OpenBCI system, they decided to offer their gear to the world with a Kickstarter campaign, which ended in January and raised more than twice the goal of US $100,000.
Murphy and Russomanno soon found that production would be more difficult and take longer than expected (as is the case with so many Kickstarter projects), so they had to push back their shipping date by several months. Now, though, they’re in business—and Russomanno says that shipping a product is only the beginning. “We don’t just want to sell something; we want to teach people how to use it and also develop a community,” he says. OpenBCI wants to be an online portal where experimenters can swap tips and post research projects.
So once a person’s brain-wave data is streaming into a computer, what is to be done with it? OpenBCI will make some simple software available, but mostly Russomanno and Murphy plan to watch as inventors come up with new applications for BCIs.
Audette, the engineer from Creare, is already hacking robotic “battle spiders” that are typically steered by remote control. Audette used an OpenBCI prototype to identify three distinct brain-wave patterns that he can reproduce at will, and he sent those signals to a battle spider to command it to turn left or right or to walk straight ahead. “The first time you get something to move with your brain, the satisfaction is pretty amazing,” Audette says. “It’s like, ‘I am king of the world because I got this robot to move.’”
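A hypothetical version of the mapping Audette describes might reduce each one-second window of EEG to a band-power signature and match it against three calibrated patterns, one per spider command. The sample rate, frequency bands, and centroid numbers below are invented, and this is not Audette’s code.

import numpy as np

FS = 250  # assumed samples per second

def band_power(window, lo, hi, fs=FS):
    """Average spectral power of the window between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].mean()

def signature(window):
    """Relative alpha (8-12 Hz) vs. beta (13-30 Hz) power."""
    a = band_power(window, 8, 12)
    b = band_power(window, 13, 30)
    return np.array([a, b]) / (a + b)

# Calibrated ahead of time, one centroid per reproducible mental pattern.
commands = {
    "turn_left":  np.array([0.8, 0.2]),
    "turn_right": np.array([0.2, 0.8]),
    "walk":       np.array([0.5, 0.5]),
}

def classify(window):
    s = signature(window)
    return min(commands, key=lambda cmd: np.linalg.norm(s - commands[cmd]))

# One second of fake signal dominated by ~10 Hz activity reads as "turn_left".
t = np.arange(FS) / FS
demo = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(1).standard_normal(FS)
print(classify(demo))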
In Los Angeles, a group is using another prototype to give a paralyzed graffiti artist the ability to practice his craft again. The artist, Tempt One, was diagnosed with Lou Gehrig’s disease in 2003 and gradually progressed to the nightmarish “locked in” state. By 2010 he couldn’t move or speak and lay inert in a hospital bed—but with unimpaired consciousness, intellect, and creativity trapped inside his skull. Now his supporters are developing a system called the BrainWriter: They’re using OpenBCI to record the artist’s brain waves and are devising ways to use those brain waves to control the computer cursor so Tempt can sketch his designs on the screen.
Another early collaborator thinks that OpenBCI will be useful in mainstream medicine. David Putrino, director of telemedicine and virtual rehabilitation at the Burke Rehabilitation Center, in White Plains, N.Y., says he’s comparing the open-source system to the $60,000 clinic-grade EEG devices he typically works with. He calls the OpenBCI system robust and solid, saying, “There’s no reason why it shouldn’t be producing good signal.”
Putrino hopes to use OpenBCI to build a low-cost EEG system that patients can take home from the hospital, and he imagines a host of applications. Stroke patients, for example, could use it to determine when their brains are most receptive to physical therapy, and Parkinson’s patients could use it to find the optimal time to take their medications. “I’ve been playing around with these ideas for a decade,” Putrino says, “but they kept failing because the technology wasn’t quite there.” Now, he says, it’s time to start building.

How the Web Became Our ‘External Brain,’ and What It Means for Our Kids

ORIGINAL: Wired
BY MICHAEL HARRIS
08.06.14
Getty
Recently, my two-year-old nephew Benjamin came across a copy of Vanity Fair abandoned on the floor. His eyes scanned the glossy cover, which shone less fiercely than the iPad he is used to but had a faint luster of its own. I watched his pudgy thumb and index finger pinch together and spread apart on Bradley Cooper’s smiling mug. At last, Benjamin looked over at me, flummoxed and frustrated, as though to say, “This thing’s broken.”
Search YouTube for “baby” and “iPad” and you’ll find clips featuring one-year-olds attempting to manipulate magazine pages and television screens as though they were touch-sensitive displays. These children are one step away from assuming that such technology is a natural, spontaneous part of the material world. They’ll grow up thinking about the internet with the same nonchalance that I hold toward my toaster and teakettle. I can resist all I like, but for Benjamin’s generation resistance is moot. The revolution is already complete.
Technology Is Evolving Just Like Our DNA Does
With its theory of evolution, Charles Darwin’s The Origin of Species may have outlined, back in 1859, an idea that explains our children’s relationship with iPhones and Facebook. We are now witness to a new kind of evolution, one played out by our technologies.
Excerpted from The End of Absence: Reclaiming What We’ve Lost in a World of Constant Connection.
The “meme,” a term coined by evolutionary biologist Richard Dawkins in 1976, is an extension of Darwin’s Big Idea past the boundaries of genetics. A meme, put simply, is a cultural product that is copied. We humans are enamored of imitation and so become the ultimate “meme machines.” Memes—pieces of culture—copy themselves through history and enjoy a kind of evolution of their own, and they do so riding on the backs of successful genes: ours.
According to the memeticist Susan Blackmore, just as Darwinism submits that genes good at replicating will naturally become the most prevalent, technologies with a knack for replication rise to dominance. These “temes,” as she’s called these new replicators, could be copied, varied, and selected as digital information—thus establishing a new evolutionary process (and one far speedier than our genetic model). Blackmore’s work offers a fascinating explanation for why each generation seems less capable of managing solitude, and less likely to opt for technological disengagement.
She suggests that temes are a different kind of replicator from the basic memes of everyday material culture. “Most memes . . . we forget how often we get them wrong,” Blackmore says. (Oral traditions of storytelling, for example, were characterized by constant twists in the tale.) “But with digital machines the fidelity is almost 100 percent. As it is, indeed, with our genes.” This is a startling thought: By delivering to the world technologies capable of replicating information with the same accuracy as DNA, we are playing a grand game indeed.
Old Ways of Thinking Are on the Verge of Extinction
The brains our children are born with are not substantively different from the brains our ancestors had 40,000 years ago. For all the wild variety of our cultures, personalities, and thought patterns, we’re all still operating with roughly the same three-pound lump of gray matter. But almost from day one, the allotment of neurons in those brains (and therefore the way they function) is different today from the way it was even one generation ago. Every second of your lived experience represents new connections among the roughly 86 billion neurons packed inside your brain. Children, then, can become literally incapable of thinking and feeling the way their grandparents did. A slower, less harried way of thinking may be on the verge of extinction.
In your brain, your billions of neurons are tied to each other by trillions of synapses, a portion of which are firing right now, forging (by still mysterious means) 

  • your memory of this sentence, 
  • your critique of this very notion, and 
  • your emotions as you reflect on this information. 

Our brains are so plastic that they will reengineer themselves to function optimally in whatever environment we give them. Repetition of stimuli produces a strengthening of responding neural circuits. Neglect of other stimuli will cause corresponding neural circuits to weaken. (Grannies who maintain their crossword puzzle regime knew that already.)

UCLA’s Gary Small is a pioneer of neuroplasticity research, and in 2008 he produced the first solid evidence showing that our brains are reorganized by our use of the internet. He placed a set of “internet naïve” people in MRI machines and made recordings of their brain activity while they took a stab at going online. Small then had each of them practice browsing the internet for an hour a day for a week. On returning to the MRI machine, those subjects now toted brains that lit up significantly in the frontal lobe, where there had been minimal neural activity beforehand. Neural pathways quickly develop when we give our brains new tasks, and Small had shown that this held true—over the course of just a few hours, in fact— following internet use.
“We know that technology is changing our lives. It’s also changing our brains,” he announced. On the one hand, neuroplasticity gives him great hope for the elderly. “It’s not just some linear trajectory with older brains getting weaker,” he told me. The flip side of all this, though, is that young brains may be more equipped to deal with digital reality than with the decidedly less flashy reality that makes up our dirty, sometimes boring, material world.
In The Shallows, Nicholas Carr describes how the internet fundamentally works on our plastic minds to make them more capable of shallow thinking and less capable of deep thinking. After enough time in front of our screens, we learn to absorb more information less effectively, skip the bottom half of paragraphs, shift focus constantly; “the brighter the software, the dimmer the user,” he suggests at one point.

Kids These Days Can Think Quickly—But Not Deeply

The most startling example of our brain’s malleability, though, comes from new research by neural engineers at Boston University who now suggest that our children will be able to “incept” a person “to acquire new learning, skills, or memory, or possibly restore skills or knowledge that has been damaged through accident, disease, or aging, without a person’s awareness of what is learned or memorized.” The team was able to use decoded functional magnetic resonance imaging (fMRI) to modify in highly specific ways the brain activity in the visual cortex of their human subjects.
The possibilities of such injections of “unearned” learning are as marvelous as they are quagmires for bioethical debate. Your grandchild’s brain could be trained in a certain direction while watching ads through digital contact lenses without his or her awareness (or, for that matter, acquiescence). For now, it’s easier to tell that something has changed in our minds, but we still feel helpless against it, and we even feel addicted to the technologies that are that change’s agents. But will our children feel the static?
In 2012, Elon University worked with the Pew Internet and American Life Project to release a report that compiled the opinions of 1,021 critics, experts, and stakeholders, asking for their thoughts on digital natives. Their boiled-down message was that young people now count on the internet as “their external brain” and have become skillful decision makers—even while they also “thirst for instant gratification and often make quick, shallow choices.”
Some of those experts were optimistic about the future brains of the young. Susan Price, CEO and chief Web strategist at San Antonio’s Firecat Studio, suggested that “those who bemoan the perceived decline in deep thinking . . . fail to appreciate the need to evolve our processes and behaviors to suit the new realities and opportunities.” Price promises that the young are developing new skills and standards better suited to their own reality than to the outmoded reality of, say, 1992. Meanwhile, the report’s coauthor, Janna Anderson, noted that while many respondents were enthusiastic about the future of such minds, there was a clear dissenting voice: “Some said they are already witnessing deficiencies in young people’s abilities to focus their attention, be patient and think deeply. Some experts expressed concerns that trends are leading to a future in which most people become shallow consumers of information, endangering society.”
We may be on our way to becoming servants to the evolution of our own technologies. The power shifts very quickly from the spark of human intention to the absorption of human will by a technology that seems to have intentions of its own.
But we’ll likely find there was no robotic villain behind the curtain. Our own capitalist drive pushes these technologies to evolve. We push the technology down an evolutionary path that results in the most addictive possible outcome. Yet even as we do this, it doesn’t feel as though we have any control. It feels, instead, like a destined outcome—a fate.
Excerpted from The End of Absence: Reclaiming What We’ve Lost in a World of Constant Connection by Michael Harris, in agreement with Current, an imprint of Penguin Random House. Copyright (c) Michael Harris, 2014.
Editor: Samantha Oltman (@samoltman)
Michael Harris
Michael Harris is a contributing editor at Western Living and Vancouver magazine. His award-winning writing appears regularly in publications such as The Huffington Post and The Walrus. He is the author of The End of Absence and lives in Toronto, Canada.