How to Build a Brain: Chris Eliasmith at TEDxWaterloo 2013

By admin,

ORIGINAL: TEDx Talks
May 31, 2013

He’s the creator of SPAUN, the world’s largest brain simulation. Can he really make headway in mimicking the human brain?

Chris Eliasmith has cognitive flexibility on the brain. How do people manage to walk, chew gum and listen to music all at the same time? What is our brain doing as it switches between these tasks, and how do we use the same components in our heads to do all those different things?

These are questions that Eliasmith and his team’s Semantic Pointer Architecture Unified Network (SPAUN) are determined to answer. SPAUN is currently the world’s largest functional brain simulation, and is unique because it’s the first model that can actually emulate behaviours while also modeling the physiology that underlies them.

 

 

Related articles

Enhancing Your Brain Plasticity

Energy Efficient Brain Simulator Outperforms Supercomputers

Researchers Connect Entropy To Intelligent Behaviour

This groundbreaking work was published in Science and has been featured by CNN, BBC, Der Spiegel, Popular Science, The Economist and CBC. He is co-author of Neural Engineering, which describes a framework for building biologically realistic neural models, and his new book, How to Build a Brain: A Neural Architecture for Biological Cognition (Oxford Series on Cognitive Models and Architectures), applies those methods to large-scale cognitive brain models.

Eliasmith holds a Canada Research Chair in Theoretical Neuroscience at the University of Waterloo. He is also Director of Waterloo’s Centre for Theoretical Neuroscience, is jointly appointed in the Philosophy and Systems Design Engineering departments, and is cross-appointed to Computer Science.

For more on Chris, visit http://arts.uwaterloo.ca/~celiasmi/

Distributing the Edit History of Wikipedia Infoboxes

By admin,

ORIGINAL:

Posted by Enrique Alfonseca, Google Research
May 30, 2013
Aside from its value as a general-purpose encyclopedia, Wikipedia is also one of the most widely used resources to acquire, either automatically or semi-automatically, knowledge bases of structured data. Much research has been devoted to automatically building disambiguation resources, parallel corpora and structured knowledge from Wikipedia. Still, most of those projects have been based on single snapshots of Wikipedia, extracting the attribute values that were valid at a particular point in time. So about a year ago we compiled and released a data set that allows researchers to see how data attributes can change over time.

Figure 1. Infobox for the Republic of Palau in 2006 and 2013 showing the capital change.

Many attributes vary over time. These include the presidents of countries, the spouses of people, the populations of cities and the number of employees of companies. Every Wikipedia page has an associated history from which the users can view and compare past versions. Having the historical values of Infobox entries available would provide a historical overview of change affecting each entry, to understand which attributes are more likely to change over time or have a regularity in their changes, and which ones attract more user interest and are actually updated in a timely fashion. We believe that such a resource will also be useful in training systems to learn to extract data from documents, as it will allow us to collect more training examples by matching old values of an attribute inside old pages.

For this reason, we released, in collaboration with Wikimedia Deutschland e.V., a resource containing all the edit history of infoboxes in Wikipedia pages. While this was already available indirectly in Wikimedia’s full history dumps, the smaller size of the released dataset will make it easier to download and process this data. The released dataset contains 38,979,871 infobox attribute updates for 1,845,172 different entities, and it is available for download both from Google and from Wikimedia Deutschland’s Toolserver page. A description of the dataset can be found in our paper WHAD: Wikipedia Historical Attributes Data, accepted for publication at the Language Resources and Evaluation journal.

What kind of information can be learned from this data? Some examples from preliminary analyses include the following (a toy latency analysis is sketched after the list):

  • Every country in the world has a population in its Wikipedia attribute, which is updated at least yearly for more than 90% of them. The average error rate with respect to the yearly World Bank estimates is between two and three percent, mostly due to rounding.
  • 50% of deaths are updated into Wikipedia infoboxes within a couple of days… but for scientists it takes 31 days to reach 50% coverage!
  • For the last episode of TV shows, the airing date is updated for 50% of them within 9 days; for the first episode of TV shows, it takes 106 days.
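
The latency figures above are straightforward to compute once the update records are in hand. Here is a minimal sketch in Python; the record layout (entity, attribute, real-world event date, infobox edit date) and the field names are illustrative assumptions, not the actual schema of the released dataset:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical update records: (entity, attribute, event date, edit date).
# The schema is assumed for illustration only.
updates = [
    ("Entity_A", "death_date", "2012-03-01", "2012-03-03"),
    ("Entity_B", "death_date", "2012-03-01", "2012-04-01"),
    ("Show_X", "last_episode_date", "2012-05-10", "2012-05-19"),
]

lags = defaultdict(list)
for entity, attribute, event_date, edit_date in updates:
    fmt = "%Y-%m-%d"
    delta = datetime.strptime(edit_date, fmt) - datetime.strptime(event_date, fmt)
    lags[attribute].append(delta.days)

# Median lag per attribute: how long until half the pages are updated.
for attribute, days in sorted(lags.items()):
    days.sort()
    print(attribute, "median lag:", days[len(days) // 2], "days")
```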

While infobox attribute updates will be much easier to process as they transition into the Wikidata project, we are not there yet and we believe that the availability of this dataset will facilitate the study of changing attribute values. We are looking forward to the results of those studies.

Thanks to Googler Jean-Yves Delort, and to Guillermo Garrido and Anselmo Peñas from UNED, for putting this dataset together, and to Angelika Mühlbauer and Kai Nissen from Wikimedia Deutschland for their support. Thanks also to Thomas Hofmann and Fernando Pereira for making this data release possible.


  Category: Data
  Comments: Comments Off on Distributing the Edit History of Wikipedia Infoboxes

Complex brain function depends on flexibility

By admin,

ORIGINAL: MIT News
Anne Trafton, MIT News Office
May 19, 2013
Neurons that can multitask greatly enhance the brain’s computational power, study finds.

An artist’s impression depicting a network of neurons of the nervous system. Image: Maurizio De Angelis/Wellcome Images

Over the past few decades, neuroscientists have made much progress in mapping the brain by deciphering the functions of individual neurons that perform very specific tasks, such as recognizing the location or color of an object.

However, there are many neurons, especially in brain regions that perform sophisticated functions such as thinking and planning, that don’t fit into this pattern. Instead of responding exclusively to one stimulus or task, these neurons react in different ways to a wide variety of things. MIT neuroscientist Earl Miller first noticed these unusual activity patterns about 20 years ago, while recording the electrical activity of neurons in animals that were trained to perform complex tasks.

“We started noticing early on that there are a whole bunch of neurons in the prefrontal cortex that can’t be classified in the traditional way of one message per neuron,” recalls Miller, the Picower Professor of Neuroscience at MIT and a member of MIT’s Picower Institute for Learning and Memory.

In a paper appearing in Nature on May 19, Miller and colleagues at Columbia University report that these neurons are essential for complex cognitive tasks, such as learning new behavior. The Columbia team, led by the study’s senior author, Stefano Fusi, developed a computer model showing that without these neurons, the brain can learn only a handful of behavioral tasks.

“You need a significant proportion of these neurons,” says Fusi, an associate professor of neuroscience at Columbia. “That gives the brain a huge computational advantage.”

Lead author of the paper is Mattia Rigotti, a former grad student in Fusi’s lab. 

Multitasking neurons

Miller and other neuroscientists who first identified this neuronal activity observed that while the patterns were difficult to predict, they were not random. “In the same context, the neurons always behave the same way. It’s just that they may convey one message in one task, and a totally different message in another task,” Miller says.

For example, a neuron might distinguish between colors during one task, but issue a motor command under different conditions.

Miller and colleagues proposed that this type of neuronal flexibility is key to cognitive flexibility, including the brain’s ability to learn so many new things on the fly. “You have a bunch of neurons that can be recruited for a whole bunch of different things, and what they do just changes depending on the task demands,” he says.

At first, that theory encountered resistance “because it runs against the traditional idea that you can figure out the clockwork of the brain by figuring out the one thing each neuron does,” Miller says.

For the new Nature study, Fusi and colleagues at Columbia created a computer model to determine more precisely what role these flexible neurons play in cognition, using experimental data gathered by Miller and his former grad student, Melissa Warden. That data came from one of the most complex tasks that Miller has ever trained a monkey to perform: The animals looked at a sequence of two pictures and had to remember the pictures and the order in which they appeared.

During this task, the flexible neurons, known as “mixed selectivity neurons,” exhibited a great deal of nonlinear activity — meaning that their responses to a combination of factors cannot be predicted based on their response to each individual factor (such as one image).

Expanding capacity

Fusi’s computer model revealed that these mixed selectivity neurons are critical to building a brain that can perform many complex tasks. When the computer model includes only neurons that perform one function, the brain can only learn very simple tasks. However, when the flexible neurons are added to the model, “everything becomes so much easier and you can create a neural system that can perform very complex tasks,” Fusi says.

The flexible neurons also greatly expand the brain’s capacity to perform tasks. In the computer model, neural networks without mixed selectivity neurons could learn about 100 tasks before running out of capacity. That capacity greatly expanded to tens of millions of tasks as mixed selectivity neurons were added to the model. When mixed selectivity neurons reached about 30 percent of the total, the network’s capacity became “virtually unlimited,” Miller says — just like a human brain.
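
A toy calculation makes the capacity argument concrete. Under the standard reading of the study’s claim, a linear readout of “pure selectivity” neurons (each encoding one task variable) cannot solve an XOR-like combination of two variables, but adding a single nonlinearly mixed response makes the problem linearly separable. The data and the perceptron below are illustrative stand-ins, not the authors’ actual model:

```python
import numpy as np

# Two binary task variables (e.g., stimulus identity and task context).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])  # XOR-like task rule

# "Pure selectivity": each neuron encodes one variable only.
pure = X
# "Mixed selectivity": add a neuron whose response depends nonlinearly
# on the combination of variables (here, their product).
mixed = np.column_stack([X, X[:, 0] * X[:, 1]])

def linearly_separable(features, labels, epochs=1000):
    """Perceptron: reaches zero errors iff the data are linearly separable."""
    w, b = np.zeros(features.shape[1]), 0.0
    for _ in range(epochs):
        errors = 0
        for f, t in zip(features, 2 * labels - 1):  # targets in {-1, +1}
            if t * (f @ w + b) <= 0:                # misclassified pattern
                w, b, errors = w + t * f, b + t, errors + 1
        if errors == 0:
            return True
    return False

print(linearly_separable(pure, y))   # False: pure selectivity fails on XOR
print(linearly_separable(mixed, y))  # True: one mixed neuron fixes it
```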

Mixed selectivity neurons are especially dominant in the prefrontal cortex, where most thought, learning and planning takes place. “This study demonstrates how these mixed selectivity neurons greatly increase the number of tasks that this kind of neural network can perform,” says John Duncan, a professor of neuroscience at Cambridge University.

“Especially for higher-order regions, the data that have often been taken as a complicating nuisance may be critical in allowing the system actually to work,” says Duncan, who was not part of the research team.

Miller is now trying to figure out how the brain sorts through all of this activity to create coherent messages. There is some evidence suggesting that these neurons communicate with the correct targets by synchronizing their activity with oscillations of a particular brainwave frequency.

“The idea is that neurons can send different messages to different targets by virtue of which other neurons they are synchronized with,” Miller says. “It provides a way of essentially opening up these special channels of communications so the preferred message gets to the preferred neurons and doesn’t go to neurons that don’t need to hear it.”

The research was funded by the Gatsby Foundation, the Swartz Foundation and the Kavli Foundation.

  Category: Neuroscience
  Comments: Comments Off on Complex brain function depends on flexibility

Michio Kaku: The Intelligence Revolution

By admin,

ORIGINAL: BBC

Published September 11, 2012

Professor Michio Kaku, famous theoretical physicist and one of the inventors of Light Cone String Field Theory (one of the relativistic forms of M-Theory), hosts a documentary on how computer technology, cognitive science, mathematical groupings, faster computers, sophisticated algorithms and, most importantly, better education will lead to “The Intelligence Revolution”.

Ubiquitous computing is fast approaching, with computer technology quickly becoming present in almost every facet of society. Soon computers will be so ubiquitous that they will toil away in almost pure invisibility: in our glasses, in our clothing, even in our own bodies.
The synthesis of computer fabrication, computer connectivity and nanotechnology will mean that computers are smaller, more connected and everywhere, with minimal environmental and spatial impact, creating a renaissance in information control by the individual.

Dr. Kaku was a high achiever in his youth, to say the very least. He constructed a small-scale, but fully functional, cyclotron particle accelerator in his senior year of high school. His goal was no less than to fabricate antimatter. Such ambition did not go unnoticed: it caught the attention of a very famous physicist, Dr. Edward Teller.
Dr. Teller saw that young Michio was a very talented young man and promptly offered him a Harvard scholarship, starting his academic career. Michio’s early education in physics, mixed with a bold curiosity, was what inspired and equipped him to build such a complicated device.

Michio got his knowledge through books; children today are advanced data retrievers by comparison and can scour the internet for most of the content of human knowledge. It’s no wonder children and young people are producing such powerful innovations and ideas.
This synergy of information retrieval, information processing and information implementation is the engine of creativity and prosperity that, if we plan it correctly, could solve most of the problems facing us today:
global warming, environmental disasters, disease, war, hunger and energy shortages are all problems that require quick access to information and fast processing of plans to implement action. The Internet has given us one way to do this, but with more work on smart computing, cloud computing and the expanding fields of neural networking and grid computing, we may be able to solve problems as a global civilization.

The fact that our world will be governed by computer intelligence in the necessities such as travel, health and even fabrication itself may spur a revolution on par with the development of agriculture: we may have all the time in the world to simply ponder existence and have an armada of computer brains to help us find the secrets of existence.

In this documentary Michio Kaku shows us how the information revolution may reach these amazing new heights and how we may use our intelligence to reinforce our wisdom for generations by educating ourselves to think critically on real world problems.

  Category: Uncategorized
  Comments: Comments Off on Michio Kaku: The Intelligence Revolution

NASA And Google Partner To Work With A D-Wave Quantum Computer

By admin,

D-Wave 512-Qubit Bonded Processor – Recent Generation (Credit: D-Wave)

D-Wave, the Canadian-based company that is the first to offer a commercial quantum computer, announced today that it’s sold its second $10 million D-Wave Two system. The contract is between the Universities Space Research Association and D-Wave. Google, USRA, and NASA will be collaborating on the use of the machine.

The system will be installed at a new lab, which will be located at NASA’s Ames Research Center. The computer is expected to go online in the third quarter of 2013. In addition to the sale, D-Wave will also be providing ongoing services such as maintenance. The company also expects to work closely with NASA, Google and USRA on the system.

Lockheed Martin Installs Quantum Computer 
Jeff Bezos And The CIA Invest In D-Wave’s Quantum Computer
D-Wave Adds Two Silicon Valley Vets To Its C-Suite 

“I expect this to be a collaboration,” D-Wave’s U.S. President Bo Ewald told me. “Some of our scientists, mathematicians and computer scientists will be working at the Center.”

Prior to signing the contract with D-Wave, the partnership first conducted a series of benchmarks on the 512-qubit D-Wave Two system and found that its specifications were met or exceeded. The computer will be upgraded to a 2,048-qubit system once D-Wave has perfected that chip.

It’s important to note that the D-Wave system is not a general computer like your PC. Rather, it’s optimized to solve particular types of problems, and it likely uses quantum effects to solve those problems.
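
The article doesn’t spell out the problem class, but D-Wave’s annealers are designed to minimize Ising/QUBO-style objectives: find the binary vector x that minimizes xᵀQx. A brute-force toy version (with an arbitrary illustrative Q, not anything from D-Wave) shows the kind of answer the hardware is meant to return far faster at scale:

```python
import itertools
import numpy as np

# Toy QUBO: minimize x^T Q x over x in {0, 1}^n. Q here is an arbitrary
# illustrative matrix, not from the article or from D-Wave.
Q = np.array([[-1.0, 2.0, 0.0],
              [ 0.0, -1.0, 2.0],
              [ 0.0,  0.0, -1.0]])

best_x, best_energy = None, float("inf")
for bits in itertools.product([0, 1], repeat=Q.shape[0]):  # feasible for tiny n
    x = np.array(bits)
    energy = x @ Q @ x
    if energy < best_energy:
        best_x, best_energy = x, energy

print(best_x, best_energy)  # an annealer samples low-energy states like this
```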

(Whether the D-Wave system uses a quantum process for its computation has been a matter of hot dispute in academia. However, recent research by a USC team working with Lockheed Martin’s D-Wave system appears to show that there are, indeed, quantum effects happening with the system. Whether those quantum effects produce a “speedup” – that is, computation faster than classical methods – is still an open question.)

The laboratory at Ames will be using the D-Wave system for a number of applications, but it will focus on improving the algorithms used in machine learning and artificial intelligence. The lab will also investigate whether the system can optimize the search for planets outside of our solar system.

“We hope it helps researchers construct more efficient, effective models for everything from speech recognition, to web search, to protein folding,” Google said in a statement.

Under the terms of the agreement, 20% of the computer’s usage will be granted to university research. Research teams will compete to have their proposals selected for time on the machine. Once they’ve passed through that selection process, however, they’ll be granted use of the system free of charge.

For his part, Ewald is pretty excited about this step for the fledgling company. “For a company that’s just starting out, having Lockheed Martin as our first customer, then Google and NASA as number two? Well, that’s just a great way to start.”

Update: An earlier version of this article indicated that NASA had partnered to purchase the D-Wave System. A NASA spokesperson clarified that while NASA is partnered with Google and USRA to use the system, NASA is “not purchasing or leasing it”.

Visualizing the Connectome

By admin,

ORIGINAL: Discover
May 12, 2013 4:49 am
Last year, I blogged about a new and very pretty way of displaying the data about the human ‘connectome’ – the wiring between different parts of the brain. 
But there are many beautiful ways of visualizing the brain’s connections, as neuroscientist Daniel Margulies and colleagues in Leipzig discuss in a colourful paper showcasing these techniques.
Here, for example, are two ways of showing the brain’s white matter tracts, as studied with diffusion tensor imaging (DTI):
Another striking image is this one, a representation of the brain’s functional connectivity – the degree to which activation in each part of the brain is correlated with activity in every other part.
The functional connectome is inherently difficult to visualize in 2D (or even 3D), but in this ingenious display, the brain’s surface is shown covered with hundreds of little brains, each one a colour-coded map of the connectivity from that particular point.
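For readers curious what the underlying computation looks like, here is a minimal sketch of a functional-connectivity matrix, with synthetic signals standing in for real fMRI time series (everything here is illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic "activity" for 20 brain regions over 300 timepoints.
rng = np.random.default_rng(0)
signals = rng.normal(size=(20, 300))
signals[1] += 0.7 * signals[0]      # make two regions co-activate

# Functional connectivity: correlation of every region with every other.
connectivity = np.corrcoef(signals)

plt.imshow(connectivity, cmap="coolwarm", vmin=-1, vmax=1)
plt.colorbar(label="correlation")
plt.title("Toy functional connectivity matrix")
plt.show()
```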
The Margulies paper is about more than just pretty pictures, though. The authors also discuss the scientific questions and theoretical tensions that surround the choice of one visualization over another:
Scientific figures and illustrations are – to paraphrase Tufte – where seeing turns into showing. The capacity of these images to influence our interpretation of data and to direct the questions of the scientific community makes visualizations worthy of careful consideration during their production…
 
If we present a figure that clarifies the scientific content, but does so by creating a distortion of brain space, is that bad practice? What if the caption and methods explicitly stated that the contents of the figure were not to be taken literally? To what degree should a visualization be allowed to stand alone? 
In my view, the study of connections has been dominated by images, more than any other branch of neuroscience. It’s rarely easy to say where ‘method’ or ‘analysis’ ends and ‘visualization’ begins. 
This is not a bad thing – connectivity is spatial, by definition, and to understand space is to visualize it. But it does mean that in the connectome, there is always a danger of valuing aesthetics over accuracy, beauty above brains.
Margulies DS, Böttger J, Watanabe A, & Gorgolewski KJ (2013). Visualizing the Human Connectome. NeuroImage PMID: 23660027

Quandl is the easiest way to find and use numerical data on the internet.

By admin,

ORIGINAL: Quandl

Quandl is the easiest way to find and use numerical data on the internet.

Quandl has indexed over 5 million time-series datasets from over 400 sources. All of Quandl’s datasets are open and free.

You can download any Quandl dataset in any format that you want. You can also visualize, save, share, authenticate, validate, upload, index, merge and transform data.

Our long-term goal is to make all the numerical data on the internet easy to find and easy to use.

Further Reading

Learn more about Quandl:

  • Quandl’s long-term goal is to make all the numerical data on the internet easy to find and easy to use. Read more on our vision page.
  • Quandl currently has over 5 million datasets from over 400 sources. Explore Quandl’s data on our sources page.
  • Every dataset on Quandl is available via a simple and consistent API.
  • You can also access Quandl data using our R, Python, Matlab, Excel, Maple, Julia, Clojure, and Stata packages (see the sketch after this list).
  • Check out what’s new on Quandl (data, features, topics) on our news page.
  • Explore Quandl’s features and what’s in the development pipeline. If you have suggestions, please email us.
  • Learn how developers are building innovative applications that use the Quandl data platform, on our partners page.
  • Questions or comments? Try our FAQ or email us.
  • Quandl is a collaborative project. Find out how you can get involved on our community page.
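
As a sketch of the Python route, the snippet below follows the pattern the Quandl package documented at the time; the dataset code and the token are placeholders, not live credentials:

```python
# Minimal sketch using Quandl's Python package (pip install Quandl).
# "WIKI/AAPL" and the authtoken value are illustrative placeholders.
import Quandl

data = Quandl.get("WIKI/AAPL", authtoken="YOUR_API_KEY")
print(data.tail())  # the package returns a pandas DataFrame
```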

Quick Links: Vision | Sources | API | Packages | News | Features | Pipeline | FAQ | Community

Quandl is an index. It is a conduit to data published on various locations on the internet. Like any search engine, Quandl makes no claim to own the data it indexes or caches. Quandl endeavours to respect copyright. If you believe this site is indexing your copyright data and you would rather it not do so we have a simple take-down mechanism in place.

Contact Us

We’d love to hear from you. Email us with your comments, suggestions and feedback.

  Category: Data, Open
  Comments: Comments Off on Quandl is the easiest way to find and use numerical data on the internet.

“To Understand Is To Perceive Patterns”

By admin,

ORIGINAL: Vimeo
from Jason Silva
Follow me on Twitter: twitter.com/JasonSilva
@JasonSilva and @notthisbody
Special thanks to filmmaker/photographer Rob Whitworth for allowing a clip from his video (vimeo.com/32958521) to be featured.
Check out his website: robwhitworth.co.uk
My videos:
Beginning of Infinity – vimeo.com/29938326
Imagination – vimeo.com/34902950
INSPIRATION:
The Imaginary Foundation says “To Understand Is To Perceive Patterns”…
Albert-László Barabási thinks about NETWORKS:
Networks are everywhere. The brain is a network of nerve cells connected by axons, and cells themselves are networks of molecules connected by biochemical reactions. Societies, too, are networks of people linked by friendships, familial relationships and professional ties. On a larger scale, food webs and ecosystems can be represented as networks of species.
For decades, we assumed that the components of such complex systems as the cell, the society, or the Internet are randomly wired together.
Steven Johnson, author of Where Good Ideas Come From, writes about recurring patterns and networks:
“Coral reefs are sometimes called ‘the cities of the sea’, and we need to take the metaphor seriously: the reef ecosystem is so innovative because it shares some defining characteristics with actual cities. These patterns of innovation and creativity are fractal: they reappear in recognizable form as you zoom in and out, from molecule to neuron to pixel to sidewalk. Whether you’re looking at the original innovations of carbon-based life, or the explosion of new tools on the web, the same shapes keep turning up… when life gets creative, it has a tendency to gravitate toward certain recurring patterns, whether those patterns are self-organizing, or whether they are deliberately crafted by human agents.”
“Put simply: cities are like ant colonies are like software is like slime molds are like evolution is like disease is like sewage systems are like poetry is like the neural pathways in our brain. Everything is connected.”
 
“…Johnson uses ‘The Long Zoom’ to define the way he looks at the world—if you concentrate on any one level, there are patterns that you miss. When you step back and simultaneously consider, say, the sentience of a slime mold, the cultural life of downtown Manhattan and the behavior of artificially intelligent computer code, new patterns emerge.”
 
Geoffrey West, from The Santa Fe Institute,
…Network systems can sustain life at all scales, whether intracellularly or within you and me or in ecosystems or within a city…. If you have a million citizens in a city or if you have 10^14 cells in your body, they have to be networked together in some optimal way for that system to function, to adapt, to grow, to mitigate, and to be long-term resilient.
Author Paul Stamets writes about The Mycelial Archetype: he compares the mushroom mycelium with the overlapping information-sharing systems that comprise the Internet, with the networked neurons in the brain, and with a computer model of dark matter in the universe.
Adrian Bejan takes the recurring patterns in nature—trees, tributaries, air passages, neural networks, and lightning bolts—and reveals how a single principle of physics, the Constructal Law, accounts for the evolution of these and all other designs in our world.
Everything—from biological life to inanimate systems—generates shape and structure and evolves in a sequence of ever-improving designs in order to facilitate flow. River basins, cardiovascular systems, and bolts of lightning are very efficient flow systems to move a current—of water, blood, or electricity.
Geoffrey West on the sameness of organisms, cities, and corporations:
Steven Johnson’s LONG VIEW
A collaboration of /Jason Silva and /Notthisbody incorporating:
/Aaron Koblin
/entpm
/Andrea Tseng
/Genki Ito
/ItoWorld
/Dominic
/Cheryl Colan
/TheNightElfik
/Paulskiart
/Grant Kayl
/blyon
/resonance
/gtAlumniMag
/Katie Armstrong
/Page Stephenson
/Jesse Kanda
/Jared Raab
/Angela Palmer
/elliottsellers
/flight404
/Pedro Miguel Cruz
/Takuya Hosogane
/kimpimmel
/Rob Whitworth
**and some original animations from Tiffany Shlain’s film CONNECTED: An Autoblogography about Love, Death & Technology // music is Clint Mansell’s “We’re going home” from Moon Soundtrack. Buy it on iTunes!

  Category: AI, Neuroscience
  Comments: Comments Off on “To Understand Is To Perceive Patterns”

Interview: How Ray Kurzweil Plans To Revolutionize Search At Google

By admin,

ORIGINAL: Forbes

Raymond Kurzweil (Photo: Wikipedia)

When Google announced in January that Ray Kurzweil would be joining the company, a lot of people wondered why the phenomenally accomplished entrepreneur and futurist would want to work for a large company he didn’t start.

Kurzweil’s answer: No one but Google could provide the kind of computing and engineering resources he needed to fulfill his life’s work. Ever since age 14, the 65-year-old inventor of everything from music synthesizers to speech recognition systems has aimed to create a true artificial intelligence, even going so far as to predict that machines would match human intelligence by 2029.

Now, as a director of engineering at Google, he’s focusing specifically on enabling computers to truly understand and even speak in natural language. As I outlined in a recent story on deep learning–a fast-rising branch of AI that attempts to mimic the human neocortex to recognize patterns in speech, images, and other data–Kurzweil eventually wants to help create a “cybernetic friend” that knows what you want before you do.

Kurzweil’s focus is timely from a competitive standpoint as well. Google upped the ante on Apr. 29 by bringing its Google Now voice search app to the iPhone and iPad, in direct competition with Apple’s Siri. And Facebook just revealed that it built a natural-language interface for its Graph Search service announced earlier this year. It’s becoming clear that search is already starting to move beyond the “caveman queries” that characterized effective search techniques until recently.

In a recent interview I conducted for the story, Kurzweil revealed a surprising amount of detail about his planned work at Google. No doubt the nature of that work will evolve as he settles in at the company, but this interview provides possibly the deepest look so far at his plans.

At least initially, that work won’t relate directly to advertising, the main subject of this blog. But marketers will need to understand how profoundly Kurzweil’s and others’ work at Google could change not only what search will become in the age of more and more intelligent machines, but the way we interact with information and even each other. All that is sure to mean big changes in the nature of advertising and marketing–well before 2029.

Q: In your book, How to Create a Mind, you lay out a theory of how the brain works. Can you explain it briefly?

A: The world is hierarchical. Only mammals have a neocortex, and the neocortex evolved to provide a better understanding of the structure of the world so you can do a better job of modifying it to your needs and solving problems within a hierarchical world. We think in a hierarchical manner. Our first invention was language, and language is hierarchical.

  Category: Uncategorized
  Comments: Comments Off on Interview: How Ray Kurzweil Plans To Revolutionize Search At Google

Talking about the Computational Future at SXSW 2013

By admin,

March 19, 2013
Last week I gave a talk at SXSW 2013 in Austin about some of the things I’m thinking about these days—including quite a few that I’ve never talked publicly about before. Here’s a video, and a slightly edited transcript:
Well, this is a pretty exciting time for me. Because it turns out that a whole bunch of things that I’ve been working on for more than 30 years are all finally converging, in a very nice way. And what I’d like to do here today is tell you a bit about that, and about some things I’ve figured out recently—and about what it all means for our future.
 
This is going to be a bit of a wild talk in some ways. It’s going to go from pretty intellectual stuff about basic science and so on, to some really practical technology developments, with a few sneak peeks at things I’ve never shown before.
 
Let’s start from some science. And you know, a lot of what I’ll say today connects back to what I thought at first was a small discovery that I made about 30 years ago. Let me tell you the story.
 
I started out at a pretty young age as a physicist. Diligently doing physics pretty much the way it had been done for 300 years. Starting from this-or-that equation, and then doing the math to figure out predictions from it. That worked pretty well in some cases. But there were too many cases where it just didn’t work. So I got to wondering whether there might be some alternative; a different approach.
 
At the time I’d been using computers as practical tools for quite a while—and I’d even created a big software system that was a forerunner of Mathematica. And what I gradually began to think was that actually computers—and computation—weren’t just useful tools; they were actually the main event. And that one could use them to generalize how one does science: to think not just in terms of math and equations, but in terms of arbitrary computations and programs.
 
So, OK, what kind of programs might nature use? Given how complicated the things we see in nature are, we might think the programs it’s running must be really complicated. Maybe thousands or millions of lines of code. Like programs we write to do things.
 
But I thought: let’s start simple. Let’s find out what happens with tiny programs—maybe a line or two of code long. And let’s find out what those do. So I decided to do an experiment. Just set up programs like that, and run them. Here’s one of the ones I started with. It’s called a cellular automaton. It consists of a line of cells, each one either black or not. And it runs down the page computing the new color of each cell using the little rule at the bottom there.
 
 
OK, so there’s a simple program, and it does something simple. But let’s point our computational telescope out into the computational universe and just look at all simple programs that work like the one here.
 
 
Well, we see a bunch of things going on. Often pretty simple. A repeating pattern. Sometimes a fractal. But you don’t have to go far before you see much stranger stuff.
 
This is a program I call “rule 30”. What’s it doing? Let’s run it a little longer.
 
 
That’s pretty complicated. And if we just saw this somewhere out there, we’d probably figure it was pretty hard to make. But actually, it all comes just from that tiny program at the bottom. That’s it. And when I first saw this, it was my sort of little modern “Galileo moment”. I’d seen something through my computational telescope that eventually made me change my whole world view. And made me realize that computation—even as done by a tiny program like the one here—is vastly more powerful and important than I’d ever imagined.
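
Since the talk’s images don’t reproduce here, a few lines of code regenerate the idea: an elementary cellular automaton is just a lookup table over three-cell neighborhoods, and rule 30’s table is the binary digits of the number 30. A minimal sketch, rendering in ASCII rather than the talk’s graphics:

```python
def run_rule(rule, width=64, steps=32):
    """Run an elementary cellular automaton and print each generation."""
    table = [(rule >> i) & 1 for i in range(8)]  # rule number -> lookup table
    cells = [0] * width
    cells[width // 2] = 1                        # one black cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in cells))
        cells = [table[(cells[(i - 1) % width] << 2)
                       | (cells[i] << 1)
                       | cells[(i + 1) % width]]
                 for i in range(width)]

run_rule(30)  # a complex, seemingly random pattern from a tiny program
```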
 
 
Well, I’ve spent the past few decades working through the consequences of this. And it’s led me to build a new kind of science, to create all sorts of practical technology, and to make me think about almost everything in a different way. I published a big book about the science about ten years ago. And at the time when the book came out, there was quite a bit of “paradigm shift turbulence”. But looking back it’s really nice to see how well the science has taken root.
 
 
 
And for example there are models based on my kinds of simple programs showing up everywhere. After 300 years of being dominated by Newton-style equations and math, the frontiers are definitely now going to simple programs and the new kind of science.
 
But there’s still one ultimate app out there to be done: to figure out the fundamental theory of physics—to figure out how our whole universe works. It’s kind of tantalizing. We see these very simple programs, with very complex behavior.
 
 
It makes one think that maybe there’s a simple program for our whole universe. And that even though physics seems to involve more and more complicated equations, that somewhere underneath it all there might just be a tiny little program. We don’t know if things work that way. But if out there in the computational universe of possible programs, the program for our universe is just sitting there waiting to be found, it seems embarrassing not to be looking for it.
 
Now if there is indeed a simple program for our universe, it’s sort of inevitable that it has to operate kind of underneath our standard notions like space and time and so on. Maybe it’s a little like this.
 
 
A giant network of nodes, that make up space a bit like molecules make up the air in this room. Well, you can start just trying possible programs that create such things. Each one is in a sense a candidate universe.
 
 
And when you do this, you can pretty quickly say most of them can’t be our universe. Time stops after an instant. There are an infinite number of dimensions. There can’t be particles or matter. Or other pathologies.

What is a Smarter Planet? Instrumented. Intelligent. Interconnected.

By admin,

ORIGINAL: IBM

On a smarter planet, we want to change the paradigm from react to anticipate

For five years, IBMers have been working with companies, cities and communities around the world to build a Smarter Planet.

We’ve seen enormous advances, as leaders are using an explosion of data to transform their enterprises and institutions through analytics, mobile technology, social business and the cloud.

We’ve also seen how this new era is starting to create winners. They’re changing how their decisions are made. They’re redesigning how their teams work, reassessing how to serve their customers, and changing the very nature of business.

It’s the ability to harness data that gives these leaders their competitive advantage in the era of “smart.”

Today, conventions once universally held are giving way to new perspectives, new ways of working, and new solutions across industries. Roles are changing. And more than ever, leaders need a partner to help them adapt.

 

What can you do on a smarter planet?

To outperform on a smarter planet, enterprises face some fundamental needs:

Turn information into insights

Organizations are overwhelmed with data. On a smarter planet, the most successful organizations can turn this data into valuable insights about customers, operations, even pricing. With advanced analytics, you can open new opportunities for business optimization by enabling rapid, informed and confident decisions and actions.

Read more about Smarter Analytics.

Connect and empower people

Innovation comes from collaboration. And collaboration comes from everywhere. Firms that embrace the power of social technologies will unleash productivity and innovation throughout the entire value chain—from employees to partners to suppliers to customers.

Read more about Social Business.

 

The cloud removes restraints

Smarter comes at a cost: hardware, programs, people to run them. Cloud computing offers multiple ways to reduce that cost through efficient use of resources. Utilizing the cloud means not having to power idle equipment and being able to rethink and redistribute software quickly and easily. It also means a nimbler, more efficient organization.

Read more about Cloud Computing.

 

Customers come of age

There’s a new breed of customers today. Empowered by technology, transparency and abundant information, they want to engage with companies on their own terms―when they want and how they want. To engage and keep these customers, organizations need a whole new integrated approach. There’s no room for business as usual.

Read more about Smarter Commerce.

 

Business moves to mobility

Even as storefronts had to adapt to the Internet, commerce is adapting to mobility. Armed with smartphones and tablets, consumers want to use those devices to browse, shop and pay. Today’s leaders recognize that desire and are building mobile enterprises in response.

Read more about Mobile Enterprise.

Manage risk, security and compliance

Even on a smarter planet there are risks: security, credit, market, operational, environmental and compliance risks…to name a few. With the right process and system improvements, leaders can identify, assess and monitor these risks to mitigate and prevent them.

Read more about Smarter Security.

 

Integrated solutions pave the way

While many enterprises share similarities, those are mostly superficial. To achieve the most from an information technology system today, your organization needs a solution that is tailored to your objectives and needs. Integrating the hardware and software into a single system provides the most power, the least pain and the best outcomes.

Read more about PureSystems and PureData.

Drive enterprises' effectiveness and efficiency

In a slow growth environment, organizations must do more with less. To succeed, your organization must drive continuous and sustainable operational improvements to lower costs and reduce complexity.

Read more about Smarter Computing.

 

 

  Category: Computing, IBM, Sensors, Smart City
  Comments: Comments Off on What is a Smarter Planet? Instrumented. Intelligent. Interconnected.

MIT’s 2013 Top 10 Breakthrough Technologies – 1: Deep Learning

By admin,

Deep Learning
With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart.
Image by Jimmy Turrell

When Ray Kurzweil met with Google CEO Larry Page last July, he wasn’t looking for a job. A respected inventor who’s become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.




It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. “I could try to give you some access to it,” Page told Kurzweil. “But it’s going to be very difficult to do that for an independent company.” So Page suggested that Kurzweil, who had never held a job anywhere but his own companies, join Google instead. It didn’t take Kurzweil long to make up his mind: in January he started working for Google as a director of engineering. “This is the culmination of literally 50 years of my focus on artificial intelligence,” he says.
Kurzweil was attracted not just by Google’s computing resources but also by the startling progress the company has made in a branch of AI called deep learning. Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data.
The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs. But because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before.
With this greater depth, they are producing remarkable advances in speech and image recognition. Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats. Google also used the technology to cut the error rate on speech recognition in its latest Android mobile software. In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin. That same month, a team of three graduate students and two professors won a contest held by Merck to identify molecules that could lead to new drugs. The group used deep learning to zero in on the molecules most likely to bind to their targets.
Google in particular has become a magnet for deep learning and related AI talent. In March the company bought a startup cofounded by Geoffrey Hinton, a University of Toronto computer science professor who was part of the team that won the Merck contest. Hinton, who will split his time between the university and Google, says he plans to “take ideas out of this field and apply them to real problems” such as image recognition, search, and natural-language understanding.
All this has normally cautious AI researchers hopeful that intelligent machines may finally escape the pages of science fiction. Indeed, machine intelligence is starting to transform everything from communications and computing to medicine, manufacturing, and transportation. The possibilities are apparent in IBM’s Jeopardy!-winning Watson computer, which uses some deep-learning techniques and is now being trained to help doctors make better decisions. Microsoft has deployed deep learning in its Windows Phone and Bing voice search.
Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. And we probably won’t see machines we all agree can think for themselves for years, perhaps decades—if ever. But for now, says Peter Lee, head of Microsoft Research USA, “deep learning has reignited some of the grand challenges in artificial intelligence.”
Building a Brain
There have been many competing approaches to those challenges. One has been to feed computers with information and rules about the world, which required programmers to laboriously write software that is familiar with the attributes of, say, an edge or a sound. That took lots of time and still left the systems unable to deal with ambiguous data; they were limited to narrow, controlled applications such as phone menu systems that ask you to make queries by saying specific words.
Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form. A program maps out a set of virtual neurons and then assigns random numerical values, or “weights,” to connections between them. These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.
Some of today’s artificial neural networks can train themselves to recognize complex patterns.
Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. If the network didn’t accurately recognize a particular pattern, an algorithm would adjust the weights. The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog. This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.
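A toy numerical version of that training loop, with synthetic data standing in for digitized features (the data, network size, and learning rate are all illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 random "patterns" of 8 features each, labeled by a hidden rule.
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float)

# One simulated neuron: random initial weights, output squashed to (0, 1).
w = rng.normal(size=8)
for _ in range(500):
    out = 1.0 / (1.0 + np.exp(-(X @ w)))   # neuron response in (0, 1)
    w -= 0.1 * X.T @ (out - y) / len(y)    # nudge weights on mistakes

out = 1.0 / (1.0 + np.exp(-(X @ w)))
print("accuracy:", ((out > 0.5) == y).mean())  # near 1.0 after training
```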
But early neural networks could simulate only a very limited number of neurons at once, so they could not recognize patterns of great complexity. They languished through the 1970s.


In the mid-1980s, Hinton and others helped spark a revival of interest in neural networks with so-called “deep” models that made better use of many layers of software neurons. But the technique still required heavy human involvement: programmers had to label data before feeding it to the network. And complex speech or image recognition required more computer power than was then available.

Finally, however, in the last decade Hinton and other researchers made some fundamental conceptual breakthroughs. In 2006, Hinton developed a more efficient way to teach individual layers of neurons; the steps, and a toy sketch of them, follow below.

  • The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance.
  • Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds.
  • The process is repeated in successive layers until the system can reliably recognize phonemes or objects.
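
The greedy layer-by-layer recipe can be sketched with small autoencoders, a common stand-in for Hinton’s 2006 procedure (which used restricted Boltzmann machines); everything below is an illustrative toy, not the original algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_layer(X, n_hidden, epochs=200, lr=0.5):
    """Train one layer to reconstruct its own input, then return the
    hidden features it learned; they become the next layer's input."""
    n, d = X.shape
    W1, b1 = rng.normal(0, 0.1, (d, n_hidden)), np.zeros(n_hidden)
    W2, b2 = rng.normal(0, 0.1, (n_hidden, d)), np.zeros(d)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)        # encode
        R = sigmoid(H @ W2 + b2)        # reconstruct the input
        dZ2 = (R - X) * R * (1 - R)     # gradient of squared error
        dZ1 = (dZ2 @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dZ2 / n; b2 -= lr * dZ2.mean(axis=0)
        W1 -= lr * X.T @ dZ1 / n; b1 -= lr * dZ1.mean(axis=0)
    return sigmoid(X @ W1 + b1)

X = rng.random((256, 32))               # stand-in "digitized" inputs
features1 = train_layer(X, 16)          # layer 1: primitive features
features2 = train_layer(features1, 8)   # layer 2: more abstract features
```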

Like cats, for instance. Last June, Google demonstrated one of the largest neural networks yet, with more than a billion connections. A team led by Stanford computer science professor Andrew Ng and Google Fellow Jeff Dean showed the system images from 10 million randomly selected YouTube videos. One simulated neuron in the software model fixated on images of cats. Others focused on human faces, yellow flowers, and other objects. And thanks to the power of deep learning, the system identified these discrete objects even though no humans had ever defined or labeled them.

What stunned some AI experts, though, was the magnitude of improvement in image recognition. The system correctly categorized objects and themes in the YouTube images 16 percent of the time. That might not sound impressive, but it was 70 percent better than previous methods. And, Dean notes, there were 22,000 categories to choose from; correctly slotting objects into some of them required, for example, distinguishing between two similar varieties of skate fish. That would have been challenging even for most humans. When the system was asked to sort the images into 1,000 more general categories, the accuracy rate jumped above 50 percent.
Big Data
Training the many layers of virtual neurons in the experiment took 16,000 computer processors—the kind of computing infrastructure that Google has developed for its search engine and other services. At least 80 percent of the recent advances in AI can be attributed to the availability of more computer power, reckons Dileep George, cofounder of the machine-learning startup Vicarious.
There’s more to it than the sheer size of Google’s data centers, though. Deep learning has also benefited from the company’s method of splitting computing tasks among many machines so they can be done much more quickly. That’s a technology Dean helped develop earlier in his 14-year career at Google. It vastly speeds up the training of deep-learning neural networks as well, enabling Google to run larger networks and feed a lot more data to them.
Already, deep learning has improved voice search on smartphones. Until last year, Google’s Android software used a method that misunderstood many words. But in preparation for a new release of Android last July, Dean and his team helped replace part of the speech system with one based on deep learning. Because the multiple layers of neurons allow for more precise training on the many variants of a sound, the system can recognize scraps of sound more reliably, especially in noisy environments such as subway platforms. Since it’s likelier to understand what was actually uttered, the result it returns is likelier to be accurate as well. Almost overnight, the number of errors fell by up to 25 percent—results so good that many reviewers now deem Android’s voice search smarter than Apple’s more famous Siri voice assistant.
For all the advances, not everyone thinks deep learning can move artificial intelligence toward something rivaling human intelligence. Some critics say deep learning and AI in general ignore too much of the brain’s biology in favor of brute-force computing.
One such critic is Jeff Hawkins, founder of Palm Computing, whose latest venture, Numenta, is developing a machine-learning system that is biologically inspired but does not use deep learning. Numenta’s system can help predict energy consumption patterns and the likelihood that a machine such as a windmill is about to fail. Hawkins, author of On Intelligence, a 2004 book on how the brain works and how it might provide a guide to building intelligent machines, says deep learning fails to account for the concept of time. Brains process streams of sensory data, he says, and human learning depends on our ability to recall sequences of patterns: when you watch a video of a cat doing something funny, it’s the motion that matters, not a series of still images like those Google used in its experiment. “Google’s attitude is: lots of data makes up for everything,” Hawkins says.

But if it doesn’t make up for everything, the computing, data, and human resources a company like Google throws at these problems can’t be dismissed. They’re crucial, say deep-learning advocates, because the brain itself is still so much more complex than any of today’s neural networks. “You need lots of computational resources to make the ideas work at all,” says Hinton.

What’s Next
Although Google is less than forthcoming about future applications, the prospects are intriguing. Clearly, better image search would help YouTube, for instance. And Dean says deep-learning models can use phoneme data from English to more quickly train systems to recognize the spoken sounds in other languages. It’s also likely that more sophisticated image recognition could make Google’s self-driving cars much better. Then there’s search and the ads that underwrite it. Both could see vast improvements from any technology that’s better and faster at recognizing what people are really looking for—maybe even before they realize it.
Sergey Brin has said he wants to build a benign version of HAL in 2001: A Space Odyssey.

This is what intrigues Kurzweil, 65, who has long had a vision of intelligent machines. In high school, he wrote software that enabled a computer to create original music in various classical styles, which he demonstrated in a 1965 appearance on the TV show I’ve Got a Secret. Since then, his inventions have included several firsts—

  • a print-to-speech reading machine,
  • software that could scan and digitize printed text in any font,
  • music synthesizers that could re-create the sound of orchestral instruments, and
  • a speech recognition system with a large vocabulary.
Today, he envisions a “cybernetic friend” that listens in on your phone conversations, reads your e-mail, and tracks your every move—if you let it, of course—so it can tell you things you want to know even before you ask. This isn’t his immediate goal at Google, but it matches that of Google cofounder Sergey Brin, who said in the company’s early days that he wanted to build the equivalent of the sentient computer HAL in 2001: A Space Odyssey—except one that wouldn’t kill people.
For now, Kurzweil aims to help computers understand and even speak in natural language. “My mandate is to give computers enough understanding of natural language to do useful things—do a better job of search, do a better job of answering questions,” he says. Essentially, he hopes to create a more flexible version of IBM’s Watson, which he admires for its ability to understand Jeopardy! queries as quirky as “a long, tiresome speech delivered by a frothy pie topping.” (Watson’s correct answer: “What is a meringue harangue?”)
Kurzweil isn’t focused solely on deep learning, though he says his approach to speech recognition is based on similar theories about how the brain works. He wants to model the actual meaning of words, phrases, and sentences, including ambiguities that usually trip up computers. “I have an idea in mind of a graphical way to represent the semantic meaning of language,” he says.
That in turn will require a more comprehensive way to graph the syntax of sentences. Google is already using this kind of analysis to improve grammar in translations. Natural-language understanding will also require computers to grasp what we humans think of as common-sense meaning. For that, Kurzweil will tap into the Knowledge Graph, Google’s catalogue of some 700 million topics, locations, people, and more, plus billions of relationships among them. It was introduced last year as a way to provide searchers with answers to their queries, not just links.
Finally, Kurzweil plans to apply deep-learning algorithms to help computers deal with the “soft boundaries and ambiguities in language.” If all that sounds daunting, it is. “Natural-language understanding is not a goal that is finished at some point, any more than search,” he says. “That’s not a project I think I’ll ever finish.”
Though Kurzweil’s vision is still years from reality, deep learning is likely to spur other applications beyond speech and image recognition in the nearer term. For one, there’s drug discovery. The surprise victory by Hinton’s group in the Merck contest clearly showed the utility of deep learning in a field where few had expected it to make an impact.
That’s not all. Microsoft’s Peter Lee says there’s promising early research on potential uses of deep learning in machine vision—technologies that use imaging for applications such as industrial inspection and robot guidance. He also envisions personal sensors that deep neural networks could use to predict medical problems. And sensors throughout a city might feed deep-learning systems that could, for instance, predict where traffic jams might occur.
In a field that attempts something as profound as modeling the human brain, it’s inevitable that one technique won’t solve all the challenges. But for now, this one is leading the way in artificial intelligence. “Deep learning,” says Dean, “is a really powerful metaphor for learning about the world.”