Category: Intelligence


A lab founded by a tech billionaire just unveiled a major leap forward in cracking your brain’s code

By Hugo Angel,

This is definitely not a scene from “A Clockwork Orange.” Allen Brain Observatory
As the mice watched a computer screen, their glowing neurons pulsed through glass windows in their skulls.
Using a device called a two-photon microscope, researchers at the Allen Institute for Brain Science could peer through those windows and record, layer by layer, the workings of their little minds.
The result, announced July 13, is a real-time record of the visual cortex — a brain region shared in similar form across mammalian species — at work. The data set that emerged is so massive and complete that its creators have named it the Allen Brain Observatory.
Bred for the lab, the mice were genetically modified so that specific cells in their brains would fluoresce when they became active. Researchers had installed the brain-windows surgically, slicing away tiny chunks of the rodents’ skulls and replacing them with five-millimeter skylights.
Sparkling neurons of the mouse visual cortex shone through the glass as images and short films flashed across the screen. Each point of light the researchers saw translated, with hours of careful processing, into data: 
  • Which cell lit up? 
  • Where in the brain? 
  • How long did it glow? 
  • What was the mouse doing at the time? 
  • What was on the screen?

The researchers imaged the neurons in small groups, building a map of one microscopic layer before moving down to the next. When they were finished, the activities of 18,000 cells from several dozen mice were recorded in their database.

“This is the first data set where we’re watching large populations of neurons’ activity in real time, at the cellular level,” said Saskia de Vries, a scientist who worked on the project at the private research center launched by Microsoft co-founder Paul Allen.
The problem the Brain Observatory wants to solve is straightforward. Science still does not understand the brain’s underlying code very well, and individual studies may turn up odd results that are difficult to interpret in the context of the whole brain.
A decade ago, for example, a widely-reported study appeared to find a single neuron in a human brain that always — and only — winked on when presented with images of Halle Berry. Few scientists suggested that this single cell actually stored the subject’s whole knowledge of Berry’s face. But without more context about what the cells around it were doing, a more complete explanation remained out of reach.
“When you’re listening to a cell with an electrode, all you’re hearing is [its activity level] spiking,” said Shawn Olsen, another researcher on the project. “And you don’t know where exactly that cell is, you don’t know its precise location, you don’t know its shape, you don’t know who it connects to.”
Imagine trying to assemble a complete understanding of a computer given only facts like “under certain circumstances, clicking the mouse makes lights on the printer blink.”
To get beyond that kind of feeling around in the dark, the Allen Institute has taken what Olsen calls an “industrial” approach to mapping out the brain’s activity.
“Our goal is to systematically march through the different cortical layers, and the different cell types, and the different areas of the cortex to produce a systematic, mostly comprehensive survey of the activity,” Olsen explained. “It doesn’t just describe how one cell type is responding or one particular area, but characterizes as much as we can a complete population of cells that will allow us to draw inferences that you couldn’t describe if you were just looking at one cell at a time.”
In other words, this project makes its impact through the grinding power of time and effort.
A visualization of cells examined in the project. Allen Brain Observatory

Researchers showed the mice moving horizontal or vertical lines, light and dark dots on a surface, natural scenes, and even clips from Hollywood movies.

The more abstract displays target how the mind sees and interprets light and dark, lines, and motion, building on existing neuroscience. Researchers have known for decades that particular cells appear to correspond to particular kinds of motion or shape, or positions in the visual field. This research helps them place the activity of those cells in context.
One of the most obvious results was that the brain is noisy, messy, and confusing.
“Even though we showed the same image, we could get dramatically different responses from the same cell. On one trial it may have a strong response, on another it may have a weak response,” Olsen said.
All that noise in their data is one of the things that differentiates it from a typical study, de Vries said.
“If you’re inserting an electrode you’re going to keep advancing until you find a cell that kind of responds the way you want it to,” she said. “By doing a survey like this we’re going to see a lot of cells that don’t respond to the stimuli in the way that we think they should. We’re realizing that the cartoon model that we have of the cortex isn’t completely accurate.”

Olsen said they suspect a lot of that noise emerges from whatever the mouse is thinking about or doing that has nothing to do with what’s on screen. They recorded videos of the mice during data collection to help researchers combing their data learn more about those effects.
The best evidence for this suspicion? When they showed the mice more interesting visuals, like pictures of animals or clips from the film “Touch of Evil,” the neurons behaved much more consistently.
“We would present each [clip] ten different times,” de Vries said. “And we can see from trial to trial many cells at certain times almost always respond — reliable, repeatable, robust responses.”
In other words, it appears the mice were paying attention.
Allen Brain Observatory

The Brain Observatory was turned loose on the internet Wednesday, with its data available for researchers and the public to comb through, explore, and maybe critique.

But the project isn’t over.
In the next year-and-a-half, the researchers intend to add more types of cells and more regions of the visual cortex to their observatory. And their long-term ambitions are even grander.
“Ultimately,” Olsen said, “we want to understand how this visual information in the mouse’s brain gets used to guide behavior and memory and cognition.”
Right now, the mice just watch screens. But by training them to perform tasks based on what they see, he said they hope to crack the mysteries of memory, decision-making, and problem-solving. A parallel observatory, created using electrode arrays instead of light through skull windows, will add new levels of richness to their data.
So the underlying code of mouse — and human — brains remains largely a mystery, but the map that we’ll need to unlock it grows richer by the day.
ORIGINAL: Tech Insider

Jul. 13, 2016

Where does intelligence come from?

By Hugo Angel,

It is amazing how intelligent we can be. We can construct shelter, find new ways of hunting, and create boats and machines. Our unique intelligence has been responsible for the emergence of civilization.
But how does a set of living cells become intelligent? How can flesh and blood turn into something that can create bicycles and airplanes or write novels?
This is the question of the origin of intelligence.
This problem has puzzled many theorists and scientists, and it is particularly important if we want to build intelligent machines. Machines still lag well behind us. Although computers calculate millions of times faster than we do, it is we who understand the big picture in which these calculations fit. Even animals are much more intelligent than machines. A mouse can find its way in a hostile forest and survive. This cannot be said for our computers or robots.
The question of how to achieve intelligence remains a mystery for scientists.
Recently, however, a new theory has been proposed that may resolve this very question. The theory is called practopoiesis and is founded on the most fundamental capability of all biological organisms—their ability to adapt.
Darwin’s theory of evolution describes one way in which our genomes adapt. By creating offspring, new combinations of genes are tested; the good ones are kept and the bad ones are discarded. The result is a genome better adapted to the environment.
Practopoiesis tells us that similar trial-and-error adaptation mechanisms operate while an organism grows, while it digests food, and also while it acts intelligently or thinks.
For example, the growth of our body is not precisely programmed by the genes. Instead, our genes perform experiments, which require feedback from the environment and corrections of errors. Only through trial and error can our bodies grow properly.
Our genes contain an elaborate knowledge of which experiments need to be done, and this knowledge of trial-and-error approaches has been acquired through eons of evolution. We kept whatever worked well for our ancestors.
However, this knowledge alone is not enough to make us intelligent.
To create intelligent behavior such as thinking, decision making, understanding a poem, or simply detecting one’s friend in a crowd of strangers, our bodies require yet another type of trial-and-error knowledge. There are mechanisms in our body that also contain elaborate knowledge for experimenting, but they are much faster. The knowledge of these mechanisms is not collected through evolution but through the development over the lifetime of an individual.
These fast adaptive mechanisms continually adjust the big network of our connected nerve cells. They can change, in the blink of an eye, the way the brain’s networks are effectively connected. It may take less than a second to make a change necessary to recognize one’s own grandmother, or to make a decision, or to get a new idea on how to solve a problem.
The slow and the fast adaptive mechanisms share one thing: they cannot be successful without receiving feedback and thus iterating through several stages of trial and error; for example, testing several possibilities of who that person in the distance might be.
Practopoiesis states that the slow and fast adaptive mechanisms are collectively responsible for creation of intelligence and are organized into a hierarchy. 
  • First, evolution creates genes at a painstakingly slow tempo. Then genes slowly create the mechanisms of fast adaptations
  • Next, adaptation mechanisms change the properties of our nerve cells within seconds
  • And finally, the resulting adjusted networks of nerve cells route sensory signals to muscles with the speed of lightning. 
  • At the end behavior is created.
Probably the most groundbreaking aspect of practopoietic theory is that our intelligent minds are not primarily located in the connectivity matrix of our neural networks, as has been widely held, but instead in the elaborate knowledge of the fast adaptive mechanisms. The more knowledge our genes build into these quick mechanisms for adapting nerve cells, the more capable we are of adjusting to novel situations, solving problems, and, in general, acting intelligently.
Therefore, our intelligence seems to come from the hierarchy of adaptive mechanisms, from the very slow adaptation of the genome through evolution to the quick pace of neural adaptation, which expresses knowledge acquired over an individual’s lifetime. Only when these adaptations have been performed successfully can our networks of neurons perform tasks with wonderful accuracy.
Our capability to survive and create originates, then, 
  • from the adaptive mechanisms that operate at different levels and 
  • from the vast amounts of knowledge accumulated by each of those levels.
The combined result of these levels working together is what makes us intelligent.
May 16, 2016
Danko Nikolić
About the Author:
Danko Nikolić is a brain and mind scientist, running an electrophysiology lab at the Max Planck Institute for Brain Research, and is the creator of the concept of ideasthesia. More about practopoiesis can be read here

“AI & The Future Of Civilization” A Conversation With Stephen Wolfram

By Hugo Angel,

“AI & The Future Of Civilization” A Conversation With Stephen Wolfram [3.1.16]
Stephen Wolfram
What makes us different from all these things? What makes us different is the particulars of our history, which gives us our notions of purpose and goals. That’s a long way of saying when we have the box on the desk that thinks as well as any brain does, the thing it doesn’t have, intrinsically, is the goals and purposes that we have. Those are defined by our particulars—our particular biology, our particular psychology, our particular cultural history.

The thing we have to think about as we think about the future of these things is the goals. That’s what humans contribute, that’s what our civilization contributes—execution of those goals; that’s what we can increasingly automate. We’ve been automating it for thousands of years. We will succeed in having very good automation of those goals. I’ve spent some significant part of my life building technology to essentially go from a human concept of a goal to something that gets done in the world.

There are many questions that come from this. For example, we’ve got these great AIs and they’re able to execute goals, how do we tell them what to do?…


STEPHEN WOLFRAM, distinguished scientist, inventor, author, and business leader, is Founder & CEO, Wolfram Research; Creator, Mathematica, Wolfram|Alpha & the Wolfram Language; Author, A New Kind of Science. Stephen Wolfram’s EdgeBio Page

THE REALITY CLUB: Nicholas Carr

AI & THE FUTURE OF CIVILIZATION
Some tough questions. One of them is about the future of the human condition. That’s a big question. I’ve spent some part of my life figuring out how to make machines automate stuff. It’s pretty obvious that we can automate many of the things that we humans have been proud of for a long time. What’s the future of the human condition in that situation?


More particularly, I see technology as taking human goals and making them able to be automatically executed by machines. The human goals that we’ve had in the past have been things like moving objects from here to there and using a forklift rather than our own hands. Now, the things that we can do automatically are more intellectual kinds of things that have traditionally been the professions’ work, so to speak. These are things that we are going to be able to do by machine. The machine is able to execute things, but something or someone has to define what its goals should be and what it’s trying to execute.

People talk about the future of the intelligent machines, and whether intelligent machines are going to take over and decide what to do for themselves. While one can figure out, given a goal, how to execute it in a way that can meaningfully be automated, the actual inventing of the goal is not something that, in some sense, has a path to automation.

How do we figure out goals for ourselves? How are goals defined? They tend to be defined for a given human by their own personal history, their cultural environment, the history of our civilization. Goals are something that are uniquely human. It’s something that almost doesn’t make any sense. We ask, what’s the goal of our machine? We might have given it a goal when we built the machine.

The thing that makes this more poignant for me is that I’ve spent a lot of time studying basic science about computation, and I’ve realized something from that. It’s a little bit of a longer story, but basically, if we think about intelligence and things that might have goals, things that might have purposes, what kinds of things can have intelligence or purpose? Right now, we know one great example of things with intelligence and purpose and that’s us, and our brains, and our own human intelligence. What else is like that? The answer, I had at first assumed, is that there are the systems of nature. They do what they do, but human intelligence is far beyond anything that exists naturally in the world. It’s something that’s the result of all of this elaborate process of evolution. It’s a thing that stands apart from the rest of what exists in the universe. What I realized, as a result of a whole bunch of science that I did, was that is not the case.

Monster Machine Cracks The Game Of Go

By Hugo Angel,

Illustration: Google DeepMind/Nature
A computer program has defeated a master of the ancient Chinese game of Go, achieving one of the loftiest of the Grand Challenges of AI at least a decade earlier than anyone had thought possible.
The programmers, at Google’s DeepMind laboratory in London, write in today’s issue of Nature that their program AlphaGo defeated Fan Hui, the European Go champion, 5 games to nil, in a match held last October in the company’s offices. Earlier, the program had won 494 out of 495 games against the best rival Go programs.
AlphaGo’s creators now hope to seal their victory in a 5-game match against Lee Se-dol, the best Go player in the world. That match, for a $1 million prize fund, is scheduled to take place in March in Seoul, South Korea.
The program’s victory marks the rise not merely of the machines but of new methods of computer programming based on self-training neural networks. In support of their claim that this method can be applied broadly, the researchers cited their success, which we reported a year ago, in getting neural networks to learn how to play an entire set of video games from Atari. Future applications, they say, may include financial trading, climate modeling and medical diagnosis.
Not all of AlphaGo’s skill is self-taught. First, the programmers jumpstarted the training by having the program predict moves in a database of master games. It eventually reached a success rate of 57 percent, versus 44 percent for the best rival programs.
Then, to go beyond mere human performance, the program conducted its own research through a trial-and-error approach that involved playing millions of games against itself. In this fashion it discovered, one by one, many of the rules of thumb that textbooks have been imparting to Go students for centuries. Google DeepMind calls the self-guided method reinforcement learning, and it was carried out on the same “deep learning” networks, the current AI buzzword.
Not only can self-trained machines surpass the game-playing powers of their creators, they can do so in ways that programmers can’t even explain. It’s a different world from the one that AI’s founders envisaged decades ago.
Commenting on the death yesterday of AI pioneer Marvin Minsky, Demis Hassabis, the lead author of the Nature paper, said: “It would be interesting to see what he would have said. I suspect he would have been pretty surprised at how quickly this has arrived.”
That’s because, as programmers would say, Go is such a bear. Then again, even chess was a bear, at first. Back in 1957, the late Herbert Simon famously predicted that a computer would beat the world champion at chess within a decade. But it was only in 1997 that World Chess Champion Garry Kasparov lost to IBM’s Deep Blue—a multimillion-dollar, purpose-built machine that filled a small room. Today you can download a $100 program to a decently powered laptop and watch it utterly cream any chess player in the world.
Go is harder for machines because the positions are harder to judge and there are a whole lot more positions.
Judgement is harder because the pieces, or “stones,” are all of equal value, whereas those in chess have varying values—a Queen, for instance, is worth nine times more than a pawn, on average. Chess programmers can thus add up those values (and throw in numerical estimates for the placement of pieces and pawns) to arrive at a quick-and-dirty score of a game position. No such expedient exists for Go.
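To make the contrast concrete, here is a minimal sketch of that quick-and-dirty material count, using the conventional textbook piece values rather than anything from the article; no comparably simple tally exists for Go stones.

```python
# Minimal sketch of a "quick-and-dirty" chess evaluation: sum conventional
# piece values for each side. The values (pawn 1, knight/bishop 3, rook 5,
# queen 9) are standard textbook heuristics, not figures from the article.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_score(white_pieces: str, black_pieces: str) -> int:
    """Positive scores favour White, negative favour Black."""
    white = sum(PIECE_VALUES.get(p, 0) for p in white_pieces)
    black = sum(PIECE_VALUES.get(p, 0) for p in black_pieces)
    return white - black

# Example: White has traded a knight (3) for a rook (5), "winning the exchange".
print(material_score("PPPPPPPPRRBBNQ", "PPPPPPPPRBBNNQ"))   # prints 2
```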
There are vastly more positions to judge than in chess because Go offers on average 10 times more options at every move and there are about three times as many moves in a game. The number of possible board configurations in Go is estimated at 10 to the 170th power—“more than the number of atoms in the universe,” said Hassabis.
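The scale of that difference is easy to check with rough arithmetic. The branching factors and game lengths below are conventional estimates from the game-AI literature (roughly 35 legal moves over about 80 plies for chess, about 250 moves over about 150 plies for Go), not numbers taken from the article.

```python
# Back-of-the-envelope comparison of chess and Go search spaces, using
# conventional branching-factor and game-length estimates (assumptions,
# not figures from the article).
import math

def game_tree_log10(branching: float, plies: int) -> float:
    """log10 of branching ** plies, i.e. the exponent of the game-tree size."""
    return plies * math.log10(branching)

print(f"Chess game tree: ~10^{game_tree_log10(35, 80):.0f}")    # ~10^124
print(f"Go game tree:    ~10^{game_tree_log10(250, 150):.0f}")  # ~10^360
# Go also has roughly 10^170 legal board configurations (the figure quoted
# by Hassabis), far more than the ~10^80 atoms usually estimated for the
# observable universe.
```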
Some researchers tried to adapt to Go some of the forward-search techniques devised for chess; others relied on random simulations of games in the aptly named Monte Carlo method. The Google DeepMind people leapfrogged them all with deep, or convolutional, neural networks, so named because they imitate the brain (up to a point).
A neural network links units that are the computing equivalent to a brain’s neurons—first by putting them into layers, then by stacking the layers. AlphaGo’s are 12 layers deep. Each “neuron” connects to its neighbors in its own layer and also those in the layers directly above and below it. A signal sent to one neuron causes it to strengthen or weaken its connections to other ones, so over time, the network changes its configuration.
To train the system (a minimal code sketch follows this list):

  • you first expose it to input data; 
  • next, you test the output signal against the metric you’re using—say, predicting a master’s move—and 
  • reinforce correct decisions by strengthening the underlying connections. 
  • Over time, the system produces better outputs. You might say that it learns.
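As a rough illustration of that loop, the sketch below trains a tiny fully connected network on synthetic data to “predict a move” and reinforces the connections that would have produced the correct answer. It is a toy stand-in, not AlphaGo’s 12-layer convolutional architecture; every size and data point here is made up.

```python
# Toy training loop: show inputs, compare outputs to the "correct" move,
# and strengthen the connections that favour the right answer. A tiny
# fully connected network on synthetic data, not AlphaGo's architecture.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_moves, n_samples = 64, 10, 500    # made-up "board" and move space
X = rng.normal(size=(n_samples, n_features))    # stand-in board encodings
true_W = rng.normal(size=(n_features, n_moves))
y = (X @ true_W).argmax(axis=1)                 # stand-in "master moves" with learnable structure

W1 = rng.normal(scale=0.1, size=(n_features, 32))
W2 = rng.normal(scale=0.1, size=(32, n_moves))
lr = 0.5

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for epoch in range(300):
    h = np.tanh(X @ W1)                          # hidden-layer activity
    p = softmax(h @ W2)                          # predicted move probabilities
    grad_out = p.copy()                          # cross-entropy error signal:
    grad_out[np.arange(n_samples), y] -= 1.0     # reinforce the correct move,
    grad_out /= n_samples                        # weaken the alternatives
    dW2 = h.T @ grad_out
    dW1 = X.T @ ((grad_out @ W2.T) * (1 - h ** 2))
    W2 -= lr * dW2
    W1 -= lr * dW1

accuracy = (softmax(np.tanh(X @ W1) @ W2).argmax(axis=1) == y).mean()
print(f"accuracy on the toy training data: {accuracy:.2f}")   # climbs above the 10% chance level
```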
AlphaGo has two networks:

  • “The policy network cuts down on the number of moves to look at, and 
  • the evaluation network allows you to cut short the depth of that search,” or the number of moves the machine must look ahead, Hassabis said. 

 “Both neural networks together make the search tractable.”

The main difference from the system that played Atari is the inclusion of a search-ahead function: “In Atari you can do well by reacting quickly to current events,” said Hassabis. “In Go you need a plan.”
After exhaustive training, the two networks, taken by themselves, could play Go as well as any program did. But when the researchers coupled the neural networks to a forward-searching algorithm, the machine was able to dominate rival programs completely. Not only did it win all but one of the hundreds of games it played against them, it was even able to give them a handicap of four extra moves, made at the beginning of the game, and still beat them.
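To give a flavour of that coupling, here is a toy sketch rather than AlphaGo’s actual Monte Carlo tree search: a stand-in “policy” keeps only the few most promising moves at each node, and a stand-in “value” function scores positions once a depth limit is reached instead of playing the game out. The game, the heuristics, and every function name here are invented purely for illustration.

```python
# Toy illustration (not AlphaGo's actual search) of how a policy and a
# value estimate tame a game-tree search: the policy prunes the moves
# considered at each node, and the value function cuts the search short
# at a fixed depth instead of playing positions out to the end.

def candidate_moves(state):
    """All legal moves from a toy state (here, simply the digits 0..8)."""
    return list(range(9))

def apply_move(state, move):
    """Toy successor state."""
    return state * 10 + move

def policy(state, moves, k=3):
    """Stand-in policy network: rank moves with a cheap heuristic, keep the top k."""
    return sorted(moves, key=lambda m: -((state + m) % 7))[:k]

def value(state):
    """Stand-in value network: a cheap static evaluation in [-1, 1]."""
    return ((state % 97) - 48) / 48.0

def search(state, depth, maximizing=True):
    if depth == 0:
        return value(state)                        # value net replaces deeper search
    moves = policy(state, candidate_moves(state))  # policy net prunes the move list
    scores = [search(apply_move(state, m), depth - 1, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

# Pick our move by searching three plies ahead; the opponent minimizes next.
best = max(candidate_moves(1),
           key=lambda m: search(apply_move(1, m), depth=3, maximizing=False))
print("move chosen by the pruned, value-guided search:", best)
```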
About that one defeat: “The search has a stochastic [random] element, so there’s always a possibility that it will make a mistake,” David Silver said. “As we improve, we reduce the probability of making a mistake, but mistakes happen. As in that one particular game.”
Anyone might cock an eyebrow at the claim that AlphaGo will have practical spin-offs. Games programmers have often justified their work by promising such things but so far they’ve had little to show for their efforts. IBM’s Deep Blue did nothing but play chess, and IBM’s Watson—designed to beat the television game show Jeopardy!—will need laborious retraining to be of service in its next appointed task of helping doctors diagnose and treat patients.
But AlphaGo’s creators say that because of the generalized nature of their approach, direct spin-offs really will come—this time for sure. And they’ll get started on them just as soon as the March match against the world champion is behind them.
ORIGINAL: IEEE Spectrum
Posted 27 Jan 2016

Memory capacity of brain is 10 times more than previously thought

By Hugo Angel,

Data from the Salk Institute shows brain’s memory capacity is in the petabyte range, as much as entire Web

LA JOLLA—Salk researchers and collaborators have achieved critical insight into the size of neural connections, putting the memory capacity of the brain far higher than common estimates. The new work also answers a longstanding question as to how the brain is so energy efficient and could help engineers build computers that are incredibly powerful but also conserve energy.
“This is a real bombshell in the field of neuroscience,” said Terry Sejnowski from the Salk Institute for Biological Studies. “Our new measurements of the brain’s memory capacity increase conservative estimates by a factor of 10 to at least a petabyte (10^15 bytes, or 1,000 terabytes), in the same ballpark as the World Wide Web.”
Our memories and thoughts are the result of patterns of electrical and chemical activity in the brain. A key part of the activity happens when branches of neurons, much like electrical wire, interact at certain junctions, known as synapses. An output ‘wire’ (an axon) from one neuron connects to an input ‘wire’ (a dendrite) of a second neuron. Signals travel across the synapse as chemicals called neurotransmitters to tell the receiving neuron whether to convey an electrical signal to other neurons. Each neuron can have thousands of these synapses with thousands of other neurons.
“When we first reconstructed every dendrite, axon, glial process, and synapse from a volume of hippocampus the size of a single red blood cell, we were somewhat bewildered by the complexity and diversity amongst the synapses,” says Kristen Harris, co-senior author of the work and professor of neuroscience at the University of Texas, Austin. “While I had hoped to learn fundamental principles about how the brain is organized from these detailed reconstructions, I have been truly amazed at the precision obtained in the analyses of this report.”
Synapses are still a mystery, though their dysfunction can cause a range of neurological diseases. Larger synapses—with more surface area and vesicles of neurotransmitters—are stronger, making them more likely to activate their surrounding neurons than medium or small synapses.
The Salk team, while building a 3D reconstruction of rat hippocampus tissue (the memory center of the brain), noticed something unusual. In some cases, a single axon from one neuron formed two synapses reaching out to a single dendrite of a second neuron, signifying that the first neuron seemed to be sending a duplicate message to the receiving neuron.
At first, the researchers didn’t think much of this duplicity, which occurs about 10 percent of the time in the hippocampus. But Tom Bartol, a Salk staff scientist, had an idea: if they could measure the difference between two very similar synapses such as these, they might glean insight into synaptic sizes, which so far had only been classified in the field as small, medium and large.
In a computational reconstruction of brain tissue in the hippocampus, Salk and UT-Austin scientists found the unusual occurrence of two synapses from the axon of one neuron (translucent black strip) forming onto two spines on the same dendrite of a second neuron (yellow). Separate terminals from one neuron’s axon are shown in synaptic contact with two spines (arrows) on the same dendrite of a second neuron in the hippocampus. The spine head volumes, synaptic contact areas (red), neck diameters (gray) and number of presynaptic vesicles (white spheres) of these two synapses are almost identical. Credit: Salk Institute
To do this, researchers used advanced microscopy and computational algorithms they had developed to image rat brains and reconstruct the connectivity, shapes, volumes and surface area of the brain tissue down to a nanomolecular level.
The scientists expected the synapses would be roughly similar in size, but were surprised to discover the synapses were nearly identical.
“We were amazed to find that the difference in the sizes of the pairs of synapses were very small, on average, only about 8 percent different in size,” said Tom Bartol, one of the scientists. “No one thought it would be such a small difference. This was a curveball from nature.”
Because the memory capacity of neurons is dependent upon synapse size, this eight percent difference turned out to be a key number the team could then plug into their algorithmic models of the brain to measure how much information could potentially be stored in synaptic connections.
It was known before that the range in sizes between the smallest and largest synapses was a factor of 60 and that most are small.
But armed with the knowledge that synapses of all sizes could vary in increments as little as eight percent between sizes within a factor of 60, the team determined there could be about 26 categories of sizes of synapses, rather than just a few.
“Our data suggests there are 10 times more discrete sizes of synapses than previously thought,” says Bartol. In computer terms, 26 sizes of synapses correspond to about 4.7 “bits” of information. Previously, it was thought that the brain was capable of just one to two bits for short and long memory storage in the hippocampus.
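The 4.7-bit figure is easy to reproduce: n distinguishable states can store log2(n) bits, so 26 sizes give about 4.7 bits, while one to two bits correspond to only two to four distinguishable sizes. (The count of 26 categories itself comes from the paper’s statistical analysis, not from this arithmetic.)

```python
# Bits of information implied by a given number of distinguishable synapse
# sizes: n states can encode log2(n) bits. The 26-category count comes from
# the paper; only the logarithm is computed here.
import math

print(f"26 sizes  -> {math.log2(26):.1f} bits per synapse")   # about 4.7
print(f"1-2 bits  -> {2**1}-{2**2} distinguishable sizes")    # the older assumption
```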
“This is roughly an order of magnitude of precision more than anyone has ever imagined,” said Sejnowski.
What makes this precision puzzling is that hippocampal synapses are notoriously unreliable. When a signal travels from one neuron to another, it typically activates that second neuron only 10 to 20 percent of the time.
“We had often wondered how the remarkable precision of the brain can come out of such unreliable synapses,” says Bartol. One answer, it seems, is in the constant adjustment of synapses, averaging out their success and failure rates over time. The team used their new data and a statistical model to find out how many signals it would take a pair of synapses to get to that eight percent difference.
The researchers calculated that
  • for the smallest synapses, about 1,500 events cause a change in their size/ability (20 minutes) and
  • for the largest synapses, only a couple hundred signaling events (1 to 2 minutes) cause a change.
“This means that every 2 or 20 minutes, your synapses are going up or down to the next size,” said Bartol. “The synapses are adjusting themselves according to the signals they receive.”
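The following is an illustrative simulation, not the Salk team’s actual statistical model: if each transmission succeeds with only about 20 percent probability, averaging a couple hundred events still leaves a fairly noisy estimate of a synapse’s strength, while averaging roughly 1,500 events narrows the spread to about 5 percent, below the 8 percent step between sizes. The release probability and trial counts are chosen only for illustration.

```python
# Illustrative only (not the Salk statistical model): how the precision of
# an averaged estimate improves with the number of unreliable transmission
# events. A larger, more reliable synapse would reach the same relative
# precision with fewer events, consistent with the counts quoted above.
import numpy as np

rng = np.random.default_rng(42)
p_release = 0.20                                  # assumed low transmission probability

for n_events in (100, 200, 1500):
    trials = rng.random((2_000, n_events)) < p_release   # many repeats of the experiment
    estimates = trials.mean(axis=1)                       # running average per repeat
    spread = estimates.std() / p_release
    print(f"{n_events:5d} events: estimated strength varies by ~{spread:.0%}")
```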
From left: Terry Sejnowski, Cailey Bromer and Tom Bartol. Credit: Salk Institute
“Our prior work had hinted at the possibility that spines and axons that synapse together would be similar in size, but the reality of the precision is truly remarkable and lays the foundation for whole new ways to think about brains and computers,” says Harris. “The work resulting from this collaboration has opened a new chapter in the search for learning and memory mechanisms.” Harris adds that the findings suggest more questions to explore, for example, if similar rules apply for synapses in other regions of the brain and how those rules differ during development and as synapses change during the initial stages of learning.
“The implications of what we found are far-reaching. Hidden under the apparent chaos and messiness of the brain is an underlying precision to the size and shapes of synapses that was hidden from us.”
The findings also offer a valuable explanation for the brain’s surprising efficiency. The waking adult brain generates only about 20 watts of continuous power—as much as a very dim light bulb. The Salk discovery could help computer scientists build ultra-precise but energy-efficient computers, particularly ones that employ deep learning and neural nets techniques capable of sophisticated learning and analysis, such as speech, object recognition and translation.
“This trick of the brain absolutely points to a way to design better computers,” said Sejnowski. “Using probabilistic transmission turns out to be as accurate and require much less energy for both computers and brains.”
Other authors on the paper were Cailey Bromer of the Salk Institute; Justin Kinney of the McGovern Institute for Brain Research; and Michael A. Chirillo and Jennifer N. Bourne of the University of Texas, Austin.
The work was supported by the NIH and the Howard Hughes Medical Institute.
ORIGINAL: Salk.edu
January 20, 2016

Scientists have discovered brain networks linked to intelligence for the first time

By Hugo Angel,

Ralwel/Shutterstock.com
And we may even be able to manipulate them.
For the first time ever, scientists have identified clusters of genes in the brain that are believed to be linked to human intelligence.
The two clusters, called M1 and M3, are networks each consisting of hundreds of individual genes, and are thought to influence our cognitive functions, including: 

  • memory, 
  • attention, 
  • processing speed, and 
  • reasoning.
Most provocatively, the researchers who identified M1 and M3 say that these clusters are probably under the control of master switches that regulate how the gene networks function. If this hypothesis is correct and scientists can indeed find these switches, we might even be able to manipulate our genetic intelligence and boost our cognitive capabilities.
“We know that genetics plays a major role in intelligence but until now haven’t known which genes are relevant,” said neurologist Michael Johnson, at Imperial College London in the UK. “This research highlights some of the genes involved in human intelligence, and how they interact with each other.”
The researchers made their discovery by examining the brains of patients who had undergone neurosurgery for the treatment of epilepsy. They analysed thousands of genes expressed in the brain and combined the findings with two sets of data: genetic information from healthy people who had performed IQ tests, and from people with neurological disorders and intellectual disability.
Comparing the results, the researchers discovered that some of the genes that influence human intelligence in healthy people can also cause significant neurological problems if they end up mutating.
“Traits such as intelligence are governed by large groups of genes working together – like a football team made up of players in different positions,” said Johnson. “We used computer analysis to identify the genes in the human brain that work together to influence our cognitive ability to make new memories or sensible decisions when faced with lots of complex information. We found that some of these genes overlap with those that cause severe childhood onset epilepsy or intellectual disability.”
The research, which is reported in Nature Neuroscience, is at an early stage, but the authors believe their analysis could have a significant impact – not only on how we understand and treat brain diseases, but one day perhaps altering brainpower itself.
“Eventually, we hope that this sort of analysis will provide new insights into better treatments for neurodevelopmental diseases such as epilepsy, and ameliorate or treat the cognitive impairments associated with these devastating diseases,” said Johnson. “Our research suggests that it might be possible to work with these genes to modify intelligence, but that is only a theoretical possibility at the moment – we have just taken a first step along that road.”
ORIGINAL: Science Alert
PETER DOCKRILL
22 DEC 2015

How swarm intelligence could save us from the dangers of AI

By Hugo Angel,

Image Credit: diez artwork/Shutterstock
We’ve heard a lot of talk recently about the dangers of artificial intelligence. From Stephen Hawking and Bill Gates, to Elon Musk, and Steve Wozniak, luminaries around the globe have been sounding the alarm, warning that we could lose control over this powerful technology — after all, AI is about creating systems that have minds of their own. A true AI could one day adopt goals and aspirations that harm us.
But what if we could enjoy the benefits of AI while ensuring that human values and sensibilities remain an integral part of the system?
This is where something called Artificial Swarm Intelligence comes in – a method for building intelligent systems that keeps humans in the loop, merging the power of computational algorithms with the wisdom, creativity, and intuition of real people. A number of companies around the world are already exploring swarms.

  • There’s Enswarm, a UK startup that is using swarm technologies to assist with recruitment and employment decisions
  • There’s Swarm.fund, a startup using swarming and crypto-currencies like Bitcoin as a new model for fundraising
  • And the human swarming company I founded, Unanimous A.I., creates a unified intellect from any group of networked users.
This swarm intelligence technology may sound like science fiction, but it has its roots in nature.
It all goes back to the birds and the bees – fish and ants too. Across countless species, social groups have developed methods of amplifying their intelligence by working together in closed-loop systems. Known commonly as flocks, schools, colonies, and swarms, these natural systems enable groups to combine their insights and thereby outperform individual members when solving problems and making decisions. Scientists call this “Swarm Intelligence” and it supports the old adage that many minds are better than one.
But what about us humans?
Clearly, we lack the natural ability to form closed-loop swarms, but like many other skills we can’t do naturally, emerging technologies are filling a void. Leveraging our vast networking infrastructure, new software techniques are allowing online groups to form artificial swarms that can work in synchrony to answer questions, reach decisions, and make predictions, all while exhibiting the same types of intelligence amplifications as seen in nature. The approach is sometimes called “blended intelligence” because it combines the hardware and software technologies used by AI systems with populations of real people, creating human-machine systems that have the potential of outsmarting both humans and pure-software AIs alike.
It should be noted that “swarming” is different from traditional “crowdsourcing,” which generally uses votes, polls, or surveys to aggregate opinions. While such methods are valuable for characterizing populations, they don’t employ the real-time feedback loops used by artificial swarms to enable a unique intelligent system to emerge. It’s the difference between measuring what the average member of a group thinks versus allowing that group to think together and draw conclusions based upon their combined knowledge and intuition.
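A toy contrast may help; this is a generic illustration, not Unanimous A.I.’s actual algorithm. In a one-shot poll the opinions are simply averaged, while in the closed-loop version each participant repeatedly nudges their estimate toward the group’s current position, with less-convinced members moving more, so the group settles on an answer that can differ from the simple average.

```python
# Generic illustration (not Unanimous A.I.'s algorithm) of a one-shot poll
# versus a closed-loop "swarm" in which members iteratively adjust their
# estimates in response to the group's current position.
import numpy as np

rng = np.random.default_rng(7)
n = 25
estimates = rng.normal(loc=50, scale=15, size=n)   # initial individual answers
conviction = rng.uniform(0.1, 1.0, size=n)         # how strongly each member holds theirs

poll_result = estimates.mean()                     # crowdsourcing: one-shot average

swarm = estimates.copy()
for _ in range(200):                               # swarming: real-time feedback loop
    group_position = np.average(swarm, weights=conviction)
    swarm += 0.1 * (1 - conviction) * (group_position - swarm)   # hesitant members move most

print(f"one-shot poll:        {poll_result:.1f}")
print(f"swarm after feedback: {swarm.mean():.1f} (spread shrank to {swarm.std():.1f})")
```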
Outside of the companies I mentioned above, where else can such collective technologies be applied? One area that’s currently being explored is medical diagnosis, a process that requires deep factual knowledge along with the experiential wisdom of the practitioner. Can we merge the knowledge and wisdom of many doctors into a single emergent diagnosis that outperforms the diagnosis of a single practitioner? The answer appears to be yes. In a recent study conducted by Humboldt-University of Berlin and RAND Corporation, a computational collective of radiologists outperformed single practitioners when viewing mammograms, reducing false positives and false negatives. In a separate study conducted by John Carroll University and the Cleveland Clinic, a collective of 12 radiologists diagnosed skeletal abnormalities. As a computational collective, the radiologists produced a significantly higher rate of correct diagnosis than any single practitioner in the group. Of course, the potential of artificially merging many minds into a single unified intelligence extends beyond medical diagnosis to any field where we aim to exceed natural human abilities when making decisions, generating predictions, and solving problems.
Now, back to the original question of why Artificial Swarm Intelligence is a safer form of AI.
Although heavily reliant on hardware and software, swarming keeps human sensibilities and moralities as an integral part of the processes. As a result, this “human-in-the-loop” approach to AI combines the benefits of computational infrastructure and software efficiencies with the unique values that each person brings to the table:

  • creativity, 
  • empathy, 
  • morality, and 
  • justice. 

And because swarm-based intelligence is rooted in human input, the resulting intelligence is far more likely to be aligned with humanity – not just with our values and morals, but also with our goals and objectives.

How smart can an Artificial Swarm Intelligence get?
That’s still an open question, but with the potential to engage millions, even billions of people around the globe, each brimming with unique ideas and insights, swarm intelligence may be society’s best hope for staying one step ahead of the pure machine intelligences that emerge from busy AI labs around the world.
Louis Rosenberg is CEO of swarm intelligence company Unanimous A.I. He did his doctoral work at Stanford University in robotics, virtual reality, and human-computer interaction. He previously developed the first immersive augmented reality system as a researcher for the U.S. Air Force in the early 1990s and founded the VR company Immersion Corp and the 3D digitizer company Microscribe.
ORIGINAL: VentureBeat
NOVEMBER 22, 2015

A Visual History of Human Knowledge | Manuel Lima | TED Talks

By Hugo Angel,

How does knowledge grow? 

Source: EPFL Blue Brain Project. Blue Brain Circuit

Sometimes it begins with one insight and grows into many branches. Infographics expert Manuel Lima explores the thousand-year history of mapping data — from languages to dynasties — using trees of information. It’s a fascinating history of visualizations, and a look into humanity’s urge to map what we know.

ORIGINAL: TED

Sep 10, 2015

How Your Brain Is Wired Reveals the Real You

By Hugo Angel,

The Human Connectome Project finds surprising correlations between brain architecture and behavior
©iStock.com
The brain’s wiring patterns can shed light on a person’s positive and negative traits, researchers report in Nature Neuroscience. The finding, published on September 28, is the first from the Human Connectome Project (HCP), an international effort to map active connections between neurons in different parts of the brain.
The HCP, which launched in 2010 at a cost of US$40 million, seeks to scan the brain networks, or connectomes, of 1,200 adults. Among its goals is to chart the networks that are active when the brain is idle; these are thought to keep the different parts of the brain connected in case they need to perform a task.
In April, a branch of the project led by one of the HCP’s co-chairs, biomedical engineer Stephen Smith at the University of Oxford, UK, released a database of resting-state connectomes from about 460 people between 22 and 35 years old. Each brain scan is supplemented by information on approximately 280 traits, such as the person’s age, whether they have a history of drug use, their socioeconomic status and personality traits, and their performance on various intelligence tests.
Axis of connectivity
Smith and his colleagues ran a massive computer analysis to look at how these traits varied among the volunteers, and how the traits correlated with different brain connectivity patterns. The team was surprised to find a single, stark difference in the way brains were connected. People with more ‘positive’ variables, such as more education, better physical endurance and above-average performance on memory tests, shared the same patterns. Their brains seemed to be more strongly connected than those of people with ‘negative’ traits such as smoking, aggressive behaviour or a family history of alcohol abuse.
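For readers curious what correlating traits with connectivity patterns can look like in practice, here is a hedged sketch on synthetic data, not the HCP data or the authors’ exact pipeline: z-score a subjects-by-connections matrix and a subjects-by-traits matrix, then take the leading singular mode of their cross-correlation; each subject’s score along that mode plays the role of the single axis described above.

```python
# Hedged sketch on synthetic data (not the HCP dataset or the published
# pipeline): find one shared mode linking "connectivity" features to
# "trait" scores via the leading singular vectors of their cross-correlation.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_edges, n_traits = 460, 200, 280      # sizes echo the article, data is fake

shared = rng.normal(size=(n_subjects, 1))          # one planted brain-behaviour factor
connectivity = shared @ rng.normal(size=(1, n_edges)) + rng.normal(size=(n_subjects, n_edges))
traits = shared @ rng.normal(size=(1, n_traits)) + rng.normal(size=(n_subjects, n_traits))

def zscore(a):
    return (a - a.mean(axis=0)) / a.std(axis=0)

X, Y = zscore(connectivity), zscore(traits)
cross_corr = X.T @ Y / n_subjects                  # edges x traits correlation matrix
u, s, vt = np.linalg.svd(cross_corr, full_matrices=False)

brain_axis = X @ u[:, 0]                           # each subject's position on the brain mode
trait_axis = Y @ vt[0]                             # each subject's position on the trait mode
r = np.corrcoef(brain_axis, trait_axis)[0, 1]
print(f"correlation between the brain mode and the trait mode: r = {r:.2f}")
```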
Marcus Raichle, a neuroscientist at Washington University in St Louis, Missouri, is impressed that the activity and anatomy of the brains alone were enough to reveal this ‘positive-negative’ axis. “You can distinguish people with successful traits and successful lives versus those who are not so successful,” he says.
But Raichle says that it is impossible to determine from this study how different traits relate to one another and whether the weakened brain connections are the cause or effect of negative traits. And although the patterns are clear across the large group of HCP volunteers, it might be some time before these connectivity patterns could be used to predict risks and traits in a given individual. Deanna Barch, a psychologist at Washington University who co-authored the latest study, says that once these causal relationships are better understood, it might be possible to push brains toward the ‘good’ end of the axis.
Van Wedeen, a neuroscientist at Massachusetts General Hospital in Boston, says that the findings could help to prioritize future research. For instance, one of the negative traits that pulled a brain farthest down the negative axis was marijuana use in recent weeks. Wedeen says that the finding emphasizes the importance of projects such as one launched by the US National Institute on Drug Abuse last week, which will follow 10,000 adolescents for 10 years to determine how marijuana and other drugs affect their brains.
Wedeen finds it interesting that the wiring patterns associated with people’s general intelligence scores were not exactly the same as the patterns for individual measures of cognition—people with good hand–eye coordination, for instance, fell farther down the negative axis than did those with good verbal memory. This suggests that the biology underlying cognition might be more complex than our current definition of general intelligence, and that it could be influenced by demographic and behavioural factors. “Maybe it will cause us to reconsider what [the test for general intelligence] is measuring,” he says. “We have a new mystery now.
Much more connectome data should emerge in the next few years. The Harvard Aging Brain Study, for instance, is measuring active brain connections in 284 people aged between 65 and 90, and released its first data earlier this year. And Smith is running the Developing Human Connectome Project in the United Kingdom, which is imaging the brains of 1,200 babies before and after birth. He expects to release its first data in the next few months. Meanwhile, the HCP is analysing genetic data from its participants, which include a large number of identical and fraternal twins, to determine how genetic and environmental factors relate to brain connectivity patterns.
This article is reproduced with permission and was first published on September 28, 2015.
September 28, 2015

Meet Amelia, the AI Platform That Could Change the Future of IT

By admin,

Chetan Dube. Image credit: Photography by Jesse Dittmar

Her name is Amelia, and she is the complete package: smart, sophisticated, industrious and loyal. No wonder her boss, Chetan Dube, can’t get her out of his head.

“My wife is convinced I’m having an affair with Amelia,” Dube says, leaning forward conspiratorially. “I have a great deal of passion and infatuation with her.”

He’s not alone. Amelia beguiles everyone she meets, and those in the know can’t stop buzzing about her. The blue-eyed blonde’s star is rising so fast that if she were a Hollywood ingénue or fashion model, the tabloids would proclaim her an “It” girl, but the tag doesn’t really apply. Amelia is more of an IT girl, you see. In fact, she’s all IT.

Amelia is an artificial intelligence platform created by Dube’s managed IT services firm IPsoft, a virtual agent avatar poised to redefine how enterprises operate by automating and enhancing a wide range of business processes. The product of an obsessive and still-ongoing 16-year developmental cycle, she—yes, everyone at IPsoft speaks about Amelia using feminine pronouns—leverages cognitive technologies to interface with consumers and colleagues in astoundingly human terms:

  • parsing questions,
  • analyzing intent and
  • even sensing emotions to resolve issues more efficiently and effectively than flesh-and-blood customer service representatives.

Install Amelia in a call center, for example, and her patent-pending intelligence algorithms absorb in a matter of seconds the same instruction manuals and guidelines that human staffers spend weeks or even months memorizing. Instead of simply recognizing individual words, Amelia grasps the deeper implications of what she reads, applying logic and making connections between concepts. She relies on that baseline information to reply to customer email and answer phone calls; if she understands the query, she executes the steps necessary to resolve the issue, and if she doesn’t know the answer, she scans the web or the corporate intranet for clues. Only when Amelia cannot locate the relevant information does she escalate the case to a human expert, observing the response and filing it away for the next time the same scenario unfolds.
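The workflow described above (answer if the case is known, search the documentation if not, escalate to a human as a last resort, and remember the outcome) can be summarized in a purely hypothetical sketch; every class and method name below is invented for illustration and is not IPsoft’s API.

```python
# Purely hypothetical sketch of an escalate-and-learn loop; none of these
# names correspond to IPsoft's actual product or API.
class VirtualAgent:
    def __init__(self):
        self.knowledge = {}                        # learned query -> resolution pairs

    def handle(self, query, human_expert):
        if query in self.knowledge:                # answer directly if the case is known
            return self.knowledge[query]
        resolution = self.search_sources(query)    # otherwise scan manuals / intranet
        if resolution is None:                     # still stuck: escalate to a person...
            resolution = human_expert(query)
            self.knowledge[query] = resolution     # ...and file the answer away for next time
        return resolution

    def search_sources(self, query):
        return None                                # stub: no documentation wired up here

agent = VirtualAgent()
expert = lambda q: f"manual fix for: {q}"
print(agent.handle("printer offline", expert))     # first time: escalated, then learned
print(agent.handle("printer offline", expert))     # second time: answered from memory
```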


A deep learning machine just beat humans in an IQ test

By admin,

ORIGINAL: Science Alert
FIONA MACDONALD
19 JUN 2015

 

Image: dhammza/Flickr
I, for one, welcome our new computer overlords.
For the first time ever, a computer has outperformed humans in the verbal reasoning portion of an IQ test.
The machine was programmed by researchers in China using a technique known as deep learning, which involves converting data into a set of algorithms that a computer can make sense of.
Until now, computers have been pretty successful at beating humans in two out of the three parts of a standard intelligence quotient test, or IQ test – the mathematical questions and the logic questions – but they’d struggled to master the verbal reasoning portion, which looks at things like analogies and classifications. You know, those questions that ask you to find the word that doesn’t fit in with the others, or “Which of these words is the opposite of ubiquitous?”
This is where the deep learning comes in. In the past, the furthest programmers had gotten was to build machines capable of analysing millions upon millions of texts to figure out which words are often associated with each other, essentially turning words into vectors that could be compared, added and subtracted.
“But this approach has a well-known shortcoming: it assumes that each word has a single meaning represented by a single vector. Not only is that often not the case, verbal tests tend to focus on words with more than one meaning as a way of making questions harder,” writes MIT Technology Review about the research.
The researchers, from the University of Science and Technology of China and Microsoft Research in Beijing, tried a different tack – they looked at words and the words that often appeared nearby in big bodies of text. Using an algorithm, they worked out how the words are clustered, and they then looked up the different definitions of each word in a dictionary. This allowed them to match each cluster to a meaning.
This can be done automatically because the dictionary definition includes sample sentences in which the word is used in each different way. So by calculating the vector representation of these sentences and comparing them to the vector representation in each cluster, it is possible to match them.
This means that the machine is able to recognise the different meanings of words for the first time.
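Here is a toy version of that matching step, with made-up sentences and simple bag-of-words vectors rather than the researchers’ actual model: each occurrence of an ambiguous word is represented by the words around it and assigned to the dictionary sense whose example sentence it most resembles (a real system would first cluster the occurrences, as described above).

```python
# Toy word-sense matching: compare the context of each occurrence of an
# ambiguous word against the example sentence for each dictionary sense.
# Invented sentences and plain bag-of-words vectors, for illustration only.
import numpy as np

definitions = {
    "bank (river)": "she sat on the grassy bank of the river watching the water",
    "bank (money)": "he went to the bank to deposit money into his account",
}
occurrences = [
    "fishing from the bank of the river in the morning",
    "the bank raised the interest rate on my savings account",
]

vocab = sorted({w for text in list(definitions.values()) + occurrences for w in text.split()})

def bow(text):
    """Unit-length bag-of-words vector over the shared vocabulary."""
    v = np.array([text.split().count(w) for w in vocab], dtype=float)
    return v / np.linalg.norm(v)

sense_vectors = {sense: bow(example) for sense, example in definitions.items()}

for occurrence in occurrences:
    v = bow(occurrence)
    best = max(sense_vectors, key=lambda sense: float(v @ sense_vectors[sense]))
    print(f"{occurrence!r} -> {best}")
```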
The team helped the computers out further by feeding them multiple examples of questions so that they were able to recognise the question type and match it to the appropriate answering strategy.
They then tested the computer against 200 human participants of various ages and educational backgrounds.
“To our surprise, the average performance of human beings is a little lower than that of our proposed method,” the team writes in arXiv.org, where the results were published. “Our model can reach the competitive performance between [participants] with the bachelor degrees and those with the master degrees.”
This is a big step forward for artificial intelligence, and shows just how powerful deep learning can be. The strategy has also been used to teach computers how to beat us at 49 old-school Atari games, recognise food calories from a photo and even cook by watching YouTube videos.
“With appropriate uses of the deep learning technologies, we could be a further step closer to the true human intelligence,” the authors write.

Google a step closer to developing machines with human-like intelligence

By admin,

ORIGINAL: The Guardian
Algorithms developed by Google designed to encode thoughts could lead to computers with ‘common sense’ within a decade, says leading AI scientist
Joaquin Phoenix and his virtual girlfriend in the film Her. Professor Hinton thinks that there’s no reason why computers couldn’t become our friends, or even flirt with us. Photograph: Allstar/Warner Bros/Sportsphoto Ltd.
Computers will have developed “common sense” within a decade and we could be counting them among our friends not long afterwards, one of the world’s leading AI scientists has predicted.
Professor Geoff Hinton, who was hired by Google two years ago to help develop intelligent operating systems, said that the company is on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.
The researcher told the Guardian that Google is working on a new type of algorithm designed to encode thoughts as sequences of numbers – something he described as “thought vectors.”
Although the work is at an early stage, he said there is a plausible path from the current software to a more sophisticated version that would have something approaching human-like capacity for reasoning and logic. “Basically, they’ll have common sense.”
The idea that thoughts can be captured and distilled down to cold sequences of digits is controversial, Hinton said. “There’ll be a lot of people who argue against it, who say you can’t capture a thought like that,” he added. “But there’s no reason why not. I think you can capture a thought by a vector.”
Hinton, who is due to give a talk at the Royal Society in London on Friday, believes that the “thought vector” approach will help crack two of the central challenges in artificial intelligence:

  • mastering natural, conversational language, and
  • the ability to make leaps of logic.
He painted a picture of the near-future in which people will chat with their computers, not only to extract information, but for fun – reminiscent of the film, Her, in which Joaquin Phoenix falls in love with his intelligent operating system.
“It’s not that far-fetched,” Hinton said. “I don’t see why it shouldn’t be like a friend. I don’t see why you shouldn’t grow quite attached to them.”
In the past two years, scientists have already made significant progress in overcoming this challenge.
Richard Socher, an artificial intelligence scientist at Stanford University, recently developed a program called NaSent that he taught to recognise human sentiment by training it on 12,000 sentences taken from the film review website Rotten Tomatoes.

Part of the initial motivation for developing “thought vectors” was to improve translation software, such as Google Translate, which currently uses dictionaries to translate individual words and searches through previously translated documents to find typical translations for phrases. Although these methods often provide the rough meaning, they are also prone to delivering nonsense and dubious grammar.
Thought vectors, Hinton explained, work at a higher level by extracting something closer to actual meaning.
The technique works by ascribing each word a set of numbers (or vector) that define its position in a theoretical “meaning space” or cloud. A sentence can be looked at as a path between these words, which can in turn be distilled down to its own set of numbers, or thought vector.
The “thought” serves as the bridge between the two languages because it can be transferred into the French version of the meaning space and decoded back into a new path between words.
 
The key is working out which numbers to assign each word in a language – this is where deep learning comes in. Initially the positions of words within each cloud are ordered at random and the translation algorithm begins training on a dataset of translated sentences.
At first the translations it produces are nonsense, but a feedback loop provides an error signal that allows the position of each word to be refined until eventually the positions of words in the cloud captures the way humans use them – effectively a map of their meanings.
Hinton said that the idea that language can be deconstructed with almost mathematical precision is surprising, but true. “If you take the vector for Paris and subtract the vector for France and add Italy, you get Rome,” he said. “It’s quite remarkable.”
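The arithmetic Hinton describes can be mimicked with hand-made vectors; the four-dimensional toy embeddings below are invented for illustration and are not real learned word vectors. Each capital sits at its country’s position plus a shared “capital-ness” offset, so Paris minus France plus Italy lands closest to Rome.

```python
# Toy word-vector arithmetic with hand-made embeddings (not real learned
# vectors): capitals are placed at country + a shared "capital" offset.
import numpy as np

countries = {"France": [1, 0, 0, 0], "Italy": [0, 1, 0, 0], "Germany": [0, 0, 1, 0]}
capital_offset = np.array([0, 0, 0, 1], dtype=float)

vectors = {name: np.array(v, dtype=float) for name, v in countries.items()}
vectors["Paris"] = vectors["France"] + capital_offset
vectors["Rome"] = vectors["Italy"] + capital_offset
vectors["Berlin"] = vectors["Germany"] + capital_offset

query = vectors["Paris"] - vectors["France"] + vectors["Italy"]

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

candidates = {w: v for w, v in vectors.items() if w not in ("Paris", "France", "Italy")}
best = max(candidates, key=lambda w: cosine(query, candidates[w]))
print("Paris - France + Italy ≈", best)            # prints: Rome
```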
Dr Hermann Hauser, a Cambridge computer scientist and entrepreneur, said that Hinton and others could be on the way to solving what programmers call the “genie problem”.
“With machines at the moment, you get exactly what you wished for,” Hauser said. “The problem is we’re not very good at wishing for the right thing. When you look at humans, the recognition of individual words isn’t particularly impressive, the important bit is figuring out what the guy wants.”
“Hinton is our number one guru in the world on this at the moment,” he added.
Some aspects of communication are likely to prove more challenging, Hinton predicted. “Irony is going to be hard to get,” he said. “You have to be master of the literal first. But then, Americans don’t get irony either. Computers are going to reach the level of Americans before Brits.”
A flirtatious program would “probably be quite simple” to create, however. “It probably wouldn’t be subtly flirtatious to begin with, but it would be capable of saying borderline politically incorrect phrases,” he said.
Many of the recent advances in AI have sprung from the field of deep learning, which Hinton has been working on since the 1980s. At its core is the idea that computer programs learn how to carry out tasks by training on huge datasets, rather than being taught a set of inflexible rules.
With the advent of huge datasets and powerful processors, the approach pioneered by Hinton decades ago has come into the ascendancy and underpins the work of Google’s artificial intelligence arm, DeepMind, and similar programs of research at Facebook and Microsoft.
Hinton played down concerns about the dangers of AI raised by those such as the American entrepreneur Elon Musk, who has described the technologies under development as humanity’s greatest existential threat. “The risk of something seriously dangerous happening is in the five year timeframe. Ten years at most,” Musk warned last year.
“I’m more scared about the things that have already happened,” said Hinton in response. “The NSA is already bugging everything that everybody does. Each time there’s a new revelation from Snowden, you realise the extent of it.”
“I am scared that if you make the technology work better, you help the NSA misuse it more,” he added. “I’d be more worried about that than about autonomous killer robots.”