How D-Wave Built Quantum Computing Hardware for the Next Generation

ORIGINAL: IEEE Spectrum
By Jeremy Hsu
11 Jul 2014

Photo: D-Wave Systems

One second is here and gone before most of us can think about it. But a delay of one second can seem like an eternity in a quantum computer capable of running calculations in millionths of a second. That’s why engineers at D-Wave Systems worked hard to eliminate the one-second computing delay that existed in the D-Wave One—the first-generation version of what the company describes as the world’s first commercial quantum computer.

Lessons learned from operating D-Wave One helped shape the hardware design of D-Wave Two, a second-generation machine that has already been leased by customers such as Google, NASA, and Lockheed Martin. These machines have not yet been shown to definitively outperform classical computers in a way that would vindicate D-Wave’s particular approach to building quantum computers. But the hardware design philosophy behind D-Wave’s quantum computing architecture points to how researchers could build increasingly powerful quantum computers in the future.

“We have room for increasing the complexity of the D-Wave chip,” says Jeremy Hilton, vice president of processor development at D-Wave Systems. “If we can fix the number of control lines per processor regardless of size, we can call it truly scalable quantum computing technology.”

D-Wave recently explained the hardware design choices it made in going from D-Wave One to D-Wave Two in the June 2014 issue of the journal IEEE Transactions on Applied Superconductivity. Such details illustrate the engineering challenges that researchers still face in building a practical quantum computer capable of surpassing classical computers. (See IEEE Spectrum’s overview of the D-Wave machines’ performance from the December 2013 issue.)

  

Photo: D-Wave Systems. D-Wave’s Year of Computing Dangerously

Quantum computing holds the promise of speedily solving tough problems that ordinary computers would take practically forever to crack. Unlike classical computing that represents information as bits of either a 1 or 0, quantum computers take advantage of quantum bits (qubits) that can exist as both a 1 and 0 at the same time, enabling them to perform many simultaneous calculations.

Classical computer hardware has relied upon silicon transistors that can switch between “on” and “off” to represent the 1 or 0 in digital information. By comparison, D-Wave’s quantum computing hardware relies on metal loops of niobium that have tiny electrical currents running through them. A current running counterclockwise through the loop creates a tiny magnetic field pointing up, whereas a clockwise current leads to a magnetic field pointing down. Those two magnetic field states represent the equivalent of 1 or 0.

The niobium loops become superconductors when chilled to frigid temperatures of 20 millikelvin (-273 degrees C). At such low temperatures, the currents and magnetic fields can enter the strange quantum state known as “superposition” that allows them to represent both 1 and 0 states simultaneously. That allows D-Wave to use these “superconducting qubits” as the building blocks for making a quantum computing machine. Each loop also contains a number of Josephson junctions—two layers of superconductor separated by a thin insulating layer—that act as a framework of switches for routing magnetic pulses to the correct locations.

But a bunch of superconducting qubits and their connecting couplers—separate superconducting loops that allow qubits to exchange information—won’t do any computing all by themselves. D-Wave initially thought it would rely on analog control lines that could apply a magnetic field to the superconducting qubits and control their quantum states in that manner. However, the company realized early in development that it would need at least six or seven control lines per qubit to make the computer programmable. The dream of eventually building more powerful machines with thousands of qubits would become an “impossible engineering challenge” with such design requirements, Hilton says.

The solution came in the form of digital-to-analog flux converters (DACs)—each about the size of a human red blood cell at 10 micrometers in width—that act as control devices and sit directly on the quantum computer chip. Such devices can replace control lines by acting as a form of programmable magnetic memory that produces a static magnetic field to affect nearby qubits. D-Wave can reprogram the DACs digitally to change the “bias” of their magnetic fields, which in turn affects the quantum computing operations.
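A rough back-of-the-envelope comparison shows why per-qubit analog wiring does not scale. The six-lines-per-qubit figure comes from the article; the fixed line count for the on-chip DAC scheme is an assumed placeholder, used only to illustrate the scaling argument.

```python
# Rough wiring comparison. The six-lines-per-qubit figure is from the article;
# the fixed number of shared programming lines for the on-chip DAC approach is
# a placeholder chosen only to illustrate the scaling argument.

def analog_control_lines(num_qubits, lines_per_qubit=6):
    """Wires needed if every qubit gets its own analog control lines."""
    return num_qubits * lines_per_qubit

def dac_control_lines(num_qubits, fixed_lines=200):
    """Wires needed if on-chip DACs are programmed over a fixed, shared bus."""
    return fixed_lines  # independent of processor size

for n in (128, 512, 8000):
    print(f"{n:5d} qubits: {analog_control_lines(n):6d} analog lines "
          f"vs ~{dac_control_lines(n)} with on-chip DACs")
```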

Most researchers have focused on building quantum computers using the traditional logic-gate model of computing. But D-Wave has focused on a more specialized approach known as “quantum annealing”—a method of tackling optimization problems. Solving optimization problems means finding the lowest “valley” that represents the best solution in a problem “landscape” with peaks and valleys. In practical terms, D-Wave starts a group of qubits in their lowest energy state and then gradually turns on interactions between the qubits, which encodes a quantum algorithm. When the qubits settle back down in their new lowest-energy state, D-Wave can read out the qubits to get the results.
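Quantum annealing itself can’t be reproduced on ordinary hardware, but a classical simulated-annealing loop over a toy Ising-style energy function gives a feel for the “settle into the lowest valley” picture. This is only an illustrative classical analogue, not D-Wave’s algorithm, and the couplings below are made up.

```python
import random, math

# Classical simulated annealing on a toy Ising-style energy function.
# This is a classical analogue of the "find the lowest valley" picture,
# not the quantum annealing that D-Wave's hardware performs.

def energy(spins, couplings):
    """Ising-style energy: sum of J_ij * s_i * s_j over coupled pairs."""
    return sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

def anneal(n_spins, couplings, steps=10000, t_start=2.0, t_end=0.01):
    spins = [random.choice([-1, 1]) for _ in range(n_spins)]
    for step in range(steps):
        t = t_start + (t_end - t_start) * step / steps   # linear cooling schedule
        i = random.randrange(n_spins)
        old = energy(spins, couplings)
        spins[i] *= -1                       # propose flipping one spin
        delta = energy(spins, couplings) - old
        if delta > 0 and random.random() > math.exp(-delta / t):
            spins[i] *= -1                   # reject the uphill move, revert
    return spins, energy(spins, couplings)

couplings = {(0, 1): 1.0, (1, 2): -1.0, (2, 3): 1.0, (0, 3): -1.0}  # made-up problem
print(anneal(4, couplings))
```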

Both the D-Wave One (128 qubits) and D-Wave Two (512 qubits) processors have DACs. But the circuitry setup of D-Wave One created some problems between the DAC programming phase and the quantum annealing phase. Specifically, the D-Wave One programming phase temporarily raised the temperature to as much as 500 millikelvin, which only dropped back down to the 20 millikelvin necessary for quantum annealing after one second. That’s a significant delay for a machine that can perform quantum annealing in just 20 microseconds (20 millionths of a second).

By simplifying the hardware architecture and adding some more control lines, D-Wave managed to largely eliminate the temperature rise. That in turn reduced the post-programming delay to about 10 milliseconds (10 thousandths of a second)—a “factor of 100 improvement achieved within one processor generation,” Hilton says. D-Wave also managed to reduce the physical size of the DAC “footprint” by about 50 percent in D-Wave Two.
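To put those timings in perspective, here is a quick calculation using only the figures quoted above:

```python
# Programming overhead relative to a single anneal, using the timings above.
anneal_time = 20e-6    # seconds per quantum annealing run
overhead_one = 1.0     # D-Wave One: ~1 s thermal recovery after programming
overhead_two = 10e-3   # D-Wave Two: ~10 ms after the redesign

print(overhead_one / anneal_time)    # 50,000 anneal durations lost per program cycle
print(overhead_two / anneal_time)    # 500 anneal durations
print(overhead_one / overhead_two)   # the "factor of 100" improvement
```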

Building ever-larger arrays of qubits continues to challenge D-Wave’s engineers. They must always be aware of how their hardware design—packed with many classical computing components—can affect the fragile quantum states and lead to errors or noise that overwhelms the quantum annealing operations.

“We were nervous about going down this path,” Hilton says. “This architecture requires the qubits and the quantum devices to be intermingled with all these big classical objects. The threat you worry about is noise and impact of all this stuff hanging around the qubits. Traditional experiments in quantum computing have qubits in almost perfect isolation. But if you want quantum computing to be scalable, it will have to be immersed in a sea of computing complexity.”

Still, D-Wave’s current hardware architecture, code-named “Chimera,” should be able to support quantum computing machines with up to 8,000 qubits, Hilton says. The company is also working on building a larger processor containing 1,000 qubits.

“The architecture isn’t necessarily going to stay the same, because we’re constantly learning about performance and other factors,” Hilton says. “But each time we implement a generation, we try to give it some legs so we know it’s extendable.”

Can The Human Brain Project Succeed?

ORIGINAL: Spectrum
By Rachel Courtland
Posted 9 Jul 2014 | 17:00 GMT

Image: Getty Images

An ambitious effort to build human brain simulation capability is meeting with some very human resistance. On Monday, a group of researchers sent an open letter to the European Commission protesting the management of the Human Brain Project, one of two Flagship initiatives selected last year to receive as much as €1 billion over the course of 10 years (the other award went to a far less controversy-courting project devoted to graphene).

The letter, which now has more than 450 signatories, questions the direction of the project and calls for a careful, unbiased review. Although he’s not mentioned by name in the letter, news reports cited resistance to the path chosen by project leader Henry Markram of the Swiss Federal Institute of Technology in Lausanne. One particularly polarizing change was the recent elimination of a subproject, called Cognitive Architectures, as the project made its bid for the next round of funding.

According to Markram, the fuss all comes down to differences in scientific culture. He has described the project, which aims to build six different computing platforms for use by researchers, as an attempt to build a kind of CERN for brain research, a means by which disparate disciplines and vast amounts of data can be brought together. This is a “methodological paradigm shift” for neuroscientists accustomed to individual research grants, Markram told Science, and that’s what he says the letter signers are having trouble with.

But some question the main goals of the project, and whether we’re actually capable of achieving them at this point. The program’s Brain Simulation Platform aims to build the technology needed to reconstruct the mouse brain and eventually the human brain in a supercomputer. Part of the challenge there is technological. Markram has said that an exascale-level machine (one capable of executing 1000 or more petaflops) would be needed to “get a first draft of the human brain”, and the energy requirements of such machines are daunting.

Crucially, some experts say that even if we had the computational might to simulate the brain, we’re not ready to. “The main apparent goal of building the capacity to construct a larger-scale simulation of the human brain is radically premature,” signatory Peter Dayan, who directs a computational neuroscience department at University College London, told the Guardian. He called the project a “waste of money” that “can’t but fail from a scientific perspective.” To Science, he said, “the notion that we know enough about the brain to know what we should simulate is crazy, quite frankly.”

This last comment resonated with me, as it reminded me of a feature that Steve Furber of the University of Manchester wrote for IEEE Spectrum a few years ago. Furber, one of the co-founders of the mobile chip design powerhouse ARM, is now in the process of stringing a million or so of the low-power processors together to build a massively parallel computer capable of simulating 1 billion neurons, about 1% as many as are contained in the human brain.
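The scale of that machine is easy to sanity-check from the round numbers in the paragraph above (the 1 percent figure implies a human brain of roughly 100 billion neurons):

```python
# Back-of-the-envelope scale of the Manchester machine, using only the
# round numbers given above.
cores = 1_000_000                    # "a million or so" low-power ARM processors
simulated_neurons = 1_000_000_000    # 1 billion neurons

print(simulated_neurons / cores)     # ~1,000 neurons simulated per core
print(simulated_neurons / 0.01)      # ~100 billion neurons implied for the whole brain
```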

Furber and his collaborators designed their computing architecture quite carefully in order to take into account the fact that there are still a host of open questions when it comes to basic brain operation. General-purpose computers are power-hungry and slow when it comes to brain simulation. Analog circuitry, which is also on the Human Brain Project’s list, might better mimic the way neurons actually operate, but, he wrote,

as speedy and efficient as analog circuits are, they’re not very flexible; their basic behavior is pretty much baked right into them. And that’s unfortunate, because neuroscientists still don’t know for sure which biological details are crucial to the brain’s ability to process information and which can safely be abstracted away

The Human Brain Project’s website admits that exascale computing will be hard to reach: “even in 2020, we expect that supercomputers will have no more than 200 petabytes.” To make up for the shortfall, it says, “what we plan to do is build fast storage random-access storage systems next to the supercomputer, store the complete detailed model there, and then allow our multi-scale simulation software to call in a mix of detailed or simplified models (models of neurons, synapses, circuits, and brain regions) that matches the needs of the research and the available computing power. This is a pragmatic strategy that allows us to keep building ever more detailed models, while keeping our simulations to the level of detail we can support with our current supercomputers.”
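That strategy of holding the detailed models in fast storage and loading whatever mix of detailed and simplified models fits the machine at hand can be sketched as a simple selection loop. Everything below (the region names, model sizes, and greedy policy) is a hypothetical illustration, not the project’s actual software.

```python
# Hypothetical sketch of the multi-scale strategy described in the quote:
# prefer detailed models where they fit, fall back to simplified ones when
# the memory budget runs out. All names and sizes are invented.

def choose_models(regions, budget_bytes):
    """regions: list of (name, detailed_size, simplified_size), in priority order."""
    plan, used = {}, 0
    for name, detailed, simplified in regions:
        if used + detailed <= budget_bytes:
            plan[name] = "detailed"
            used += detailed
        else:
            plan[name] = "simplified"
            used += simplified
    return plan, used

regions = [("cortex", 150e15, 5e15),        # (detailed bytes, simplified bytes)
           ("cerebellum", 80e15, 2e15),
           ("thalamus", 20e15, 0.5e15)]
print(choose_models(regions, budget_bytes=200e15))   # 200 PB, the figure in the quote
```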

This does sound like a flexible approach. But, as is par for the course with any ambitious research project, particularly one that involves a great amount of synthesis of disparate fields, it’s not yet clear whether it will pay off.

And any big changes in direction may take a while. Although the proposal for the second round of funding will be reviewed this year, according to Science, which reached out to the European Commission, the first review of the project itself won’t begin until January 2015.

Rachel Courtland can be found on Twitter at @rcourt.

DARPA Wants a Memory Prosthetic for Injured Vets—and Wants It Now

ORIGINAL: Spectrum
By Eliza Strickland
9 Jul 2014

Photo: Getty Images
No one will ever fault DARPA, the Defense Department’s mad science wing, for not being ambitious enough. Over the next four years, the first grantees in its Restoring Active Memory (RAM) program are expected to develop and test prosthetic memory devices that can be implanted in the human brain. 

 

It’s hoped that such synthetic devices can help veterans with traumatic brain injuries, and other people whose natural memory function is impaired. The two teams, led by researchers Itzhak Fried at UCLA and Mike Kahana at the University of Pennsylvania, will start with the fundamentals.
They’ll look for neural signals associated with the formation and recall of memories, and they’ll work on computational models to describe how neurons carry out these processes, and to determine how an artificial device can replicate them. They’ll also work with partners to develop real hardware suitable for the human brain. Such devices should ultimately be capable of recording the electrical activity of neurons, processing the information, and then stimulating other neurons as needed.

The RAM research derives from an engineering approach to memory that’s gaining traction. (Spectrum covered the work of one of its leading proponents, Ted Berger, in the recent article The End of Disability.) If the brain is essentially a collection of circuits, the thinking goes, a memory is formed by the sequential actions of many neurons. If a person has a brain injury that knocks out some of those neurons, the whole circuit may malfunction, and the person will experience memory problems. But if electrodes can pick up the signal in the neurons upstream from the problem spot, and then convey that signal around the damage to intact neurons downstream, then the memory should function as normal.
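In code, that bypass idea reduces to a simple loop: record upstream, translate, stimulate downstream. The sketch below is purely schematic; the linear “decoder” is a stand-in for the computational models the RAM teams still have to develop, and none of the names refer to real hardware or software.

```python
import random

# Schematic sketch of the "bridge around the damage" idea described above.
# The decoding model is a placeholder; discovering the real transformation
# performed by the damaged circuit is the hard, open research problem.

def read_upstream(n_channels=8):
    """Stand-in for recording spike counts from electrodes upstream of the injury."""
    return [random.randint(0, 20) for _ in range(n_channels)]

def decode_and_reencode(activity, weights):
    """Placeholder linear mapping from upstream activity to a downstream
    stimulation pattern (the piece the RAM grantees must actually learn)."""
    return [sum(w * a for w, a in zip(row, activity)) for row in weights]

def stimulate_downstream(pattern):
    """Stand-in for delivering stimulation pulses to intact downstream neurons."""
    print("stimulate with amplitudes:", pattern)

weights = [[0.1] * 8 for _ in range(4)]   # hypothetical 8-channel-in, 4-channel-out map
stimulate_downstream(decode_and_reencode(read_upstream(), weights))
```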
In a press briefing yesterday, program manager Justin Sanchez said that the first human experiments will be conducted with hospitalized epilepsy patients who have electrodes implanted in their brains as they await surgery (this is done so their doctors can pinpoint the origin of their seizures). Since epilepsy patients often experience memory loss as well, Sanchez said they’re a natural fit for the research. Eventually trials would include military servicemembers who suffer the aftereffects of traumatic brain injuries, and finally civilians with similar injuries.
DARPA recently decided to beef up its research in biological technologies, spurred in part by the needs of veterans returning from Iraq and Afghanistan. But it seems likely that the agency’s increased attention to programs like RAM was also prompted by the recognition that neural engineering is one of the most exciting frontiers in science, with neural technologies advancing faster than the science that guides them.

The RAM program is part of the overarching federal BRAIN Initiative, announced with much fanfare by President Obama in 2013. With a first-year budget of $110 million parceled out to three agencies and considerable cooperation from deep-pocketed private institutions, you can expect this decade to be a brainy one.

 

The Most Ambitious Artificial Intelligence Project In The World Has Been Operating In Near-Secrecy For 30 Years

ORIGINAL: Business Insider
Dylan Love
Jul. 2, 2014


Screenshot: Doug Lenat

“We’ve been keeping a very low profile, mostly intentionally,” said Doug Lenat, President and CEO of Cycorp. “No outside investments, no debts. We don’t write very many articles or go to conferences, but for the first time, we’re close to having this be applicable enough that we want to talk to you.”

IBM’s Watson and Apple’s Siri stirred up a hunger and awareness throughout the United States for something like a Star Trek computer that really worked — an artificially intelligent system that could receive instructions in plain, spoken language, make the appropriate inferences, and carry out its instructions without needing to have millions and millions of subroutines hard-coded into it.

As we’ve established, that stuff is very hard. But Cycorp’s goal is to codify general human knowledge and common sense so that computers might make use of it.

Cycorp charged itself with figuring out the tens of millions of pieces of data we rely on as humans — the knowledge that helps us understand the world — and to represent them in a formal way that machines can use to reason. The company’s been working continuously since 1984 and next month marks its 30th anniversary.

“Many of the people are still here from 30 years ago — Mary Shepherd and I started [Cycorp] in August of 1984 and we’re both still working on it,” Lenat said. “It’s the most important project one could work on, which is why this is what we’re doing. It will amplify human intelligence.”

It’s only a slight stretch to say Cycorp is building a brain out of software, and they’re doing it from scratch.

“Any time you look at any kind of real life piece of text or utterance that one human wrote or said to another human, it’s filled with analogies, modal logic, belief, expectation, fear, nested modals, lots of variables and quantifiers,” Lenat said. “Everyone else is looking for a free-lunch way to finesse that. Shallow chatbots show a veneer of intelligence or statistical learning from large amounts of data. Amazon and Netflix recommend books and movies very well without understanding in any way what they’re doing or why someone might like something.

It’s the difference between someone who understands what they’re doing and someone going through the motions of performing something.”

Cycorp’s product, Cyc, isn’t “programmed” in the conventional sense. It’s much more accurate to say it’s being “taught.” Lenat told us that most people think of computer programs as “procedural, [like] a flowchart,” but building Cyc is “much more like educating a child.”

“We’re using a consistent language to build a model of the world,” he said.

This means Cyc can see “the white space rather than the black space in what everyone reads and writes to each other.” An author might explicitly choose certain words and sentences as he’s writing, but in between the sentences are all sorts of things you expect the reader to infer; Cyc aims to make these inferences.

Consider the sentence, “John Smith robbed First National Bank and was sentenced to thirty years in prison.” It leaves out the details surrounding his being caught, arrested, put on trial, and found guilty. A human would never actually go through all that detail because it’s alternately boring, confusing, or insulting. You can safely assume other people know what you’re talking about. It’s like pronoun use — he, she, it — one assumes people can figure out the referent. This stuff is very hard for computers to understand and get right, but Cyc does both.
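The article doesn’t show Cyc’s internal representation (CycL), but the kind of “white space” inference it describes can be illustrated with a toy forward-chaining rule in ordinary code. The facts and the rule below are invented for illustration and have nothing to do with Cyc’s actual knowledge base or syntax.

```python
# Toy forward-chaining sketch of the "white space" inference described above.
# The facts and the rule are invented; this is not CycL or Cyc's knowledge base.

facts = {("JohnSmith", "robbed", "FirstNationalBank"),
         ("JohnSmith", "sentencedToPrison", "30Years")}

def infer_unstated(facts):
    """If someone was sentenced to prison, add the events a human reader fills in."""
    new = set()
    for subj, pred, _ in facts:
        if pred == "sentencedToPrison":
            for implied in ("wasCaught", "wasArrested", "stoodTrial", "wasFoundGuilty"):
                new.add((subj, implied, "True"))
    return new

# Apply the rule until no new facts appear.
while True:
    inferred = infer_unstated(facts) - facts
    if not inferred:
        break
    facts |= inferred

for fact in sorted(facts):
    print(fact)
```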

“If computers were human,” Lenat told us, “they’d present themselves as autistic, schizophrenic, or otherwise brittle. It would be unwise or dangerous for that person to take care of children and cook meals, but it’s on the horizon for home robots. That’s like saying, ‘We have an important job to do, but we’re going to hire dogs and cats to do it.’”

If you consider the world’s current and imagined robots, it’s hard to imagine them not benefitting from Cyc-endowed abilities that grant them a more human-like understanding of the world.



Just like computers with operating systems, we might one day install Cyc on a home robot to make it incredibly knowledgeable and useful to us. And because Cyc started from zero and was built up with knowledge of nearly everything, it could be used for a wide variety of applications. It’s already being used to teach math to sixth graders.

“Cyc can pretend to be a confused sixth grader, and the user’s role is to help the AI agent understand and learn sixth grade math. There’s an emotional investment, a need to think about it, and so on. Our program of course understands the math, but is simply listening to what students say and diagnosing their confusion. It figures out what behavior it can carry out that would be most useful to help them understand things. It’s a possibility to revolutionize sixth grade math, but also other grade levels and subjects. There’s no reason it couldn’t be used in the Common Core curriculum as well.”

We asked Lenat what famed author and thinker Douglas Hofstadter might think of Cyc:

“[Hofstadter] might know what needs to be done for things to be intelligent, but it has taken someone, unfortunately me, the decades of time to drag that mattress out of the road so we can do the work. It’s not done by any means, but it’s useful.”

Neuroscientists Join the Open-Source Hardware Movement

By Eliza Strickland
Posted 11 Jun 2014
Two MIT grad students offer up DIY brain-recording gear

Photo: Open Ephys

Graduate students Josh Siegle and Jakob Voigts were planning an ambitious series of experiments at their MIT neuroscience labs in 2011 when they ran into a problem. They needed to record complex brain signals from mice, but they couldn’t afford the right equipment: The recording systems cost upward of US $60,000 each, and they wanted at least four. So they decided to solve their dilemma by building their own gear on the cheap. And knowing that they wouldn’t be the last neuroscientists to encounter such a problem, they decided to give away their designs. Now their project, Open Ephys, is the hub of a nascent open-source hardware community for neural technology.



Siegle and Voigts weren’t knowledgeable about either circuit design or coding, but they learned as they went along. By July 2013, they were ready to manufacture 50 of their recording systems, which they gave to collaborators for beta testing. This spring they manufactured 100 improved units, which are now arriving in neuroscience labs around the world. They estimate that each system costs about $3,000 to produce.

Neuroscience has a history of hackers, Siegle says, with researchers cobbling together their own gear or customizing commercial systems to meet their particular needs. But those new tools rarely leave the labs they are built in. So scientists spend a lot of time reinventing the wheel. The goal of Open Ephys (which is short for open-source electrophysiology) is not just to distribute the tools that Siegle and Voigts have come up with so far but to encourage researchers to put resources into developing open-source tools for the benefit of the whole community. “In addition to changing the tools, we also want to change the culture,” Siegle says.

Photo: Open Ephys. Open Ephys just distributed 100 of its acquisition boards to neuroscience labs around the world.

The flagship tool that Siegle and Voigts developed is an acquisition board, which makes sense of the electric signals from electrodes implanted in an animal’s brain. The board interfaces with up to eight headstages that amplify, filter, multiplex, and digitize signals from the brain, and then sends those signals to a computer for further processing. Commercial systems typically have individual ICs perform each of those four functions, but Siegle and Voigts’s system uses a single microchip for the four steps. The chip was recently developed by Intan Technologies, based in Los Angeles. “Once we realized these chips were available, it seemed kind of silly to keep buying the big systems,” Siegle says.

The president and cofounder of Intan, Reid Harrison, says that shrinking and consolidating the gear wasn’t that complicated—it mostly required initiative. “It’s such a niche market that no one else had tried to miniaturize the technology,” he says. “It’s not exactly on the scale of CPUs and cellphones, which drive most IC technology.” However, Harrison says he recognized a need for his small, multipurpose chips. Neuroscientists are always trying to fit more electrodes into an animal’s brain to record more neural activity, he says, which requires ever tinier devices with the electronics close to the electrodes. “You could put 1,000 electrodes in the brain, but you don’t want 1,000 wires on an animal that’s supposed to be mobile,” he says. The Intan chips take information from up to 64 electrodes and turn it into one digital signal, eliminating the confusion of wiring.
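A simplified software picture of just the multiplexing step helps make the wiring point concrete: many channels are interleaved into one serial stream and split back out on the computer. The real Intan chip does this (along with amplification, filtering, and digitization) in hardware, and the frame layout below is invented.

```python
# Simplified software picture of the multiplex/demultiplex idea: many electrode
# channels interleaved into one stream and recovered on the computer. The real
# Intan chip does this in hardware; the frame layout here is invented.

NUM_CHANNELS = 64

def multiplex(samples_per_channel):
    """Interleave one sample from each channel into each frame of the stream."""
    stream = []
    for frame in zip(*samples_per_channel):   # one sample per channel per frame
        stream.extend(frame)
    return stream

def demultiplex(stream, num_channels=NUM_CHANNELS):
    """Recover per-channel sample lists from the interleaved stream."""
    return [stream[ch::num_channels] for ch in range(num_channels)]

channels = [[ch * 100 + t for t in range(3)] for ch in range(NUM_CHANNELS)]
assert demultiplex(multiplex(channels)) == channels   # round-trip check
```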

The major neural technology companies have designed products that incorporate Intan’s chips, but they also swear by their larger, multichip systems. Keith Stengel, the founder of Neuralynx, in Bozeman, Mont., says that in his big systems, each component is optimized for peak performance. “A lot of our customers have said that you buy a Neuralynx system for the serious work that you’re going to publish, and then you get an Open Ephys system as a second system, for grad students to start their research on,” he says.

 

Illustration: Open Ephys. Open Ephys offers building instructions for this head-mounted neural implant system for mice.

Andy Gotshalk, CEO of Blackrock Microsystems, in Salt Lake City, also argues that the commercial products will continue to be the gold standard. “You’re not going to be moving into FDA clinical trials using an Open Ephys system,” he says. The commercial products come with guarantees of quality and reliability, he says, as well as intensive customer support. Gotshalk says his customers are willing to pay a premium for that backing.

Both Stengel and Gotshalk say they welcome Open Ephys to the market and think that its systems can fill a niche. They’re also willing to work with the upstart to make sure their commercial software works with the Open Ephys hardware. Harrison agrees that the community is happy to have another option to work with, and he draws a parallel to the computing industry. “The existing tools are like the PCs and the Macs of the neuroscience world, but now we also have this Linux,” Harrison says. “It’s a lot less expensive, and you can hack it yourself, but it’s not for everyone.”

ORIGINAL: IEEE Spectrum

Mathematical Model Of Consciousness Proves Human Experience Cannot Be Modelled On A Computer

ORIGINAL: Medium

A new mathematical model of consciousness implies that your PC will never be conscious in the way you are
One of the most profound advances in science in recent years is the way researchers from a variety of fields are beginning to think about consciousness. Until now, the c-word has been taboo for most scientists. Any suggestion that a researcher was interested in this area would have been tantamount to professional suicide.
That has begun to change thanks to a new theory of consciousness developed in the last ten years or so by Giulio Tononi, a neuroscientist at the University of Wisconsin in Madison, and others. Tononi’s key idea is that consciousness is a phenomenon in which information is integrated in the brain in a way that cannot be broken down.
So each instant of consciousness integrates the smells, sounds and sights of that moment of experience. And consciousness is simply the feeling of this integrated information experience.
What makes Tononi’s ideas different from other theories of consciousness is that they can be modelled mathematically using ideas from physics and information theory. That doesn’t mean the theory is correct. But it does mean that, for the first time, neuroscientists, biologists, physicists and anybody else can all reason about consciousness using the universal language of science: mathematics.
This has led to an extraordinary blossoming of ideas about consciousness. A few months ago, for example, we looked at how physicists are beginning to formulate the problem of consciousness in terms of quantum mechanics and information theory.
Today, Phil Maguire at the National University of Ireland and a few pals take this mathematical description even further. These guys make some reasonable assumptions about the way information can leak out of a conscious system and show that this implies that consciousness is not computable. In other words, consciousness cannot be modelled on a computer.
Maguire and co begin with a couple of thought experiments that demonstrate the nature of integrated information in Tononi’s theory. They start by imagining the process of identifying chocolate by its smell. For a human, the conscious experience of smelling chocolate is unified with everything else that a person has smelled (or indeed seen, touched, heard and so on).
This is entirely different from the process of automatically identifying chocolate using an electronic nose, which measures many different smells and senses chocolate when it picks out the ones that match some predefined signature.
A key point here is that it would be straightforward to access the memory in an electronic nose and edit the information about its chocolate experience. You could delete this with the press of a button.
But ask a neuroscientist to do the same for your own experience of the smell of chocolate—to somehow delete this—and he or she would be faced with an impossible task since the experience is correlated with many different parts of the brain.
Indeed, the experience will be integrated with all kinds of other experiences. “According to Tononi, the information generated by such [an electronic nose] differs from that generated by a human insofar as it is not integrated,” say Maguire and co.
This process of integration is therefore crucial, and Maguire and co focus on the mathematical properties it must have. For instance, they point out that the process of integrating information, of combining it with many other aspects of experience, can be thought of as a kind of information compression.
This compression allows the original experience to be reconstructed but does not keep all of the information it originally contained.
To better understand this, they give as an analogy the sequence of numbers: 4, 6, 8, 12, 14, 18, 20, 24…. This is an infinite series defined as: odd primes plus 1. The definition does not list the infinitely many numbers themselves, but it does allow the series to be reproduced. It is clearly a compression of the information in the original series.
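The compression point can be made concrete: a few lines of code are enough to regenerate any prefix of the series on demand, which is exactly the sense in which the short rule stands in for the infinite list.

```python
# The short rule "odd primes plus 1" regenerates the series on demand,
# which is the sense in which it compresses the infinite list of numbers.

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def odd_primes_plus_one(count):
    result, candidate = [], 3
    while len(result) < count:
        if is_prime(candidate):
            result.append(candidate + 1)
        candidate += 2          # only odd candidates, so 2 is excluded
    return result

print(odd_primes_plus_one(8))   # [4, 6, 8, 12, 14, 18, 20, 24]
```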
The brain, say Maguire and co, must work like this when integrating information from a conscious experience. It must allow the reconstruction of the original experience but without storing all the parts.
That leads to a problem. This kind of compression inevitably discards information. And as more information is compressed, the loss becomes greater.
But our memories cannot be like that; if they were, they would be continually haemorrhaging meaningful content. “Memory functions must be vastly non-lossy, otherwise retrieving them repeatedly would cause them to gradually decay,” say Maguire and co.
The central part of their new work is to describe the mathematical properties of a system that can store integrated information in this way but without it leaking away. And this leads them to their central proof. “The implications of this proof are that we have to abandon either the idea that people enjoy genuinely [integrated] consciousness or that brain processes can be modelled computationally,” say Maguire and co.
Since Tononi’s main assumption is that consciousness is the experience of integrated information, it is the second idea that must be abandoned: brain processes cannot be modelled computationally.
 
They go on to discuss this in more detail. If a person’s behaviour cannot be analysed independently from the rest of their conscious experience, it implies that something is going on in their brain that is so complex it cannot feasibly be reversed, they say.
In other words, the difference between cognition and computation is that computation is reversible whereas cognition is not. And they say that is reflected in the inability of a neuroscientist to operate on the brain and remove a particular memory of the smell of chocolate.
That’s an interesting approach but it is one that is likely to be controversial. The laws of physics are computable, as far as we know. So critics might ask how the process of consciousness can take place at all if it is non-computable. Critics might even say this is akin to saying that consciousness is in some way supernatural, like magic.
But Maguire and co counter this by saying that their theory doesn’t imply that consciousness is objectively non-computable, only subjectively so. In other words, a God-like observer with perfect knowledge of the brain would not consider it non-computable. But for humans, with their imperfect knowledge of the universe, it is effectively non-computable.
There is something of a card trick about this argument. In mathematics, the idea of non-computability is not observer-dependent so it seems something of a stretch to introduce it as an explanation.
What’s more, critics might point to other weaknesses in the formulation of this problem. For example, the proof that conscious experience is non-computable depends critically on the assumption that our memories are non-lossy.
But everyday experience is surely the opposite—our brains lose most of the information that we experience consciously. And the process of repeatedly accessing memories can cause them to change and degrade. Isn’t the experience of forgetting the face of someone we know well documented?
Then again, critics of Maguire and co’s formulation of the problem of consciousness must not lose sight of the bigger picture—that the debate about consciousness can occur on a mathematical footing at all. That’s indicative of a sea change in this most controversial of fields.
Of course, there are important steps ahead. Perhaps the most critical is that the process of mathematical modelling must lead to hypotheses that can be experimentally tested. That’s the process by which science distinguishes between one theory and another. Without a testable hypothesis, a mathematical model is not very useful.
For example, Maguire and co could use their model to make predictions about the limits in the way information can leak from a conscious system. These limits might be testable in experiments focusing on the nature of working memory or long-term memory in humans.
That’s the next challenge for this brave new field of consciousness.
Ref: arxiv.org/abs/1405.0126 : Is Consciousness Computable? Quantifying Integrated Information Using Algorithmic Information Theory

Roadmap to Immortality – Artificial Intelligence

ORIGINAL: Maria Konovalenko
May 28, 2014

I’d like to present the last part of the Human Physical Immortality Roadmap, which is devoted to Artificial Intelligence. There’s a good chance this technology will be the game changer for humanity at large. It may define our future.

We can write a book about the evolution of different technologies and how they are going to influence the human condition and help achieve immortality. This Roadmap can serve as an illustrated table of contents for this book. The only problem we have is that we don’t know which publishing house to go to. If any of you have contacts at a publisher that might be interested in such a book, please let me know.
