Self Reflected, 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The entire Self Reflected microetching under violet and white light. (photo by Greg Dunn and Will Drinker)
Anyone who thinks that scientists can’t be artists need look no further than Dr. Greg Dunn and Dr. Brian Edwards. The neuroscientist and applied physicist have teamed up to create an artistic series of images that they describe as “the most fundamental self-portrait ever created.” Going literally inside the head, the pair has magnified a thin slice of the brain 22 times in a series called Self Reflected.
Depicting roughly 500,000 neurons, the images took two years to complete, as Dunn and Edwards developed special technology for the project. Using a technique they call reflective microetching, they microscopically manipulated the reflectivity of the artwork’s surface. Different regions of the brain were hand painted and digitized, then combined using a computer program created by Edwards to show the complex choreography our minds undergo as they process information.
After printing the designs onto transparencies, the duo added 1,750 gold leaf sheets to increase the art’s reflectivity. The astounding results are images that demonstrate the delicate flow and balance of our brain’s activity. “Self Reflected was created to remind us that the most marvelous machine in the known universe is at the core of our being and is the root of our shared humanity,” the artists share.
Self Reflected is an unprecedented look inside the brain.
Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The parietal gyrus where movement and vision are integrated. (photo by Greg Dunn and Will Drinker)
Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The brainstem and cerebellum, regions that control basic body and motor functions. (photo by Greg Dunn and Will Drinker)
An astounding achievement in scientific art, the artists applied 1,750 leaves of gold to the final microetchings.
Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The laminar structure of the cerebellum, a region involved in movement and proprioception (calculating where your body is in space).
Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The pons, a region involved in movement and implicated in consciousness. (photo by Greg Dunn and Will Drinker)
Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. Raw colorized microetching data from the reticular formation.
Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The visual cortex, the region located at the back of the brain that processes visual information.
Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The thalamus and basal ganglia, sorting senses, initiating movement, and making decisions. (photo by Greg Dunn and Will Drinker)
Self Reflected, 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The entire Self Reflected microetching under white light. (photo by Greg Dunn and Will Drinker)
Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The midbrain, an area that carries out diverse functions in reward, eye movement, hearing, attention, and movement. (photo by Greg Dunn and Will Drinker)
This video shows how the etched neurons twinkle as a light source is moved.
Interested in learning more? Watch Dr. Greg Dunn present the project at The Franklin Institute.
Developments and advances in artificial intelligence (AI) have been due in large part to technologies that mimic how the human brain works. In the world of information technology, such AI systems are called neural networks.
These contain algorithms that can be trained, among other things, to imitate how the brain recognises speech and images. However, running an Artificial Neural Network consumes a lot of time and energy.
The memristor, a new electronic component, paves the way for intelligent systems that require less time and energy to learn, and that can learn autonomously.
In the human brain, synapses work as connections between neurons. The connections are reinforced and learning is improved the more these synapses are stimulated.
The memristor works in a similar fashion. It’s made up of a thin ferroelectric layer (which can be spontaneously polarised) that is enclosed between two electrodes.
Using voltage pulses, the memristor’s resistance can be adjusted, much as a biological synapse is strengthened or weakened. The synaptic connection is strong when resistance is low, and vice versa.
(a) Sketch of pre- and post-neurons connected by a synapse. The synaptic transmission is modulated by the causality (Δt) of neuron spikes. (b) Sketch of the ferroelectric memristor where a ferroelectric tunnel barrier of BiFeO3 (BFO) is sandwiched between a bottom electrode of (Ca,Ce)MnO3 (CCMO) and a top submicron pillar of Pt/Co. YAO stands for YAlO3. (c) Single-pulse hysteresis loop of the ferroelectric memristor displaying clear voltage thresholds ( and ). (d) Measurements of STDP in the ferroelectric memristor. Modulation of the device conductance (ΔG) as a function of the delay (Δt) between pre- and post-synaptic spikes. Seven data sets were collected on the same device showing the reproducibility of the effect. The total length of each pre- and post-synaptic spike is 600 ns. Source: Nature Communications
The memristor’s capacity for learning is based on this adjustable resistance.
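That learning rule can be sketched in a few lines of Python. Everything here (the function name, learning rate, and exponential time constant) is an illustrative toy, not the device physics reported in the paper:

```python
import math

def stdp_update(conductance, dt_ms, lr=0.05, tau_ms=20.0):
    """Nudge a memristor-like conductance based on spike timing.

    dt_ms > 0 means the pre-synaptic spike preceded the post-synaptic
    one, so the connection is reinforced (conductance up, resistance
    down); dt_ms < 0 reverses the order and weakens the connection.
    The effect decays exponentially as the spikes move further apart.
    """
    delta = lr * math.exp(-abs(dt_ms) / tau_ms)
    if dt_ms > 0:
        conductance += delta
    else:
        conductance -= delta
    return max(0.0, min(1.0, conductance))  # clamp to a physical range

g = 0.5
g = stdp_update(g, dt_ms=5.0)    # causal pairing: conductance rises
g = stdp_update(g, dt_ms=-40.0)  # anti-causal, far apart: small drop
```

This mirrors the ΔG-versus-Δt curve in panel (d) of the figure: large conductance changes for tightly paired spikes, fading toward nothing as Δt grows.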
AI systems have developed considerably in the past couple of years. Neural networks built with learning algorithms are now capable of performing tasks which synthetic systems previously could not do.
Deep-learning machines already have superhuman skills at tasks such as video-game playing and even the ancient Chinese game of Go.
So it’s easy to think that humans are already outgunned.
But not so fast. Intelligent machines still lag behind humans in one crucial area of performance: the speed at which they learn. When it comes to mastering classic video games, for example, the best deep-learning machines take some 200 hours of play to reach the same skill levels that humans achieve in just two hours.
So computer scientists would dearly love to have some way to speed up the rate at which machines learn.
Today, Alexander Pritzel and pals at Google’s DeepMind subsidiary in London claim to have done just that. These guys have built a deep-learning machine that is capable of rapidly assimilating new experiences and then acting on them. The result is a machine that learns significantly faster than others and has the potential to match humans in the not too distant future.
First, some background.
Deep learning uses layers of neural networks to look for patterns in data. When a single layer spots a pattern it recognizes, it sends this information to the next layer, which looks for patterns in this signal, and so on.
So in face recognition, one layer might look for edges in an image, the next layer for circular patterns of edges (the kind that eyes and mouths make), and the next for triangular patterns such as those made by two eyes and a mouth. When all of this happens, the final output is an indication that a face has been spotted.
Of course, the devil is in the details. There are various systems of feedback to allow the system to learn by adjusting various internal parameters such as the strength of connections between layers. These parameters must change slowly, since a big change in one layer can catastrophically affect learning in the subsequent layers. That’s why deep neural networks need so much training and why it takes so long.
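The slow-and-steady constraint is easy to see in a toy example. Gradient descent on even a one-dimensional loss diverges if the steps are too big (the quadratic loss and learning rates below are illustrative, not from any real network):

```python
# Why deep nets train slowly: parameter updates must be small.
# Minimise f(x) = x^2 by gradient descent; the gradient is 2x.

def descend(lr, steps=50, x=5.0):
    """Take `steps` gradient steps of size `lr` and return the result."""
    for _ in range(steps):
        x = x - lr * 2 * x
    return x

near_zero = descend(lr=0.05)  # small, stable steps converge to ~0
diverged = descend(lr=1.5)    # oversized steps overshoot and blow up
```

The same trade-off holds per layer in a deep network, except that one layer overshooting also corrupts the signal every later layer learns from, which is why the safe learning rates are so small and training takes so long.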
Pritzel and co have tackled this problem with a technique they call Neural Episodic Control. “Neural episodic control demonstrates dramatic improvements on the speed of learning for a wide range of environments,” they say. “Critically, our agent is able to rapidly latch onto highly successful strategies as soon as they are experienced, instead of waiting for many steps of optimisation.”
The basic idea behind DeepMind’s approach is to copy the way humans and animals learn quickly. The general consensus is that humans can tackle situations in two different ways.
If the situation is familiar, our brains have already formed a model of it, which they use to work out how best to behave. This uses a part of the brain called the prefrontal cortex.
But when the situation is not familiar, our brains have to fall back on another strategy. This is thought to involve a much simpler test-and-remember approach involving the hippocampus. So we try something and remember the outcome of this episode. If it is successful, we try it again, and so on. But if it is not a successful episode, we try to avoid it in future.
This episodic approach suffices in the short term while our prefrontal brain learns. But it is soon outperformed by the prefrontal cortex and its model-based approach.
Pritzel and co have used this approach as their inspiration. Their new system has two approaches.
The first is a conventional deep-learning system that mimics the behavior of the prefrontal cortex.
The second is more like the hippocampus. When the system tries something new, it remembers the outcome.
But crucially, it doesn’t try to learn what to remember. Instead, it remembers everything. “Our architecture does not try to learn when to write to memory, as this can be slow to learn and take a significant amount of time,” say Pritzel and co. “Instead, we elect to write all experiences to the memory, and allow it to grow very large compared to existing memory architectures.”
They then use a set of strategies to read from this large memory quickly. The result is that the system can latch onto successful strategies much more quickly than conventional deep-learning systems.
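In outline, the memory half of the system behaves like the sketch below. The names and the brute-force nearest-neighbour search are mine for illustration; the actual architecture uses learned state embeddings and fast approximate lookup:

```python
# Toy episodic memory: write every experience, read by averaging the
# values of the k most similar stored states.

def distance(a, b):
    """Squared Euclidean distance between two state tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class EpisodicMemory:
    def __init__(self, k=2):
        self.keys, self.values, self.k = [], [], k

    def write(self, state, value):
        # No learned gating: remember everything, let the store grow.
        self.keys.append(state)
        self.values.append(value)

    def read(self, state):
        # Estimate = average value of the k nearest stored states.
        ranked = sorted(range(len(self.keys)),
                        key=lambda i: distance(state, self.keys[i]))
        nearest = ranked[:self.k]
        return sum(self.values[i] for i in nearest) / len(nearest)

mem = EpisodicMemory(k=2)
mem.write((0.0, 0.0), 1.0)
mem.write((1.0, 0.0), 3.0)
mem.write((9.0, 9.0), -5.0)
estimate = mem.read((0.5, 0.0))  # neighbours carry values 1.0 and 3.0
```

A successful strategy written once is immediately available on the next read, with no gradient steps in between, which is the source of the speedup the paper claims.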
They go on to demonstrate how well all this works by training their machine to play classic Atari video games, such as Breakout, Pong, and Space Invaders. (This is a playground that DeepMind has used to train many deep-learning machines.)
The team, which includes DeepMind cofounder Demis Hassabis, shows that neural episodic control vastly outperforms other deep-learning approaches in the speed at which it learns. “Our experiments show that neural episodic control requires an order of magnitude fewer interactions with the environment,” they say.
That’s impressive work with significant potential. The researchers say that an obvious extension of this work is to test their new approach on more complex 3-D environments.
It’ll be interesting to see what environments the team chooses and the impact this will have on the real world. We’ll look forward to seeing how that works out.
Neuromorphic chips are being designed to specifically mimic the human brain – and they could soon replace CPUs
AI services like Apple’s Siri and others operate by sending your queries to faraway data centers, which send back responses. The reason they rely on cloud-based computing is that today’s electronics don’t come with enough computing power to run the processing-heavy algorithms needed for machine learning. The typical CPUs most smartphones use could never handle a system like Siri on the device. But Dr. Chris Eliasmith, a theoretical neuroscientist and co-CEO of Canadian AI startup Applied Brain Research, is confident that a new type of chip is about to change that.
“Many have suggested Moore’s law is ending and that means we won’t get ‘more compute’ cheaper using the same methods,” Eliasmith says. He’s betting on the proliferation of ‘neuromorphics’ — a type of computer chip that is not yet widely known but already being developed by several major chip makers.
Traditional CPUs process instructions based on “clocked time” – information is transmitted at regular intervals, as if managed by a metronome. By packing in digital equivalents of neurons, neuromorphics communicate in parallel (and without the rigidity of clocked time) using “spikes” – bursts of electric current that can be sent whenever needed. Just like our own brains, the chip’s neurons communicate by processing incoming flows of electricity – each neuron able to determine from the incoming spike whether to send current out to the next neuron.
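The standard toy model of this spiking behaviour is the leaky integrate-and-fire neuron, sketched below (the threshold and leak values are illustrative, not those of any particular chip):

```python
# Leaky integrate-and-fire: incoming current accumulates on a leaky
# membrane; when it crosses a threshold the neuron fires a spike and
# resets. Nothing happens on a clock tick, only when charge suffices.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return the spike train produced by a stream of input currents."""
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current  # leaky integration of incoming charge
        if v >= threshold:      # threshold crossed: fire
            spikes.append(1)
            v = 0.0             # reset after the spike
        else:
            spikes.append(0)
    return spikes

train = simulate_lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9])
```

Note that a neuron receiving nothing costs nothing, which is the root of the power advantage discussed next: silence is free, unlike a clocked circuit that burns energy every cycle.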
What makes this a big deal is that these chips require far less power to process AI algorithms. For example, one neuromorphic chip made by IBM contains five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of power. An Intel processor would use anywhere from 35 to 140 watts, or up to 2000 times more power.
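The arithmetic behind that comparison, using the article’s round numbers:

```python
# Power ratio between a conventional CPU and the 70 mW neuromorphic
# chip, at the low and high ends of the quoted CPU power envelope.

neuromorphic_w = 0.070           # 70 milliwatts
cpu_low_w, cpu_high_w = 35, 140  # typical CPU power draw, in watts

low_ratio = cpu_low_w / neuromorphic_w    # about 500x
high_ratio = cpu_high_w / neuromorphic_w  # about 2000x
```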
Eliasmith points out that neuromorphics aren’t new and that their designs have been around since the 80s. Back then, however, the designs required specific algorithms be baked directly into the chip. That meant you’d need one chip for detecting motion, and a different one for detecting sound. None of the chips acted as a general processor in the way that our own cortex does.
That was partly because there wasn’t any way for programmers to design algorithms that could do much with a general-purpose chip. So even as these brain-like chips were being developed, building algorithms for them remained a challenge.
Eliasmith and his team are keenly focused on building tools that would allow a community of programmers to deploy AI algorithms on these new cortical chips.
Central to these efforts is Nengo, a compiler that developers can use to build their own algorithms for AI applications that will operate on general-purpose neuromorphic hardware. A compiler is a software tool that translates the code programmers write into the complex instructions that get hardware to actually do something. What makes Nengo useful is its use of the familiar Python programming language – known for its intuitive syntax – and its ability to put the algorithms on many different hardware platforms, including neuromorphic chips. Pretty soon, anyone with an understanding of Python could be building sophisticated neural nets made for neuromorphic hardware.
“Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo,” Peter Suma, a trained computer scientist and the other CEO of Applied Brain Research, tells me.
Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise as the most complex brain model ever simulated on a computer. Spaun demonstrated that computers could be made to interact fluidly with the environment and perform human-like cognitive tasks like recognizing images and controlling a robot arm that writes down what it sees. The machine wasn’t perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9,000 times faster, using less energy than it would on conventional CPUs – and by the end of 2017, all of Spaun will be running on neuromorphic hardware.
Eliasmith won NSERC’s John C. Polanyi Award for that project – Canada’s highest recognition for a breakthrough scientific achievement – and once Suma came across the research, the pair joined forces to commercialize these tools.
“While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context aware AIs,” says Suma. Suma points out that while today’s AIs like Siri remain offline until explicitly called into action, we’ll soon have artificial agents that are ‘always on’ and ever-present in our lives.
“Imagine a Siri that listens to and sees all of your conversations and interactions. You’ll be able to ask it things like ‘Who did I have that conversation with about doing the launch for our new product in Tokyo?’ or ‘What was that idea for my wife’s birthday gift that Melissa suggested?’” he says.
When I raised concerns that some company might then have an uninterrupted window into even the most intimate parts of my life, I was reminded that because the AI would be processed locally on the device, there’s no need for that information to touch a server owned by a big company. And for Eliasmith, this ‘always on’ component is a necessary step towards true machine cognition. “The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to, is the fact that the latter always operate in real-time. Bodies and brains are built to work with the physics of the world,” he says.
Already, major efforts across the IT industry are heating up to get their AI services into the hands of users. Companies like Apple, Facebook, Amazon, and even Samsung, are developing conversational assistants they hope will one day become digital helpers.
This could be where consciousness forms. For the first time, scientists have detected a giant neuron wrapped around the entire circumference of a mouse’s brain, and it’s so densely connected across both hemispheres, it could finally explain the origins of consciousness.
Using a new imaging technique, the team detected the giant neuron emanating from one of the best-connected regions in the brain, and say it could be coordinating signals from different areas to create conscious thought.
This recently discovered neuron is one of three that have been detected for the first time in a mammal’s brain, and the new imaging technique could help us figure out if similar structures have gone undetected in our own brains for centuries.
Lead researcher Christof Koch told Sara Reardon at Nature that they’ve never seen neurons extend so far across both hemispheres of the brain before.
Oddly enough, all three giant neurons happen to emanate from a part of the brain that’s shown intriguing connections to human consciousness in the past – the claustrum, a thin sheet of grey matter that could be the most connected structure in the entire brain, based on volume.
This relatively small region is hidden between the inner surface of the neocortex in the centre of the brain, and communicates with almost all regions of the cortex to achieve many higher cognitive functions, such as long-term planning and advanced sensory processing.
“Advanced brain-imaging techniques that look at the white matter fibres coursing to and from the claustrum reveal that it is a neural Grand Central Station,” Koch wrote for Scientific American back in 2014. “Almost every region of the cortex sends fibres to the claustrum.”
The claustrum is so densely connected to several crucial areas in the brain that Francis Crick of DNA double helix fame referred to it as a “conductor of consciousness” in a 2005 paper co-written with Koch.
They suggested that it connects all of our external and internal perceptions together into a single unifying experience, like a conductor synchronises an orchestra, and strange medical cases in the past few years have only made their case stronger.
Back in 2014, a 54-year-old woman checked into the George Washington University Medical Faculty Associates in Washington, DC, for epilepsy treatment.
This involved gently probing various regions of her brain with electrodes to narrow down the potential source of her epileptic seizures, but when the team started stimulating the woman’s claustrum, they found they could effectively ‘switch’ her consciousness off and on again.
Helen Thomson reported for New Scientist at the time:
“When the team zapped the area with high frequency electrical impulses, the woman lost consciousness. She stopped reading and stared blankly into space, she didn’t respond to auditory or visual commands and her breathing slowed.
As soon as the stimulation stopped, she immediately regained consciousness with no memory of the event. The same thing happened every time the area was stimulated during two days of experiments.”
According to Koch, who was not involved in the study, this kind of abrupt and specific ‘stopping and starting’ of consciousness had never been seen before.
Another experiment in 2015 examined the effects of claustrum lesions on the consciousness of 171 combat veterans with traumatic brain injuries.
They found that claustrum damage was associated with the duration, but not frequency, of loss of consciousness, suggesting that it could play an important role in the switching on and off of conscious thought, but another region could be involved in maintaining it.
And now Koch and his team have discovered extensive neurons in mouse brains emanating from this mysterious region.
In order to map neurons, researchers usually have to inject individual nerve cells with a dye, cut the brain into thin sections, and then trace the neuron’s path by hand.
It’s a surprisingly rudimentary technique for a neuroscientist to have to perform, and given that they have to destroy the brain in the process, it’s not one that can be done regularly on human organs.
Koch and his team wanted to come up with a technique that was less invasive, and engineered mice that could have specific genes in their claustrum neurons activated by a specific drug.
“When the researchers fed the mice a small amount of the drug, only a handful of neurons received enough of it to switch on these genes,” Reardon reports for Nature.
“That resulted in production of a green fluorescent protein that spread throughout the entire neuron. The team then took 10,000 cross-sectional images of the mouse brain, and used a computer program to create a 3D reconstruction of just three glowing cells.”
We should keep in mind that just because these new giant neurons are connected to the claustrum doesn’t mean that Koch’s hypothesis about consciousness is correct – we’re a long way from proving that yet.
It’s also important to note that these neurons have only been detected in mice so far, and the research has yet to be published in a peer-reviewed journal, so we need to wait for further confirmation before we can really delve into what this discovery could mean for humans.
But the discovery is an intriguing piece of the puzzle that could help us make sense of this crucial but enigmatic region of the brain, and how it could relate to the human experience of conscious thought.
Guessing the location of a randomly chosen Street View image is hard, even for well-traveled humans. But Google’s latest artificial-intelligence machine manages it with relative ease.

Here’s a tricky task. Pick a photograph from the Web at random. Now try to work out where it was taken using only the image itself. If the image shows a famous building or landmark, such as the Eiffel Tower or Niagara Falls, the task is straightforward. But the job becomes significantly harder when the image lacks specific location cues, or is taken indoors, or shows a pet or food or some other detail.

Nevertheless, humans are surprisingly good at this task. To help, they bring to bear all kinds of knowledge about the world, such as the type and language of signs on display, the types of vegetation, architectural styles, the direction of traffic, and so on. Humans spend a lifetime picking up these kinds of geolocation cues.

So it’s easy to think that machines would struggle with this task. And indeed, they have.
Today, that changes thanks to the work of Tobias Weyand, a computer vision specialist at Google, and a couple of pals. These guys have trained a deep-learning machine to work out the location of almost any photo using only the pixels it contains.
Their new machine significantly outperforms humans and can even use a clever trick to determine the location of indoor images and pictures of specific things such as pets, food, and so on that have no location cues.
Their approach is straightforward, at least in the world of machine learning.
Weyand and co begin by dividing the world into a grid consisting of over 26,000 squares, whose size varies with the number of images taken in each location.
So big cities, which are the subjects of many images, have a more fine-grained grid structure than more remote regions where photographs are less common. Indeed, the Google team ignored areas like oceans and the polar regions, where few photographs have been taken.
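That adaptive partitioning can be sketched as a recursive subdivision: split any cell holding too many photos into quadrants, and drop cells that hold none. This toy version uses plain latitude/longitude boxes and made-up thresholds; the partitioning scheme in the real system differs in detail:

```python
def build_cells(photos, box, max_photos=2, min_size=0.5):
    """Recursively subdivide `box` (lat0, lon0, lat1, lon1) until each
    surviving cell holds at most `max_photos` of the (lat, lon) points,
    or is smaller than `min_size` degrees on a side."""
    lat0, lon0, lat1, lon1 = box
    inside = [(a, b) for a, b in photos
              if lat0 <= a < lat1 and lon0 <= b < lon1]
    if not inside:
        return []  # discard empty cells (oceans, polar regions)
    if len(inside) <= max_photos or (lat1 - lat0) <= min_size:
        return [box]  # sparse enough, or as fine as we allow
    mid_lat, mid_lon = (lat0 + lat1) / 2, (lon0 + lon1) / 2
    cells = []
    for quad in [(lat0, lon0, mid_lat, mid_lon),
                 (lat0, mid_lon, mid_lat, lon1),
                 (mid_lat, lon0, lat1, mid_lon),
                 (mid_lat, mid_lon, lat1, lon1)]:
        cells.extend(build_cells(inside, quad, max_photos, min_size))
    return cells

# Three photos clustered in one corner, one far away: the cluster gets
# a tiny cell, the lone photo a coarse one, the rest of the map nothing.
cells = build_cells([(0.1, 0.1), (0.2, 0.2), (0.3, 0.1), (7.0, 7.0)],
                    box=(0.0, 0.0, 8.0, 8.0))
```

Geolocation then becomes ordinary classification: the network’s output layer has one unit per surviving cell.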
Next, the team created a database of geolocated images from the Web and used the location data to determine the grid square in which each image was taken. This data set is huge, consisting of 126 million images along with their accompanying Exif location data.
Weyand and co used 91 million of these images to teach a powerful neural network to work out the grid location using only the image itself. Their idea is to input an image into this neural net and get as the output a particular grid location or a set of likely candidates.
They then validated the neural network using the remaining 34 million images in the data set.
Finally they tested the network—which they call PlaNet—in a number of different ways to see how well it works.
The results make for interesting reading. To measure the accuracy of their machine, they fed it 2.3 million geotagged images from Flickr to see whether it could correctly determine their location. “PlaNet is able to localize 3.6 percent of the images at street-level accuracy and 10.1 percent at city-level accuracy,” say Weyand and co. What’s more, the machine determines the country of origin in a further 28.4 percent of the photos and the continent in 48.0 percent of them.
That’s pretty good. But to show just how good, Weyand and co put PlaNet through its paces in a test against 10 well-traveled humans. For the test, they used an online game that presents a player with a random view taken from Google Street View and asks him or her to pinpoint its location on a map of the world.
Anyone can play at www.geoguessr.com. Give it a try—it’s a lot of fun and more tricky than it sounds.
GeoGuesser Screen Capture Example
Needless to say, PlaNet trounced the humans. “In total, PlaNet won 28 of the 50 rounds with a median localization error of 1131.7 km, while the median human localization error was 2320.75 km,” say Weyand and co. “[This] small-scale experiment shows that PlaNet reaches superhuman performance at the task of geolocating Street View scenes.”
An interesting question is how PlaNet performs so well without being able to use the cues that humans rely on, such as vegetation, architectural style, and so on. But Weyand and co say they know why: “We think PlaNet has an advantage over humans because it has seen many more places than any human can ever visit and has learned subtle cues of different scenes that are even hard for a well-traveled human to distinguish.”
They go further and use the machine to locate images that do not have location cues, such as those taken indoors or of specific items. This is possible when images are part of albums that have all been taken at the same place. The machine simply looks through other images in the album to work out where they were taken and assumes the more specific image was taken in the same place.
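The album trick amounts to pooling per-photo predictions. A sketch, with invented cells and probabilities (the real model scores thousands of grid cells per image):

```python
import math

def album_location(per_photo_probs):
    """per_photo_probs: one {cell: probability} dict per photo in the
    album. Returns the cell maximising the summed log-probability,
    i.e. the jointly most likely location for the whole album."""
    scores = {}
    for probs in per_photo_probs:
        for cell, p in probs.items():
            scores[cell] = scores.get(cell, 0.0) + math.log(p)
    return max(scores, key=scores.get)

album = [
    {"paris": 0.7, "lyon": 0.3},  # street scene, strong cues
    {"paris": 0.6, "lyon": 0.4},  # cafe interior, weak cues
    {"paris": 0.5, "lyon": 0.5},  # close-up of food, no cues at all
]
best = album_location(album)  # "paris"
```

The cue-free food photo contributes nothing either way, so the confident street scene decides the album’s location, and the indoor shots inherit it.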
That’s impressive work that shows deep neural nets flexing their muscles once again. Perhaps more impressive still is that the model uses a relatively small amount of memory unlike other approaches that use gigabytes of the stuff. “Our model uses only 377 MB, which even fits into the memory of a smartphone,” say Weyand and co.
That’s a tantalizing idea—the power of a superhuman neural network on a smartphone. It surely won’t be long now!
New software does in seconds what took staff 360,000 hours
Bank seeking to streamline systems, avoid redundancies
At JPMorgan Chase & Co., a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours.
The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of work each year by lawyers and loan officers. The software reviews documents in seconds, is less error-prone and never asks for vacation.
Attendees discuss software on Feb. 27, the eve of JPMorgan’s Investor Day.
Photographer: Kholood Eid/Bloomberg
While the financial industry has long touted its technological innovations, a new era of automation is now in overdrive as cheap computing power converges with fears of losing customers to startups. Made possible by investments in machine learning and a new private cloud network, COIN is just the start for the biggest U.S. bank. The firm recently set up technology hubs for teams specializing in big data, robotics and cloud infrastructure to find new sources of revenue, while reducing expenses and risks.
The push to automate mundane tasks and create new tools for bankers and clients — a growing part of the firm’s $9.6 billion technology budget — is a core theme as the company hosts its annual investor day on Tuesday.
Behind the strategy, overseen by Chief Operating Officer Matt Zames and Chief Information Officer Dana Deasy, is an undercurrent of anxiety: Though JPMorgan emerged from the financial crisis as one of few big winners, its dominance is at risk unless it aggressively pursues new technologies, according to interviews with a half-dozen bank executives.
That was the message Zames had for Deasy when he joined the firm from BP Plc in late 2013. The New York-based bank’s internal systems, an amalgam from decades of mergers, had too many redundant software programs that didn’t work together seamlessly.

“Matt said, ‘Remember one thing above all else: We absolutely need to be the leaders in technology across financial services,’” Deasy said last week in an interview. “Everything we’ve done from that day forward stems from that meeting.”
After visiting companies including Apple Inc. and Facebook Inc. three years ago to understand how their developers worked, the bank set out to create its own computing cloud called Gaia that went online last year. Machine learning and big-data efforts now reside on the private platform, which effectively has limitless capacity to support their thirst for processing power. The system already is helping the bank automate some coding activities and making its 20,000 developers more productive, saving money, Zames said. When needed, the firm can also tap into outside cloud services from Amazon.com Inc., Microsoft Corp. and International Business Machines Corp.
Tech Spending

JPMorgan will make some of its cloud-backed technology available to institutional clients later this year, allowing firms like BlackRock Inc. to access balances, research and trading tools. The move, which lets clients bypass salespeople and support staff for routine information, is similar to one Goldman Sachs Group Inc. announced in 2015.

JPMorgan’s total technology budget for this year amounts to 9 percent of its projected revenue — double the industry average, according to Morgan Stanley analyst Betsy Graseck. The dollar figure has inched higher as JPMorgan bolsters cyber defenses after a 2014 data breach, which exposed the information of 83 million customers.
“We have invested heavily in technology and marketing — and we are seeing strong returns,” JPMorgan said in a presentation Tuesday ahead of its investor day, noting that technology spending in its consumer bank totaled about $1 billion over the past two years.
One-third of the company’s budget is for new initiatives, a figure Zames wants to take to 40 percent in a few years. He expects savings from automation and retiring old technology will let him plow even more money into new innovations.
Not all of those bets, which include several projects based on a distributed ledger, like blockchain, will pay off, which JPMorgan says is OK. One example executives are fond of mentioning: The firm built an electronic platform to help trade credit-default swaps that sits unused.
‘Can’t Wait’
“We’re willing to invest to stay ahead of the curve, even if in the final analysis some of that money will go to product or a service that wasn’t needed,” Marianne Lake, the lender’s finance chief, told a conference audience in June. That’s “because we can’t wait to know what the outcome, the endgame, really looks like, because the environment is moving so fast.”
As for COIN, the program has helped JPMorgan cut down on loan-servicing mistakes, most of which stemmed from human error in interpreting 12,000 new wholesale contracts per year, according to its designers.
JPMorgan is scouring for more ways to deploy the technology, which learns by ingesting data to identify patterns and relationships. The bank plans to use it for other types of complex legal filings like credit-default swaps and custody agreements. Someday, the firm may use it to help interpret regulations and analyze corporate communications.
Another program called X-Connect, which went into use in January, examines e-mails to help employees find colleagues who have the closest relationships with potential prospects and can arrange introductions.
Creating Bots
For simpler tasks, the bank has created bots to perform functions like granting access to software systems and responding to IT requests, such as resetting an employee’s password, Zames said. Bots are expected to handle 1.7 million access requests this year, doing the work of 140 people.
While growing numbers of people in the industry worry such advancements might someday take their jobs, many Wall Street personnel are more focused on the benefits. A survey of more than 3,200 financial professionals by recruiting firm Options Group last year found a majority expect new technology to improve their careers, for example by boosting workplace performance.
“Anything where you have back-office operations and humans kind of moving information from point A to point B that’s not automated is ripe for that,” Deasy said. “People always talk about this stuff as displacement. I talk about it as freeing people to work on higher-value things, which is why it’s such a terrific opportunity for the firm.”
To help spur internal disruption, the company keeps tabs on 2,000 technology ventures, using about 100 in pilot programs that will eventually join the firm’s growing ecosystem of partners. For instance, the bank’s machine-learning software was built with Cloudera Inc., a software firm that JPMorgan first encountered in 2009.
“We’re starting to see the real fruits of our labor,” Zames said. “This is not pie-in-the-sky stuff.”
A new brain mechanism hiding in plain sight.
Researchers have discovered a brand new mechanism that controls the way nerve cells in our brain communicate with each other to regulate learning and long-term memory.
The fact that a new brain mechanism has been hiding in plain sight is a reminder of how much we have yet to learn about how the human brain works, and what goes wrong in neurodegenerative disorders such as Alzheimer’s and epilepsy.
“These discoveries represent a significant advance and will have far-reaching implications for the understanding of
“We believe that this is a groundbreaking study that opens new lines of inquiry which will increase understanding of the molecular details of synaptic function in health and disease.”
The human brain contains around 100 billion nerve cells, and each of those makes about 10,000 connections – known as synapses – with other cells.
That’s a whole lot of connections, and each of them is strengthened or weakened depending on different brain mechanisms that scientists have spent decades trying to understand.
Until now, one of the best known mechanisms to increase the strength of information flow across synapses was known as LTP, or long-term potentiation.
LTP intensifies the connection between cells to make information transfer more efficient, and it plays a role in a wide range of neurodegenerative conditions –
too much LTP, and you risk disorders such as epilepsy,
too little, and it could cause dementia or Alzheimer’s disease.
Until now, researchers believed that LTP was controlled by the activation of special proteins called NMDA receptors.
But now the UK team has discovered a brand new type of LTP that’s regulated in an entirely different way.
After investigating the formation of synapses in the lab, the team showed that this new LTP mechanism is controlled by molecules known as kainate receptors, instead of NMDA receptors.
“These data reveal a new and, to our knowledge, previously unsuspected role for postsynaptic kainate receptors in the induction of functional and structural plasticity in the hippocampus,” the researchers write in Nature Neuroscience.
This means we’ve now uncovered a previously unexplored mechanism that could control learning and memory.
“Untangling the interactions between the signal receptors in the brain not only tells us more about the inner workings of a healthy brain, but also provides a practical insight into what happens when we form new memories,” said one of the researchers, Milos Petrovic from the University of Central Lancashire.
“If we can preserve these signals it may help protect against brain diseases.”
Not only does this open up a new research pathway that could lead to a better understanding of how our brains work, but if researchers can find a way to target these new pathways, it could lead to more effective treatments for a range of neurodegenerative disorders.
It’s still early days, and the discovery will now need to be verified by independent researchers, but it’s a promising new field of research.
“This is certainly an extremely exciting discovery and something that could potentially impact the global population,” said Petrovic.
In a new automotive application, we have used convolutional neural networks (CNNs) to map the raw pixels from a front-facing camera to the steering commands for a self-driving car. This powerful end-to-end approach means that with minimum training data from humans, the system learns to steer, with or without lane markings, on both local roads and highways. The system can also operate in areas with unclear visual guidance such as parking lots or unpaved roads.
Figure 1: NVIDIA’s self-driving car in action.
We designed the end-to-end learning system using an NVIDIA DevBox running Torch 7 for training. An NVIDIA DRIVE™ PX self-driving car computer, also running Torch 7, was used to determine where to drive—while operating at 30 frames per second (FPS). The system is trained to automatically learn the internal representations of necessary processing steps, such as detecting useful road features, with only the human steering angle as the training signal. We never explicitly trained it to detect, for example, the outline of roads. In contrast to methods using explicit decomposition of the problem, such as lane marking detection, path planning, and control, our end-to-end system optimizes all processing steps simultaneously.
We believe that end-to-end learning leads to better performance and smaller systems. Better performance results because the internal components self-optimize to maximize overall system performance, instead of optimizing human-selected intermediate criteria, e.g., lane detection. Such criteria understandably are selected for ease of human interpretation, which doesn’t automatically guarantee maximum system performance. Smaller networks are possible because the system learns to solve the problem with the minimal number of processing steps.
Convolutional Neural Networks to Process Visual Data
CNNs have revolutionized the computational pattern recognition process. Prior to the widespread adoption of CNNs, most pattern recognition tasks were performed using an initial stage of hand-crafted feature extraction followed by a classifier. The important breakthrough of CNNs is that features are now learned automatically from training examples. The CNN approach is especially powerful when applied to image recognition tasks because the convolution operation captures the 2D nature of images. By using the convolution kernels to scan an entire image, relatively few parameters need to be learned compared to the total number of operations.
While CNNs with learned features have been used commercially for over twenty years, their adoption has exploded in recent years because of two important developments.
First, large, labeled data sets such as the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) are now widely available for training and validation.
Second, CNN learning algorithms are now implemented on massively parallel graphics processing units (GPUs), tremendously accelerating learning and inference ability.
The CNNs that we describe here go beyond basic pattern recognition. We developed a system that learns the entire processing pipeline needed to steer an automobile. The groundwork for this project was actually done over 10 years ago in a Defense Advanced Research Projects Agency (DARPA) seedling project known as DARPA Autonomous Vehicle (DAVE), in which a sub-scale radio control (RC) car drove through a junk-filled alleyway. DAVE was trained on hours of human driving in similar, but not identical, environments. The training data included video from two cameras and the steering commands sent by a human operator.
In many ways, DAVE was inspired by the pioneering work of Pomerleau, who in 1989 built the Autonomous Land Vehicle in a Neural Network (ALVINN) system. ALVINN is a precursor to DAVE, and it provided the initial proof of concept that an end-to-end trained neural network might one day be capable of steering a car on public roads. DAVE demonstrated the potential of end-to-end learning, and indeed was used to justify starting the DARPA Learning Applied to Ground Robots (LAGR) program, but DAVE’s performance was not sufficiently reliable to provide a full alternative to the more modular approaches to off-road driving. (DAVE’s mean distance between crashes was about 20 meters in complex environments.)
About a year ago we started a new effort to improve on the original DAVE, and create a robust system for driving on public roads. The primary motivation for this work is to avoid the need to recognize specific human-designated features, such as lane markings, guard rails, or other cars, and to avoid having to create a collection of “if, then, else” rules based on observation of these features. We are excited to share the preliminary results of this new effort, which is aptly named DAVE-2.
The DAVE-2 System
Figure 2: High-level view of the data collection system.
Figure 2 shows a simplified block diagram of the collection system for training data of DAVE-2. Three cameras are mounted behind the windshield of the data-acquisition car, and timestamped video from the cameras is captured simultaneously with the steering angle applied by the human driver. The steering command is obtained by tapping into the vehicle’s Controller Area Network (CAN) bus. In order to make our system independent of the car geometry, we represent the steering command as 1/r, where r is the turning radius in meters. We use 1/r instead of r to prevent a singularity when driving straight (the turning radius for driving straight is infinity). 1/r smoothly transitions through zero from left turns (negative values) to right turns (positive values).
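The 1/r representation described above is easy to sketch in code (a hypothetical helper for illustration, not NVIDIA’s actual implementation), using the article’s sign convention of negative values for left turns and positive for right turns:

```python
import math

def steering_command(turning_radius_m: float) -> float:
    """Return the inverse turning radius 1/r used as the steering label.

    Driving straight (infinite radius) maps smoothly to 0.0, avoiding the
    singularity; negative values are left turns, positive are right turns.
    """
    if math.isinf(turning_radius_m):
        return 0.0
    return 1.0 / turning_radius_m
```

For example, a tight 5-meter right turn yields a command of 0.2, while driving straight yields exactly 0.0 rather than an undefined infinite radius.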
Training data contains single images sampled from the video, paired with the corresponding steering command (1/r). Training with data from only the human driver is not sufficient; the network must also learn how to recover from any mistakes, or the car will slowly drift off the road. The training data is therefore augmented with additional images that show the car in different shifts from the center of the lane and rotations from the direction of the road.
The images for two specific off-center shifts can be obtained from the left and the right cameras. Additional shifts between the cameras and all rotations are simulated through viewpoint transformation of the image from the nearest camera. Precise viewpoint transformation requires 3D scene knowledge, which we don’t have, so we approximate the transformation by assuming all points below the horizon are on flat ground and all points above the horizon are infinitely far away. This works fine for flat terrain, but it introduces distortions for objects that stick above the ground, such as cars, poles, trees, and buildings. Fortunately these distortions don’t pose a significant problem for network training. The steering label for the transformed images is adjusted to one that would correctly steer the vehicle back to the desired location and orientation within two seconds.
Figure 3: Training the neural network.
Figure 3 shows a block diagram of our training system. Images are fed into a CNN that then computes a proposed steering command. The proposed command is compared to the desired command for that image, and the weights of the CNN are adjusted to bring the CNN output closer to the desired output. The weight adjustment is accomplished using back propagation as implemented in the Torch 7 machine learning package.
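The weight-adjustment loop above can be illustrated with a toy sketch. The real system uses back propagation in Torch 7 on a deep CNN; here a single-weight linear model stands in for the network, and the made-up data simply shows “adjust weights to bring the output closer to the desired command” via gradient descent on the squared error:

```python
# Toy training loop: (feature, desired 1/r command) pairs, invented for
# illustration; the targets happen to equal 0.2 * feature.
data = [(0.5, 0.1), (1.0, 0.2), (-0.5, -0.1)]

w = 0.0      # the single "network weight"
lr = 0.1     # learning rate
for _ in range(200):
    for x, target in data:
        pred = w * x
        grad = 2.0 * (pred - target) * x   # d/dw of (pred - target)^2
        w -= lr * grad                     # gradient-descent step

print(round(w, 3))  # converges near 0.2, recovering the underlying mapping
```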
Once trained, the network is able to generate steering commands from the video images of a single center camera. Figure 4 shows this configuration.
Figure 4: The trained network is used to generate steering commands from a single front-facing center camera.
Training data was collected by driving on a wide variety of roads and in a diverse set of lighting and weather conditions. We gathered surface street data in central New Jersey and highway data from Illinois, Michigan, Pennsylvania, and New York. Other road types include two-lane roads (with and without lane markings), residential roads with parked cars, tunnels, and unpaved roads. Data was collected in clear, cloudy, foggy, snowy, and rainy weather, both day and night. In some instances, the sun was low in the sky, resulting in glare reflecting from the road surface and scattering from the windshield.
The data was acquired using either our drive-by-wire test vehicle, which is a 2016 Lincoln MKZ, or using a 2013 Ford Focus with cameras placed in similar positions to those in the Lincoln. Our system has no dependencies on any particular vehicle make or model. Drivers were encouraged to maintain full attentiveness, but otherwise drive as they usually do. As of March 28, 2016, about 72 hours of driving data was collected.
Figure 5: CNN architecture. The network has about 27 million connections and 250 thousand parameters.
We train the weights of our network to minimize the mean-squared error between the steering command output by the network, and either the command of the human driver or the adjusted steering command for off-center and rotated images (see “Augmentation”, later). Figure 5 shows the network architecture, which consists of 9 layers, including a normalization layer, 5 convolutional layers, and 3 fully connected layers. The input image is split into YUV planes and passed to the network.
The first layer of the network performs image normalization. The normalizer is hard-coded and is not adjusted in the learning process. Performing normalization in the network allows the normalization scheme to be altered with the network architecture, and to be accelerated via GPU processing.
The convolutional layers are designed to perform feature extraction, and are chosen empirically through a series of experiments that vary layer configurations. We then use strided convolutions in the first three convolutional layers with a 2×2 stride and a 5×5 kernel, and a non-strided convolution with a 3×3 kernel size in the final two convolutional layers.
We follow the five convolutional layers with three fully connected layers, leading to a final output control value which is the inverse-turning-radius. The fully connected layers are designed to function as a controller for steering, but we noted that by training the system end-to-end, it is not possible to make a clean break between which parts of the network function primarily as feature extractor, and which serve as controller.
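A quick back-of-the-envelope check of the architecture just described can be done in a few lines. The per-layer filter counts (24/36/48/64/64), the 66×200 YUV input crop, and the 100/50/10 fully connected sizes are taken from NVIDIA’s published DAVE-2 description and are assumptions beyond the text above; the normalization layer contributes no learned weights:

```python
def conv_out(size: int, kernel: int, stride: int) -> int:
    # Output size of a valid (no-padding) convolution.
    return (size - kernel) // stride + 1

h, w, depth = 66, 200, 3   # assumed YUV input crop
params = 0
layers = [
    (24, 5, 2), (36, 5, 2), (48, 5, 2),  # strided 5x5 convolutions
    (64, 3, 1), (64, 3, 1),              # non-strided 3x3 convolutions
]
for filters, k, s in layers:
    params += filters * (depth * k * k + 1)   # weights + biases
    h, w, depth = conv_out(h, k, s), conv_out(w, k, s), filters

flat = h * w * depth                          # 64 maps of 1x18 = 1152 values
for n_in, n_out in [(flat, 100), (100, 50), (50, 10), (10, 1)]:
    params += n_in * n_out + n_out            # fully connected layers

print(params)  # 252219 with these assumptions, i.e. roughly the
               # 250 thousand parameters quoted in Figure 5's caption
```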
The first step to training a neural network is selecting the frames to use. Our collected data is labeled with road type, weather condition, and the driver’s activity (staying in a lane, switching lanes, turning, and so forth). To train a CNN to do lane following, we simply select data where the driver is staying in a lane, and discard the rest. We then sample that video at 10 FPS because a higher sampling rate would include images that are highly similar, and thus not provide much additional useful information. To remove a bias towards driving straight, the training data includes a higher proportion of frames that represent road curves.
After selecting the final set of frames, we augment the data by adding artificial shifts and rotations to teach the network how to recover from a poor position or orientation. The magnitude of these perturbations is chosen randomly from a normal distribution. The distribution has zero mean, and the standard deviation is twice the standard deviation that we measured with human drivers. Artificially augmenting the data does add undesirable artifacts as the magnitude increases (as mentioned previously).
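The sampling rule above can be sketched directly (the human-driver standard deviation below is a made-up placeholder, since the article does not give the measured value):

```python
import random

# Hypothetical measured standard deviation of human lateral offset, in
# meters; the article only says the augmentation std is twice this value.
human_shift_std_m = 0.1
aug_shift_std_m = 2.0 * human_shift_std_m

def sample_shift() -> float:
    """Lateral-shift perturbation (meters) applied to one training image,
    drawn from a zero-mean normal distribution as described above."""
    return random.gauss(0.0, aug_shift_std_m)

shifts = [sample_shift() for _ in range(10_000)]
mean = sum(shifts) / len(shifts)   # close to zero, since the mean is zero
```

The same recipe would apply to the rotation perturbations, with a rotation-specific standard deviation.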
Before road-testing a trained CNN, we first evaluate the network’s performance in simulation. Figure 6 shows a simplified block diagram of the simulation system, and Figure 7 shows a screenshot of the simulator in interactive mode.
Figure 6: Block-diagram of the drive simulator.
The simulator takes prerecorded videos from a forward-facing on-board camera connected to a human-driven data-collection vehicle, and generates images that approximate what would appear if the CNN were instead steering the vehicle. These test videos are time-synchronized with the recorded steering commands generated by the human driver.
Since human drivers don’t drive in the center of the lane all the time, we must manually calibrate the lane’s center as it is associated with each frame in the video used by the simulator. We call this position the “ground truth”.
The simulator transforms the original images to account for departures from the ground truth. Note that this transformation also includes any discrepancy between the human driven path and the ground truth. The transformation is accomplished by the same methods as described previously.
The simulator accesses the recorded test video along with the synchronized steering commands that occurred when the video was captured. The simulator sends the first frame of the chosen test video, adjusted for any departures from the ground truth, to the input of the trained CNN, which then returns a steering command for that frame. The CNN steering commands as well as the recorded human-driver commands are fed into the dynamic model of the vehicle to update the position and orientation of the simulated vehicle.
Figure 7: Screenshot of the simulator in interactive mode. See text for explanation of the performance metrics. The green area on the left is unknown because of the viewpoint transformation. The highlighted wide rectangle below the horizon is the area which is sent to the CNN.
The simulator then modifies the next frame in the test video so that the image appears as if the vehicle were at the position that resulted by following steering commands from the CNN. This new image is then fed to the CNN and the process repeats.
The simulator records the off-center distance (distance from the car to the lane center), the yaw, and the distance traveled by the virtual car. When the off-center distance exceeds one meter, a virtual human intervention is triggered, and the virtual vehicle position and orientation is reset to match the ground truth of the corresponding frame of the original test video.
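The intervention-and-reset logic above can be captured in a small schematic loop. This is a deliberately toy sketch, not NVIDIA’s simulator: a single callback stands in for the CNN plus vehicle dynamics, returning the change in lane offset per frame:

```python
from dataclasses import dataclass

INTERVENTION_THRESHOLD_M = 1.0   # one meter off-center triggers a reset

@dataclass
class Pose:
    off_center_m: float   # signed distance from the lane center

def run_simulation(num_frames, predict_offset_change) -> int:
    """Count virtual human interventions over a test run.

    predict_offset_change(i) is a placeholder for the CNN's steering output
    passed through the vehicle model: the resulting per-frame drift.
    """
    pose = Pose(0.0)
    interventions = 0
    for i in range(num_frames):
        pose.off_center_m += predict_offset_change(i)
        if abs(pose.off_center_m) > INTERVENTION_THRESHOLD_M:
            interventions += 1
            pose.off_center_m = 0.0   # reset to the frame's ground truth
    return interventions

# A model that drifts 0.3 m per frame crosses the 1 m threshold every
# fourth frame, so 12 frames produce 3 interventions.
print(run_simulation(12, lambda i: 0.3))
```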
We evaluate our networks in two steps: first in simulation, and then in on-road tests.
In simulation we have the networks provide steering commands in our simulator to an ensemble of prerecorded test routes that correspond to about a total of three hours and 100 miles of driving in Monmouth County, NJ. The test data was taken in diverse lighting and weather conditions and includes highways, local roads, and residential streets.
We estimate what percentage of the time the network could drive the car (autonomy) by counting the simulated human interventions that occur when the simulated vehicle departs from the center line by more than one meter. We assume that in real life an actual intervention would require a total of six seconds: this is the time required for a human to retake control of the vehicle, re-center it, and then restart the self-steering mode. We calculate the percentage autonomy by counting the number of interventions, multiplying by 6 seconds, dividing by the elapsed time of the simulated test, and then subtracting the result from 1:

autonomy = (1 − (number of interventions × 6 seconds) / elapsed time in seconds) × 100%

Thus, if we had 10 interventions in 600 seconds, we would have an autonomy value of (1 − (10 × 6) / 600) × 100% = 90%.
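The metric is simple enough to express as a one-line function (a direct transcription of the definition in the text, not code from the project):

```python
def autonomy(num_interventions: int, elapsed_s: float,
             penalty_s: float = 6.0) -> float:
    """Percentage of time the network drives the car: each intervention is
    charged a fixed 6-second penalty against the elapsed test time."""
    return (1.0 - num_interventions * penalty_s / elapsed_s) * 100.0

print(autonomy(10, 600))   # the worked example: 90 percent autonomy
```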
After a trained network has demonstrated good performance in the simulator, the network is loaded on the DRIVE PX in our test car and taken out for a road test. For these tests we measure performance as the fraction of time during which the car performs autonomous steering. This time excludes lane changes and turns from one road to another. For a typical drive in Monmouth County NJ from our office in Holmdel to Atlantic Highlands, we are autonomous approximately 98% of the time. We also drove 10 miles on the Garden State Parkway (a multi-lane divided highway with on and off ramps) with zero intercepts.
Here is a video of our test car driving in diverse conditions.
Visualization of Internal CNN State
Figure 8: How the CNN “sees” an unpaved road. Top: subset of the camera image sent to the CNN. Bottom left: Activation of the first layer feature maps. Bottom right: Activation of the second layer feature maps. This demonstrates that the CNN learned to detect useful road features on its own, i.e., with only the human steering angle as training signal. We never explicitly trained it to detect the outlines of roads.
Figures 8 and 9 show the activations of the first two feature map layers for two different example inputs, an unpaved road and a forest. In the case of the unpaved road, the feature map activations clearly show the outline of the road, while in the case of the forest the feature maps contain mostly noise, i.e., the CNN finds no useful information in this image.
This demonstrates that the CNN learned to detect useful road features on its own, i.e., with only the human steering angle as training signal. We never explicitly trained it to detect the outlines of roads, for example.
Figure 9: Example image with no road. The activations of the first two feature maps appear to contain mostly noise, i.e., the CNN doesn’t recognize any useful features in this image.
We have empirically demonstrated that CNNs are able to learn the entire task of lane and road following without manual decomposition into road or lane marking detection, semantic abstraction, path planning, and control. A small amount of training data from less than a hundred hours of driving was sufficient to train the car to operate in diverse conditions, on highways, local and residential roads in sunny, cloudy, and rainy conditions.
The CNN is able to learn meaningful road features from a very sparse training signal (steering alone).
The system learns for example to detect the outline of a road without the need of explicit labels during training.
More work is needed to improve the robustness of the network, to find methods to verify the robustness, and to improve visualization of the network-internal processing steps.
Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, Winter 1989. URL: http://yann.lecun.org/exdb/publis/pdf/lecun-89e.pdf.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc., 2012. URL: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf.
L. D. Jackel, D. Sharman, C. E. Stenard, B. I. Strom, and D. Zuckert. Optical character recognition for self-service banking. AT&T Technical Journal, 74(1):16–24, 1995.
Large Scale Visual Recognition Challenge (ILSVRC). URL: http://www.image-net.org/challenges/LSVRC/.
Net-Scale Technologies, Inc. Autonomous off-road vehicle control using end-to-end learning, July 2004. Final technical report. URL: http://net-scale.com/doc/net-scale-dave-report.pdf.
Dean A. Pomerleau. ALVINN, an autonomous land vehicle in a neural network. Technical report, Carnegie Mellon University, 1989. URL: http://repository.cmu.edu/cgi/viewcontent.cgi?article=2874&context=compsci.
Danwei Wang and Feng Qi. Trajectory planning for a four-wheel-steering vehicle. In Proceedings of the 2001 IEEE International Conference on Robotics & Automation, May 21–26, 2001. URL: http://www.ntu.edu.sg/home/edwwang/confpapers/wdwicar01.pdf.
Gamalon has developed a technique that lets machines learn to recognize concepts in images or text much more efficiently.
An app developed by Gamalon recognizes objects after seeing a few examples. A learning program recognizes simpler concepts such as lines and rectangles.
Machine learning is becoming extremely powerful, but it requires extreme amounts of data.
You can, for instance, train a deep-learning algorithm to recognize a cat with a cat-fancier’s level of expertise, but you’ll need to feed it tens or even hundreds of thousands of images of felines, capturing a huge amount of variation in size, shape, texture, lighting, and orientation. It would be a lot more efficient if, a bit like a person, an algorithm could develop an idea about what makes a cat a cat from fewer examples.
A Boston-based startup called Gamalon has developed technology that lets computers do this in some situations, and it is releasing two products Tuesday based on the approach.
If the underlying technique can be applied to many other tasks, then it could have a big impact. The ability to learn from less data could let robots explore and understand new environments very quickly, or allow computers to learn about your preferences without sharing your data.
Gamalon uses a technique that it calls Bayesian program synthesis to build algorithms capable of learning from fewer examples. Bayesian probability, named after the 18th century mathematician Thomas Bayes, provides a mathematical framework for refining predictions about the world based on experience. Gamalon’s system uses probabilistic programming—or code that deals in probabilities rather than specific variables—to build a predictive model that explains a particular data set. From just a few examples, a probabilistic program can determine, for instance, that it’s highly probable that cats have ears, whiskers, and tails. As further examples are provided, the code behind the model is rewritten, and the probabilities tweaked. This provides an efficient way to learn the salient knowledge from the data.
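The Bayesian refinement at the heart of this approach can be illustrated with a deliberately tiny example. This is not Gamalon’s system, just the underlying principle: start from an uncertain prior about a concept (say, “cats have tails”) and update it as examples arrive, shifting sharply after only a handful of observations:

```python
def posterior_mean(successes: int, failures: int,
                   prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Mean of the Beta posterior over a probability after observing data.

    A Beta(1, 1) prior encodes complete ignorance; each observation adds
    one count, so the estimate is refined example by example.
    """
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

# Before any data, the model is maximally uncertain:
print(posterior_mean(0, 0))   # 0.5
# After just 4 tailed cats and none without, the belief has already
# shifted sharply -- learning from very few examples:
print(posterior_mean(4, 0))   # about 0.83
```

A full probabilistic program composes many such uncertain quantities and rewrites its own structure as evidence accumulates, but the update rule is the same Bayesian idea in miniature.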
Probabilistic programming techniques have been around for a while. In 2015, for example, a team from MIT and NYU used probabilistic methods to have computers learn to recognize written characters and objects after seeing just one example (see “This AI Algorithm Learns Simple Tasks as Fast as We Do”). But the approach has mostly been an academic curiosity.
There are difficult computational challenges to overcome, because the program has to consider many different possible explanations, says Brenden Lake, a research fellow at NYU who led the 2015 work.
Still, in theory, Lake says, the approach has significant potential because it can automate aspects of developing a machine-learning model. “Probabilistic programming will make machine learning much easier for researchers and practitioners,” Lake says. “It has the potential to take care of the difficult [programming] parts automatically.”
There are certainly significant incentives to develop easier-to-use and less data-hungry machine-learning approaches. Machine learning currently involves acquiring a large raw data set, and often then labeling it manually. The learning is then done inside large data centers, using many computer processors churning away in parallel for hours or days. “There are only a few really large companies that can really afford to do this,” says Ben Vigoda, cofounder and CEO of Gamalon.
When Machines Have Ideas | Ben Vigoda | TEDxBoston
Our CEO, Ben Vigoda, gave a talk at TEDx Boston 2016 called “When Machines Have Ideas” that describes why building “stories” (i.e. Bayesian generative models) into machine intelligence systems can be very powerful.
In theory, Gamalon’s approach could make it a lot easier for someone to build and refine a machine-learning model, too. Perfecting a deep-learning algorithm requires a great deal of mathematical and machine-learning expertise. “There’s a black art to setting these systems up,” Vigoda says. With Gamalon’s approach, a programmer could train a model by feeding in significant examples.
Vigoda showed MIT Technology Review a demo with a drawing app that uses the technique. It is similar to the one released last year by Google, which uses deep learning to recognize the object a person is trying to sketch (see “Want to Understand AI? Try Sketching a Duck for a Neural Network”). But whereas Google’s app needs to see a sketch that matches the ones it has seen previously, Gamalon’s version uses a probabilistic program to recognize the key features of an object. For instance, one program understands that a triangle sitting atop a square is most likely a house. This means even if your sketch is very different from what it has seen before, provided it has those features, it will guess correctly.
The technique could have significant near-term commercial applications, too. The company’s first products use Bayesian program synthesis to recognize concepts in text.
One product, called Gamalon Structure, can extract concepts from raw text more efficiently than is normally possible. For example, it can take a manufacturer’s description of a television and determine what product is being described, the brand, the product name, the resolution, the size, and other features.
Another product, Gamalon Match, is used to categorize the products and prices in a store’s inventory. In each case, even when different acronyms or abbreviations are used for a product or feature, the system can quickly be trained to recognize them.
Vigoda believes the ability to learn will have other practical benefits.
A computer could learn about a user’s interests without requiring an impractical amount of data or hours of training.
Personal data might not need to be shared with large companies, either, if machine learning can be done efficiently on a user’s smartphone or laptop.
And a robot or a self-driving car could learn about a new obstacle without needing to see hundreds of thousands of examples.
“And we are now publishing the actual nuts-and-bolts construction plan for a large-scale quantum computer.”
It is thought the astonishing processing power unleashed by quantum mechanics will lead to new, life-saving medicines, help solve the most intractable scientific problems, and probe the mysteries of the universe.
“Life will change completely. We will be able to do certain things we could never even dream of before,” Professor Hensinger said.
“You can imagine that suddenly the sky is the limit.
“This is really, really exciting … it’s probably one of the most exciting times to be in this field.”
He said small quantum computers had been built in the past, but only to test the theories.
“This is not an academic study any more, it really is all the engineering required to build such a device,” he said.
“Nobody has really gone ahead and drafted a full engineering plan of how you build one.
“Many people questioned, because this is so hard to make this happen, that it can even be built.
“We show that not only can it be built, but we provide a whole detailed plan on how to make it happen.”
The problem is that existing quantum computers require lasers focused precisely on individual atoms. The larger the computer, the more lasers are required and the greater the chance of something going wrong.
But Professor Hensinger and colleagues used a different technique to monitor the atoms involving a microwave field and electricity in an ‘ion-trap’ device.
“What we have is a solution that we can scale to arbitrary [computing] power,” he said.
Fig. 2. Gradient wires placed underneath each gate zone and embedded silicon photodetector.
(A) Illustration showing an isometric view of the two main gradient wires placed underneath each gate zone. Short wires are placed locally underneath each gate zone to form coils, which compensate for slowly varying magnetic fields and allow for individual addressing. The wire configuration in each zone can be seen in more detail in the inset.
(B) Silicon photodetector (marked green) embedded in the silicon substrate, transparent center segmented electrodes, and the possible detection angle are shown. VIA structures are used to prevent optical cross-talk from neighboring readout zones.
Source: Science Journals — AAAS. Blueprint for a microwave trapped ion quantum computer. Lekitsch et al. Sci. Adv. 2017;3: e1601540 1 February 2017
Fig. 4. Scalable module illustration. One module consisting of 36 × 36 junctions placed on the supporting steel frame structure: Nine wafers containing the required DACs and control electronics are placed between the wafer holding 36 × 36 junctions and the microchannel cooler (red layer) providing the cooling. X-Y-Z piezo actuators are placed in the four corners on top of the steel frame, allowing for accurate alignment of the module. Flexible electric wires supply voltages, currents, and control signals to the DACs and control electronics, such as field-programmable gate arrays (FPGAs). Coolant is supplied to the microchannel cooler layer via two flexible steel tubes placed in the center of the modules.
Source: Science Journals — AAAS. Blueprint for a microwave trapped ion quantum computer. Lekitsch et al. Sci. Adv. 2017;3: e1601540 1 February 2017
Fig. 5. Illustration of vacuum chambers. Schematic of octagonal UHV chambers connected together; each chamber is 4.5 × 4.5 m² in size and can hold >2.2 million individual X-junctions placed on steel frames.
Source: Science Journals — AAAS. Blueprint for a microwave trapped ion quantum computer. Lekitsch et al. Sci. Adv. 2017;3: e1601540 1 February 2017
“We are already building it now. Within two years we think we will have completed a prototype which incorporates all the technology we state in this blueprint.
“At the same time we are now looking for industry partners so we can really build a large-scale device that fills a building basically.
“It’s extraordinarily expensive so we need industry partners … this will be in the 10s of millions, up to £100m.”
Dr Toby Cubitt, a Royal Society research fellow in quantum information theory at University College London, said: “Many different technologies are competing to build the first large-scale quantum computer. Ion traps were one of the earliest realistic proposals.
“This work is an important step towards scaling up ion-trap quantum computing.
“Though there’s still a long way to go before you’ll be making spreadsheets on your quantum computer.”
And Professor Alan Woodward, of Surrey University, hailed the “tremendous step in the right direction”.
“It is great work,” he said. “They have made some significant strides forward.”
But he added it was “too soon to say” whether it would lead to the hoped-for technological revolution.
Today we announced our funding of Xnor.ai. We are excited to be working with Ali Farhadi, Mohammad Rastegari and their team on this new company. We are also looking forward to working with Paul Allen’s team at the Allen Institute for AI and in particular our good friend and CEO of AI2, Dr. Oren Etzioni who is joining the board of Xnor.ai. Machine Learning and AI have been a key investment theme for us for the past several years and bringing deep learning capabilities such as image and speech recognition to small devices is a huge challenge.
Mohammad, Ali, and their team have developed a platform that enables low-resource devices to perform tasks that usually require large farms of GPUs in cloud environments. This, we believe, has the opportunity to change how we think about certain types of deep learning use cases as they get extended from the core to the edge. Image and voice recognition are great examples. These use cases are everywhere in the world – usually on a mobile device – but right now they require the device to be connected to the internet so those large farms of GPUs can process all the information your device is capturing and sending, with the core transmitting back the answer. If you could do that on your phone (while preserving battery life) it opens up a new world of options.
It is just these kinds of inventions that put the greater Seattle area at the center of the revolution in machine learning and AI that is upon us. Xnor.ai came out of the outstanding work the team was doing at the Allen Institute for Artificial Intelligence (AI2), and Ali is a professor at the University of Washington. Between Microsoft, Amazon, the University of Washington, and research institutes such as AI2, our region is leading the way as new types of intelligent applications take shape. Madrona is energized to play our role as company builder and support for these amazing inventors and founders.
AI acceleration startup Xnor.ai collects $2.6M in funding
I was excited by the promise of Xnor.ai and its technique that drastically reduces the computing power necessary to perform complex operations like computer vision. Seems I wasn’t the only one: the company, just officially spun off from the Allen Institute for AI (AI2), has attracted $2.6 million in seed funding from its parent company and Madrona Venture Group.
The specifics of the product and process you can learn about in detail in my previous post, but the gist is this: machine learning models for things like object and speech recognition are notoriously computation-heavy, making them difficult to implement on smaller, less powerful devices. Xnor.ai’s researchers use a bit of mathematical trickery to reduce that computing load by an order of magnitude or two — something it’s easy to see the benefit of.
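The core of that "mathematical trickery" is weight binarization, published openly in the XNOR-Net research by Rastegari and Farhadi that the company grew out of. A minimal NumPy sketch of the idea (the variable names and the toy sizes are mine, and a real XNOR-Net also binarizes activations and works per output channel):

```python
import numpy as np

# Minimal sketch of XNOR-Net-style weight binarization. A float weight
# matrix W is approximated as alpha * B, where B holds only +1/-1 and
# alpha is a single scale factor. Multiply-accumulates then reduce to
# sign flips (XNOR/popcount in hardware) plus one scalar multiply.

def binarize(W):
    B = np.sign(W)              # +1/-1 matrix: 1 bit per weight vs. 32
    B[B == 0] = 1               # np.sign(0) is 0; map it to +1
    alpha = np.mean(np.abs(W))  # scale minimizing the L2 approximation error
    return B, alpha

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))   # toy "layer" weights
x = rng.normal(size=64)         # toy input vector

B, alpha = binarize(W)
exact = W @ x
approx = alpha * (B @ x)        # binary matvec plus one scalar multiply

rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
# The approximation is lossy per layer, but the weights shrink ~32x and
# the arithmetic becomes bitwise — the source of the one-to-two-orders-
# of-magnitude savings described above.
```

Accuracy is recovered in practice by training the network with the binarization in the loop, rather than binarizing a pretrained model after the fact.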
“Imagine what is possible if that style of computing could be done on the device in your hand, on your wrist, or in your car,” said Madrona’s managing director, Matt McIlwain, in a press release. I’m sure they’re all imagining very hard right now. “Machine Learning and AI have been a key investment theme for us for the past several years and bringing deep learning capabilities such as image and speech recognition to small devices is a huge challenge,” he added in a company blog post.
McIlwain will join AI2 CEO Oren Etzioni on the board of Xnor.ai; Ali Farhadi, who led the original project, will be the company’s CEO, and Mohammad Rastegari is CTO.
The new company aims to facilitate commercial applications of its technology (it isn’t quite plug and play yet), but the research that led up to it is, like other AI2 work, open source.