Stunning AI Breakthrough Takes Us One Step Closer to the Singularity

By Hugo Angel,

As a new Nature paper points out, “There are an astonishing 10 to the
power of 170 possible board configurations in Go—more than the number of
atoms in the known universe.” (Image: DeepMind)
Remember AlphaGo, the first artificial intelligence to defeat a grandmaster at Go?
Well, the program just got a major upgrade, and it can now teach itself how to dominate the game without any human intervention. But get this: in a tournament that pitted AI against AI, this juiced-up version, called AlphaGo Zero, defeated the regular AlphaGo by a whopping 100 games to 0, signifying a major advance in the field. Hear that? It’s the technological singularity inching ever closer.

A new paper published in Nature today describes how the artificially intelligent system that defeated Go grandmaster Lee Sedol in 2016 got its digital ass kicked by a new-and-improved version of itself. And it didn’t just lose by a little: it couldn’t muster a single win after playing a hundred games. Incredibly, it took AlphaGo Zero (AGZ) just three days to train itself from scratch and acquire literally thousands of years of human Go knowledge simply by playing itself. The only input it had was the positions of the black and white pieces on the board.

In addition to devising completely new strategies, the new system is also considerably leaner and meaner than the original AlphaGo.
Lee Sedol getting crushed by AlphaGo in 2016. (Image: AP)

Now, every once in a while the field of AI experiences a “holy shit” moment, and this would appear to be one of them.

This latest achievement qualifies as a “holy shit” moment for a number of reasons.

First of all, the original AlphaGo had the benefit of learning from literally thousands of previously played Go games, including those played by human amateurs and professionals. AGZ, on the other hand, received no help from its human handlers, and had access to absolutely nothing aside from the rules of the game. Using “reinforcement learning,” AGZ played itself over and over again, “starting from random play, and without any supervision or use of human data,” according to the Google-owned DeepMind researchers in their study. This allowed the system to improve and refine its digital brain, known as a neural network, as it continually learned from experience. This basically means that AlphaGo Zero was its own teacher.

“This technique is more powerful than previous versions of AlphaGo because it is no longer constrained by the limits of human knowledge,” notes the DeepMind team in a release. “Instead, it is able to learn tabula rasa [from a clean slate] from the strongest player in the world: AlphaGo itself.”

When playing Go, the system considers the most probable next moves (its “policy network”), and then estimates the probability of winning from those moves (its “value network”). AGZ requires about 0.4 seconds to make these two assessments. The original AlphaGo was equipped with a pair of separate neural networks to make these evaluations, but for AGZ, the DeepMind developers merged the policy and value networks into one, allowing the system to learn more efficiently. What’s more, the new system is powered by just four tensor processing units (TPUs), specialized chips for neural network training; the old AlphaGo needed 48 TPUs.
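To make the merged two-headed design concrete, here is a toy sketch of a combined policy-value network in Python with NumPy. It is purely illustrative: the layer sizes, names and random weights are assumptions for the example, not DeepMind's actual architecture (which used deep convolutional residual networks).

```python
import numpy as np

def softmax(x):
    """Turn raw scores into a probability distribution over moves."""
    e = np.exp(x - x.max())
    return e / e.sum()

class PolicyValueNet:
    """Toy two-headed network: one shared trunk feeds both a policy
    head (move probabilities) and a value head (win estimate)."""
    def __init__(self, board_size=19, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        n = board_size * board_size
        self.W_trunk = rng.normal(0, 0.1, (n, hidden))
        self.W_policy = rng.normal(0, 0.1, (hidden, n))  # one logit per board point
        self.W_value = rng.normal(0, 0.1, (hidden, 1))

    def forward(self, board):
        h = np.tanh(board.flatten() @ self.W_trunk)   # shared features
        policy = softmax(h @ self.W_policy)           # probability of each move
        value = float(np.tanh(h @ self.W_value)[0])   # win estimate in [-1, 1]
        return policy, value

net = PolicyValueNet()
policy, value = net.forward(np.zeros((19, 19)))       # empty board
```

The point is only that a single set of shared features feeds both outputs; in a trained system the two heads would be fitted so that the policy predicts good moves and the value predicts the eventual winner.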

After just three days of self-play training and a total of 4.9 million games played against itself, AGZ acquired the expertise needed to trounce the original AlphaGo (which, by comparison, had 30 million games for inspiration). After 40 days of self-training, AGZ went on to defeat a more sophisticated version called AlphaGo “Master,” which had beaten the world’s best Go players, including the world’s top-ranked player, Ke Jie. Earlier this year, the original AlphaGo and AlphaGo Master won a combined 60 games against top professionals. The rise of AGZ, it would now appear, has made these previous versions obsolete.


This is a major achievement for AI, and for the subfield of reinforcement learning in particular. By teaching itself, the system matched and exceeded human knowledge in just a few days, while also developing unconventional strategies and creative new moves.

For Go players, the breakthrough is as sobering as it is exciting; they’re learning things from AI that they could never have learned on their own, or would have needed an inordinate amount of time to figure out.

“[AlphaGo Zero’s] games against AlphaGo Master will surely contain gems, especially because its victories seem effortless,” wrote Andy Okun and Andrew Jackson, members of the American Go Association, in a Nature News and Views article. “At each stage of the game, it seems to gain a bit here and lose a bit there, but somehow it ends up slightly ahead, as if by magic… The time when humans can have a meaningful conversation with an AI has always seemed far off and the stuff of science fiction. But for Go players, that day is here.”

No doubt, AGZ represents a disruptive advance in the world of Go, but what about its potential impact on the rest of the world? According to Nick Hynes, a grad student at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), it’ll be a while before a specialized tool like this has an impact on our daily lives.

“So far, the algorithm described only works for problems where there are a countable number of actions you can take, so it would need modification before it could be used for continuous control problems like locomotion [for instance],” Hynes told Gizmodo. “Also, it requires that you have a really good model of the environment. In this case, it literally knows all of the rules. That would be as if you had a robot for which you could exactly predict the outcomes of actions—which is impossible for real, imperfect physical systems.”

The nice part, he says, is that there are several other lines of AI research that address both of these issues (e.g. machine learning, evolutionary algorithms, etc.), so it’s really just a matter of integration. “The real key here is the technique,” says Hynes.

“As expected—and desired—we’re moving farther away from the classic pattern of getting a bunch of human-labeled data and training a model to imitate it,” he said. “What we’re seeing here is a model free from human bias and presuppositions: It can learn whatever it determines is optimal, which may indeed be more nuanced than our own conceptions of the same. It’s like an alien civilization inventing its own mathematics which allows it to do things like time travel,” to which he added: “Although we’re still far from ‘The Singularity,’ we’re definitely heading in that direction.”

Noam Brown, a Carnegie Mellon University computer scientist who helped to develop the first AI to defeat top humans in no-limit poker, says the DeepMind researchers have achieved an impressive result, and that it could lead to bigger, better things in AI.

“While the original AlphaGo managed to defeat top humans, it did so partly by relying on expert human knowledge of the game and human training data,” Brown told Gizmodo. “That led to questions of whether the techniques could extend beyond Go. AlphaGo Zero achieves even better performance without using any expert human knowledge. It seems likely that the same approach could extend to all perfect-information games [such as chess and checkers]. This is a major step toward developing general-purpose AIs.”

As both Hynes and Brown admit, this latest breakthrough doesn’t mean the technological singularity—that hypothesized time in the future when greater-than-human machine intelligence achieves explosive growth—is imminent. But it should give us pause for thought. Once we teach a system the rules of a game or the constraints of a real-world problem, the power of reinforcement learning makes it possible to simply press the start button and let the system do the rest. It will then figure out the best ways to succeed at the task, devising solutions and strategies that are beyond human capacities, and possibly even human comprehension.
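The “press the start button” loop described above can be sketched with tabular Q-learning on a trivial counting game. This is an illustrative stand-in, not AlphaGo Zero's actual algorithm (which combines deep networks with Monte Carlo tree search); the game and all hyperparameters are invented for the example.

```python
import random

# Toy game: start at 0, add 1 or 2 each turn; landing exactly on 5 wins (+1),
# overshooting loses (-1). Given only these rules, Q-learning discovers a
# winning policy from self-play, with no example games to imitate.
GOAL = 5
ACTIONS = [1, 2]

def step(state, action):
    nxt = state + action
    if nxt == GOAL:
        return nxt, 1.0, True    # exact landing: win
    if nxt > GOAL:
        return nxt, -1.0, True   # overshoot: loss
    return nxt, 0.0, False

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    random.seed(seed)
    Q = {(s, a): 0.0 for s in range(GOAL) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: mostly exploit, sometimes explore
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[(s, x)])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[(s2, x)] for x in ACTIONS)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

Q = train()
```

After training, following the greedy policy from the start state always ends the game with a win, even though no winning strategy was ever shown to the system.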

As noted, AGZ and the game of Go represent an oversimplified, constrained, and highly predictable picture of the world, but in the future, AI will be tasked with more complex challenges. Eventually, self-teaching systems will be used to solve more pressing problems, such as folding proteins to conjure up new medicines and biotechnologies, finding ways to reduce energy consumption, or designing new materials. A highly generalized self-learning system could also be tasked with improving itself, leading to artificial general intelligence (i.e. a very human-like intelligence) and even artificial superintelligence.

As the DeepMind researchers conclude in their study, “Our results comprehensively demonstrate that a pure reinforcement learning approach is fully feasible, even in the most challenging of domains: it is possible to train to superhuman level, without human examples or guidance, given no knowledge of the domain beyond basic rules.”

And indeed, now that human players are no longer dominant in games like chess and Go, it can be said that we’ve already entered the era of superintelligence. This latest breakthrough is the tiniest hint of what’s still to come.



By George Dvorsky

Artificial intelligence pioneer says we need to start over

By Hugo Angel,


Geoffrey Hinton harbors doubts about AI’s current workhorse. (Johnny Guatto / University of Toronto)

Steve LeVine Sep 15

In 1986, Geoffrey Hinton co-authored a paper that, three decades later, is central to the explosion of artificial intelligence. But Hinton says his breakthrough method should be dispensed with, and a new path to AI found.

Speaking with Axios on the sidelines of an AI conference in Toronto on Wednesday, Hinton, a professor emeritus at the University of Toronto and a Google researcher, said he is now “deeply suspicious” of back-propagation, the workhorse method that underlies most of the advances we are seeing in the AI field today, including the capacity to sort through photos and talk to Siri. “My view is throw it all away and start again,” he said.

The bottom line: Other scientists at the conference said back-propagation still has a core role in AI’s future. But Hinton said that, to push materially ahead, entirely new methods will probably have to be invented. “Max Planck said, ‘Science progresses one funeral at a time,’” he said. “The future depends on some graduate student who is deeply suspicious of everything I have said.”

How it works: In back-propagation, a photo or a voice sample is represented numerically as “weights” within brain-like neural layers. The weights are then adjusted and readjusted, layer by layer, until the network can perform an intelligent function with the fewest possible errors.
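The adjust-and-readjust cycle Hinton is describing can be shown in a few lines: a tiny network learns the XOR function by propagating its output error backwards through the layers. This is a minimal sketch with invented layer sizes and learning rate, not any production system.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)       # XOR targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)  # input -> hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)  # hidden -> output layer
sig = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    # forward pass: compute the network's current guesses
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    # backward pass: propagate the output error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # adjust the weights a little to reduce the error
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)
```

After training, the mean squared error drops far below its untrained level (about 0.25): the repeated backward adjustments are the whole mechanism.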

But Hinton suggested that, to get to where neural networks are able to become intelligent on their own, what is known as “unsupervised learning,” “I suspect that means getting rid of back-propagation.”

“I don’t think it’s how the brain works,” he said. “We clearly don’t need all the labeled data.”

“It is time to make our children smarter than artificial intelligence”

By Hugo Angel,


INTERVIEW | Noriko Arai, director of the Todai Robot Project

Noriko Arai wants to revolutionize the education system so that humans do not lose the battle for jobs against robots

Noriko Arai during her TED talk in Vancouver. Bret Hartman / TED

Once a year, half a million Japanese students take the university entrance exam, a set of eight multiple-choice tests. Fewer than 3% will do well enough to go on to the second stage, a written exam designed specifically for admission to the University of Tokyo (Todai), the most prestigious in Japan. Noriko Arai, 54, director of the Research Center for Community Knowledge at the National Institute of Informatics and of the Todai Robot Project, is working on a robot that can pass all of these exams, in order to learn the possibilities and limitations of artificial intelligence.

In 2013, after two years of the project, the Todai robot scored well enough to be admitted to 472 of 581 private universities. In 2016, its score was among the top 20% on the multiple-choice tests, and among the top 1% on one of the two mathematics exams. It was even able to write an essay on 17th-century maritime trade better than most students. “It took information from the textbook and from Wikipedia and combined it without understanding a bit of it,” Arai explained during her recent TED talk in Vancouver. “Neither Watson, nor Siri, nor the Todai Robot can read. Artificial intelligence cannot understand; it only pretends to understand.”

Rather than being pleased with her robot, Arai was alarmed by the results. “How is it possible that this unintelligent machine did better than our children?” she asked herself. Worried about the job prospects of the coming generations, she ran an experiment with students and discovered that a third of them failed simple questions because they do not read well, a problem she believes exists all over the world. “We humans can understand the meaning of things, something artificial intelligence cannot do. But most students receive knowledge without understanding its meaning, and that is not knowledge, it is memorization, and artificial intelligence can do the same thing. We must create a new education system.”

Question: Why did a mathematician like you decide to get into the world of robots?
Answer: Artificial intelligence consists of trying to write thought in mathematical language. There is no other way for artificial intelligence to be intelligent. As a mathematician, I believe that thought cannot be written in mathematical language. Descartes said the same. My first impression was that artificial intelligence is impossible. It uses probability and statistics on top of logic. In the 20th century only logic was used, and of course not everything can be written with logic, such as feelings, for example. Now they are using statistics, imitating the past to decide how to act when we encounter new things.

Q. You don’t like it when people say that artificial intelligence could conquer the world…
A. I am tired of that image, which is why I decided to build a very intelligent robot, using the latest research, to see its limitations. IBM’s Watson and the Google Car, for example, tend to show only the good things. We want to show everything, including what it is not capable of doing.

Q. In trying to improve artificial intelligence, you saw that education had to be improved.

A. I knew my robot was unintelligent, loaded with knowledge it does not know how to use correctly because it does not understand meaning. I was stunned to see that this unintelligent robot wrote a better essay than most students. So I decided to investigate what was happening in the human world. I would have been happier to discover that artificial intelligence overtook the students because it is better at memorizing and computing, but that was not the case. The robot does not understand meaning, but neither do most students.

Q. Do you think the problem is that we depend so much on Siri and Google to resolve our questions that we no longer process information well?
A. We are analyzing why. One thing we can see is that everyone used to read the newspaper, even poor people. But now most young couples do not read the paper because they have it on their phones. They do not buy books because most stories are on blogs. They have no calendar, or even a clock, at home because it is on the phone. Children grow up without numbers or letters in their environment. They also tend to hold conversations in very short text messages. They have fewer opportunities to read, I think.

Q. Part of the Todai project is to see what kinds of jobs artificial intelligence could take away from humans.
A. In Japan, in the past, everyone was middle class; there were no very rich people and no very poor people. But when artificial intelligence arrives in a society, it takes away many jobs, including positions such as bankers or analysts. Those who lose their jobs to artificial intelligence may not find another for a long time. Perhaps there will be jobs like correcting the errors made by artificial intelligence, very hard jobs, more menial than ever, as in Chaplin’s Modern Times. Someone talented, creative, intelligent, determined, good at reading and writing, will have more opportunities than ever, because even if they were born in a small village, as long as they have Internet access they will have plenty of information to learn from for free and can go on to become a millionaire. It is much easier to start a business than it was in the 20th century. But someone without that kind of intelligence will probably be left trapped among the crowds. The thing is, everyone has the right to vote, and in that sense we are all equal. If more and more people feel trapped, and only the intelligent make money, and use it to make more money, they will think ill of society, they will hate society, and all of us, all over the world, will suffer the consequences.

Q. What do you think the solution is?
A. Now is the time to make our children smarter than artificial intelligence. I opened the Research Institute of Science for Education this month to investigate how many students have poor reading and writing habits, and why, and to see how we can help them change those habits so they can overtake the robot using their human strengths. I would like us to be as Japan was in the seventies, when everyone was middle class, we all helped one another, and we did not need more money than we could spend in a lifetime. Everyone should be well educated, able to read and write, and not just the literal meaning. We should all learn in depth, and read in depth, in order to keep our jobs.

By Isaac Hernández. Vancouver
June 6, 2017

IBM Makes Breakthrough in Race to Commercialize Quantum Computers

By Hugo Angel,

Photographer: David Paul Morris

Researchers at International Business Machines Corp. have developed a new approach for simulating molecules on a quantum computer.

The breakthrough, outlined in a research paper published Thursday in the scientific journal Nature, uses a technique that could eventually allow quantum computers to solve difficult problems in chemistry and electromagnetism that cannot be solved by even the most powerful supercomputers today.

In the experiments described in the paper, IBM researchers used a quantum computer to derive the lowest energy state of a molecule of beryllium hydride. Knowing the energy state of a molecule is a key to understanding chemical reactions.

In the case of beryllium hydride, a supercomputer can solve this problem, but the standard techniques for doing so cannot be used for large molecules because the number of variables exceeds the computational power of even these machines.

The IBM researchers created a new algorithm specifically designed to take advantage of the capabilities of a quantum computer that has the potential to run similar calculations for much larger molecules, the company said.

The problem with existing quantum computers, including the one IBM used for this research, is that they produce errors, and as the size of the molecule being analyzed grows, the calculation strays further and further from chemical accuracy. The inaccuracy in IBM’s experiments varied between 2 and 4 percent, Jerry Chow, the manager of experimental quantum computing for IBM, said in an interview.

Alan Aspuru-Guzik, a professor of chemistry at Harvard University who was not part of the IBM research, said that the Nature paper is an important step. “The IBM team carried out an impressive series of experiments that holds the record as the largest molecule ever simulated on a quantum computer,” he said.

But Aspuru-Guzik said that quantum computers would be of limited value until their calculation errors can be corrected. “When quantum computers are able to carry out chemical simulations in a numerically exact way, most likely when we have error correction in place and a large number of logical qubits, the field will be disrupted,” he said in a statement. He said applying quantum computers in this way could lead to the discovery of new pharmaceuticals or organic materials.

IBM has been pushing to commercialize quantum computers and recently began allowing anyone to experiment with running calculations on a 16-qubit quantum computer it has built to demonstrate the technology.

In a classical computer, information is stored using binary units, or bits. A bit is either a 0 or a 1. A quantum computer instead takes advantage of quantum mechanical properties to process information using quantum bits, or qubits. A qubit can exist as both 0 and 1 at the same time, in what is known as a superposition. Also, in a classical computer each logic gate functions independently, whereas in a quantum computer the qubits can affect one another. This allows a quantum computer, in theory, to process certain kinds of information far more efficiently than a classical computer.
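The superposition idea can be illustrated with a few lines of state-vector simulation; this is a textbook toy, not IBM's hardware or software stack.

```python
import numpy as np

# A qubit is a unit vector of two complex amplitudes; measuring it yields
# 0 or 1 with probabilities given by the squared magnitudes of the amplitudes.
ket0 = np.array([1, 0], dtype=complex)            # the definite state |0>

# The Hadamard gate rotates |0> into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ ket0

probs = np.abs(state) ** 2                        # measurement probabilities
```

Repeating the same measurement many times and averaging the outcomes, as the article describes IBM doing, is how those underlying probabilities are estimated on real hardware.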

The machine IBM used for the Nature paper consisted of seven qubits created from supercooled superconducting materials. In the experiment, six of these qubits were used to map the energy states of the six electrons in the beryllium hydride molecule. Rather than providing a single, precise and accurate answer, as a classical computer does, a quantum computer must run a calculation hundreds of times, with an average used to arrive at a final answer.

Chow said his team is currently working to improve the speed of its quantum computer with the aim of reducing the time it takes to run each calculation from seconds to microseconds. He said they were also working on ways to reduce its error rate.

IBM is not the only company working on quantum computing. Alphabet Inc.’s Google is working toward creating a 50-qubit quantum computer, and has pledged to use this machine to solve a previously unsolvable calculation from chemistry or electromagnetism by the end of the year. Also competing to commercialize quantum computing are Rigetti Computing, a startup in Berkeley, California, which is building its own machine, and Microsoft Corp., which is working with an unproven quantum computing architecture that is, in theory, inherently error-free. D-Wave Systems Inc., a Canadian company, is currently the only company to sell quantum computers commercially.

ORIGINAL: Bloomberg
By Jeremy Kahn September 13, 2017

Researchers take major step forward in Artificial Intelligence

By Hugo Angel,

The long-standing dream of using Artificial Intelligence (AI) to build an artificial brain has taken a significant step forward, as a team led by Professor Newton Howard from the University of Oxford has successfully prototyped a nanoscale, AI-powered, artificial brain in the form factor of a high-bandwidth neural implant.
Professor Newton Howard (pictured above and below) holding parts of the implant device
In collaboration with INTENT LTD, Qualcomm Corporation, Intel Corporation, Georgetown University and the Brain Sciences Foundation, Professor Howard’s Oxford Computational Neuroscience Lab in the Nuffield Department of Surgical Sciences has developed the proprietary algorithms and the optoelectronics required for the device. Testing in rodents is on target to begin very soon.
This achievement caps over a decade of research by Professor Howard at MIT’s Synthetic Intelligence Lab and the University of Oxford, work that resulted in several issued US patents on the technologies and algorithms that power the device:
  • the Fundamental Code Unit of the Brain (FCU),
  • the Brain Code (BC), and
  • the Biological Co-Processor (BCP).

Together these form the latest foundations for any eventual merger between biological and machine intelligence. Ni2o (pronounced “Nitoo”) is the entity that Professor Howard licensed to further develop, market and promote these technologies.

The Biological Co-Processor is unique in that it uses advanced nanotechnology, optogenetics and deep machine learning to intelligently map internal events, such as neural spiking activity, to external physiological, linguistic and behavioral expression. The implant contains over a million carbon nanotubes, each of which is 10,000 times smaller than the width of a human hair. Carbon nanotubes provide a natural, high-bandwidth interface, as they conduct heat, light and electricity, instantaneously updating the neural laces. They adhere to neuronal constructs and even promote neural growth. Qualcomm team leader Rudy Beraha commented, ‘Although the prototype unit shown today is tethered to external power, a commercial Brain Co-Processor unit will be wireless and inductively powered, enabling it to be administered with a minimally invasive procedure.’
The device uses a combination of methods to write to the brain, including
  • pulsed electricity,
  • light, and
  • various molecules that stimulate or inhibit the activation of specific neuronal groups.
These can be targeted to elicit a desired response, such as releasing chemicals in patients suffering from a neurological disorder or imbalance. The BCP is designed as a fully integrated system to use the brain’s own internal systems and chemistries to pattern and mimic healthy brain behavior, an approach that stands in stark contrast to the current state of the art, which is to simply apply mild electrocution to problematic regions of the brain.
Therapeutic uses
The Biological Co-Processor promises to provide relief for millions of patients suffering from neurological, psychiatric and psychological disorders as well as degenerative diseases. Initial therapeutic uses will likely be for patients with traumatic brain injuries and neurodegenerative disorders, such as Alzheimer’s, as the BCP will strengthen the weakened and shortened connections responsible for lost memories and skills. Once implanted, the device provides a closed-loop, self-learning platform able to both determine and administer the perfect balance of pharmaceutical, electroceutical, genomeceutical and optoceutical therapies.
Dr Richard Wirt, a Senior Fellow at Intel Corporation and Co-Founder of INTENT, Ni2o’s partner in bringing the BCP to market, commented on the device, saying, ‘In the immediate timeframe, this device will have many benefits for researchers, as it could be used to replicate an entire brain image, synchronously mapping internal and external expressions of human response. Over the long term, the potential therapeutic benefits are unlimited.’
Rather than simply disrupting neural circuits, the machine learning systems within the BCP are designed to interpret these signals and intelligently read and write to the surrounding neurons. These capabilities could be used to repair degenerative or trauma-induced damage and perhaps write the affected memories and skills to other, healthier areas of the brain.
One day, these capabilities could also be used in healthy patients to radically augment human ability and proactively improve health. As Professor Howard points out: ‘The brain controls all organs and systems in the body, so the cure to nearly every disease resides there.’ Speaking more broadly, Professor Howard sees the merging of man with machine as our inevitable destiny, calling it ‘the next step on the blueprint that the author of it all built into our natural architecture.’
With the resurgence of neuroscience and AI-enhanced machine learning, there has been renewed interest in brain implants. This past March, Elon Musk and Bryan Johnson independently announced that they are focusing on, and investing in, the brain/computer interface domain.
When asked about these new competitors, Professor Howard said he is happy to see all these new startups and established names getting into the field; he only wonders what took them so long, stating: ‘I would like to see us all working together, as we have already established a mathematical foundation and software framework to solve so many of the challenges they will be facing. We could all get there faster if we worked together; after all, the patient is the priority.’
© 2017 Nuffield Department of Surgical Sciences, John Radcliffe Hospital, Headington, Oxford, OX3 9DU
2 June 2017 

Spectacular Visualizations of Brain Scans Enhanced with 1,750 Pieces of Gold Leaf

By Hugo Angel,

Self Reflected, 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The entire Self Reflected microetching under violet and white light. (photo by Greg Dunn and Will Drinker)
Anyone who thinks that scientists can’t be artists need look no further than Dr. Greg Dunn and Dr. Brian Edwards. The neuroscientist and applied physicist have paired up to create an artistic series of images that they describe as “the most fundamental self-portrait ever created.” Literally going inside, the pair has blown up a thin slice of the brain 22 times in a series called Self Reflected.
Traveling across 500,000 neurons, the images took two years to complete, as Dunn and Edwards developed special technology for the project. Using a technique they call reflective microetching, they microscopically manipulated the reflectivity of the brain’s surface. Different regions of the brain were hand painted and digitized, and a computer program created by Edwards was later used to show the complex choreography our mind undergoes as it processes information.
After printing the designs onto transparencies, the duo added 1,750 gold leaf sheets to increase the art’s reflectivity. The astounding results are images that demonstrate the delicate flow and balance of our brain’s activity. “Self Reflected was created to remind us that the most marvelous machine in the known universe is at the core of our being and is the root of our shared humanity,” the artists share.
Self Reflected fine art prints and microetchings are available for purchase via Dunn’s website.
Self Reflected is an unprecedented look inside the brain.
Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The parietal gyrus where movement and vision are integrated. (photo by Greg Dunn and Will Drinker)


Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The brainstem and cerebellum, regions that control basic body and motor functions. (photo by Greg Dunn and Will Drinker)


In an astounding feat of scientific art, the artists applied 1,750 leaves of gold to the final microetchings.
Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The laminar structure of the cerebellum, a region involved in movement and proprioception (calculating where your body is in space).


Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The pons, a region involved in movement and implicated in consciousness. (photo by Greg Dunn and Will Drinker)


Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. Raw colorized microetching data from the reticular formation.


Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The visual cortex, the region located at the back of the brain that processes visual information.


Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The thalamus and basal ganglia, sorting senses, initiating movement, and making decisions. (photo by Greg Dunn and Will Drinker)


Self Reflected, 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The entire Self Reflected microetching under white light. (photo by Greg Dunn and Will Drinker)
Self Reflected (detail), 22K gilded microetching, 96″ X 130″, 2014-2016, Greg Dunn and Brian Edwards. The midbrain, an area that carries out diverse functions in reward, eye movement, hearing, attention, and movement. (photo by Greg Dunn and Will Drinker)
This video shows how the etched neurons twinkle as a light source is moved.

Interested in learning more? Watch Dr. Greg Dunn present the project at The Franklin Institute.
Dr. Greg Dunn: Website | Facebook | Instagram
My Modern Met granted permission to use photos by Dr. Greg Dunn.

By Jessica Stewart 
April 12, 2017


Scientists Have Created an Artificial Synapse That Can Learn Autonomously

By Hugo Angel,

Sergey Tarasov/Shutterstock
Developments and advances in artificial intelligence (AI) have been due in large part to technologies that mimic how the human brain works. In the world of information technology, such AI systems are called neural networks.
These contain algorithms that can be trained, among other things, to imitate how the brain recognises speech and images. However, running an artificial neural network consumes a lot of time and energy.
Now, researchers from the French National Centre for Scientific Research (CNRS), Thales, and the Universities of Bordeaux, Paris-Sud, and Evry have developed an artificial synapse, called a memristor, directly on a chip.
It paves the way for intelligent systems that require less time and energy to learn, and that can learn autonomously.
In the human brain, synapses work as connections between neurons. The connections are reinforced and learning is improved the more these synapses are stimulated.
The memristor works in a similar fashion. It’s made up of a thin ferroelectric layer (which can be spontaneously polarised) enclosed between two electrodes.
Using voltage pulses, its resistance can be adjusted, much as biological synapses are strengthened or weakened. The synaptic connection is strong when resistance is low, and vice versa.
Figure 1
(a) Sketch of pre- and post-neurons connected by a synapse. The synaptic transmission is modulated by the causality (Δt) of neuron spikes. (b) Sketch of the ferroelectric memristor where a ferroelectric tunnel barrier of BiFeO3 (BFO) is sandwiched between a bottom electrode of (Ca,Ce)MnO3 (CCMO) and a top submicron pillar of Pt/Co. YAO stands for YAlO3. (c) Single-pulse hysteresis loop of the ferroelectric memristor displaying clear voltage thresholds. (d) Measurements of STDP in the ferroelectric memristor. Modulation of the device conductance (ΔG) as a function of the delay (Δt) between pre- and post-synaptic spikes. Seven data sets were collected on the same device showing the reproducibility of the effect. The total length of each pre- and post-synaptic spike is 600 ns.
Source: Nature Communications
The memristor’s capacity for learning is based on this adjustable resistance.
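The learning rule the figure describes is spike-timing-dependent plasticity (STDP): conductance rises when the pre-synaptic spike precedes the post-synaptic one, and falls otherwise. A minimal Python sketch of such an update, with illustrative constants rather than the paper's measured values:

```python
import math

def stdp_update(conductance, dt_ns, a_plus=0.05, a_minus=0.05,
                tau_ns=200.0, g_min=0.0, g_max=1.0):
    """Adjust a memristor's conductance based on the delay (dt_ns)
    between pre- and post-synaptic spikes, mimicking STDP.

    Positive dt (pre fires before post) strengthens the connection;
    negative dt weakens it. The exponential window and all constants
    here are illustrative, not taken from the paper.
    """
    if dt_ns > 0:    # causal pairing: potentiate (raise conductance)
        dg = a_plus * math.exp(-dt_ns / tau_ns)
    else:            # anti-causal pairing: depress (lower conductance)
        dg = -a_minus * math.exp(dt_ns / tau_ns)
    return min(g_max, max(g_min, conductance + dg))

g = 0.5
g = stdp_update(g, 100.0)   # pre leads post by 100 ns: stronger synapse
g = stdp_update(g, -100.0)  # post leads pre: weaker synapse
```

The closer the two spikes are in time, the larger the conductance change, which is the distance-dependent shape visible in panel (d) of the figure.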
AI systems have developed considerably in the past couple of years. Neural networks built with learning algorithms are now capable of performing tasks which synthetic systems previously could not do.
For instance, intelligent systems can now compose music, play games and beat human players, or do your taxes. Some can even identify suicidal behaviour, or differentiate between what is lawful and what isn’t.
This is all thanks to AI’s capacity to learn, the only limitation of which is the amount of time and effort it takes to consume the data that serve as its springboard.
With the memristor, this learning process can be greatly improved. Work continues on the memristor, particularly on exploring ways to optimise its function.
For starters, the researchers have successfully built a physical model to help predict how it functions.
Their work is published in the journal Nature Communications.
ORIGINAL: ScienceAlert
7 APR 2017

Google DeepMind has built an AI machine that could learn as quickly as humans before long

By Hugo Angel,

Neural Episodic Control. Architecture of episodic memory module for a single action

Emerging Technology from the arXiv

Intelligent machines have humans in their sights.

Deep-learning machines already have superhuman skills when it comes to tasks such as

  • face recognition,
  • video-game playing, and
  • even the ancient Chinese game of Go.

So it’s easy to think that humans are already outgunned.

But not so fast. Intelligent machines still lag behind humans in one crucial area of performance: the speed at which they learn. When it comes to mastering classic video games, for example, the best deep-learning machines take some 200 hours of play to reach the same skill levels that humans achieve in just two hours.

So computer scientists would dearly love to have some way to speed up the rate at which machines learn.

Today, Alexander Pritzel and pals at Google’s DeepMind subsidiary in London claim to have done just that. These guys have built a deep-learning machine that is capable of rapidly assimilating new experiences and then acting on them. The result is a machine that learns significantly faster than others and has the potential to match humans in the not too distant future.

First, some background.

Deep learning uses layers of neural networks to look for patterns in data. When a single layer spots a pattern it recognizes, it sends this information to the next layer, which looks for patterns in this signal, and so on.

So in face recognition,

  • one layer might look for edges in an image,
  • the next layer for circular patterns of edges (the kind that eyes and mouths make), and
  • the next for triangular patterns such as those made by two eyes and a mouth.
  • When all this happens, the final output is an indication that a face has been spotted.

Of course, the devil is in the details. There are various systems of feedback to allow the system to learn by adjusting various internal parameters such as the strength of connections between layers. These parameters must change slowly, since a big change in one layer can catastrophically affect learning in the subsequent layers. That’s why deep neural networks need so much training and why it takes so long.
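The layered pattern-matching described above can be sketched in a few lines of plain Python; the layer sizes, random weights, and "face detector" framing here are purely illustrative:

```python
import random

# A minimal two-layer network sketch (illustrative, not a real face
# detector). Each layer transforms its input and passes the result up,
# the way the article describes edge -> eye/mouth -> face detection.

def relu(x):
    return [max(0.0, v) for v in x]

def layer(inputs, weights):
    # one output per row of weights: a weighted sum of the inputs
    return [sum(w * i for w, i in zip(row, inputs)) for row in weights]

random.seed(0)
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
w2 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(1)]

x = [0.5, -0.2, 0.1, 0.9]          # raw input (e.g. pixel intensities)
hidden = relu(layer(x, w1))        # low-level patterns ("edges")
output = layer(hidden, w2)         # high-level decision ("face / no face")

# Training nudges weights like w1 and w2 by small steps; a big change in
# one layer can destabilise learning in the layers above it, which is
# why deep networks need so many slow training iterations.
learning_rate = 0.01
```

The point of the sketch is the dependency chain: `output` is computed from `hidden`, so any large change to `w1` invalidates what `w2` has already learned.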

Pritzel and co have tackled this problem with a technique they call Neural Episodic Control. “Neural episodic control demonstrates dramatic improvements on the speed of learning for a wide range of environments,” they say. “Critically, our agent is able to rapidly latch onto highly successful strategies as soon as they are experienced, instead of waiting for many steps of optimisation.”

The basic idea behind DeepMind’s approach is to copy the way humans and animals learn quickly. The general consensus is that humans can tackle situations in two different ways.

  • If the situation is familiar, our brains have already formed a model of it, which they use to work out how best to behave. This uses a part of the brain called the prefrontal cortex.
  • But when the situation is not familiar, our brains have to fall back on another strategy. This is thought to involve a much simpler test-and-remember approach involving the hippocampus. So we try something and remember the outcome of this episode. If it is successful, we try it again, and so on. But if it is not a successful episode, we try to avoid it in future.

This episodic approach suffices in the short term while our prefrontal brain learns. But it is soon outperformed by the prefrontal cortex and its model-based approach.

Pritzel and co have used this approach as their inspiration. Their new system has two approaches.

  • The first is a conventional deep-learning system that mimics the behavior of the prefrontal cortex.
  • The second is more like the hippocampus. When the system tries something new, it remembers the outcome.

But crucially, it doesn’t try to learn what to remember. Instead, it remembers everything. “Our architecture does not try to learn when to write to memory, as this can be slow to learn and take a significant amount of time,” say Pritzel and co. “Instead, we elect to write all experiences to the memory, and allow it to grow very large compared to existing memory architectures.”

They then use a set of strategies to read from this large memory quickly. The result is that the system can latch onto successful strategies much more quickly than conventional deep-learning systems.
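A toy version of that write-everything memory, assuming a plain list store and exact k-nearest-neighbour lookup (the real system uses learned embeddings and approximate search, so this is only a hypothetical simplification):

```python
import math

class EpisodicMemory:
    """A minimal sketch of a 'write everything, read fast' episodic
    memory: every experience is stored, and a state's value estimate
    is read back from its nearest stored neighbours."""

    def __init__(self, k=3):
        self.keys, self.values, self.k = [], [], k

    def write(self, state, value):
        # No learned gating: every experience is stored.
        self.keys.append(state)
        self.values.append(value)

    def read(self, state):
        # Value estimate = distance-weighted average of k nearest episodes.
        if not self.keys:
            return 0.0
        dists = [(math.dist(state, key), v)
                 for key, v in zip(self.keys, self.values)]
        dists.sort(key=lambda t: t[0])
        top = dists[:self.k]
        weights = [1.0 / (d + 1e-3) for d, _ in top]
        return sum(w * v for w, (_, v) in zip(weights, top)) / sum(weights)

mem = EpisodicMemory(k=2)
mem.write((0.0, 0.0), 1.0)       # a successful episode near the origin
mem.write((5.0, 5.0), -1.0)      # an unsuccessful one far away
estimate = mem.read((0.1, 0.0))  # close to the success, so near 1.0
```

Because writing is just an append and reading is a neighbour lookup, a newly experienced successful strategy influences behaviour immediately, without waiting for gradient updates.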

They go on to demonstrate how well all this works by training their machine to play classic Atari video games, such as Breakout, Pong, and Space Invaders. (This is a playground that DeepMind has used to train many deep-learning machines.)

The team, which includes DeepMind cofounder Demis Hassabis, shows that neural episodic control vastly outperforms other deep-learning approaches in the speed at which it learns. “Our experiments show that neural episodic control requires an order of magnitude fewer interactions with the environment,” they say.

That’s impressive work with significant potential. The researchers say that an obvious extension of this work is to test their new approach on more complex 3-D environments.

It’ll be interesting to see what environments the team chooses and the impact this will have on the real world. We’ll look forward to seeing how that works out.

Ref: Neural Episodic Control

ORIGINAL: MIT Technology Review

The future of AI is neuromorphic. Meet the scientists building digital ‘brains’ for your phone

By Hugo Angel,

Neuromorphic chips are being designed to specifically mimic the human brain – and they could soon replace CPUs
Neuroscape Lab
AI services like Apple’s Siri and others operate by sending your queries to faraway data centers, which send back responses. The reason they rely on cloud-based computing is that today’s electronics don’t come with enough computing power to run the processing-heavy algorithms needed for machine learning. The typical CPUs most smartphones use could never handle a system like Siri on the device. But Dr. Chris Eliasmith, a theoretical neuroscientist and co-CEO of Canadian AI startup Applied Brain Research, is confident that a new type of chip is about to change that.
“Many have suggested Moore’s law is ending and that means we won’t get ‘more compute’ cheaper using the same methods,” Eliasmith says. He’s betting on the proliferation of ‘neuromorphics’ — a type of computer chip that is not yet widely known but already being developed by several major chip makers.
Traditional CPUs process instructions based on “clocked time” – information is transmitted at regular intervals, as if managed by a metronome. By packing in digital equivalents of neurons, neuromorphics communicate in parallel (and without the rigidity of clocked time) using “spikes” – bursts of electric current that can be sent whenever needed. Just like our own brains, the chip’s neurons communicate by processing incoming flows of electricity – each neuron able to determine from the incoming spike whether to send current out to the next neuron.
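The difference from clocked logic can be sketched with a leaky integrate-and-fire neuron, the textbook model of this kind of spiking behaviour (all constants here are illustrative, not from any real chip):

```python
# A leaky integrate-and-fire neuron sketch: incoming current accumulates
# on a "membrane", leaks away over time, and produces a spike only when
# the accumulated potential crosses a threshold.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Integrate incoming current each step; emit a spike (1) and reset
    when the potential crosses threshold, otherwise stay silent (0)."""
    potential, spikes = 0.0, []
    for current in input_current:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)     # send a burst of current downstream
            potential = 0.0      # reset after firing
        else:
            spikes.append(0)     # no clocked transmission between spikes
    return spikes

# Steady weak input produces only occasional spikes: most time steps,
# nothing is transmitted at all, which is where the power savings come from.
spikes = simulate_lif([0.3] * 10)
```

Contrast this with a clocked CPU, which moves data on every tick whether or not anything interesting has happened.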
What makes this a big deal is that these chips require far less power to process AI algorithms. For example, one neuromorphic chip made by IBM contains five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of power. An Intel processor would use anywhere from 35 to 140 watts, or up to 2000 times more power.
Eliasmith points out that neuromorphics aren’t new and that their designs have been around since the 80s. Back then, however, the designs required specific algorithms be baked directly into the chip. That meant you’d need one chip for detecting motion, and a different one for detecting sound. None of the chips acted as a general processor in the way that our own cortex does.
This was partly because there hasn’t been any way for programmers to design algorithms that can do much with a general purpose chip. So even as these brain-like chips were being developed, building algorithms for them has remained a challenge.
Eliasmith and his team are keenly focused on building tools that would allow a community of programmers to deploy AI algorithms on these new cortical chips.
Central to these efforts is Nengo, a compiler that developers can use to build their own algorithms for AI applications that will operate on general-purpose neuromorphic hardware. Compilers are software tools that translate the code programmers write into the complex instructions that get hardware to actually do something. What makes Nengo useful is its use of the familiar Python programming language – known for its intuitive syntax – and its ability to put the algorithms on many different hardware platforms, including neuromorphic chips. Pretty soon, anyone with an understanding of Python could be building sophisticated neural nets made for neuromorphic hardware.
“Things like vision systems, speech systems, motion control, and adaptive robotic controllers have already been built with Nengo,” Peter Suma, a trained computer scientist and the other co-CEO of Applied Brain Research, tells me.
Perhaps the most impressive system built using the compiler is Spaun, a project that in 2012 earned international praise for being the most complex brain model ever simulated on a computer. Spaun demonstrated that computers could be made to interact fluidly with the environment and perform human-like cognitive tasks like recognizing images and controlling a robot arm that writes down what it sees. The machine wasn’t perfect, but it was a stunning demonstration that computers could one day blur the line between human and machine cognition. Recently, by using neuromorphics, most of Spaun has been run 9,000x faster, using less energy than it would on conventional CPUs – and by the end of 2017, all of Spaun will be running on neuromorphic hardware.
Eliasmith won NSERC’s John C. Polanyi Award for that project – Canada’s highest recognition for a breakthrough scientific achievement – and once Suma came across the research, the pair joined forces to commercialize these tools.
“While Spaun shows us a way towards one day building fluidly intelligent reasoning systems, in the nearer term neuromorphics will enable many types of context-aware AIs,” says Suma. He points out that while today’s AIs like Siri remain offline until explicitly called into action, we’ll soon have artificial agents that are ‘always on’ and ever-present in our lives.
“Imagine a Siri that listens to and sees all of your conversations and interactions. You’ll be able to ask it for things like, ‘Who did I have that conversation with about doing the launch for our new product in Tokyo?’ or ‘What was that idea for my wife’s birthday gift that Melissa suggested?’” he says.
When I raised concerns that some company might then have an uninterrupted window into even the most intimate parts of my life, I’m reminded that because the AI would be processed locally on the device, there’s no need for that information to touch a server owned by a big company. And for Eliasmith, this ‘always on’ component is a necessary step towards true machine cognition. “The most fundamental difference between most available AI systems of today and the biological intelligent systems we are used to, is the fact that the latter always operate in real-time. Bodies and brains are built to work with the physics of the world,” he says.
Already, major efforts across the IT industry are heating up to get AI services into the hands of users. Companies like Apple, Facebook, Amazon, and even Samsung are developing conversational assistants they hope will one day become digital helpers.
Monday 6 March 2017

A Giant Neuron Has Been Found Wrapped Around the Entire Circumference of the Brain

By Hugo Angel,

Allen Institute for Brain Science

This could be where consciousness forms. For the first time, scientists have detected a giant neuron wrapped around the entire circumference of a mouse’s brain, and it’s so densely connected across both hemispheres, it could finally explain the origins of consciousness.

Using a new imaging technique, the team detected the giant neuron emanating from one of the best-connected regions in the brain, and say it could be coordinating signals from different areas to create conscious thought.

This recently discovered neuron is one of three that have been detected for the first time in a mammal’s brain, and the new imaging technique could help us figure out if similar structures have gone undetected in our own brains for centuries.

At a recent meeting of the Brain Research through Advancing Innovative Neurotechnologies initiative in Maryland, a team from the Allen Institute for Brain Science described how all three neurons stretch across both hemispheres of the brain, but the largest one wraps around the organ’s circumference like a “crown of thorns”.
You can see them highlighted in the image at the top of the page.

Lead researcher Christof Koch told Sara Reardon at Nature that they’ve never seen neurons extend so far across both regions of the brain before.
Oddly enough, all three giant neurons happen to emanate from a part of the brain that’s shown intriguing connections to human consciousness in the past – the claustrum, a thin sheet of grey matter that could be the most connected structure in the entire brain, based on volume.

This relatively small region is hidden beneath the inner surface of the neocortex in the centre of the brain, and communicates with almost all regions of the cortex to achieve many higher cognitive functions, such as

  • language,
  • long-term planning, and
  • advanced sensory tasks such as seeing and hearing.

“Advanced brain-imaging techniques that look at the white matter fibres coursing to and from the claustrum reveal that it is a neural Grand Central Station,” Koch wrote for Scientific American back in 2014. “Almost every region of the cortex sends fibres to the claustrum.”

The claustrum is so densely connected to several crucial areas in the brain that Francis Crick, of DNA double helix fame, referred to it as a “conductor of consciousness” in a 2005 paper co-written with Koch.

They suggested that it connects all of our external and internal perceptions together into a single unifying experience, like a conductor synchronises an orchestra, and strange medical cases in the past few years have only made their case stronger.

Back in 2014, a 54-year-old woman checked into the George Washington University Medical Faculty Associates in Washington, DC, for epilepsy treatment.

This involved gently probing various regions of her brain with electrodes to narrow down the potential source of her epileptic seizures, but when the team started stimulating the woman’s claustrum, they found they could effectively ‘switch’ her consciousness off and on again.

Helen Thomson reported for New Scientist at the time:
“When the team zapped the area with high-frequency electrical impulses, the woman lost consciousness. She stopped reading and stared blankly into space, she didn’t respond to auditory or visual commands, and her breathing slowed.

As soon as the stimulation stopped, she immediately regained consciousness with no memory of the event. The same thing happened every time the area was stimulated during two days of experiments.”

According to Koch, who was not involved in the study, this kind of abrupt and specific ‘stopping and starting‘ of consciousness had never been seen before.

Another experiment in 2015 examined the effects of claustrum lesions on the consciousness of 171 combat veterans with traumatic brain injuries.

They found that claustrum damage was associated with the duration, but not frequency, of loss of consciousness, suggesting that it could play an important role in the switching on and off of conscious thought, but another region could be involved in maintaining it.

And now Koch and his team have discovered extensive neurons in mouse brains emanating from this mysterious region.

In order to map neurons, researchers usually have to inject individual nerve cells with a dye, cut the brain into thin sections, and then trace the neuron’s path by hand.

It’s a surprisingly rudimentary technique for a neuroscientist to have to perform, and given that they have to destroy the brain in the process, it’s not one that can be done regularly on human organs.

Koch and his team wanted to come up with a technique that was less invasive, and engineered mice that could have specific genes in their claustrum neurons activated by a specific drug.

“When the researchers fed the mice a small amount of the drug, only a handful of neurons received enough of it to switch on these genes,” Reardon reports for Nature.

That resulted in production of a green fluorescent protein that spread throughout the entire neuron. The team then took 10,000 cross-sectional images of the mouse brain, and used a computer program to create a 3D reconstruction of just three glowing cells.

We should keep in mind that just because these new giant neurons are connected to the claustrum doesn’t mean that Koch’s hypothesis about consciousness is correct – we’re a long way from proving that yet.

It’s also important to note that these neurons have only been detected in mice so far, and the research has yet to be published in a peer-reviewed journal, so we need to wait for further confirmation before we can really delve into what this discovery could mean for humans.

But the discovery is an intriguing piece of the puzzle that could help us make sense of this crucial but enigmatic region of the brain, and how it might relate to the human experience of conscious thought.

The research was presented at the 15 February meeting of the Brain Research through Advancing Innovative Neurotechnologies initiative in Bethesda, Maryland.

ORIGINAL: ScienceAlert

28 FEB 2017

Google Unveils Neural Network with “Superhuman” Ability to Determine the Location of Almost Any Image

By Hugo Angel,

Guessing the location of a randomly chosen Street View image is hard, even for well-traveled humans. But Google’s latest artificial-intelligence machine manages it with relative ease.
Here’s a tricky task. Pick a photograph from the Web at random. Now try to work out where it was taken using only the image itself. If the image shows a famous building or landmark, such as the Eiffel Tower or Niagara Falls, the task is straightforward. But the job becomes significantly harder when the image lacks specific location cues, is taken indoors, or shows a pet, food, or some other detail.

Nevertheless, humans are surprisingly good at this task. To help, they bring to bear all kinds of knowledge about the world, such as the type and language of signs on display, the types of vegetation, architectural styles, the direction of traffic, and so on. Humans spend a lifetime picking up these kinds of geolocation cues.

So it’s easy to think that machines would struggle with this task. And indeed, they have.

Today, that changes thanks to the work of Tobias Weyand, a computer vision specialist at Google, and a couple of pals. These guys have trained a deep-learning machine to work out the location of almost any photo using only the pixels it contains.

Their new machine significantly outperforms humans and can even use a clever trick to determine the location of indoor images and pictures of specific things such as pets, food, and so on that have no location cues.

Their approach is straightforward, at least in the world of machine learning.

  • Weyand and co begin by dividing the world into a grid consisting of over 26,000 squares of varying size that depend on the number of images taken in that location.
    So big cities, which are the subjects of many images, have a more fine-grained grid structure than more remote regions where photographs are less common. Indeed, the Google team ignored areas like oceans and the polar regions, where few photographs have been taken.


  • Next, the team created a database of geolocated images from the Web and used the location data to determine the grid square in which each image was taken. This data set is huge, consisting of 126 million images along with their accompanying Exif location data.
  • Weyand and co used 91 million of these images to teach a powerful neural network to work out the grid location using only the image itself. Their idea is to input an image into this neural net and get as the output a particular grid location or a set of likely candidates. 
  • They then validated the neural network using the remaining 34 million images in the data set.
  • Finally they tested the network—which they call PlaNet—in a number of different ways to see how well it works.
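The grid step is what turns geolocation into a classification problem. A toy sketch with a fixed 10-degree grid shows the flavour of the idea (PlaNet’s ~26,000 cells are adaptively sized, so this is only an illustration):

```python
# Toy version of PlaNet's framing: map coordinates to discrete grid-cell
# ids, so a network can be trained to predict a cell id per image.

def latlng_to_cell(lat, lng, cell_deg=10):
    """Map a (latitude, longitude) pair to a grid-cell id."""
    row = int((lat + 90) // cell_deg)
    col = int((lng + 180) // cell_deg)
    return row * (360 // cell_deg) + col

def cell_center(cell, cell_deg=10):
    """Convert a predicted cell id back to a representative coordinate."""
    cols = 360 // cell_deg
    row, col = divmod(cell, cols)
    return (row * cell_deg - 90 + cell_deg / 2,
            col * cell_deg - 180 + cell_deg / 2)

# Training pairs each geotagged image with its cell id; at test time the
# network's predicted cell is mapped back to the cell's centre.
paris_cell = latlng_to_cell(48.85, 2.35)
approx_lat, approx_lng = cell_center(paris_cell)
```

Making the cells smaller where photos are dense and dropping empty regions like oceans, as Weyand and co do, keeps the number of classes manageable while giving street-level resolution where it matters.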

The results make for interesting reading. To measure the accuracy of their machine, they fed it 2.3 million geotagged images from Flickr to see whether it could correctly determine their location. “PlaNet is able to localize 3.6 percent of the images at street-level accuracy and 10.1 percent at city-level accuracy,” say Weyand and co. What’s more, the machine determines the country of origin in a further 28.4 percent of the photos and the continent in 48.0 percent of them.

That’s pretty good. But to show just how good, Weyand and co put PlaNet through its paces in a test against 10 well-traveled humans. For the test, they used an online game that presents a player with a random view taken from Google Street View and asks him or her to pinpoint its location on a map of the world.

Anyone can play the GeoGuessr game online. Give it a try—it’s a lot of fun and trickier than it sounds.

GeoGuessr screen capture example

Needless to say, PlaNet trounced the humans. “In total, PlaNet won 28 of the 50 rounds with a median localization error of 1131.7 km, while the median human localization error was 2320.75 km,” say Weyand and co. “[This] small-scale experiment shows that PlaNet reaches superhuman performance at the task of geolocating Street View scenes.”

An interesting question is how PlaNet performs so well without being able to use the cues that humans rely on, such as vegetation, architectural style, and so on. But Weyand and co say they know why: “We think PlaNet has an advantage over humans because it has seen many more places than any human can ever visit and has learned subtle cues of different scenes that are even hard for a well-traveled human to distinguish.”

They go further and use the machine to locate images that do not have location cues, such as those taken indoors or of specific items. This is possible when images are part of albums that have all been taken at the same place. The machine simply looks through other images in the album to work out where they were taken and assumes the more specific image was taken in the same place.
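That album heuristic is simple enough to sketch: take the most common grid cell among the album’s located photos (a hypothetical simplification of what the model actually does with album context):

```python
from collections import Counter

def locate_via_album(album_cells):
    """Guess the location of a cue-less image from its album: assume it
    was taken wherever most of the album's located photos were taken."""
    if not album_cells:
        return None
    # most_common(1) returns [(cell, count)] for the modal grid cell
    return Counter(album_cells).most_common(1)[0][0]

# Two photos geolocated to cell 486, one to cell 212: guess cell 486.
best_guess = locate_via_album([486, 486, 212])
```

The indoor shot or food photo inherits the album’s dominant location, which is why PlaNet can place images that contain no geographic cues at all.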

That’s impressive work that shows deep neural nets flexing their muscles once again. Perhaps more impressive still is that the model uses a relatively small amount of memory unlike other approaches that use gigabytes of the stuff. “Our model uses only 377 MB, which even fits into the memory of a smartphone,” say Weyand and co.

That’s a tantalizing idea—the power of a superhuman neural network on a smartphone. It surely won’t be long now!

Ref: PlaNet—Photo Geolocation with Convolutional Neural Networks

ORIGINAL: MIT Technology Review
by Emerging Technology from the arXiv
February 24, 2016

JPMorgan Software Does in Seconds What Took Lawyers 360,000 Hours

By Hugo Angel,

  • New software does in seconds what took staff 360,000 hours
  • Bank seeking to streamline systems, avoid redundancies

At JPMorgan Chase & Co., a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours.

The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of work each year by lawyers and loan officers. The software reviews documents in seconds, is less error-prone and never asks for vacation.

Attendees discuss software on Feb. 27, the eve of JPMorgan’s Investor Day.
Photographer: Kholood Eid/Bloomberg

While the financial industry has long touted its technological innovations, a new era of automation is now in overdrive as cheap computing power converges with fears of losing customers to startups. Made possible by investments in machine learning and a new private cloud network, COIN is just the start for the biggest U.S. bank. The firm recently set up technology hubs for teams specializing in big data, robotics and cloud infrastructure to find new sources of revenue, while reducing expenses and risks.

The push to automate mundane tasks and create new tools for bankers and clients — a growing part of the firm’s $9.6 billion technology budget — is a core theme as the company hosts its annual investor day on Tuesday.

Behind the strategy, overseen by Chief Operating Officer Matt Zames and Chief Information Officer Dana Deasy, is an undercurrent of anxiety: Though JPMorgan emerged from the financial crisis as one of few big winners, its dominance is at risk unless it aggressively pursues new technologies, according to interviews with a half-dozen bank executives.

Redundant Software

That was the message Zames had for Deasy when he joined the firm from BP Plc in late 2013. The New York-based bank’s internal systems, an amalgam from decades of mergers, had too many redundant software programs that didn’t work together seamlessly. “Matt said, ‘Remember one thing above all else: We absolutely need to be the leaders in technology across financial services,’” Deasy said last week in an interview. “Everything we’ve done from that day forward stems from that meeting.”

After visiting companies including Apple Inc. and Facebook Inc. three years ago to understand how their developers worked, the bank set out to create its own computing cloud, called Gaia, that went online last year. Machine learning and big-data efforts now reside on the private platform, which effectively has limitless capacity to support the firm’s thirst for processing power. The system already is helping the bank automate some coding activities and making its 20,000 developers more productive, saving money, Zames said. When needed, the firm can also tap into outside cloud services from Inc., Microsoft Corp. and International Business Machines Corp.

Tech Spending

JPMorgan will make some of its cloud-backed technology available to institutional clients later this year, allowing firms like BlackRock Inc. to access balances, research and trading tools. The move, which lets clients bypass salespeople and support staff for routine information, is similar to one Goldman Sachs Group Inc. announced in 2015.

JPMorgan’s total technology budget for this year amounts to 9 percent of its projected revenue — double the industry average, according to Morgan Stanley analyst Betsy Graseck. The dollar figure has inched higher as JPMorgan bolsters cyber defenses after a 2014 data breach, which exposed the information of 83 million customers.

“We have invested heavily in technology and marketing — and we are seeing strong returns,” JPMorgan said in a presentation Tuesday ahead of its investor day, noting that technology spending in its consumer bank totaled about $1 billion over the past two years.

Attendees inspect a JPMorgan Markets software kiosk at Investor Day.
Photographer: Kholood Eid/Bloomberg

One-third of the company’s budget is for new initiatives, a figure Zames wants to take to 40 percent in a few years. He expects savings from automation and retiring old technology will let him plow even more money into new innovations.

Not all of those bets, several of which involve distributed-ledger technology such as blockchain, will pay off, and JPMorgan says that is OK. One example executives are fond of mentioning: The firm built an electronic platform to help trade credit-default swaps that sits unused.

‘Can’t Wait’

“We’re willing to invest to stay ahead of the curve, even if in the final analysis some of that money will go to a product or a service that wasn’t needed,” Marianne Lake, the lender’s finance chief, told a conference audience in June. That’s “because we can’t wait to know what the outcome, the endgame, really looks like, because the environment is moving so fast.”

As for COIN, the program has helped JPMorgan cut down on loan-servicing mistakes, most of which stemmed from human error in interpreting 12,000 new wholesale contracts per year, according to its designers.

JPMorgan is scouring for more ways to deploy the technology, which learns by ingesting data to identify patterns and relationships. The bank plans to use it for other types of complex legal filings like credit-default swaps and custody agreements. Someday, the firm may use it to help interpret regulations and analyze corporate communications.
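The article’s description of software that “learns by ingesting data to identify patterns and relationships” can be illustrated with a deliberately toy sketch: a word-frequency classifier that labels contract clauses by overlap with labeled examples. Everything below — the categories, the sample clauses, and the function names — is invented for illustration, not JPMorgan’s actual COIN code, which would use far more sophisticated models.

```python
from collections import Counter

# Toy labeled clauses standing in for the training data a contract-review
# model might ingest; categories and wording are invented for this sketch.
TRAINING = {
    "termination": ["either party may terminate this agreement with notice",
                    "the lender may terminate the facility upon default"],
    "interest":    ["interest accrues daily at the stated rate",
                    "the borrower shall pay interest on the outstanding principal"],
}

def build_profiles(training):
    """Count word frequencies per category -- a crude 'pattern' model."""
    return {label: Counter(w for text in texts for w in text.split())
            for label, texts in training.items()}

def classify(clause, profiles):
    """Score a clause by word overlap with each category profile."""
    words = clause.lower().split()
    scores = {label: sum(profile[w] for w in words)
              for label, profile in profiles.items()}
    return max(scores, key=scores.get)

profiles = build_profiles(TRAINING)
print(classify("The borrower shall pay interest monthly", profiles))  # interest
```

The appeal of this family of techniques for the bank is that adding a new document type means adding labeled examples, not writing new rules by hand.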

Another program called X-Connect, which went into use in January, examines e-mails to help employees find colleagues who have the closest relationships with potential prospects and can arrange introductions.

Creating Bots
For simpler tasks, the bank has created bots to perform functions like granting access to software systems and responding to IT requests, such as resetting an employee’s password, Zames said. Bots are expected to handle 1.7 million access requests this year, doing the work of 140 people.
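The bots described above can be pictured as a request router: match keywords in a free-text ticket, dispatch to a handler, escalate anything unrecognized. This is a minimal sketch under invented assumptions — the handlers, routing rules, and names are not drawn from JPMorgan’s systems:

```python
import secrets

def reset_password(user):
    """Issue a temporary password (stand-in for a directory-service call)."""
    return f"temporary password for {user}: {secrets.token_urlsafe(8)}"

def grant_access(user, system):
    """Stand-in for provisioning access in an entitlement system."""
    return f"access to {system} granted for {user}"

def handle_request(user, text):
    """Route a free-text IT request by naive keyword matching."""
    text = text.lower()
    if "password" in text:
        return reset_password(user)
    if "access" in text:
        # naive: treat the last word of the request as the system name
        return grant_access(user, text.split()[-1])
    return "escalated to the IT help desk"

print(handle_request("jdoe", "please reset my password"))
print(handle_request("jdoe", "need access to gaia"))
```

The economics in the article follow from exactly this shape: each routed request costs nearly nothing, so millions of routine tickets stop consuming human time.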

Matt Zames
Photographer: Kholood Eid/Bloomberg

While growing numbers of people in the industry worry such advancements might someday take their jobs, many Wall Street personnel are more focused on the benefits. A survey of more than 3,200 financial professionals by recruiting firm Options Group last year found that a majority expect new technology to advance their careers, for example by improving workplace performance.

“Anything where you have back-office operations and humans kind of moving information from point A to point B that’s not automated is ripe for that,” Deasy said. “People always talk about this stuff as displacement. I talk about it as freeing people to work on higher-value things, which is why it’s such a terrific opportunity for the firm.”

To help spur internal disruption, the company keeps tabs on 2,000 technology ventures, using about 100 in pilot programs that will eventually join the firm’s growing ecosystem of partners. For instance, the bank’s machine-learning software was built with Cloudera Inc., a software firm that JPMorgan first encountered in 2009.

“We’re starting to see the real fruits of our labor,” Zames said. “This is not pie-in-the-sky stuff.”


by Hugh Son
February 27, 2017