3D printed artificial intelligence device identifies objects at the speed of light

By Hugo Angel,

Announcing Cirq: An Open Source Framework for NISQ Algorithms

By Hugo Angel,

Over the past few years, quantum computing has experienced growth not only in the construction of quantum hardware, but also in the development of quantum algorithms. With the availability of Noisy Intermediate-Scale Quantum (NISQ) computers (devices with ~50–100 qubits and high-fidelity quantum gates), the development of algorithms to understand the power of these machines is of increasing importance. However, a common problem when designing a quantum algorithm on a NISQ processor is how to take full advantage of these limited quantum devices—spending resources on the hardest part of the problem rather than on overheads from poor mappings between the algorithm and the hardware. Furthermore, some quantum processors have complex geometric constraints and other nuances; ignoring these will result in either a faulty quantum computation or a computation that is modified and sub-optimal.*

Today at the First International Workshop on Quantum Software and Quantum Machine Learning (QSML), the Google AI Quantum team announced the public alpha of Cirq, an open source framework for NISQ computers. Cirq is focused on near-term questions and helping researchers understand whether NISQ quantum computers are capable of solving computational problems of practical importance. Cirq is licensed under Apache 2 and is free to be modified or embedded in any commercial or open source package.

Once installed, Cirq enables researchers to write quantum algorithms for specific quantum processors. Cirq gives users fine-tuned control over quantum circuits: specifying gate behavior using native gates, placing these gates appropriately on the device, and scheduling the timing of these gates within the constraints of the quantum hardware. Data structures are optimized for writing and compiling these quantum circuits to allow users to get the most out of NISQ architectures. Cirq supports running these algorithms locally on a simulator, and is designed to easily integrate with future quantum hardware or larger simulators via the cloud.
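As a rough illustration of that workflow, the sketch below lays out two grid qubits, arranges gates into explicit moments (time slices), and samples the circuit on the local simulator. It uses the present-day public Cirq API, which may differ in detail from the 2018 alpha, and the gate choices are illustrative only.

```python
import cirq

# Two qubits laid out on a 2D grid, mirroring hardware connectivity.
q0, q1 = cirq.GridQubit(0, 0), cirq.GridQubit(0, 1)

# Moments make the timing of gates explicit: one time slice per Moment.
circuit = cirq.Circuit([
    cirq.Moment([cirq.X(q0) ** 0.5]),              # sqrt-of-X, a native-style gate
    cirq.Moment([cirq.CZ(q0, q1)]),                # two-qubit entangling gate
    cirq.Moment([cirq.measure(q0, q1, key='m')]),  # readout
])

# Run locally on the built-in simulator.
result = cirq.Simulator().run(circuit, repetitions=100)
print(result.histogram(key='m'))
```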

We are also announcing the release of OpenFermion-Cirq, an example of a Cirq based application enabling near-term algorithms. OpenFermion is a platform for developing quantum algorithms for chemistry problems, and OpenFermion-Cirq is an open source library which compiles quantum simulation algorithms to Cirq. The new library uses the latest advances in building low depth quantum algorithms for quantum chemistry problems to enable users to go from the details of a chemical problem to highly optimized quantum circuits customized to run on particular hardware. For example, this library can be used to easily build quantum variational algorithms for simulating properties of molecules and complex materials.
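For a feel of the objects involved, here is a minimal sketch (assuming the OpenFermion package is installed; it is not taken from the library's documentation) that builds a toy fermionic operator and maps it to qubit operators with the Jordan-Wigner transform. OpenFermion-Cirq's job is then to compile such qubit operators into Cirq circuits, for example as variational ansatz circuits.

```python
from openfermion import FermionOperator
from openfermion.transforms import jordan_wigner

# A toy two-mode fermionic Hamiltonian: hopping between modes 0 and 1
# plus a density-density interaction (coefficients are made up).
hamiltonian = (
    FermionOperator('0^ 1', -1.0)
    + FermionOperator('1^ 0', -1.0)
    + FermionOperator('0^ 0 1^ 1', 0.5)
)

# Map the fermionic operator to qubit (Pauli) operators; a library such as
# OpenFermion-Cirq can then turn these terms into Cirq circuits.
qubit_hamiltonian = jordan_wigner(hamiltonian)
print(qubit_hamiltonian)
```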

Quantum computing will require strong cross-industry and academic collaborations if it is going to realize its full potential. In building Cirq, we worked with early testers to gain feedback and insight into algorithm design for NISQ computers. Below are some examples of Cirq work resulting from these early adopters:

To learn more about how Cirq is helping enable NISQ algorithms, please visit the links above where many of the adopters have provided example source code for their implementations.

Today, the Google AI Quantum team is using Cirq to create circuits that run on Google’s Bristlecone processor. In the future, we plan to make this processor available in the cloud, and Cirq will be the interface in which users write programs for this processor. In the meantime, we hope Cirq will improve the productivity of NISQ algorithm developers and researchers everywhere. Please check out the GitHub repositories for Cirq and OpenFermion-Cirq — pull requests welcome!

 

Acknowledgements
We would like to thank Craig Gidney for leading the development of Cirq, Ryan Babbush and Kevin Sung for building OpenFermion-Cirq and a whole host of code contributors to both frameworks. 



* An analogous situation is how early classical programmers needed to run complex programs in very small memory spaces by paying careful attention to the lowest level details of the hardware.

ORIGINAL: AI Blog

Wednesday, July 18, 2018

  Category: Google, QSML, Quantum, Software

Nvidia launches AI computer to give autonomous robots better brains

By Hugo Angel,

Nvidia’s chips will be used to power autonomous robots like the one above, which guides customers around Lowe’s stores.  Image: Lowe’s


Chip designer Nvidia has been an integral part of the recent AI renaissance, providing the processors that power much of the field’s research and development. Now, it’s looking to the future. At Computex 2018, it unveiled two new products:
  • Nvidia Isaac, a new developer platform, and the
  • Jetson Xavier, an AI computer, both built to power autonomous robots.

Nvidia CEO Jensen Huang said Isaac and Jetson Xavier were designed to capture the next stage of AI innovation as it moves from software running in the cloud to robots that navigate the real world. “AI, in combination with sensors and actuators, will be the brain of a new generation of autonomous machines,” said Huang. “Someday, there will be billions of intelligent machines in manufacturing, home delivery, warehouse logistics and much more.”

The Isaac platform is a set of software tools that will make it simpler for companies to develop and train robots.

It includes

  • a collection of APIs to connect to 3D cameras and sensors;
  • a library of AI accelerators to keep algorithms running smoothly and without lag; and
  • a new simulation environment, Isaac Sim, for training and testing bots in a virtual space.

Doing so is quicker and safer than IRL testing, but it can’t match the complexity of the real world.

The Jetson Xavier chipboard (center) and covered developer kit (right). Photo by Vlad Savov / The Verge


But the heart of the Isaac platform is Nvidia’s new Jetson Xavier computer, an incredibly compact piece of hardware comprising a number of processing components. These include:

  • a Volta Tensor Core GPU,
  • an eight-core ARM64 CPU,
  • two NVDLA deep learning accelerators, and
  • processors for static images and video.

In total, Jetson Xavier contains more than 9 billion transistors and delivers over 30 TOPS (trillion operations per second) of compute. And it consumes just 30 watts of power—about half the power drawn by a typical 60-watt incandescent light bulb.

The cost of one Jetson Xavier (along with access to the Isaac platform) is $1,299, and Huang claims the computer provides the same processing power as a $10,000 workstation. This comparison is not that meaningful without knowing exactly what chips the Jetson Xavier is being compared with, but it’s undeniable that this hardware offers a lot of power for a reasonable price.

The really interesting thing, of course, is not Nvidia’s hardware, but what developers will do with it. AI-powered robots are becoming more common; early use cases include security, food delivery, and inventory management in retail stores. Nvidia’s chips are already used to power robots made by a company called Fellow, which are being trialed by Lowe’s. The AI robot revolution is just beginning.

ORIGINAL: The Verge
By James Vincent

AI will spell the end of capitalism

By Hugo Angel,

A resident rides past an image depicting German philosophers Karl Marx and Friedrich Engels, former Soviet leaders Vladimir Lenin and Joseph Stalin and former Chinese leader Mao Zedong in Shanghai, China. April 25, 2016. (Aly Song/Reuters)

Feng Xiang, a professor of law at Tsinghua University, is one of China’s most prominent legal scholars. He spoke at the Berggruen Institute’s China Center workshop on artificial intelligence in March in Beijing.

BEIJING — The most momentous challenge facing socio-economic systems today is the arrival of artificial intelligence. If AI remains under the control of market forces, it will inexorably result in a super-rich oligopoly of data billionaires who reap the wealth created by robots that displace human labor, leaving massive unemployment in their wake.

But China’s socialist market economy could provide a solution to this. If AI rationally allocates resources through big data analysis, and if robust feedback loops can supplant the imperfections of “the invisible hand” while fairly sharing the vast wealth it creates, a planned economy that actually works could at last be achievable.

The more AI advances into a general-purpose technology that permeates every corner of life, the less sense it makes to allow it to remain in private hands that serve the interests of the few instead of the many. More than anything else, the inevitability of mass unemployment and the demand for universal welfare will drive the idea of socializing or nationalizing AI.

Marx’s dictum, “From each according to their abilities, to each according to their needs,” needs an update for the 21st century: “From the inability of an AI economy to provide jobs and a living wage for all, to each according to their needs.”

Even at this early stage, the idea that digital capitalism will somehow make social welfare a priority has already proven to be a fairytale. The billionaires of Google and Apple, who have been depositing company profits in offshore havens to avoid taxation, are hardly paragons of social responsibility. The ongoing scandal around Facebook’s business model, which puts profitability above responsible citizenship, is yet another example of how in digital capitalism, private companies only look after their own interests at the expense of the rest of society.

One can readily see where this is all headed once technological unemployment accelerates. “Our responsibility is to our shareholders,” the robot owners will say. “We are not an employment agency or a charity.”

These companies have been able to get away with their social irresponsibility because the legal system and its loopholes in the West are geared to protect private property above all else. Of course, in China, we have big privately owned Internet companies like Alibaba and Tencent. But unlike in the West, they are monitored by the state and do not regard themselves as above or beyond social control.

It is the very pervasiveness of AI that will spell the end of market dominance. The market may function reasonably, if unequally, as long as industry creates employment opportunities for most people. But when industry only produces joblessness, as robots take over more and more, there is no good alternative but for the state to step in. As AI invades economic and social life, all private law-related issues will soon become public ones. More and more, regulation of private companies will become a necessity to maintain some semblance of stability in societies roiled by constant innovation.

I consider this historical process a step closer to a planned market economy. Laissez-faire capitalism as we have known it can lead nowhere but to a dictatorship of AI oligarchs who gather rents because the intellectual property they own rules over the means of production. On a global scale, it is easy to envision this unleashed digital capitalism leading to a battle between robots for market share that will surely end as disastrously as the imperialist wars did in an earlier era.

For the sake of social well-being and security, individuals and private companies should not be allowed to possess any exclusive cutting-edge technology or core AI platforms. Like nuclear and biochemical weapons, as long as they exist, nothing other than a strong and stable state can ensure society’s safety. If we don’t nationalize AI, we could sink into a dystopia reminiscent of the early misery of industrialization, with its satanic mills and street urchins scrounging for a crust of bread.

The dream of communism is the elimination of wage labor. If AI is bound to serve society instead of private capitalists, it promises to do so by freeing an overwhelming majority from such drudgery while creating wealth to sustain all.

If the state controls the market, instead of digital capitalism controlling the state, true communist aspirations will be achievable. And because AI increasingly enables the management of complex systems by processing massive amounts of information through intensive feedback loops, it presents, for the first time, a real alternative to the market signals that have long justified laissez-faire ideology — and all the ills that go with it.

Going forward, China’s socialist market economy, which aims to harness the fruits of production for the whole population and not just a sliver of elites operating in their own self-centered interests, can lead the way toward this new stage of human development.

If properly regulated in this way, we should celebrate, not fear, the advent of AI. If it is brought under social control, it will finally free workers from peddling their time and sweat only to enrich those at the top. The communism of the future ought to adopt a new slogan: “Robots of the world, unite!”

This was produced by The WorldPost, a partnership of the Berggruen Institute and The Washington Post.

ORIGINAL: The Washington Post

AI is the new space race. Here’s what the biggest countries are doing

By Hugo Angel,

It’s a space-race redux, where world superpowers battle to define generations of technology to come. Unlike space, there’s no clear finish line.

Despite the narrative of a US-China duopoly, other countries, including Canada and the UK, have ramped up investment in the technology, announcing deals to fund private and public AI ventures. After years of a slow trickle, the first months of 2018 have seen an explosion of government-backed projects announced all over the world. Here are some of the biggest and most consequential.

Refresher: US and China

China is investing at least $7 billion through 2030, including $2 billion for a research park in Beijing. The Chinese government foresees a $150 billion AI industry by that time, and has the most comprehensive national plan to become a leader in the technology. Chinese startups also received 48% of all global AI startup funding, according to CB Insights.

The US has no central AI policy, but individual projects are funded by defense and intelligence research agencies such as DARPA and IARPA. While little is being done at the national level, AI industry and research in the United States are led by academia and private companies.

United Kingdom

Last week, the UK announced a deal between private and public groups that would bring more than $200 million of AI investment into the country, a representative of the initiative told Quartz. The UK government is pledging $30 million to build AI tech incubators, while VC firm Global Brain has pledged a $50 million fund and Chrysalix will put up a fund of more than $100 million. The government will also take a more proactive role in funding academic research, financially supporting 1,000 AI PhDs at any given time. This supports universities such as Cambridge and Oxford, which both have large artificial intelligence programs.

The House of Lords also released a report earlier this month acknowledging that the UK can’t outspend countries like China and the US, but can instead specialize in areas like AI ethics to gain competitive advantages.

European Union

An April 25 report from the European Commission outlines a $24 billion (€20 billion) investment between 2018 and 2020, with the expectation that those funds will come from public and private entities. The Commission is starting that with a 2018 investment of $1.8 billion (€1.5 billion) in research, as a part of the EU’s Horizon 2020 fund. However, this is just a preliminary document—the Commission expects to have a fully fleshed out plan for AI investment by the end of 2018.

Germany

Chancellor Angela Merkel spoke on April 22 about the importance of competing with China in AI, given China’s aggressive goal of becoming the world’s leading nation in the technology, but the German government hasn’t committed to investing. However, Amazon is investing $1.5 million in awards and building a new research center next to a Max Planck Institute AI campus in Tübingen, which already does core AI research. One venture capitalist’s analysis of the world’s AI startup hubs pegs Berlin as the fourth largest in terms of the number of AI startups, behind Silicon Valley, London, and Paris.

France

The French government will invest $1.8 billion (€1.5 billion) in AI research until 2022, president Emmanuel Macron announced in late March. The country’s AI initiative has a unique focus on data, with plans to make private companies publicly release their data for use in AI on a case-by-case basis. Other initiatives typically focus on backing research firms, rather than making private companies play nice with others.

An undisclosed portion of the funding will go towards an AI research partnership with Germany.

Canada

Canada recognized the government’s role in AI research before many others did, making a $125 million commitment to AI research in March 2017. After the election of US president Donald Trump, Canada also started recruiting AI talent, playing off the nationalist rhetoric coming out of the White House. The Quebec government has warned that if investment isn’t ramped up significantly to match that of the UK and China, Canada will fall behind.

Russia

Vladimir Putin has made some grandiose comments on artificial intelligence, saying that the leader in AI will “rule the world.” While the country spends an estimated $12.5 million annually on AI, Samuel Bendett writes for Defense One that Russia’s real strength in AI comes from the ability for the government to corral participation between public and private organizations. Many of Russia’s AI demonstrations are military in nature, like AI-assisted fighter jets and automated artillery.

 

ORIGINAL: Quartz

Written by Dave Gershgorn

May 02, 2018

Chinese AI startup dwarfs global rivals with $4.5 billion valuation

By Hugo Angel,

Chinese artificial intelligence startups are attracting ever richer valuations as the country bets big on the emerging technology.

SenseTime, which specializes in software that can identify people’s faces in surveillance videos, said Monday that it had secured $600 million in fresh funds and is already in talks with investors to raise more money. The latest cash injection values SenseTime at more than $4.5 billion, according to a person familiar with the company’s fundraising.

That’s more than any other artificial intelligence startup on the planet, according to CB Insights. The second biggest AI startup is also Chinese: Shanghai-based Yitu Technology with a valuation of about $2.4 billion.

Related: China’s Didi said to be worth $56B after raising more cash

SenseTime tapped big names for cash in its latest funding round, including China’s leading e-commerce company, Alibaba (BABA). It had already announced an investment from US computer chip maker Qualcomm (QCOM) last year.

The new funding will “help us widen the scope” for putting artificial intelligence to use in different industries, SenseTime CEO Xu Li said in a statement. Specifically, the company said it will pump more money into areas like security, smartphones, advertising and autonomous driving.

The investment comes amid intensifying commitment by corporations and governments in AI research and development, despite warnings from some tech leaders and academics of the potential misuse of the emerging technology.

The world’s biggest tech companies like Google (GOOG) and Facebook (FB) are pouring resources into artificial intelligence. Last week, Apple (AAPL) said it had poached Google’s AI chief to help boost its own efforts in the technology.

Related: Google is opening an artificial intelligence center in China

Facial recognition technology in particular is big business in China, including in government efforts to keep tabs on citizens.

SenseTime’s software is already used by Chinese smartphone makers like Xiaomi, Vivo and Oppo to organize photo albums or unlock phones by scanning faces.


A demonstration of SenseTime’s facial recognition technology at an industry conference.

Alibaba said it is still figuring out how to use SenseTime tech in its businesses. One potential area could be in the company’s cashless grocery store chain.

Chinese retailer Suning, which has also invested in SenseTime, is already using the startup’s software to develop cashier-free stores. They’re similar to Amazon Go, where shoppers can just grab products and walk out, with AI software determining what was taken from shelves and settling the bill electronically.

SenseTime said it has more than 400 partners and clients using its AI applications.

That includes city governments that have paired facial recognition software with the massive number of surveillance cameras trained on city streets. AI software analyzes the footage, scanning faces to identify people or analyzing crowds to detect suspicious behavior.

Related: Control AI now or brace for nightmare future, experts warn

SenseTime said as far as it knows, Chinese police have only used the company’s tech to catch criminals.

But critics have slammed the deployment of AI to track Chinese citizens, saying it violates privacy and targets political dissidents.

China has said it wants to be the dominant player in AI by 2030, aiming to build an industry worth $150 billion. The country’s ambitious surveillance plans have helped spur spending on the technology.

Investment in facial recognition tech, including government grants, surged to $1.7 billion in 2017, a more than sixfold increase from the previous year, according to a CB Insights report.

All that cash has made China home to some of the most valuable AI startups on the planet, including SenseTime, Yitu and Megvii, according to CB Insights.

ORIGINAL: CNN Money

  Category: AI, Alibaba

Move Over Moore’s Law, Make Way for Huang’s Law

By Hugo Angel,

Graphics processors are on a supercharged development path that eclipses Moore’s Law, says Nvidia’s Jensen Huang


Nvidia CEO Jensen Huang on stage at the GTC 2018 conference Photo: Tekla Perry

 An exuberant Jensen Huang, who gave a keynote and popped up on stage during various events at Nvidia’s 2018 GPU Technology Conference (GTC) held in San Jose, Calif. last week, repeatedly made the point that due to extreme advances in technology, graphics processing units (GPUs) are governed by a law of their own.

"There’s a new law going on," he says, "a supercharged law."

Huang, who is CEO of Nvidia, didn’t call it Huang’s Law; I’m guessing he’ll leave that to others. After all, Gordon Moore wasn’t the one who gave Moore’s Law its now-famous moniker. (Moore’s Law—Moore himself called it an observation—refers to the regular doubling of the number of components per integrated circuit that drove a dramatic reduction in the cost of computing power.)

But Huang did make sure nobody attending GTC missed the memo.

Just how fast does GPU technology advance?

In his keynote address, Huang pointed out that Nvidia’s GPUs today are 25 times faster than five years ago. If they were advancing according to Moore’s law, he said, they only would have increased their speed by a factor of 10.
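A quick back-of-the-envelope check of those figures, assuming the common reading of Moore’s Law as a doubling roughly every 18 months:

```python
import math

years = 5
moore_factor = 2 ** (years * 12 / 18)                   # ~10x over five years
gpu_factor = 25                                          # Huang's figure for GPUs
implied_doubling = years * 12 / math.log2(gpu_factor)    # ~12.9 months per doubling
print(round(moore_factor, 1), round(implied_doubling, 1))
```

In other words, a 25-fold gain over five years amounts to GPU performance doubling roughly every 13 months rather than every 18.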

Huang later considered the increasing power of GPUs in terms of another benchmark: the time to train AlexNet, a neural network trained on 15 million images. He said that five years ago, it took AlexNet six days on two of Nvidia’s GTX 580s to go through the training process; with the company’s latest hardware, the DGX-2, it takes 18 minutes—a factor of 500.

So Huang was throwing a variety of numbers out there; it seems he’s still working out the exact multiple he’s talking about. But he was clear about the reason that GPUs need a law of their own—they benefit from simultaneous advances on multiple fronts: architecture, interconnects, memory technology, algorithms, and more.

"The innovation isn’t just about chips," he said. "It’s about the entire stack."

GPUs are also advancing more quickly than CPUs because they rely upon a parallel architecture, Jesse Clayton, an Nvidia senior manager, pointed out in another session.

ORIGINAL: Spectrum
By Tekla S. Perry

Powerful New Algorithm Is a Big Step Towards Whole-Brain Simulation

By Hugo Angel,

Image Credit: Jolygon / Shutterstock.com

The renowned physicist Dr. Richard Feynman once said: “What I cannot create, I do not understand. Know how to solve every problem that has been solved.”

An increasingly influential subfield of neuroscience has taken Feynman’s words to heart. To theoretical neuroscientists, the key to understanding how intelligence works is to recreate it inside a computer. Neuron by neuron, these whizzes hope to reconstruct the neural processes that lead to a thought, a memory, or a feeling.

With a digital brain in place, scientists can test out current theories of cognition or explore the parameters that lead to a malfunctioning mind. As philosopher Dr. Nick Bostrom at the University of Oxford argues, simulating the human mind is perhaps one of the most promising (if laborious) ways to recreate—and surpass—human-level ingenuity.

There’s just one problem: our computers can’t handle the massively parallel nature of our brains. Squished within a three-pound organ are over

  • 100 billion interconnected neurons and
  • trillions of synapses.

Even the most powerful supercomputers today balk at that scale: so far, machines such as the K computer at the Advanced Institute for Computational Science in Kobe, Japan can tackle at most ten percent of neurons and their synapses in the cortex.

This shortfall is partly a software problem. As computational hardware inevitably gets faster, algorithms increasingly become the linchpin of whole-brain simulation.

This month, an international team completely revamped the structure of a popular simulation algorithm, developing a powerful piece of technology that dramatically slashes computing time and memory use.


Using today’s simulation algorithms, only small progress (dark red area of center brain) would be possible on the next generation of supercomputers. However, the new technology allows researchers to simulate larger parts of the brain while using the same amount of computer memory. This makes the new technology more appropriate for future use in supercomputers for whole-brain level simulation. Image Credit: Forschungszentrum Jülich/Frontiers

The new algorithm is compatible with a range of computing hardware, from laptops to supercomputers. When future exascale supercomputers hit the scene—projected to be 10 to 100 times more powerful than today’s top performers—the algorithm can immediately run on those computing beasts.

"With the new technology we can exploit the increased parallelism of modern microprocessors a lot better than previously, which will become even more important in exascale computers," said study author Jakob Jordan at the Jülich Research Center in Germany, who published the work in Frontiers in Neuroinformatics.

"It’s a decisive step towards creating the technology to achieve simulations of brain-scale networks," the authors said.

The Trouble With Scale

Current supercomputers are composed of hundreds of thousands of subdomains called nodes. Each node has multiple processing centers that can support a handful of virtual neurons and their connections.

A main issue in brain simulation is how to effectively represent millions of neurons and their connections inside these processing centers to cut time and power.

One of the most popular simulation algorithms today is the Memory-Usage Model. Before scientists simulate changes in their neuronal network, they need to first create all the neurons and their connections within the virtual brain using the algorithm.

Here’s the rub: for any neuronal pair, the model stores all information about connectivity in each node that houses the receiving neuron—the postsynaptic neuron.

In other words, the presynaptic neuron, which sends out electrical impulses, is shouting into the void; the algorithm has to figure out where a particular message came from by solely looking at the receiver neuron and data stored within its node.

It sounds like a strange setup, but the model allows all the nodes to construct their particular portion of the neural network in parallel. This dramatically cuts down boot-up time, which is partly why the algorithm is so popular.

But as you probably guessed, it comes with severe problems in scaling. The sender node broadcasts its message to all receiver neuron nodes. This means that each receiver node needs to sort through every single message in the network—even ones meant for neurons housed in other nodes.

That means a huge portion of messages get thrown away in each node, because the addressee neuron isn’t present in that particular node. Imagine overworked post office staff skimming an entire country’s worth of mail to find the few that belong to their jurisdiction. Crazy inefficient, but that’s pretty much what goes on in the Memory-Usage Model.

The problem becomes worse as the size of the simulated neuronal network grows.  Each node needs to dedicate memory storage space to an “address book” listing all its neural inhabitants and their connections. At the scale of billions of neurons, the “address book” becomes a huge memory hog.

Size Versus Source

The team hacked the problem by essentially adding a zip code to the algorithm.

Here’s how it works. The receiver nodes contain two blocks of information:

  • The first is a database that stores data about all the sender neurons that connect to the nodes. Because synapses come in several sizes and types that differ in their memory consumption, this database further sorts its information based on the type of synapses formed by neurons in the node.
    This setup already dramatically differs from its predecessor, in which connectivity data is sorted by the incoming neuronal source, not synapse type. Because of this, the node no longer has to maintain its “address book.”
    "The size of the data structure is therefore independent of the total number of neurons in the network," the authors explained.
  • The second chunk stores data about the actual connections between the receiver node and its senders. Similar to the first chunk, it organizes data by the type of synapse. Within each type of synapse, it then separates data by the source (the sender neuron).
    In this way, the algorithm is far more specific than its predecessor: rather than storing all connection data in each node, the receiver nodes only store data relevant to the virtual neurons housed within.
    The team also gave each sender neuron a target address book. During transmission the data is broken up into chunks, with each chunk containing a zip code of sorts directing it to the correct receiving nodes.

Rather than a computer-wide message blast, here the data is confined to the receiver neurons that they’re supposed to go to.
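The toy sketch below contrasts the two delivery schemes. It is not the actual NEST code; the node count, round-robin placement rule, and spike lists are invented purely for illustration.

```python
from collections import defaultdict

N_NODES = 4
host_node = lambda neuron: neuron % N_NODES   # neurons placed round-robin across nodes


def broadcast_delivery(spikes, inboxes):
    """Old scheme: every spike goes to every node; each node inspects it and
    discards the targets it does not host."""
    inspections = 0
    for sender, targets in spikes:
        for node in range(N_NODES):
            for target in targets:
                inspections += 1
                if host_node(target) == node:
                    inboxes[node].append((sender, target))
    return inspections


def targeted_delivery(spikes, inboxes):
    """New scheme: the sender's 'target address book' routes each spike only
    to the nodes that actually host its targets."""
    inspections = 0
    for sender, targets in spikes:
        by_node = defaultdict(list)
        for target in targets:
            by_node[host_node(target)].append(target)   # zip-code style routing
        for node, local_targets in by_node.items():
            for target in local_targets:
                inspections += 1
                inboxes[node].append((sender, target))
    return inspections


spikes = [(0, [3, 7, 11]), (5, [2])]                     # (sender, its target neurons)
print(broadcast_delivery(spikes, defaultdict(list)))     # 16 inspections
print(targeted_delivery(spikes, defaultdict(list)))      # 4 inspections
```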

Speedy and Smart

The modifications panned out.

In a series of tests, the new algorithm performed much better than its predecessors in terms of scalability and speed. On the supercomputer JUQUEEN in Germany, the algorithm ran 55 percent faster than previous models on a random neural network, mainly thanks to its streamlined data transfer scheme.

At a network size of half a billion neurons, for example, simulating one second of biological events took about five minutes of JUQUEEN runtime using the new algorithm. Its predecessor clocked in at six times that.

This really “brings investigations of fundamental aspects of brain function, like plasticity and learning unfolding over minutes…within our reach,” said study author Dr. Markus Diesmann at the Jülich Research Centre.

As expected, several scalability tests revealed that the new algorithm is far more proficient at handling large networks, reducing the time it takes to process tens of thousands of data transfers by roughly threefold.

"The novel technology profits from sending only the relevant spikes to each process," the authors concluded. Because computer memory is now uncoupled from the size of the network, the algorithm is poised to tackle brain-wide simulations, the authors said.

While revolutionary, the team notes that a lot more work remains to be done.

  • For one, mapping the structure of actual neuronal networks onto the topology of computer nodes should further streamline data transfer.
  • For another, brain simulation software needs to regularly save its process so that in case of a computer crash, the simulation doesn’t have to start over.

"Now the focus lies on accelerating simulations in the presence of various forms of network plasticity," the authors concluded. With that solved, the digital human brain may finally be within reach.

2.4. NEST Simulator (Frontiers in Neuroinformatics)

“NEST is an open-source software tool that is designed for the simulation of large-scale networks of single-compartment spiking neuron models (Gewaltig and Diesmann, 2007). It is developed and maintained by the NEST initiative under the GNU General Public License, version 2, and can be freely downloaded from the website of the NEST simulator. The collaborative development of NEST follows an iterative, incremental strategy derived from the requirements and constraints given by the community (Diesmann and Gewaltig, 2002). Users can control simulations either via a built-in scripting language (SLI) or a Python module (PyNEST; Eppler et al., 2009; Zaytsev and Morrison, 2014). While the definition of the network, in terms of the specification of neuronal populations and connections, can be conveniently performed in procedural form in an interpreted language, all compute-intensive tasks such as the actual generation of connectivity or the propagation of neuron dynamics are executed by the simulation kernel implemented in C++. NEST supports a wide variety of computing platforms, from laptops to moderately-sized clusters and supercomputers using a common codebase. To optimally use the available compute resources, NEST supports hybrid parallelization employing MPI for inter-node communication and multi-threading via OpenMP within each MPI process. Running multiple threads instead of multiple MPI processes per compute node makes better use of the available memory (Ippen et al., 2017).

Neurons are distributed in a round-robin fashion across all available threads according to their global id (GID), which labels all neurons and devices in the network uniquely by order of creation. The round-robin distribution of neurons implements a simple form of static load balancing as it ensures that neurons which belong to the same population and are hence expected to exhibit similar activity patterns, are evenly distributed across cores. Devices for stimulation and recording, are duplicated on each thread and only send to or record from thread-local neurons to avoid expensive communication of status variables. Events between neurons are communicated between processes by collective MPI functions (see section 3.2). Most data structures are private to each thread within a compute node. This separation is however relaxed during writing of events to MPI buffers and reading of events from the buffers to improve efficiency and reduce serial overhead (see sections 3.1.3 and 3.2). NEST offers a range of neuron and synapse models from low to high complexity. Users can extend the range of available models by employing a domain-specific model description language (Plotnikov et al., 2016) or by providing an appropriate implementation in C++. The simulation kernel supports further biophysical mechanisms, for example neuromodulated plasticity (Potjans et al., 2010), structural plasticity (Diaz-Pier et al., 2016), coupling between neurons via gap junctions (Hahne et al., 2015), and non-spiking neurons with continuous interactions, such as rate-based models (Hahne et al., 2017).”

From:

Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

  • 1Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
  • 2Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
  • 3Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
  • 4Advanced Institute for Computational Science, RIKEN, Kobe, Japan
  • 5Computational Engineering Applications Unit, RIKEN, Wako, Japan
  • 6Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
  • 7Department of Computational Science and Technology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
  • 8Simulation Laboratory Neuroscience – Bernstein Facility for Simulation and Database Technology, Jülich Research Centre, Jülich, Germany

Even with massive supercomputers, it’s next to impossible to simulate 100% of the brain. With an earlier version of the algorithm, the researchers were able to reproduce only about 1% of the brain’s neurons on the petascale K supercomputer.

The reason is that the memory required per processor to simulate even 1% of the human brain is very high. To simulate the entire brain, the memory required per processor would jump to almost 100 times what current supercomputers have.

In the future, exascale supercomputers (with more processors per node) should make it possible to scale the NEST algorithm to faster whole-brain simulation. The memory per processor and the number of nodes, however, will stay roughly the same, so the advantage of the advanced NEST algorithm is that it optimizes the memory the system requires to fit within those limits.

 

The researchers have described their brain simulation algorithm in a paper published in Frontiers in Neuroinformatics.

 

Shelly Xuelai Fan is a neuroscientist at the University of California, San Francisco, where she studies ways to make old brains young again. In addition to research, she’s also an avid science writer with an insatiable obsession with biotech, AI and all things neuro. She spends her spare time kayaking, bike camping and getting lost in the woods.


The Algorithm March 8, 2018

By Hugo Angel,

MIT Technology Review

The Algorithm
News and views on the latest in artificial intelligence.
03.08.18

 


First Word
“The real race in AI between China and the US, then, will be one between the two countries’ big cloud companies, which will vie to be the provider of choice for companies and cities that want to make use of AI.”  
MITTR


Showdown: Chinese companies are coming out with some seriously advanced AI technology that challenges what’s put out by Silicon Valley. The next area they plan to dominate? The cloud, with Alibaba taking the lead.
For example: Train stations are already hotbeds of testing. Not only do the ticket machines suggest routes, scan your face for an ID check and listen to your voice commands amid the station’s cacophony, but police officers are using glasses with built-in facial recognition to catch fugitives amid the crowds of travelers. All of this tech is powered by the cloud.
More ways than one: Besides trying to take over the cloud market, the Chinese government is investing heavily in AI-powered military technology and companies are making a go at catching up as chip manufacturers. It’s all part of the government’s plan to be the global leader in AI by 2030.

 

Today

In the News
  • AI is human-friendly! (If we want it). (NY Times)

  • Humans’ deep well of prior knowledge means we can still learn faster than AI. (TR)

  • Starsky Robotics tested a self-driving truck without anyone in it for seven miles on a Florida road. (Trucks)
    +But, uh, there were some problems the day before. Like the truck slowing down and then stopping in the middle of the road. (Car&Driver)

  • The next major update to Windows 10 will come with free pre-trained machine learning models that developers can use in building apps for Windows devices. (Windows)

  • Semi-autonomous sailboat drones could soon replace buoys as a way to monitor the effect of climate change on oceans. (Science)

  • There is a millennium’s worth of technology that tried to mimic human intelligence and automate work that precedes our current AI systems. (The Public Domain Review)

 

Robotics and AI are driving rapid change across all industries.

Join us at EmTech Next where we will examine the technology behind these global trends and their implications for the future of work. Purchase your tickets today before time runs out.

 

Deeper

From the archives: China’s AI strategy

Where AI intersects with nationalism in the US, the narrative seems to be that America is in a war with China for global dominance. That may be true to an extent, but after our own Will Knight went to China, he came back with a different sentiment: rather than treating China as the enemy, fostering growth at home means taking a page out of the Chinese playbook. The government in China supports AI startups with funding, established industries (like manufacturing) are more open to change brought on by algorithms, and there is (or appears to be) less fear of what jobs could be lost. Keeping ahead might be as easy as following the lessons coming out of America’s AI rival.

 

ORIGINAL: The Algorithm – Technology Review
By @jackiesnow.


A Preview of Bristlecone, Google’s New Quantum Processor

By Hugo Angel,

The goal of the Google Quantum AI lab is to build a quantum computer that can be used to solve real-world problems. Our strategy is to explore near-term applications using systems that are forward compatible to a large-scale universal error-corrected quantum computer. In order for a quantum processor to be able to run algorithms beyond the scope of classical simulations, it requires not only a large number of qubits. Crucially, the processor must also have low error rates on readout and logical operations, such as single- and two-qubit gates. Today we presented Bristlecone, our new quantum processor, at the annual American Physical Society meeting in Los Angeles. The purpose of this gate-based superconducting system is to provide a testbed for research into system error rates and scalability of our qubit technology, as well as applications in quantum simulation, optimization, and machine learning.
Bristlecone is Google’s newest quantum processor (left). On the right is a cartoon of the device: each “X” represents a qubit, with nearest neighbor connectivity.

 

The guiding design principle for this device is to preserve the underlying physics of our previous 9-qubit linear array technology, which demonstrated low error rates for readout (1%), single-qubit gates (0.1%) and most importantly two-qubit gates (0.6%) as our best result. This device uses the same scheme for coupling, control, and readout, but is scaled to a square array of 72 qubits. We chose a device of this size to be able to demonstrate quantum supremacy in the future, investigate first and second order error-correction using the surface code, and to facilitate quantum algorithm development on actual hardware.

 2D conceptual chart showing the relationship between error rate and number of qubits. The intended research direction of the Quantum AI Lab is shown in red, where we hope to access near-term applications on the road to building an error corrected quantum computer.

 

Before investigating specific applications, it is important to quantify a quantum processor’s capabilities. Our theory team has developed a benchmarking tool for exactly this task. We can assign a single system error by applying random quantum circuits to the device and checking the sampled output distribution against a classical simulation. If a quantum processor can be operated with low enough error, it would be able to outperform a classical supercomputer on a well-defined computer science problem, an achievement known as quantum supremacy. These random circuits must be large in both number of qubits as well as computational length (depth). Although no one has achieved this goal yet, we calculate quantum supremacy can be comfortably demonstrated with 49 qubits, a circuit depth exceeding 40, and a two-qubit error below 0.5%. We believe the experimental demonstration of a quantum processor outperforming a supercomputer would be a watershed moment for our field, and remains one of our key objectives.
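The snippet below sketches the flavor of that benchmark; it is not Google's internal tooling. A noisy simulator stands in for real hardware, the depolarizing error rate and circuit size are arbitrary, and the score is a linear cross-entropy-style estimate computed by hand.

```python
import numpy as np
import cirq

qubits = cirq.LineQubit.range(5)
base = cirq.testing.random_circuit(qubits, n_moments=10, op_density=0.8,
                                   random_state=42)

# Ideal output probabilities from a noiseless classical simulation.
ideal = np.abs(cirq.Simulator().simulate(base, qubit_order=qubits).final_state_vector) ** 2

# Sample the same circuit on a noisy simulator standing in for the device.
measured = base.copy()
measured.append(cirq.measure(*qubits, key='m'))
noisy = cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.01))
bits = noisy.run(measured, repetitions=2000).measurements['m']
indices = bits.dot(1 << np.arange(len(qubits))[::-1])    # bitstrings -> basis indices

# Linear cross-entropy style score: ~1 for a perfect device, ~0 for pure noise.
score = (2 ** len(qubits)) * ideal[indices].mean() - 1.0
print(score)
```

With low enough gate error the score stays near one; as noise grows, the sampled bitstrings approach the uniform distribution and the score collapses toward zero.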
A Bristlecone chip being installed by Research Scientist Marissa Giustina
at the Quantum AI Lab in Santa Barbara

 

We are looking to achieve similar performance to the best error rates of the 9-qubit device, but now across all 72 qubits of Bristlecone. We believe Bristlecone would then be a compelling proof-of-principle for building larger scale quantum computers. Operating a device such as Bristlecone at low system error requires harmony between a full stack of technology ranging from software and control electronics to the processor itself. Getting this right requires careful systems engineering over several iterations.

We are cautiously optimistic that quantum supremacy can be achieved with Bristlecone, and feel that learning to build and operate devices at this level of performance is an exciting challenge! We look forward to sharing the results and allowing collaborators to run experiments in the future.

 

ORIGINAL: Research Google
Posted by Julian Kelly, Research Scientist, Quantum AI Lab
Monday, March 05, 2018

The mystery of Leonardo da Vinci’s invisible strokes revealed

By Hugo Angel,

To the naked eye they look like old, blank sheets of paper, but under ultraviolet light more than a dozen sketches of hands drawn by Leonardo da Vinci can be made out. The two pieces of paper, now known to have formed part of the artist’s preliminary study for the famous painting The Adoration of the Magi (1481), will be exhibited to the public for the first time at the Queen’s Gallery in Buckingham Palace, to mark the 500th anniversary of the Florentine artist’s death. Da Vinci’s invisible drawings will be viewable under ultraviolet light from February of next year. Scholars had known for some time of the existence of the "blank" pages in the Da Vinci collection, which had been preserved because several experts noticed that the paper bore indentations.

The result of the analysis of the invisible strokes

The hand sketches had remained hidden for decades until they were examined under ultraviolet light, a technique that revealed the astonishingly detailed work of the hands. More recently, experts turned their attention to why the drawing had apparently faded.

"Leonardo executed the hand studies in metalpoint, which involves drawing with a metal stylus on prepared paper," explained Martin Clayton, head of prints and drawings of the United Kingdom’s Royal Collection, who added that the two sheets of paper will be shown to the public alongside a photograph of the ultraviolet image that reveals the hands.


Leonardo’s drawings at the Palazzo Reale in Milan (Palazzo Reale)

 

One of the sheets containing the hand sketches for The Adoration of the Magi was analyzed at the United Kingdom’s national synchrotron facility in Oxfordshire. The center used high-energy X-ray fluorescence to map the distribution of chemical elements across the paper.

"It was discovered that the drawings had become invisible to the naked eye because of the high copper content of the stylus Leonardo used: over time the metallic copper had reacted to become a transparent copper salt," Clayton explained.

Leonardo’s drawings, bound into a single album by the sculptor Pompeo Leoni in Milan around 1590, entered the Royal Collection during the reign of Charles II. They were probably donated to the royal household by Henry Howard, grandson of Thomas Howard, Earl of Arundel, a prolific collector of drawings who had acquired them in 1620.

In 2019, several cities will host exhibitions in homage to Da Vinci

The drawings contained in the album reflect Leonardo’s astonishingly diverse passions and interests, including

  • painting
  • sculpture
  • architecture
  • music
  • anatomy
  • engineering
  • botany
  • military tactics
  • cartography
  • geology.

In this vein, one of the strangest drawings is a study of several cats, lions and a dragon, which Da Vinci made for an unrealized treatise "on the movements of animals with four feet, among which is man, who in his infancy crawls on all fours".

Da Vinci’s invisible strokes will come to light as part of the simultaneous exhibitions that the United Kingdom will organize in several cities during the anniversary year in homage to Da Vinci. The drawings will travel to Belfast, Birmingham, Bristol, Cardiff, Glasgow, Leeds, Liverpool, Sheffield, Southampton and Sunderland, according to the Royal Collection Trust, which holds some five hundred drawings by the artist.

Before that, however, the drawings will be on view together in an exhibition of some 200 of Da Vinci’s works at the Queen’s Gallery at Buckingham Palace in London, in what will be the “most important exhibition of Leonardo’s work in 65 years,” according to sources at the trust.

Martin Clayton commented that Da Vinci drew intensively, "not only to prepare artistic projects, but to

  • engender new ideas,
  • record his observations, and
  • test his theories on every subject".

He added: "And because he accumulated thousands of drawings and folios of manuscripts until the end of his life, we have an unrivalled insight into the workings of Leonardo’s extraordinary mind".

"Leonardo’s most important drawings have been in the Royal Collection for more than 350 years," concluded Clayton, who also highlights the works’ good state of preservation. The works are fragile, however, and may only be exposed to light for short intervals, which makes the exhibition a unique opportunity to get a close look at a very important part of the artist’s legacy.

An exhibition at the Queen’s Gallery will bring together 200 works by the artist

ORIGINAL: La Vanguardia
By the editorial staff, Barcelona

 

  Category: Art, Leonardo500

An Algorithm Summarizes Lengthy Text Surprisingly Well

By Hugo Angel,

Training software to accurately sum up information in documents could have great impact in many fields, such as medicine, law, and scientific research.

 

Who has time to read every article they see shared on Twitter or Facebook, or every document that’s relevant to their job? As information overload grows ever worse, computers may become our only hope for handling a growing deluge of documents. And it may become routine to rely on a machine to analyze and paraphrase articles, research papers, and other text for you.

An algorithm developed by researchers at Salesforce shows how computers may eventually take on the job of summarizing documents. It uses several machine-learning tricks to produce surprisingly coherent and accurate snippets of text from longer pieces. And while it isn’t yet as good as a person, it hints at how condensing text could eventually become automated.

The algorithm produced, for instance, the following summary of a recent New York Times article about Facebook trying to combat fake news ahead of the U.K.’s upcoming election:

  • Social network published a series of advertisements in newspapers in Britain on Monday.
  • It has removed tens of thousands of fake accounts in Britain.
  • It also said it would hire 3,000 more moderators, almost doubling the number of people worldwide who scan for inappropriate or offensive content.

The Salesforce algorithm is dramatically better than anything developed previously, according to a common software tool for measuring the accuracy of text summaries.
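The standard family of such tools is ROUGE, which scores a machine summary by its n-gram overlap with human-written reference summaries. The toy function below illustrates the idea behind a unigram-recall score; real evaluations use the official ROUGE toolkit rather than anything this simple.

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """Fraction of the reference's words (with multiplicity) that also
    appear in the candidate summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[word], count) for word, count in ref.items())
    return overlap / max(sum(ref.values()), 1)

print(rouge1_recall(
    "social network removed tens of thousands of fake accounts in britain",
    "facebook said it removed tens of thousands of fake accounts in britain"))
```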

"I don’t think I’ve ever seen such a large improvement in any [natural-language-processing] task," says Richard Socher, chief scientist at Salesforce. Socher is a prominent name in machine learning and natural-language processing, and his startup, MetaMind, was acquired by Salesforce in 2016.

The software is still a long way from matching a human’s ability to capture the essence of document text, and other summaries it produces are sloppier and less coherent. Indeed, summarizing text perfectly would require genuine intelligence, including commonsense knowledge and a mastery of language.

Parsing language remains one of the grand challenges of artificial intelligence (see “AI’s Language Problem”). But it’s a challenge with enormous commercial potential. Even limited linguistic intelligence—the ability to parse spoken or written queries, and to respond in more sophisticated and coherent ways—could transform personal computing. In many specialist fields—like medicine, scientific research, and law—condensing information and extracting insights could have huge commercial benefits.

Caiming Xiong, a research scientist at Salesforce who contributed to the work, says his team’s algorithm, while imperfect, could summarize daily news articles, or provide a synopsis of customer e-mails. The latter could be especially useful for Salesforce’s own platform.

The team’s algorithm uses a combination of approaches to achieve its improvement. The system learns from examples of good summaries, an approach called supervised learning, but also employs a kind of artificial attention to the text it is ingesting and outputting. This helps ensure that it doesn’t produce too many repetitive strands of text, a common problem with summarization algorithms.
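The general shape of such an attention mechanism is a softmax-weighted read over the encoder's states at every decoding step. The sketch below is generic, not Salesforce's specific intra-attention model, and the dimensions are arbitrary.

```python
import numpy as np

def attend(decoder_state, encoder_states):
    """Score every encoder state against the current decoder state,
    normalize the scores, and return a weighted mix of the input."""
    scores = encoder_states @ decoder_state    # one score per input token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over input positions
    context = weights @ encoder_states         # weighted summary of the input
    return context, weights

encoder_states = np.random.randn(12, 64)       # 12 input tokens, 64-dim states
decoder_state = np.random.randn(64)
context, weights = attend(decoder_state, encoder_states)
print(weights.round(2))                        # where the decoder is "looking"
```

Keeping track of which parts of the input have already received attention is one way such models avoid emitting the same phrase over and over.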

The system experiments in order to generate summaries of its own using a process called reinforcement learning. Inspired by the way animals seem to learn, this involves providing positive feedback for actions that lead toward a particular objective. Reinforcement learning has been used to train computers to do impressive new things, like playing complex games or controlling robots (see “10 Breakthrough Technologies 2017: Reinforcement Learning”). Those working on conversational interfaces are increasingly now looking at reinforcement learning as a way to improve their systems.

Kristian Hammond, a professor at Northwestern University, and the founder of Narrative Science, a company that generates narrative reports from raw data, says the Salesforce research is a good advance, but it also shows the limits of relying purely on statistical machine learning. “At some point, we have to admit that we need a little bit of semantics and a little bit of syntactic knowledge in these systems in order for them to be fluid and fluent,” says Hammond.

Hammond says the use of an attention mechanism mimics, at a very simple level, the way a person pays attention to what he’s just said. “When you say something, the details of how you say it are driven by the context of what you have said before,” he says. “This work is a step in that direction.”


Improving the language skills of computers may also prove important in the quest to advance artificial intelligence. A startup called Maluuba, which was acquired earlier this year by Microsoft, recently produced a system capable of generating relevant questions from text. The Maluuba team also used a combination of supervised learning and reinforcement learning.

Adam Trischler, senior research scientist at Maluuba, says asking relevant questions is an important part of learning, so it is important to create inquisitive machines, too. “The ultimate goal is to use question-and-answering in a dialogue,” Trischler says. “What if a machine could go out and gather information and then ask its own questions?”

ORIGINAL: Technology Review
by Will Knight
May 12, 2017
