Category: Innovation

An international team of scientists has come up with a blueprint for a large-scale quantum computer

By Hugo Angel

‘It is the Holy Grail of science … we will be able to do certain things we could never even dream of before’
Courtesy Professor Winfried Hensinger
Quantum computing breakthrough could help ‘change life completely’, say scientists
Scientists claim to have produced the first-ever blueprint for a large-scale quantum computer in a development that could bring about a technological revolution on a par with the invention of computing itself.
Until now quantum computers have had just a fraction of the processing power they are theoretically capable of producing.
But an international team of researchers believe they have finally overcome the main technical problems that have prevented the construction of more powerful machines.
They are currently building a prototype, and a full-scale quantum computer – many millions of times faster than the best currently available – could be built in about a decade.
Such devices work by utilising the almost magical properties found in the world of the very small, where an atom can apparently exist in two different places at the same time.
Professor Winfried Hensinger, head of the Ion Quantum Technology Group at Sussex University, who has been leading this research, told The Independent: “It is the Holy Grail of science, really, to build a quantum computer.
“And we are now publishing the actual nuts-and-bolts construction plan for a large-scale quantum computer.”
It is thought the astonishing processing power unleashed by quantum mechanics will lead to new, life-saving medicines, help solve the most intractable scientific problems, and probe the mysteries of the universe.
“Life will change completely. We will be able to do certain things we could never even dream of before,” Professor Hensinger said.
“You can imagine that suddenly the sky is the limit.
“This is really, really exciting … it’s probably one of the most exciting times to be in this field.”
He said small quantum computers had been built in the past, but only to test theories.
“This is not an academic study any more, it really is all the engineering required to build such a device,” he said.
“Nobody has really gone ahead and drafted a full engineering plan of how you build one.
“Because this is so hard to make happen, many people questioned whether it could even be built.
“We show that not only can it be built, but we provide a whole detailed plan on how to make it happen.”
The problem is that existing quantum computers require lasers focused precisely on individual atoms. The larger the computer, the more lasers are required and the greater the chance of something going wrong.
But Professor Hensinger and colleagues used a different technique to monitor the atoms involving a microwave field and electricity in an ‘ion-trap’ device.

“What we have is a solution that we can scale to arbitrary [computing] power,” he said.

Fig. 2. Gradient wires placed underneath each gate zone and embedded silicon photodetector.
(A) Illustration showing an isometric view of the two main gradient wires placed underneath each gate zone. Short wires are placed locally underneath each gate zone to form coils, which compensate for slowly varying magnetic fields and allow for individual addressing. The wire configuration in each zone can be seen in more detail in the inset.
(B) Silicon photodetector (marked green) embedded in the silicon substrate, transparent center segmented electrodes, and the possible detection angle are shown. VIA structures are used to prevent optical cross-talk from neighboring readout zones.
Source: Science Journals — AAAS. Blueprint for a microwave trapped ion quantum computer. Lekitsch et al. Sci. Adv. 2017;3: e1601540 1 February 2017
Fig. 4. Scalable module illustration. One module consisting of 36 × 36 junctions placed on the supporting steel frame structure: Nine wafers containing the required DACs and control electronics are placed between the wafer holding 36 × 36 junctions and the microchannel cooler (red layer) providing the cooling. X-Y-Z piezo actuators are placed in the four corners on top of the steel frame, allowing for accurate alignment of the module. Flexible electric wires supply voltages, currents, and control signals to the DACs and control electronics, such as field-programmable gate arrays (FPGAs). Coolant is supplied to the microchannel cooler layer via two flexible steel tubes placed in the center of the modules.
Source: Science Journals — AAAS. Blueprint for a microwave trapped ion quantum computer. Lekitsch et al. Sci. Adv. 2017;3: e1601540 1 February 2017
Fig. 5. Illustration of vacuum chambers. Schematic of octagonal UHV chambers connected together; each chamber is 4.5 × 4.5 m² and can hold >2.2 million individual X-junctions placed on steel frames.
Source: Science Journals — AAAS. Blueprint for a microwave trapped ion quantum computer. Lekitsch et al. Sci. Adv. 2017;3: e1601540 1 February 2017

“We are already building it now. Within two years we think we will have completed a prototype which incorporates all the technology we state in this blueprint.”

“At the same time we are now looking for an industry partner so we can really build a large-scale device that fills a building, basically.
“It’s extraordinarily expensive so we need industry partners … this will be in the tens of millions, up to £100m.”
Commenting on the research, described in a paper in the journal Science Advances, other academics praised the quality of the work but expressed caution about how quickly it could be developed.
Dr Toby Cubitt, a Royal Society research fellow in quantum information theory at University College London, said: “Many different technologies are competing to build the first large-scale quantum computer. Ion traps were one of the earliest realistic proposals. 
“This work is an important step towards scaling up ion-trap quantum computing.
“Though there’s still a long way to go before you’ll be making spreadsheets on your quantum computer.”
And Professor Alan Woodward, of Surrey University, hailed the “tremendous step in the right direction”.
“It is great work,” he said. “They have made some significant strides forward.”

But he added it was “too soon to say” whether it would lead to the hoped-for technological revolution.

ORIGINAL: The Independent
Ian Johnston Science Correspondent
Thursday 2 February 2017

How a Japanese cucumber farmer is using deep learning and TensorFlow.


by Kaz Sato, Developer Advocate, Google Cloud Platform
August 31, 2016
It’s not hyperbole to say that use cases for machine learning and deep learning are only limited by our imaginations. About one year ago, a former embedded systems designer from the Japanese automobile industry named Makoto Koike started helping out at his parents’ cucumber farm, and was amazed by the amount of work it takes to sort cucumbers by size, shape, color and other attributes.
Makoto’s father is very proud of his thorny cucumber, for instance, having dedicated his life to delivering fresh and crispy cucumbers, with many prickles still on them. Straight and thick cucumbers with a vivid color and lots of prickles are considered premium grade and command much higher prices on the market.
But Makoto learned very quickly that sorting cucumbers is as hard and tricky as actually growing them. “Each cucumber has a different color, shape, quality and freshness,” Makoto says.
Cucumbers from retail stores
Cucumbers from Makoto’s farm
In Japan, each farm has its own classification standard and there’s no industry standard. At Makoto’s farm, they sort them into nine different classes, and his mother sorts them all herself — spending up to eight hours per day at peak harvesting times.
“The sorting work is not an easy task to learn. You have to look at not only the size and thickness, but also the color, texture, small scratches, whether or not they are crooked and whether they have prickles. It takes months to learn the system and you can’t just hire part-time workers during the busiest period. I myself only recently learned to sort cucumbers well,” Makoto said.
Distorted or crooked cucumbers are ranked as low-quality product
There are also some automatic sorters on the market, but they have limitations in terms of performance and cost, and small farms don’t tend to use them.
Makoto doesn’t think sorting is an essential task for cucumber farmers. “Farmers want to focus and spend their time on growing delicious vegetables. I’d like to automate the sorting tasks before taking the farm business over from my parents.”
Makoto Koike, center, with his parents at the family cucumber farm
Makoto Koike, family cucumber farm
The many uses of deep learning
Makoto first got the idea to explore machine learning for sorting cucumbers from a completely different use case: Google AlphaGo competing with the world’s top professional Go player.
“When I saw Google’s AlphaGo, I realized something really serious is happening here,” said Makoto. “That was the trigger for me to start developing the cucumber sorter with deep learning technology.”
Using deep learning for image recognition allows a computer to learn from a training data set what the important “features” of the images are. By using a hierarchy of numerous artificial neurons, deep learning can automatically classify images with a high degree of accuracy. Thus, neural networks can recognize different species of cats, or models of cars or airplanes from images. Sometimes neural networks can exceed the performance of the human eye for certain applications. (For more information, check out my previous blog post Understanding neural networks with TensorFlow Playground.)
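As a minimal sketch of what one such layer of artificial neurons computes (illustrative only, not Makoto's code; the 80×80 input size and nine output classes are chosen to match the setup described elsewhere in this post), each output neuron takes a weighted sum of the input pixels, and a softmax turns those sums into class probabilities. Deep networks stack many such layers, learning the weights from data:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def neuron_layer(pixels, weights, bias):
    # Each row of `weights` is one neuron's connection strengths.
    return softmax(weights @ pixels + bias)

image = rng.random(80 * 80)                          # flattened 80x80 grayscale image
weights = rng.normal(scale=0.01, size=(9, 80 * 80))  # untrained weights, 9 classes
bias = np.zeros(9)

probs = neuron_layer(image, weights, bias)
print(probs.argmax(), probs.sum())   # predicted class index; probabilities sum to 1
```

With random weights the prediction is meaningless; training adjusts the weights so the highest probability lands on the correct class.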

TensorFlow democratizes the power of deep learning
But can computers really learn mom’s art of cucumber sorting? Makoto set out to see whether he could use deep learning technology for sorting using Google’s open source machine learning library, TensorFlow.
“Google had just open-sourced TensorFlow, so I started trying it out with images of my cucumbers,” Makoto said. “This was the first time I tried out machine learning or deep learning technology, and right away I got much higher accuracy than I expected. That gave me the confidence that it could solve my problem.”
With TensorFlow, you don’t need to be knowledgeable about the advanced math models and optimization algorithms needed to implement deep neural networks. Just download the sample code and read the tutorials and you can get started in no time. The library lowers the barrier to entry for machine learning significantly, and since Google open-sourced TensorFlow last November, many “non ML” engineers have started playing with the technology with their own datasets and applications.

Cucumber sorting system design
Here’s a systems diagram of the cucumber sorter that Makoto built. The system uses a Raspberry Pi 3 as the main controller to take images of the cucumbers with a camera, and then:

  • in a first phase, runs a small-scale neural network on TensorFlow to detect whether or not the image is of a cucumber;
  • forwards the image to a larger TensorFlow neural network running on a Linux server to perform a more detailed classification.
Systems diagram of the cucumber sorter
Makoto used the sample TensorFlow code Deep MNIST for Experts with minor modifications to the convolution, pooling and last layers, changing the network design to adapt to the pixel format of cucumber images and the number of cucumber classes.
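The two-phase flow can be sketched in plain Python with stand-in classifiers. The stage boundaries mirror Makoto's design, but the grade names and stub logic here are hypothetical placeholders, not his actual TensorFlow models:

```python
# Hypothetical sketch of the two-stage pipeline: a cheap on-device check
# first, then a detailed 9-class grading only for images that pass.
GRADES = ["2L", "L", "M", "S", "2S", "B", "BL", "BM", "C"]  # 9 made-up class names

def is_cucumber(image):
    # Phase 1 (Raspberry Pi): small network answering "cucumber or not?"
    return image.get("kind") == "cucumber"

def grade_cucumber(image):
    # Phase 2 (Linux server): larger network assigning one of 9 grades.
    return GRADES[image.get("score", 0) % len(GRADES)]

def sort_pipeline(images):
    results = []
    for img in images:
        if not is_cucumber(img):
            results.append((img["id"], "rejected"))
        else:
            results.append((img["id"], grade_cucumber(img)))
    return results

batch = [
    {"id": 1, "kind": "cucumber", "score": 0},
    {"id": 2, "kind": "leaf"},
    {"id": 3, "kind": "cucumber", "score": 4},
]
print(sort_pipeline(batch))  # [(1, '2L'), (2, 'rejected'), (3, '2S')]
```

Splitting the work this way keeps the expensive model off the Raspberry Pi: the cheap gate runs at the camera, and only plausible cucumbers cost a round trip to the server.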
Here’s Makoto’s cucumber sorter, which went live in July:
Here’s a close-up of the sorting arm, and the camera interface:

And here is the cucumber sorter in action:

Pushing the limits of deep learning
One of the current challenges with deep learning is that you need to have a large number of training datasets. To train the model, Makoto spent about three months taking 7,000 pictures of cucumbers sorted by his mother, but it’s probably not enough.
“When I did a validation with the test images, the recognition accuracy exceeded 95%. But if you apply the system to real use cases, the accuracy drops to about 70%. I suspect the neural network model has the issue of ‘overfitting’ (the phenomenon in neural networks where the model is trained to fit only the small training dataset) because of the insufficient number of training images.”
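Overfitting can be seen in miniature with a deliberately over-parameterized fit (a generic numpy sketch, nothing to do with Makoto's model): a degree-9 polynomial fit to only 10 noisy training points matches them almost perfectly, but generalizes far worse to fresh points drawn from the same underlying curve.

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_samples(n):
    # Points from a sine curve plus measurement noise.
    x = rng.uniform(-1, 1, n)
    return x, np.sin(3 * x) + rng.normal(scale=0.1, size=n)

x_train, y_train = noisy_samples(10)    # tiny training set
x_test, y_test = noisy_samples(100)     # "real use case" data

# 10 coefficients for 10 points: the model can memorize the noise.
coeffs = np.polyfit(x_train, y_train, deg=9)
train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(f"train MSE: {train_err:.4f}, test MSE: {test_err:.4f}")
```

The gap between the two errors is the same symptom Makoto describes: near-perfect validation on the data the model has seen, much worse accuracy on data it has not.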
The second challenge of deep learning is that it consumes a lot of computing power. The current sorter uses a typical Windows desktop PC to train the neural network model. Although it converts the cucumber image into 80 x 80 pixel low-resolution images, it still takes two to three days to complete training the model with 7,000 images.
“Even with this low-res image, the system can only classify a cucumber based on its shape, length and level of distortion. It can’t recognize color, texture, scratches and prickles,” Makoto explained. Increasing image resolution by zooming into the cucumber would result in much higher accuracy, but would also increase the training time significantly.
To improve deep learning, some large enterprises have started doing large-scale distributed training, but those servers come at an enormous cost. Google offers Cloud Machine Learning (Cloud ML), a low-cost cloud platform for training and prediction that dedicates hundreds of cloud servers to training a network with TensorFlow. With Cloud ML, Google handles building a large-scale cluster for distributed training, and you just pay for what you use, making it easier for developers to try out deep learning without making a significant capital investment.
These specialized servers were used in the AlphaGo match
Makoto is eagerly awaiting Cloud ML. “I could use Cloud ML to try training the model with much higher resolution images and more training data. Also, I could try changing the various configurations, parameters and algorithms of the neural network to see how that improves accuracy. I can’t wait to try it.”

Inside Vicarious, the Secretive AI Startup Bringing Imagination to Computers


By reinventing the neural network, the company hopes to help computers make the leap from processing words and symbols to comprehending the real world.
Life would be pretty dull without imagination. In fact, maybe the biggest problem for computers is that they don’t have any.
That’s the belief motivating the founders of Vicarious, an enigmatic AI company backed by some of the most famous and successful names in Silicon Valley. Vicarious is developing a new way of processing data, inspired by the way information seems to flow through the brain. The company’s leaders say this gives computers something akin to imagination, which they hope will help make the machines a lot smarter.
Vicarious is also, essentially, betting against the current boom in AI. Companies including Google, Facebook, Amazon, and Microsoft have made stunning progress in the past few years by feeding huge quantities of data into large neural networks in a process called “deep learning.” When trained on enough examples, for instance, deep-learning systems can learn to recognize a particular face or type of animal with very high accuracy (see “10 Breakthrough Technologies 2013: Deep Learning”). But those neural networks are only very crude approximations of what’s found inside a real brain.
Illustration by Sophia Foster-Dimino
Vicarious has introduced a new kind of neural-network algorithm designed to take into account more of the features that appear in biology. An important one is the ability to picture what the information it has learned should look like in different scenarios—a kind of artificial imagination. The company’s founders believe a fundamentally different design will be essential if machines are to demonstrate more humanlike intelligence. Computers will have to be able to learn from less data, and to recognize stimuli or concepts more easily.
Despite generating plenty of early excitement, Vicarious has been quiet over the past couple of years. But this year, the company says, it will publish details of its research, and it promises some eye-popping demos that will show just how useful a computer with an imagination could be.
The company’s headquarters don’t exactly seem like the epicenter of a revolution in artificial intelligence. Located in Union City, a short drive across the San Francisco Bay from Palo Alto, the offices are plain—a stone’s throw from a McDonald’s and a couple of floors up from a dentist. Inside, though, are all the trappings of a vibrant high-tech startup. A dozen or so engineers were hard at work when I visited, several using impressive treadmill desks. Microsoft Kinect 3-D sensors sat on top of some of the engineers’ desks.
D. Scott Phoenix, the company’s 33-year-old CEO, speaks in suitably grandiose terms. “We are really rapidly approaching the amount of computational power we need to be able to do some interesting things in AI,” he told me shortly after I walked through the door. “In 15 years, the fastest computer will do more operations per second than all the neurons in all the brains of all the people who are alive. So we are really close.”
Vicarious is about more than just harnessing more computer power, though. Its mathematical innovations, Phoenix says, will more faithfully mimic the information processing found in the human brain. It’s true enough that the relationship between the neural networks currently used in AI and the neurons, dendrites, and synapses found in a real brain is tenuous at best.
One of the most glaring shortcomings of artificial neural networks, Phoenix says, is that information flows only one way. “If you look at the information flow in a classic neural network, it’s a feed-forward architecture,” he says. “There are actually more feedback connections in the brain than feed-forward connections—so you’re missing more than half of the information flow.”
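A minimal numpy sketch can make the contrast concrete (purely illustrative, with random weights; this is not Vicarious's algorithm): a feed-forward pass computes each layer once, bottom-up, while adding a feedback connection lets the higher-level output re-enter the lower layer on the next iteration.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(8, 16))   # input  -> hidden (feed-forward)
V = rng.normal(scale=0.1, size=(4, 8))    # hidden -> output (feed-forward)
U = rng.normal(scale=0.1, size=(8, 4))    # output -> hidden (feedback)

x = rng.random(16)                        # a toy input vector

def feed_forward(x):
    # One bottom-up sweep: information flows only one way.
    return np.tanh(V @ np.tanh(W @ x))

def with_feedback(x, steps=5):
    # The top-level output y is fed back into the hidden layer,
    # so later passes are shaped by earlier high-level state.
    y = np.zeros(4)
    for _ in range(steps):
        h = np.tanh(W @ x + U @ y)
        y = np.tanh(V @ h)
    return y

print(feed_forward(x).shape, with_feedback(x).shape)
```

The first iteration of `with_feedback` (with `y` still zero) reproduces the feed-forward result; the subsequent iterations are exactly the extra, top-down information flow that a pure feed-forward architecture discards.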
It’s undeniably alluring to think that imagination—a capability so fundamentally human it sounds almost mystical in a computer—could be the key to the next big advance in AI.
Vicarious has so far shown that its approach can create a visual system capable of surprisingly deft interpretation. In 2013 it showed that the system could solve any captcha (the visual puzzles that are used to prevent spam-bots from signing up for e-mail accounts and the like). As Phoenix explains it, the feedback mechanism built into Vicarious’s system allows it to imagine what a character would look like if it weren’t distorted or partly obscured (see “AI Startup Says It Has Defeated Captchas”).
Phoenix sketched out some of the details of the system at the heart of this approach on a whiteboard. But he is keeping further details quiet until a scientific paper outlining the captcha approach is published later this year.
In principle, this visual system could be put to many other practical uses, like recognizing objects on shelves more accurately or interpreting real-world scenes more intelligently. The founders of Vicarious also say that their approach extends to other, much more complex areas of intelligence, including language and logical reasoning.
Phoenix says his company may give a demo later this year involving robots. And indeed, the job listings on the company’s website include several postings for robotics experts. Currently robots are bad at picking up unfamiliar, oddly arranged, or partly obscured objects, because they have trouble recognizing what they are. “If you look at people who are picking up objects in an Amazon facility, most of the time they aren’t even looking at what they’re doing,” he explains. “And they’re imagining—using their sensory motor simulator—where the object is, and they’re imagining at what point their finger will touch it.”
While Phoenix is the company’s leader, his cofounder, Dileep George, might be considered its technical visionary. George was born in India and received a PhD in electrical engineering from Stanford University, where he turned his attention to neuroscience toward the end of his doctoral studies. In 2005 he cofounded Numenta with Jeff Hawkins, the creator of Palm Computing. But in 2010 George left to pursue his own ideas about the mathematical principles behind information processing in the brain, founding Vicarious with Phoenix the same year.
I bumped into George in the elevator when I first arrived. He is unassuming and speaks quietly, with a thick accent. But he’s also quite matter-of-fact about what seem like very grand objectives.
George explained that imagination could help computers process language by tying words, or symbols, to low-level physical representations of real-world things. In theory, such a system might automatically understand the physical properties of something like water, for example, which would make it better able to discuss the weather. “When I utter a word, you know what it means because you can simulate the concept,” he says.
This ambitious vision for the future of AI has helped Vicarious raise an impressive $72 million so far. Its list of investors also reads like a who’s who of the tech world. Early cash came from Dustin Moskovitz, ex-CTO of Facebook, and Adam D’Angelo, cofounder of Quora. Further funding came from Peter Thiel, Mark Zuckerberg, Jeff Bezos, and Elon Musk.
Many people are itching to see what Vicarious has done beyond beating captchas. “I would love it if they showed us something new this year,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence in Seattle.
In contrast to the likes of Google, Facebook, or Baidu, Vicarious hasn’t published any papers or released any tools that researchers can play with. “The people [involved] are great, and the problems [they are working on] are great,” says Etzioni. “But it’s time to deliver.”
For those who’ve put their money behind Vicarious, the company’s remarkable goals should make the wait well worth it. Even if progress takes a while, the potential payoffs seem so huge that the bet makes sense, says Matt Ocko, a partner at Data Collective, a venture firm that has backed Vicarious. A better machine-learning approach could be applied in just about any industry that handles large amounts of data, he says. “Vicarious sat us down and demonstrated the most credible pathway to reasoning machines that I have ever seen.”
Ocko adds that Vicarious has demonstrated clear evidence it can commercialize what it’s working on. “We approached it with a crapload of intellectual rigor,” he says.
It will certainly be interesting to see if Vicarious can inspire this kind of confidence among other AI researchers and technologists with its papers and demos this year. If it does, then the company could quickly go from one of the hottest prospects in the Valley to one of its fastest-growing businesses.
That’s something the company’s founders would certainly like to imagine.
by Will Knight. Senior Editor, AI
May 19, 2016

“AI & The Future Of Civilization” A Conversation With Stephen Wolfram


“AI & The Future Of Civilization” A Conversation With Stephen Wolfram [3.1.16]
Stephen Wolfram
What makes us different from all these things? What makes us different is the particulars of our history, which gives us our notions of purpose and goals. That’s a long way of saying when we have the box on the desk that thinks as well as any brain does, the thing it doesn’t have, intrinsically, is the goals and purposes that we have. Those are defined by our particulars—our particular biology, our particular psychology, our particular cultural history.

The thing we have to think about as we think about the future of these things is the goals. That’s what humans contribute, that’s what our civilization contributes—execution of those goals; that’s what we can increasingly automate. We’ve been automating it for thousands of years. We will succeed in having very good automation of those goals. I’ve spent some significant part of my life building technology to essentially go from a human concept of a goal to something that gets done in the world.

There are many questions that come from this. For example, we’ve got these great AIs and they’re able to execute goals, how do we tell them what to do?…

STEPHEN WOLFRAM, distinguished scientist, inventor, author, and business leader, is Founder & CEO, Wolfram Research; Creator, Mathematica, Wolfram|Alpha & the Wolfram Language; Author, A New Kind of Science. Stephen Wolfram’s EdgeBio Page


Some tough questions. One of them is about the future of the human condition. That’s a big question. I’ve spent some part of my life figuring out how to make machines automate stuff. It’s pretty obvious that we can automate many of the things that we humans have been proud of for a long time. What’s the future of the human condition in that situation?

More particularly, I see technology as taking human goals and making them able to be automatically executed by machines. The human goals that we’ve had in the past have been things like moving objects from here to there and using a forklift rather than our own hands. Now, the things that we can do automatically are more intellectual kinds of things that have traditionally been the professions’ work, so to speak. These are things that we are going to be able to do by machine. The machine is able to execute things, but something or someone has to define what its goals should be and what it’s trying to execute.

People talk about the future of intelligent machines, and whether intelligent machines are going to take over and decide what to do for themselves. While one can figure out, given a goal, how to execute it in a way that can meaningfully be automated, the actual inventing of the goal is not something that, in some sense, has a path to automation.

How do we figure out goals for ourselves? How are goals defined? They tend to be defined for a given human by their own personal history, their cultural environment, the history of our civilization. Goals are something that are uniquely human. It’s something that almost doesn’t make any sense. We ask, what’s the goal of our machine? We might have given it a goal when we built the machine.

The thing that makes this more poignant for me is that I’ve spent a lot of time studying basic science about computation, and I’ve realized something from that. It’s a little bit of a longer story, but basically, if we think about intelligence and things that might have goals, things that might have purposes, what kinds of things can have intelligence or purpose? Right now, we know one great example of things with intelligence and purpose and that’s us, and our brains, and our own human intelligence. What else is like that? The answer, I had at first assumed, was the systems of nature. They do what they do, but human intelligence is far beyond anything that exists naturally in the world. It’s something that’s the result of all of this elaborate process of evolution. It’s a thing that stands apart from the rest of what exists in the universe. What I realized, as a result of a whole bunch of science that I did, was that that is not the case.

Have we hit a major artificial intelligence milestone?


Image: REUTERS/China Daily. Students play the board game “Go”, known as “Weiqi” in Chinese, during a competition.
Google’s computer program AlphaGo has defeated a top-ranked Go player in the first round of five historic matches – marking a significant achievement in the development of artificial intelligence.
AlphaGo’s victory over a human champion shows an artificial intelligence system has mastered the most complex game ever designed. The ancient Chinese board game is vastly more complicated than chess and is said to have more possible configurations than there are atoms in the Universe.
The battle between AlphaGo, developed by Google’s DeepMind unit, and South Korea’s Lee Se-dol was said by commentators to be close, with both sides making some mistakes.
Game playing is an important way to measure AI advances, demonstrating that machines can outperform humans at intellectual tasks.
AlphaGo’s win follows in the footsteps of the legendary 1997 victory of IBM supercomputer Deep Blue over world chess champion Garry Kasparov. But Go, which relies heavily on players’ intuition to choose among vast numbers of board positions, is far more challenging for artificial intelligence than chess.
Speaking in the lead-up to the first match, Se-dol, who is currently ranked second in the world behind fellow South Korean Lee Chang-ho, said: “Having learned today how its algorithms narrow down possible choices, I have a feeling that AlphaGo can imitate human intuition to a certain degree.”
Demis Hassabis, founder and CEO of DeepMind, which was acquired by Google in 2014, previously described Go as “the pinnacle of game AI research” and the “holy grail” of AI since Deep Blue beat Kasparov.

Experts had predicted it would take another decade for AI systems to beat professional Go players. But in January, the journal Nature reported that AlphaGo won a five-game match against European champion Fan Hui. Since then the computer program’s performance has steadily improved.
Mastering the game of Go. Nature
While DeepMind’s team built AlphaGo to learn in a more human-like way, it still needs much more practice than a human expert, millions of games rather than thousands.
Potential future uses of AI programs like AlphaGo could include improving smartphone assistants such as Apple’s Siri, medical diagnostics, and possibly even working with human scientists in research.
by Rosamond Hutt, Senior Producer, Formative Content
9 March 2016

Forward to the Future: Visions of 2045


DARPA asked the world and our own researchers what technologies they expect to see 30 years from now—and received insightful, sometimes funny predictions
Today—October 21, 2015—is famous in popular culture as the date 30 years in the future when Marty McFly and Doc Brown arrive in their time-traveling DeLorean in the movie “Back to the Future Part II.” The film got some things right about 2015, including in-home videoconferencing and devices that recognize people by their voices and fingerprints. But it also predicted trunk-sized fusion reactors, hoverboards and flying cars—game-changing technologies that, despite the advances we’ve seen in so many fields over the past three decades, still exist only in our imaginations.
A big part of DARPA’s mission is to envision the future and make the impossible possible. So ten days ago, as the “Back to the Future” day approached, we turned to social media and asked the world to predict: What technologies might actually surround us 30 years from now? We pointed people to presentations from DARPA’s Future Technologies Forum, held last month in St. Louis, for inspiration and a reality check before submitting their predictions.
Well, you rose to the challenge and the results are in. So in honor of Marty and Doc (little known fact: he is a DARPA alum) and all of the world’s innovators past and future, we present here some highlights from your responses, in roughly descending order by number of mentions for each class of futuristic capability:
  • Space: Interplanetary and interstellar travel, including faster-than-light travel; missions and permanent settlements on the Moon, Mars and the asteroid belt; space elevators
  • Transportation & Energy: Self-driving and electric vehicles; improved mass transit systems and intercontinental travel; flying cars and hoverboards; high-efficiency solar and other sustainable energy sources
  • Medicine & Health: Neurological devices for memory augmentation, storage and transfer, and perhaps to read people’s thoughts; life extension, including virtual immortality via uploading brains into computers; artificial cells and organs; “Star Trek”-style tricorder for home diagnostics and treatment; wearable technology, such as exoskeletons and augmented-reality glasses and contact lenses
  • Materials & Robotics: Ubiquitous nanotechnology, 3-D printing and robotics; invisibility and cloaking devices; energy shields; anti-gravity devices
  • Cyber & Big Data: Improved artificial intelligence; optical and quantum computing; faster, more secure Internet; better use of data analytics to improve use of resources
A few predictions inspired us to respond directly:
  • “Pizza delivery via teleportation”—DARPA took a close look at this a few years ago and decided there is plenty of incentive for the private sector to handle this challenge.
  • “Time travel technology will be close, but will be closely guarded by the military as a matter of national security”—We already did this tomorrow.
  • “Systems for controlling the weather”—Meteorologists told us it would be a job killer and we didn’t want to rain on their parade.
  • “Space colonies…and unlimited cellular data plans that won’t be slowed by your carrier when you go over a limit”—We appreciate the idea that these are equally difficult, but they are not. We think likable cell-phone data plans are beyond even DARPA and a total non-starter.
So seriously, as an adjunct to this crowd-sourced view of the future, we asked three DARPA researchers from various fields to share their visions of 2045, and why getting there will require a group effort with players not only from academia and industry but from forward-looking government laboratories and agencies:

Pam Melroy, an aerospace engineer, former astronaut and current deputy director of DARPA’s Tactical Technologies Office (TTO), foresees technologies that would enable machines to collaborate with humans as partners on tasks far more complex than those we can tackle today:
Justin Sanchez, a neuroscientist and program manager in DARPA’s Biological Technologies Office (BTO), imagines a world where neurotechnologies could enable users to interact with their environment and other people by thought alone:
Stefanie Tompkins, a geologist and director of DARPA’s Defense Sciences Office, envisions building substances from the atomic or molecular level up to create “impossible” materials with previously unattainable capabilities.
Check back with us in 2045—or sooner, if that time machine stuff works out—for an assessment of how things really turned out in 30 years.

Seven Emerging Technologies That Will Change the World Forever

By admin,

By Gray Scott
Sep 29, 2015

When someone asks me what I do, and I tell them that I’m a futurist, the first thing they ask is “what is a futurist?” The short answer I give is “I use current scientific research in emerging technologies to imagine how we will live in the future.”
However, as you can imagine, the art of futurology and foresight is much more complex. I spend my days thinking, speaking and writing about the future and emerging technologies. On any given day I might be in Warsaw speaking at an Innovation Conference, in London speaking at a Global Leadership Summit, or being interviewed by the Discovery Channel. Whatever the situation, I have one singular mission. I want you to think about the future.

How will we live in the future? How will emerging technologies change our lives, our economy and our businesses? We should begin to think about the future now. It will be here faster than you think.

Let’s explore seven current emerging technologies that I am thinking about that are set to change the world forever.

1. Age Reversal
We will see the emergence of true biological age reversal by 2025.

It may be extraordinarily expensive, complex and risky, but for people who want to turn back the clock, it may be worth it. It may sound like science fiction but the science is real, and it has already begun. In fact, according to new research published in Nature’s Scientific Reports, Professor Jun-Ichi Hayashi from the University of Tsukuba in Japan has already reversed ageing in human cell lines by “turning on or off” mitochondrial function.

Another study published in CELL reports that Australian and US researchers have successfully reversed the aging process in the muscles of mice. They found that raising nuclear NAD+ in old mice reverses pseudohypoxia and metabolic dysfunction. Researchers gave the mice a compound called nicotinamide adenine dinucleotide or NAD for a week and found that the age indicators in two-year-old mice were restored to that of six-month-old mice. That would be like turning a 60-year-old human into a 20-year-old!

How will our culture deal with age reversal? Will we set limits on who can age-reverse? Do we ban criminals from this technology? These are the questions we will face in a very complex future. One thing is certain, age reversal will happen and when it does it will change our species and our world forever.

2. Artificial General Intelligence
The robots are coming and they are going to eat your job for lunch. Worldwide shipments of multipurpose industrial robots are forecast to exceed 207,000 units in 2015, and this is just the beginning. Robots like Care-o-bot 4 and Softbank’s Pepper may be in homes, offices and hotels within the next year. These robots will be our personal servants, assistants and caretakers.

Amazon has introduced a new voice-controlled AI assistant called Echo that could replace the need for a human assistant altogether. We already have robots and automation that can make pizza, serve beer, write news articles, scan our faces for diseases, and drive cars. We will see AI in our factories, hospitals, restaurants and hotels around the world by 2020.

This “pinkhouse” at Caliber Biotherapeutics in Bryan, Texas, grows 2.2 million plants under the glow of blue and red LEDs.
Courtesy of Caliber Biotherapeutics

3. Vertical Pink Farms
We are entering the techno-agricultural era. Agricultural science is changing the way we harvest our food. Robots and automation are going to play a decisive role in the way we hunt and gather. The most important and disruptive idea is what I call “Vertical PinkFarms” and it is set to decentralise the food industry forever.

The United Nations (UN) predicts that by 2050, 80% of the Earth’s population will live in cities. Climate change will also make traditional food production more difficult and less productive in the future. We will need more efficient systems to feed these hungry urban areas. Thankfully, several companies around the world are already producing food grown in these Vertical PinkFarms and the results are remarkable.

Vertical PinkFarms will use blue and red LED lighting to grow organic, pesticide free, climate controlled food inside indoor environments. Vertical PinkFarms use less water, less energy and enable people to grow food underground or indoors year round in any climate.

Traditional food grown on outdoor farms is exposed to the full visible light spectrum. This range includes Red, Orange, Yellow, Green, Blue and Violet. However, agricultural science is now showing us that O, Y, G and V are not necessary for plant growth. You only need R and B. LED lights are much more efficient and cooler than the indoor fluorescent grow lights used in most indoor greenhouses. LED lights are also becoming less expensive as more companies begin to invest in this technology. Just like the solar and electric car revolution, the change will be exponential. By 2025, we may see massive Vertical PinkFarms in most major cities around the world. We may even see small Vertical PinkFarm units in our homes in the future.

4. Transhumanism
By 2035, even if a majority of humans do not self-identify as Transhuman, technically they will be. If we define any bio-upgrade or human enhancement as Transhumanism, then the numbers are already quite high and growing exponentially. According to a UN Telecom Agency report, around 6 billion people have cell phones. This demonstrates the ubiquitous nature of technology that we keep on or around our body.

As human bio-enhancements become more affordable, billions of humans will become Transhuman. Digital implants, mind-controlled exoskeletal upgrades, age reversal pills, hyper-intelligence brain implants and bionic muscle upgrades. All of these technologies will continue our evolution as humans.

Reconstructive joint replacements, spinal implants, cardiovascular implants, dental implants, intraocular lens and breast implants are all part of our human techno-evolution into this new Transhuman species.

5. Wearables and Implantables  
Smartphones will fade into digital history as the high-resolution smart contact lens and corresponding in-ear audio plugs communicate with our wearable computers or “smart suits.” The digital world will be displayed directly on our eyes in stunning interactive augmented beauty. Ghent University’s Centre for Microsystems Technology in Belgium has recently developed a spherical curved LCD display that can be embedded in contact lenses. This enables the entire lens to display information.

The bridge to the smart contact starts with smart glasses, VR headsets and yes, the Apple watch. Wearable technologies are growing exponentially. New smart augmented glasses like 
  • Google Glass, 
  • METAPro, and 
  • Vuzix M100 Smart Glasses 
are just the beginning. In fact, CastAR augmented 3D glasses recently received over a million dollars in funding on Kickstarter. Their goal was only four hundred thousand dollars. The market is ready for smart vision, and tech companies should move away from handheld devices if they want to compete.

The question of what is real and what is augmented will be irrelevant in the future. We will be able to create our own realities: clusters of people who share an information layer will see certain augmented realities only if they belong to that group. All information will be instantaneously available in the augmented visual future.

Gray Scott, an IEET Advisory Board member, is a futurist,
techno-philosopher, speaker, writer and artist. He is the founder and
CEO of and a professional member of The World Future Society.

6. Atmospheric Water Harvesting
California and parts of the south-west in the US are currently experiencing an unprecedented drought. If this drought continues, the global agricultural system could become unstable.

Consider this: California and Arizona account for about 98% of commercial lettuce production in the United States. Thankfully we live in a world filled with exponential innovation right now.

An emerging technology called Atmospheric Water Harvesting could save California and other arid parts of the world from severe drought and possibly change the techno-agricultural landscape forever.

Traditional agricultural farming methods consume 80% of the water in California. According to the California Agricultural Resource Directory of 2009, California grows 
  • 99% of the U.S. almonds, artichokes, and walnuts; 
  • 97% of the kiwis, apricots and plums; 
  • 96% of the figs, olives and nectarines; 
  • 95% of celery and garlic; 
  • 88% of strawberries and lemons; 
  • 74% of peaches; 
  • 69% of carrots; 
  • 62% of tangerines and 
  • the list goes on.
Several companies around the world are already using atmospheric water harvesting technologies to solve this problem. Each company has a different technological approach but all of them combined could help alleviate areas suffering from water shortages.

The most basic, and possibly the most accessible, form of atmospheric water harvesting technology works by collecting water and moisture from the atmosphere using micro netting. These micro nets collect water that drains down into a collection chamber. This fresh water can then be stored or channelled into homes and farms as needed.
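The arithmetic behind such a system is simple: harvested volume scales with net area and local fog conditions. A minimal sketch, using illustrative figures rather than measured FogQuest data:

```python
def fog_harvest_litres(net_area_m2, yield_l_per_m2_day, days):
    """Rough yield model: collected water scales with net area and days deployed."""
    return net_area_m2 * yield_l_per_m2_day * days

# Illustrative assumptions only: one 40 m^2 collector at 5 litres per m^2 per day.
monthly_yield = fog_harvest_litres(net_area_m2=40, yield_l_per_m2_day=5, days=30)
print(monthly_yield)  # 6000 litres in a month from a single net
```

Real yields vary widely with site, season and net design, so any deployment estimate would need field measurements in place of these assumed numbers.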

A company called FogQuest is already successfully using micro netting or “fog collectors” to harvest atmospheric water in places like Ethiopia, Guatemala, Nepal, Chile and Morocco.
Will people use this technology or will we continue to drill for water that may not be there?

7. 3D Printing
Today we already have 3D printers that can print clothing, circuit boards, furniture, homes and chocolate. A company called BigRep has created a 3D printer called the BigRep ONE.2 that enables designers to create entire tables, chairs or coffee tables in one print. Did you get that?

You can now buy a 3D printer and print furniture!
Fashion designers like 
  • Iris van Herpen, 
  • Bryan Oknyansky, 
  • Francis Bitonti, 
  • Madeline Gannon, and 
  • Daniel Widrig 
have all broken serious ground in the 3D printed fashion movement. These avant-garde designs may not be functional for the average consumer, so what is one to do for a regular tee shirt? Thankfully a new Field Guided Fabrication 3D printer called ELECTROLOOM has arrived that can print fabric, and it may put a few major retail chains out of business. The ELECTROLOOM enables anyone to create seamless fabric items on demand.

So what is next? 3D printed cars. Yes, cars. Divergent Microfactories (DM) has recently created the first 3D printed high-performance car, called the Blade. This car is no joke. The Blade has a chassis weight of just 61 pounds, goes 0-60 MPH in 2.2 seconds and is powered by a 4-cylinder 700-horsepower bi-fuel internal combustion engine.

These are just seven emerging technologies on my radar. I have a list of hundreds of innovations that will change the world forever. Some sound like pure sci-fi but I assure you they are real. Are we ready for a world filled with abundance, age reversal and self-replicating AI robots? I hope so.


Neurotechnology Provides Near-Natural Sense of Touch

By admin,

Revolutionizing Prosthetics program achieves goal of restoring sensation

Modular Prosthetic Limb courtesy of the Johns Hopkins University


A 28-year-old who has been paralyzed for more than a decade as a result of a spinal cord injury has become the first person to be able to “feel” physical sensations through a prosthetic hand directly connected to his brain, and even identify which mechanical finger is being gently touched.
The advance, made possible by sophisticated neural technologies developed under DARPA’s Revolutionizing Prosthetics program, points to a future in which people living with paralyzed or missing limbs will not only be able to manipulate objects by sending signals from their brain to robotic devices, but also be able to sense precisely what those devices are touching.
“We’ve completed the circuit,” said DARPA program manager Justin Sanchez. “Prosthetic limbs that can be controlled by thoughts are showing great promise, but without feedback from signals traveling back to the brain it can be difficult to achieve the level of control needed to perform precise movements. By wiring a sense of touch from a mechanical hand directly into the brain, this work shows the potential for seamless bio-technological restoration of near-natural function.”
The clinical work involved the placement of electrode arrays onto the paralyzed volunteer’s sensory cortex—the brain region responsible for identifying tactile sensations such as pressure. In addition, the team placed arrays on the volunteer’s motor cortex, the part of the brain that directs body movements.
Wires were run from the arrays on the motor cortex to a mechanical hand developed by the Applied Physics Laboratory (APL) at Johns Hopkins University. That gave the volunteer—whose identity is being withheld to protect his privacy—the capacity to control the hand’s movements with his thoughts, a feat previously accomplished under the DARPA program by another person with similar injuries.

Then, breaking new neurotechnological ground, the researchers went on to provide the volunteer a sense of touch. The APL hand contains sophisticated torque sensors that can detect when pressure is being applied to any of its fingers, and can convert those physical “sensations” into electrical signals. The team used wires to route those signals to the arrays on the volunteer’s brain.

In the very first set of tests, in which researchers gently touched each of the prosthetic hand’s fingers while the volunteer was blindfolded, he was able to report with nearly 100 percent accuracy which mechanical finger was being touched. The feeling, he reported, was as if his own hand were being touched.
“At one point, instead of pressing one finger, the team decided to press two without telling him,” said Sanchez, who oversees the Revolutionizing Prosthetics program. “He responded in jest asking whether somebody was trying to play a trick on him. That is when we knew that the feelings he was perceiving through the robotic hand were near-natural.”
Sanchez described the basic findings on Thursday at Wait, What? A Future Technology Forum, hosted by DARPA in St. Louis. Further details about the work are being withheld pending peer review and acceptance for publication in a scientific journal.
The restoration of sensation with implanted neural arrays is one of several neurotechnology-based advances emerging from DARPA’s 18-month-old Biological Technologies Office, Sanchez said. “DARPA’s investments in neurotechnologies are helping to open entirely new worlds of function and experience for individuals living with paralysis and have the potential to benefit people with similarly debilitating brain injuries or diseases,” he said.

In addition to the Revolutionizing Prosthetics program, which focuses on restoring movement and sensation, DARPA’s portfolio of neurotechnology programs includes efforts that seek to develop closed-loop direct interfaces to the brain to restore function to individuals living with memory loss from traumatic brain injury or complex neuropsychiatric illness.


IBM’S ‘Rodent Brain’ Chip Could Make Our Phones Hyper-Smart

By admin,

At a lab near San Jose, IBM has built the digital equivalent of a rodent brain—roughly speaking. It spans 48 of the company’s experimental TrueNorth chips, a new breed of processor that mimics the brain’s biological building blocks. IBM
DHARMENDRA MODHA WALKS me to the front of the room so I can see it up close. About the size of a bathroom medicine cabinet, it rests on a table against the wall, and thanks to the translucent plastic on the outside, I can see the computer chips and the circuit boards and the multi-colored lights on the inside. It looks like a prop from a ’70s sci-fi movie, but Modha describes it differently. “You’re looking at a small rodent,” he says.
He means the brain of a small rodent—or, at least, the digital equivalent. The chips on the inside are designed to behave like neurons—the basic building blocks of biological brains. Modha says the system in front of us spans 48 million of these artificial nerve cells, roughly the number of neurons packed into the head of a rodent.
Modha oversees the cognitive computing group at IBM, the company that created these “neuromorphic” chips. For the first time, he and his team are sharing their unusual creations with the outside world, running a three-week “boot camp” for academics and government researchers at an IBM R&D lab on the far side of Silicon Valley. Plugging their laptops into the digital rodent brain at the front of the room, this eclectic group of computer scientists is exploring the particulars of IBM’s architecture and beginning to build software for the chip dubbed TrueNorth.
‘We want to get as close to the brain as possible while maintaining flexibility.’ —Dharmendra Modha, IBM
Some researchers who got their hands on the chip at an engineering workshop in Colorado the previous month have already fashioned software that can identify images, recognize spoken words, and understand natural language. Basically, they’re using the chip to run “deep learning” algorithms, the same algorithms that drive the internet’s latest AI services, including the face recognition on Facebook and the instant language translation on Microsoft’s Skype. But the promise is that IBM’s chip can run these algorithms in smaller spaces with considerably less electrical power, letting us shoehorn more AI onto phones and other tiny devices, including hearing aids and, well, wristwatches.
“What does a neuro-synaptic architecture give us? It lets us do things like image classification at a very, very low power consumption,” says Brian Van Essen, a computer scientist at the Lawrence Livermore National Laboratory who’s exploring how deep learning could be applied to national security. “It lets us tackle new problems in new environments.”
The TrueNorth is part of a widespread movement to refine the hardware that drives deep learning and other AI services. Companies like Google and Facebook and Microsoft are now running their algorithms on machines backed with GPUs (chips originally built to render computer graphics), and they’re moving towards FPGAs (chips you can program for particular tasks). For Peter Diehl, a PhD student in the cortical computation group at ETH Zurich and the University of Zurich, TrueNorth outperforms GPUs and FPGAs in certain situations because it consumes so little power.
The main difference, says Jason Mars, a professor of computer science at the University of Michigan, is that the TrueNorth dovetails so well with deep-learning algorithms. These algorithms mimic neural networks in much the same way IBM’s chips do, recreating the neurons and synapses in the brain. One maps well onto the other. “The chip gives you a highly efficient way of executing neural networks,” says Mars, who declined an invitation to this month’s boot camp but has closely followed the progress of the chip.
That said, the TrueNorth suits only part of the deep learning process—at least as the chip exists today—and some question how big an impact it will have. Though IBM is now sharing the chips with outside researchers, it’s years away from the market. For Modha, however, this is as it should be. As he puts it: “We’re trying to lay the foundation for significant change.”
The Brain on a Phone
Peter Diehl recently took a trip to China, where his smartphone didn’t have access to the ’net, an experience that cast the limitations of today’s AI in sharp relief. Without the internet, he couldn’t use a service like Google Now, which applies deep learning to speech recognition and natural language processing, because most of the computing takes place not on the phone but on Google’s distant servers. “The whole system breaks down,” he says.
Deep learning, you see, requires enormous amounts of processing power—processing power that’s typically provided by the massive data centers that your phone connects to over the ’net rather than locally on an individual device. The idea behind TrueNorth is that it can help move at least some of this processing power onto the phone and other personal devices, something that can significantly expand the AI available to everyday people.
To understand this, you have to understand how deep learning works. It operates in two stages. 
  • First, companies like Google and Facebook must train a neural network to perform a particular task. If they want to automatically identify cat photos, for instance, they must feed the neural net lots and lots of cat photos. 
  • Then, once the model is trained, another neural network must actually execute the task. You provide a photo and the system tells you whether it includes a cat. The TrueNorth, as it exists today, aims to facilitate that second stage.
Once a model is trained in a massive computer data center, the chip helps you execute the model. And because it’s small and uses so little power, it can fit onto a handheld device. This lets you do more at a faster speed, since you don’t have to send data over a network. If it becomes widely used, it could take much of the burden off data centers. “This is the future,” Mars says. “We’re going to see more of the processing on the devices.”
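The two-stage split can be sketched with a toy model: stage one “trains” a tiny perceptron (the heavy, data-center step), and stage two executes it with nothing but a dot product and a threshold (the cheap, on-device step). The features and data here are invented purely for illustration:

```python
# Stage 1: training (data-center work) — fit a tiny perceptron to labeled data.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # update weights only when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Stage 2: inference (on-device work) — just a dot product and a threshold.
def predict(model, x1, x2):
    w, b = model
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# "Cat photos" reduced to two made-up features; 1 = cat, 0 = not a cat.
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
model = train_perceptron(data)
print(predict(model, 0.85, 0.9))  # 1
```

Production systems train vastly larger networks, but the asymmetry is the same: training is expensive and done once; executing the trained model is cheap enough to ship to a device.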
Neurons, Axons, Synapses, Spikes
Google recently discussed its efforts to run neural networks on phones, but for Diehl, the TrueNorth could take this concept several steps further. The difference, he explains, is that the chip dovetails so well with deep learning algorithms. Each chip mimics about a million neurons, and these can communicate with each other via something similar to a synapse, the connections between neurons in the brain.
‘Silicon operates in a very different way than the stuff our brains are made of.’
The setup is quite different than what you find in chips on the market today, including GPUs and FPGAs. Whereas these chips are wired to execute particular “instructions,” the TrueNorth juggles “spikes,” much simpler pieces of information analogous to the pulses of electricity in the brain. Spikes, for instance, can show the changes in someone’s voice as they speak—or changes in color from pixel to pixel in a photo. “You can think of it as a one-bit message sent from one neuron to another,” says Rodrigo Alvarez-Icaza, one of the chip’s chief designers.
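The spike idea can be captured in a few lines with a textbook leaky integrate-and-fire model; this is a generic sketch of spiking neurons, not TrueNorth’s actual neuron design, and the parameters are arbitrary:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: accumulate input, leak a little each step,
    and emit a one-bit spike whenever the potential crosses the threshold."""
    potential = 0.0
    spikes = []
    for i in input_current:
        potential = potential * leak + i
        if potential >= threshold:
            spikes.append(1)   # one-bit message to downstream neurons
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.9, 0.1]))  # [0, 0, 0, 1, 0, 0]
```

Note how steady input produces a spike only after several steps of accumulation: information is carried by the timing and rate of one-bit events rather than by multi-bit instructions.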
The upshot is a much simpler architecture that consumes less power. Though the chip contains 5.4 billion transistors, it draws about 70 milliwatts of power. A standard Intel computer processor, by comparison, includes 1.4 billion transistors and consumes about 35 to 140 watts. Even the ARM chips that drive smartphones consume several times more power than the TrueNorth.
Of course, using such a chip also requires a new breed of software. That’s what researchers like Diehl are exploring at the TrueNorth boot camp, which began in early August and runs for another week at IBM’s research lab in San Jose, California. In some cases, researchers are translating existing code into the “spikes” that the chip can read (and back again). But they’re also working to build native code for the chip.
Parting Gift
Like these researchers, Modha discusses the TrueNorth mainly in biological terms. Neurons. Axons. Synapses. Spikes. And certainly, the chip mirrors such wetware in some ways. But the analogy has its limits. “That kind of talk always puts up warning flags,” says Chris Nicholson, the co-founder of deep learning startup Skymind. “Silicon operates in a very different way than the stuff our brains are made of.”
Modha admits as much. When he started the project in 2008, backed by $53.5M in funding from Darpa, the research arm for the Department of Defense, the aim was to mimic the brain in a more complete way using an entirely different breed of chip material. But at one point, he realized this wasn’t going to happen anytime soon. “Ambitions must be balanced with reality,” he says.
In 2010, while laid up in bed with the swine flu, he realized that the best way forward was a chip architecture that loosely mimicked the brain—an architecture that could eventually recreate the brain in more complete ways as new hardware materials were developed. “You don’t need to model the fundamental physics and chemistry and biology of the neurons to elicit useful computation,” he says. “We want to get as close to the brain as possible while maintaining flexibility.”
This is TrueNorth. It’s not a digital brain. But it is a step toward a digital brain. And with IBM’s boot camp, the project is accelerating. The machine at the front of the room is really 48 separate machines, each built around its own TrueNorth processors. Next week, as the boot camp comes to a close, Modha and his team will separate them and let all those academics and researchers carry them back to their own labs, which span over 30 institutions on five continents. “Humans use technology to transform society,” Modha says, pointing to the room of researchers. “These are the humans.”

An executive’s guide to machine learning

By admin,


Machine learning is no longer the preserve of artificial-intelligence researchers and born-digital companies like Amazon, Google, and Netflix.
Machine learning is based on algorithms that can learn from data without relying on rules-based programming. It came into its own as a scientific discipline in the late 1990s as steady advances in digitization and cheap computing power enabled data scientists to stop building finished models and instead train computers to do so. The unmanageable volume and complexity of the big data that the world is now swimming in have increased the potential of machine learning—and the need for it.
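The contrast with rules-based programming can be made concrete: in the sketch below, no classification rule is ever written down; the label of a new point is inferred entirely from labeled examples. This is a minimal nearest-neighbor classifier with made-up data, not any particular production system:

```python
import math

def nearest_neighbor_predict(train, point):
    """Classify `point` with the label of its closest training example."""
    label, _ = min(
        ((lbl, math.dist(feat, point)) for feat, lbl in train),
        key=lambda pair: pair[1],
    )
    return label

# Labeled examples: (features, label). The "rule" lives in the data itself.
train = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
         ((8.0, 9.0), "large"), ((9.5, 8.5), "large")]

print(nearest_neighbor_predict(train, (1.1, 0.9)))  # small
print(nearest_neighbor_predict(train, (9.0, 9.0)))  # large
```

Adding more labeled data changes the classifier’s behavior without anyone editing code, which is the essential shift the paragraph above describes.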
Stanford’s Fei-Fei Li

In 2007 Fei-Fei Li, the head of Stanford’s Artificial Intelligence Lab, gave up trying to program computers to recognize objects and began labeling the millions of raw images that a child might encounter by age three and feeding them to computers. By being shown thousands and thousands of labeled data sets with instances of, say, a cat, the machine could shape its own rules for deciding whether a particular set of digital pixels was, in fact, a cat.1 Last November, Li’s team unveiled a program that identifies the visual elements of any picture with a high degree of accuracy. IBM’s Watson machine relied on a similar self-generated scoring system among hundreds of potential answers to crush the world’s best Jeopardy! players in 2011.

Dazzling as such feats are, machine learning is nothing like learning in the human sense (yet). But what it already does extraordinarily well—and will get better at—is relentlessly chewing through any amount of data and every combination of variables. Because machine learning’s emergence as a mainstream management tool is relatively recent, it often raises questions. In this article, we’ve posed some that we often hear and answered them in a way we hope will be useful for any executive. Now is the time to grapple with these issues, because the competitive significance of business models turbocharged by machine learning is poised to surge. Indeed, management author Ram Charan suggests that any organization that is not a math house now or is unable to become one soon is already a legacy company.2
1. How are traditional industries using machine learning to gather fresh business insights?
Well, let’s start with sports. This past spring, contenders for the US National Basketball Association championship relied on the analytics of Second Spectrum, a California machine-learning start-up. By digitizing the past few seasons’ games, it has created predictive models that allow a coach to distinguish between, as CEO Rajiv Maheswaran puts it, “a bad shooter who takes good shots and a good shooter who takes bad shots”—and to adjust his decisions accordingly.
You can’t get more venerable or traditional than General Electric, the only member of the original Dow Jones Industrial Average still around after 119 years. GE already makes hundreds of millions of dollars by crunching the data it collects from deep-sea oil wells or jet engines to optimize performance, anticipate breakdowns, and streamline maintenance. But Colin Parris, who joined GE Software from IBM late last year as vice president of software research, believes that continued advances in data-processing power, sensors, and predictive algorithms will soon give his company the same sharpness of insight into the individual vagaries of a jet engine that Google has into the online behavior of a 24-year-old netizen from West Hollywood.
2. What about outside North America?
In Europe, more than a dozen banks have replaced older statistical-modeling approaches with machine-learning techniques and, in some cases, experienced 10 percent increases in sales of new products, 20 percent savings in capital expenditures, 20 percent increases in cash collections, and 20 percent declines in churn. The banks have achieved these gains by devising new recommendation engines for clients in retailing and in small and medium-sized companies. They have also built microtargeted models that more accurately forecast who will cancel service or default on their loans, and how best to intervene.
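A microtargeted churn model of this kind is, at its core, a scoring function over customer features. The sketch below uses a logistic score with hypothetical, hand-picked weights standing in for the weights a bank would actually fit from historical data; the feature names and customers are invented:

```python
import math

def churn_score(weights, bias, features):
    """Logistic model: map customer features to a churn probability in (0, 1)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

# Hypothetical weights for (months_inactive, complaints, products_held).
weights, bias = [0.8, 1.1, -0.6], -2.0

customers = {
    "A": (1, 0, 3),   # active, no complaints, holds several products
    "B": (5, 2, 1),   # inactive, complaining, single product
}

# Rank customers so retention offers target the likeliest churners first.
ranked = sorted(customers,
                key=lambda c: churn_score(weights, bias, customers[c]),
                reverse=True)
print(ranked)  # ['B', 'A']
```

In practice the weights come from fitting the model to past churn outcomes, and the ranked list is what drives the "how best to intervene" decisions mentioned above.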
Closer to home, as a recent article in McKinsey Quarterly notes,3 our colleagues have been applying hard analytics to the soft stuff of talent management. Last fall, they tested the ability of three algorithms developed by external vendors and one built internally to forecast, solely by examining scanned résumés, which of more than 10,000 potential recruits the firm would have accepted. The predictions strongly correlated with the real-world results. Interestingly, the machines accepted a slightly higher percentage of female candidates, which holds promise for using analytics to unlock a more diverse range of profiles and counter hidden human bias.
As ever more of the analog world gets digitized, our ability to learn from data by developing and testing algorithms will only become more important for what are now seen as traditional businesses. Google chief economist Hal Varian calls this “computer kaizen.” For “just as mass production changed the way products were assembled and continuous improvement changed how manufacturing was done,” he says, “so continuous [and often automatic] experimentation will improve the way we optimize business processes in our organizations.”4
3. What were the early foundations of machine learning?
Machine learning is based on a number of earlier building blocks, starting with classical statistics. Statistical inference does form an important foundation for the current implementations of artificial intelligence. But it’s important to recognize that classical statistical techniques were developed between the 18th and early 20th centuries for much smaller data sets than the ones we now have at our disposal. Machine learning is unconstrained by the preset assumptions of statistics. As a result, it can yield insights that human analysts do not see on their own and make predictions with ever-higher degrees of accuracy.
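That contrast can be made concrete. The toy sketch below (synthetic data, illustrative methods only) compares a classical straight-line least-squares fit, which carries a preset linear assumption, against a minimal assumption-free learner, a one-nearest-neighbour predictor, on a nonlinear pattern:

```python
import math

# Nonlinear ground truth that violates a linear model's assumptions.
xs = [i / 10 for i in range(100)]
ys = [math.sin(x) for x in xs]

# Classical approach: fit y = a*x + b by least squares.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
b = my - a * mx

def linear_predict(x):
    return a * x + b

# Assumption-free learner (minimal): predict with the nearest training point.
def knn_predict(x):
    return min(zip(xs, ys), key=lambda pt: abs(pt[0] - x))[1]

def mse(predict):
    """Mean squared error on held-out points between the training points."""
    test = [i / 10 + 0.05 for i in range(95)]
    return sum((predict(x) - math.sin(x)) ** 2 for x in test) / len(test)

linear_error = mse(linear_predict)
knn_error = mse(knn_predict)
```

The nearest-neighbour learner tracks the curve closely while the line cannot, which is the point of the paragraph above: freed from preset assumptions, a learner can capture structure a classical model misses.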
More recently, in the 1930s and 1940s, the pioneers of computing (such as Alan Turing, who had a deep and abiding interest in artificial intelligence) began formulating and tinkering with the basic techniques such as neural networks that make today’s machine learning possible. But those techniques stayed in the laboratory longer than many technologies did and, for the most part, had to await the development and infrastructure of powerful computers, in the late 1970s and early 1980s. That’s probably the starting point for the machine-learning adoption curve. New technologies introduced into modern economies—the steam engine, electricity, the electric motor, and computers, for example—seem to take about 80 years to transition from the laboratory to what you might call cultural invisibility. The computer hasn’t faded from sight just yet, but it’s likely to by 2040. And it probably won’t take much longer for machine learning to recede into the background.
4. What does it take to get started?
C-level executives will best exploit machine learning if they see it as a tool to craft and implement a strategic vision. But that means putting strategy first. Without strategy as a starting point, machine learning risks becoming a tool buried inside a company’s routine operations: it will provide a useful service, but its long-term value will probably be limited to an endless repetition of “cookie cutter” applications such as models for acquiring, stimulating, and retaining customers.
We find the parallels with M&A instructive. That, after all, is a means to a well-defined end. No sensible business rushes into a flurry of acquisitions or mergers and then just sits back to see what happens. Companies embarking on machine learning should make the same three commitments companies make before embracing M&A:

  • first, to investigate all feasible alternatives;
  • second, to pursue the strategy wholeheartedly at the C-suite level; and,
  • third, to use (or if necessary acquire) existing expertise and knowledge in the C-suite to guide the application of that strategy.
The people charged with creating the strategic vision may well be (or have been) data scientists. But as they define the problem and the desired outcome of the strategy, they will need guidance from C-level colleagues overseeing other crucial strategic initiatives. More broadly, companies must have two types of people to unleash the potential of machine learning.

  • “Quants” are schooled in its language and methods.
  • “Translators” can bridge the disciplines of data, machine learning, and decision making by reframing the quants’ complex results as actionable insights that generalist managers can execute.
Effective machine learning requires access to troves of useful and reliable data; think of Watson’s ability, in tests, to predict oncological outcomes better than physicians, or Facebook’s recent success in teaching computers to identify specific human faces nearly as accurately as humans do. A true data strategy starts with identifying gaps in the data, determining the time and money required to fill those gaps, and breaking down silos. Too often, departments hoard information and politicize access to it—one reason some companies have created the new role of chief data officer to pull together what’s required. Other elements include putting responsibility for generating data in the hands of frontline managers.
Start small—look for low-hanging fruit and trumpet any early success. This will help recruit grassroots support and reinforce the changes in individual behavior and the employee buy-in that ultimately determine whether an organization can apply machine learning effectively. Finally, evaluate the results in the light of clearly identified criteria for success.
5. What’s the role of top management?
Behavioral change will be critical, and one of top management’s key roles will be to influence and encourage it. Traditional managers, for example, will have to get comfortable with their own variations on A/B testing, the technique digital companies use to see what will and will not appeal to online consumers. Frontline managers, armed with insights from increasingly powerful computers, must learn to make more decisions on their own, with top management setting the overall direction and zeroing in only when exceptions surface. Democratizing the use of analytics—providing the front line with the necessary skills and setting appropriate incentives to encourage data sharing—will require time.
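For managers unfamiliar with the mechanics, the statistics behind a basic A/B test can be sketched in a few lines: a two-proportion z-test comparing conversion rates for variants A and B. The counts below are hypothetical:

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test. Returns (z, two-sided p-value) for
    H0: the two variants convert at the same rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 13% vs. A's 10%.
z, p = ab_test(conv_a=200, n_a=2000, conv_b=260, n_b=2000)
```

With these counts the difference is statistically significant at the conventional 5 percent level, so a traditional manager running this test would be justified in rolling out variant B.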
C-level officers should think about applied machine learning in three stages: machine learning 1.0, 2.0, and 3.0—or, as we prefer to say,

  1. description, 
  2. prediction, and
  3. prescription. 

They probably don’t need to worry much about the description stage, which most companies have already been through. That was all about collecting data in databases (which had to be invented for the purpose), a development that gave managers new insights into the past. OLAP—online analytical processing—is now pretty routine and well established in most large organizations.

There’s a much more urgent need to embrace the prediction stage, which is happening right now. Today’s cutting-edge technology already allows businesses not only to look at their historical data but also to predict behavior or outcomes in the future—for example, by helping credit-risk officers at banks to assess which customers are most likely to default or by enabling telcos to anticipate which customers are especially prone to “churn” in the near term (exhibit).
A frequent concern for the C-suite when it embarks on the prediction stage is the quality of the data. That concern often paralyzes executives. In our experience, though, the last decade’s IT investments have equipped most companies with sufficient information to obtain new insights even from incomplete, messy data sets, provided of course that those companies choose the right algorithm. Adding exotic new data sources may be of only marginal benefit compared with what can be mined from existing data warehouses. Confronting that challenge is the task of the “chief data scientist.”
Prescription—the third and most advanced stage of machine learning—is the opportunity of the future and must therefore command strong C-suite attention. It is, after all, not enough just to predict what customers are going to do; only by understanding why they are going to do it can companies encourage or deter that behavior in the future. Technically, today’s machine-learning algorithms, aided by human translators, can already do this. For example, an international bank concerned about the scale of defaults in its retail business recently identified a group of customers who had suddenly switched from using credit cards during the day to using them in the middle of the night. That pattern was accompanied by a steep decrease in their savings rate. After consulting branch managers, the bank further discovered that the people behaving in this way were also coping with some recent stressful event. As a result, all customers tagged by the algorithm as members of that microsegment were automatically given a new limit on their credit cards and offered financial advice.
The prescription stage of machine learning, ushering in a new era of man–machine collaboration, will require the biggest change in the way we work. While the machine identifies patterns, the human translator’s responsibility will be to interpret them for different microsegments and to recommend a course of action. Here the C-suite must be directly involved in the crafting and formulation of the objectives that such algorithms attempt to optimize.
6. This sounds awfully like automation replacing humans in the long run. Are we any nearer to knowing whether machines will replace managers?
It’s true that change is coming (and data are generated) so quickly that human-in-the-loop involvement in all decision making is rapidly becoming impractical. Looking three to five years out, we expect to see far higher levels of artificial intelligence, as well as the development of distributed autonomous corporations. These self-motivating, self-contained agents, formed as corporations, will be able to carry out set objectives autonomously, without any direct human supervision. Some DACs will certainly become self-programming.
One current of opinion sees distributed autonomous corporations as threatening and inimical to our culture. But by the time they fully evolve, machine learning will have become culturally invisible in the same way technological inventions of the 20th century disappeared into the background. The role of humans will be to direct and guide the algorithms as they attempt to achieve the objectives that they are given. That is one lesson of the automatic-trading algorithms which wreaked such damage during the financial crisis of 2008.
No matter what fresh insights computers unearth, only human managers can decide the essential questions, such as which critical business problems a company is really trying to solve. Just as human colleagues need regular reviews and assessments, so these “brilliant machines” and their works will also need to be regularly evaluated, refined—and, who knows, perhaps even fired or told to pursue entirely different paths—by executives with experience, judgment, and domain expertise.
The winners will be neither machines alone, nor humans alone, but the two working together effectively.
7. So in the long term there’s no need to worry?
It’s hard to be sure, but distributed autonomous corporations and machine learning should be high on the C-suite agenda. We anticipate a time when the philosophical discussion of what intelligence, artificial or otherwise, might be will end because there will be no such thing as intelligence—just processes. If distributed autonomous corporations act intelligently, perform intelligently, and respond intelligently, we will cease to debate whether high-level intelligence other than the human variety exists. In the meantime, we must all think about what we want these entities to do, the way we want them to behave, and how we are going to work with them.
About the authors
Dorian Pyle is a data expert in McKinsey’s Miami office, and Cristina San Jose is a principal in the Madrid office.
by Dorian Pyle and Cristina San Jose
June 2015

Computer invents new scientific theory without human help for the first time

By admin,

The mystery of how flatworms regenerate has been solved independently by a computer (Max Delbrück Center)

One of biology’s biggest mysteries – how a sliced up flatworm can regenerate into new organisms – has been solved independently by a computer. The discovery marks the first time that a computer has come up with a new scientific theory without direct human help.

The computer invented an accurate model of the inner workings of a flatworm (UoM)

Computer scientists from the University of Maryland programmed a computer to randomly predict how a worm’s genes formed a regulatory network capable of regeneration, before evaluating these predictions through simulation.

After three days of continuously predicting, simulating and evaluating, the computer was able to come up with a core genetic network that explained how the worm’s regeneration took place.
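The predict-simulate-evaluate loop can be sketched in miniature. The following is a toy analogue, not the Lobo-Levin system itself: an evolutionary search in which random variants of a candidate model are simulated and kept only when they better reproduce a set of observations. The simulator and target data here are hypothetical stand-ins:

```python
import random

random.seed(1)

TARGET = [0.2, 0.5, 0.9]   # "experimental" observations to explain

def simulate(params):
    """Stand-in simulator: what the candidate model predicts we'd observe."""
    a, b = params
    return [a * x + b for x in (0.0, 0.5, 1.0)]

def error(params):
    """Evaluate: how far the simulated outcome is from the observations."""
    return sum((s - t) ** 2 for s, t in zip(simulate(params), TARGET))

# Predict-simulate-evaluate loop: propose a random variant of the best
# model so far, keep it only if it explains the data better.
best = (random.uniform(-1, 1), random.uniform(-1, 1))
for _ in range(5000):
    candidate = (best[0] + random.gauss(0, 0.1),
                 best[1] + random.gauss(0, 0.1))
    if error(candidate) < error(best):
        best = candidate

final_error = error(best)
```

The real system searched over gene-regulatory network topologies against a database of regeneration experiments rather than two numeric parameters, but the loop, mutate, simulate, keep the better explanation, is the same.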
The study by Daniel Lobo and Michael Levin, “Inferring Regulatory Networks from Experimental Morphological Phenotypes,” was published on Thursday (4 June) in the journal PLOS Computational Biology.

“It’s not just statistics or number-crunching,” Levin told Popular Mechanics. “The invention of models to explain what nature is doing is the most creative thing scientists do. This is the heart and soul of the scientific enterprise. None of us could have come up with this model; we (as a field) have failed to do so after over a century of effort.”

Lobo and Levin are now applying the trial-and-error approach to creating scientific models and theories in different areas, including cancer research.

The pair believe that the approach can be used to better understand the process of metastasis, which causes cancer to spread through the body.

However, despite the computer only taking three days to create the worm model, it took the scientists several years to put together the program.
In order to transfer the computer’s abilities to other areas, massive databases of scientific experiments would need to be prepared in order to have enough raw material for discoveries to be made.

“This problem, and our approach, is nearly universal,” Levin said. “It can be used with anything, where functional data exist but the underlying mechanism is hard to guess.

“As long as you tweak the formal language, build the database of facts in your field, and provide an appropriate simulator, the whole scheme can be used for many, many applications.”

Silicon Valley Then and Now: To Invent the Future, You Must Understand the Past

By admin,

William Shockley’s employees toast him for his Nobel Prize, 1956. Photo courtesy Computer History Museum.
“You can’t really understand what is going on now without understanding what came before.”
Steve Jobs is explaining why, as a young man, he spent so much time with the Silicon Valley entrepreneurs a generation older, men like Robert Noyce, Andy Grove, and Regis McKenna.
It’s a beautiful Saturday morning in May, 2003, and I’m sitting next to Jobs on his living room sofa, interviewing him for a book I’m writing. I ask him to tell me more about why he wanted, as he put it, “to smell that second wonderful era of the valley, the semiconductor companies leading into the computer.” Why, I want to know, is it not enough to stand on the shoulders of giants? Why does he want to pick their brains?
“It’s like that Schopenhauer quote about the conjurer,” he says. When I look blank, he tells me to wait and then dashes upstairs. He comes down a minute later holding a book and reading aloud:
Steve Jobs and Robert Noyce.
Courtesy Leslie Berlin.
He who lives to see two or three generations is like a man who sits some time in the conjurer’s booth at a fair, and witnesses the performance twice or thrice in succession. The tricks were meant to be seen only once, and when they are no longer a novelty and cease to deceive, their effect is gone.
History, Jobs understood, gave him a chance to see — and see through — the conjurer’s tricks before they happened to him, so he would know how to handle them.
Flash forward eleven years. It’s 2014, and I am going to see Robert W. Taylor. In 1966, Taylor convinced the Department of Defense to build the ARPANET that eventually formed the core of the Internet. He went on to run the famous Xerox PARC Computer Science Lab that developed the first modern personal computer. For a finishing touch, he led one of the teams at DEC behind the world’s first blazingly fast search engine — three years before Google was founded.
Visiting Taylor is like driving into a Silicon Valley time machine. You zip past the venture capital firms on Sand Hill Road, over the 280 freeway, and down a twisty two-lane street that is nearly impassable on weekends, thanks to the packs of lycra-clad cyclists on multi-thousand-dollar bikes raising their cardio thresholds along the steep climbs. A sharp turn and you enter what seems to be another world, wooded and cool, the coastal redwoods dense along the hills. Cell phone signals fade in and out in this part of Woodside, far above Buck’s Restaurant where power deals are negotiated over early-morning cups of coffee. GPS tries valiantly to ascertain a location — and then gives up.
When I get to Taylor’s home on a hill overlooking the Valley, he tells me about another visitor who recently took that drive, apparently driven by the same curiosity that Steve Jobs had: Mark Zuckerberg, along with some colleagues at the company he founded, Facebook.
“Zuckerberg must have heard about me in some historical sense,” Taylor recalls in his Texas drawl. “He wanted to see what I was all about, I guess.”
To invent the future, you must understand the past.

I am a historian, and my subject matter is Silicon Valley. So I’m not surprised that Jobs and Zuckerberg both understood that the Valley’s past matters today and that the lessons of history can take innovation further. When I talk to other founders and participants in the area, they also want to hear what happened before. Their questions usually boil down to two:

  1. Why did Silicon Valley happen in the first place, and 
  2. why has it remained at the epicenter of the global tech economy for so long?
I think I can answer those questions.

First, a definition of terms. When I use the term “Silicon Valley,” I am referring quite specifically to the narrow stretch of the San Francisco Peninsula that is sandwiched between the bay to the east and the Coastal Range to the west. (Yes, Silicon Valley is a physical valley — there are hills on the far side of the bay.) Silicon Valley has traditionally comprised

  • Santa Clara County and
  • the southern tip of San Mateo County.

In the past few years,

  • parts of Alameda County and
  • the city of San Francisco

can also legitimately be considered satellites of Silicon Valley, or perhaps part of “Greater Silicon Valley.”

The name “Silicon Valley,” incidentally, was popularized in 1971 by a hard-drinking, story-chasing, gossip-mongering journalist named Don Hoefler, who wrote for a trade rag called Electronic News. Before that, the region was called “the Valley of the Heart’s Delight,” renowned for its apricot, plum, cherry and almond orchards.
“This was down-home farming, three generations of tranquility, beauty, health, and productivity based on family farms of small acreage but bountiful production,” reminisced Wallace Stegner, the famed Western writer. To see what the Valley looked like then, watch the first few minutes of this wonderful 1948 promotional video for the “Valley of the Heart’s Delight.”
Three historical forces — technical, cultural, and financial — created Silicon Valley.
On the technical side, in some sense the Valley got lucky. In 1955, one of the inventors of the transistor, William Shockley, moved back to Palo Alto, where he had spent some of his childhood. Shockley was also a brilliant physicist — he would share the Nobel Prize in 1956 — an outstanding teacher, and a terrible entrepreneur and boss. Because he was a brilliant scientist and inventor, Shockley was able to recruit some of the brightest young researchers in the country — Shockley called them “hot minds” — to come work for him 3,000 miles from the research-intensive businesses and laboratories that lined the Eastern Seaboard from Boston to Bell Labs in New Jersey. Because Shockley was an outstanding teacher, he got these young scientists, all but one of whom had never built transistors, to the point that they not only understood the tiny devices but began innovating in the field of semiconductor electronics on their own.
And because Shockley was a terrible boss — the sort of boss who posted salaries and subjected his employees to lie-detector tests — many who came to work for him could not wait to get away and work for someone else. That someone else, it turned out, would be themselves. The move by eight of Shockley’s employees to launch their own semiconductor operation called Fairchild Semiconductor in 1957 marked the first significant modern startup company in Silicon Valley. After Fairchild Semiconductor blew apart in the late 1960s, employees launched dozens of new companies (including Intel, National Semiconductor, and AMD) that are collectively called the Fairchildren.
The Fairchild 8: Gordon Moore, Sheldon Roberts, Eugene Kleiner, Robert Noyce, Victor Grinich, Julius Blank, Jean Hoerni, and Jay Last. Photo courtesy Wayne Miller/Magnum Photos.
Equally important for the Valley’s future was the technology that Shockley taught his employees to build: the transistor. Nearly everything that we associate with the modern technology revolution and Silicon Valley can be traced back to the tiny, tiny transistor.
Think of the transistor as the grain of sand at the core of the Silicon Valley pearl. The next layer of the pearl appeared when people strung together transistors, along with other discrete electronic components like resistors and capacitors, to make an entire electronic circuit on a single slice of silicon. This new device was called a microchip. Then someone came up with a specialized microchip that could be programmed: the microprocessor. The first pocket calculators were built around these microprocessors. Then someone figured out that it was possible to combine a microprocessor with other components and a screen — that was a computer. People wrote code for those computers to serve as operating systems and software on top of those systems. At some point people began connecting these computers to each other: networking. Then people realized it should be possible to “virtualize” these computers and store their contents off-site in a “cloud,” and it was also possible to search across the information stored in multiple computers. Then the networked computer was shrunk — keeping the key components of screen, keyboard, and pointing device (today a finger) — to build tablets and palm-sized machines called smart phones. Then people began writing apps for those mobile devices … .
You get the picture. These changes all kept pace to the metronomic tick-tock of Moore’s Law.
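Moore's Law is easy to put in numbers. The sketch below projects transistor counts from the Intel 4004's roughly 2,300 transistors in 1971, assuming a doubling every two years, an illustrative simplification of the law:

```python
def moores_law(start_count, start_year, end_year, doubling_years=2):
    """Project a transistor count forward assuming one doubling
    every `doubling_years` years."""
    return start_count * 2 ** ((end_year - start_year) / doubling_years)

# From ~2,300 transistors in 1971, a doubling every two years lands
# in the billions by the mid-2010s, roughly where real chips were.
projected_2015 = moores_law(2300, 1971, 2015)
```

Forty-four years is twenty-two doublings, a factor of about four million, which is why each new layer of the pearl became feasible on schedule.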
The skills learned through building and commercializing one layer of the pearl underpinned and supported the development of the next layer or developments in related industries. Apple, for instance, is a company that people often speak of as sui generis, but Apple Computer’s early key employees had worked at Intel, Atari, or Hewlett-Packard. Apple’s venture capital backers had either backed Fairchild or Intel or worked there. The famous Macintosh, with its user-friendly aspect, graphical-user interface, overlapping windows, and mouse was inspired by a 1979 visit Steve Jobs and a group of engineers paid to XEROX PARC, located in the Stanford Research Park. In other words, Apple was the product of its Silicon Valley environment and technological roots.
This brings us to the second force behind the birth of Silicon Valley: culture. When Shockley, his transistor and his recruits arrived in 1955, the valley was still largely agricultural, and the small local industry had a distinctly high-tech (or as they would have said then, “space age”) focus. The largest employer was defense contractor Lockheed. IBM was about to open a small research facility. Hewlett-Packard, one of the few homegrown tech companies in Silicon Valley before the 1950s, was more than a decade old.
Stanford, meanwhile, was actively trying to build up its physics and engineering departments. Professor (and Provost from 1955 to 1965) Frederick Terman worried about a “brain drain” of Stanford graduates to the East Coast, where jobs were plentiful. So he worked with President J.E. Wallace Sterling to create what Terman called “a community of technical scholars” in which the links between industry and academia were fluid. This meant that as the new transistor-cum-microchip companies began to grow, technically knowledgeable engineers were already there.
Woz and Jobs.
Photo courtesy Computer History Museum.
These trends only accelerated as the population exploded. Between 1950 and 1970, the population of Santa Clara County tripled, from roughly 300,000 residents to more than 1 million. It was as if a new person moved into Santa Clara County every 15 minutes for 20 years. The newcomers were, overall, younger and better educated than the people already in the area. The Valley changed from a community of aging farmers with high school diplomas to one filled with 20-something PhDs.
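The "one new person every 15 minutes" figure is simple arithmetic, and it checks out:

```python
# Santa Clara County grew from roughly 300,000 to 1,000,000 residents
# between 1950 and 1970 (figures from the passage above).
new_residents = 1_000_000 - 300_000
minutes = 20 * 365.25 * 24 * 60            # twenty years, in minutes
minutes_per_person = minutes / new_residents   # roughly 15 minutes
```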
All these new people pouring into what had been an agricultural region meant that it was possible to create a business environment around the needs of new companies coming up, rather than adapting an existing business culture to accommodate the new industries. In what would become a self-perpetuating cycle, everything from specialized law firms, recruiting operations and prototyping facilities; to liberal stock option plans; to zoning laws; to community college course offerings developed to support a tech-based business infrastructure.
Historian Richard White says that the modern American West was “born modern” because the population followed, rather than preceded, connections to national and international markets. Silicon Valley was born post-modern, with those connections not only in place but so taken for granted that people were comfortable experimenting with new types of business structures and approaches strikingly different from the traditional East Coast business practices with roots nearly two centuries old.
From the beginning, Silicon Valley entrepreneurs saw themselves in direct opposition to their East Coast counterparts. The westerners saw themselves as cowboys and pioneers, working on a “new frontier” where people dared greatly and failure was not shameful but just the quickest way to learn a hard lesson. In the 1970s, with the influence of the counterculture’s epicenter at the corner of Haight and Ashbury, only an easy drive up the freeway, Silicon Valley companies also became famous for their laid-back, dressed-down culture, and for their products, such as video games and personal computers, that brought advanced technology to “the rest of us.”

The third key component driving the birth of Silicon Valley, along with the right technology seed falling into a particularly rich and receptive cultural soil, was money. Again, timing was crucial. Silicon Valley was kick-started by federal dollars. Whether it was

  • the Department of Defense buying 100% of the earliest microchips, 
  • Hewlett-Packard and Lockheed selling products to military customers, or 
  • federal research money pouring into Stanford, 

Silicon Valley was the beneficiary of Cold War fears that translated to the Department of Defense being willing to spend almost anything on advanced electronics and electronic systems. The government, in effect, served as the Valley’s first venture capitalist.

The first significant wave of venture capital firms hit Silicon Valley in the 1970s. Both Sequoia Capital and Kleiner Perkins Caufield and Byers were founded by Fairchild alumni in 1972. Between them, these venture firms would go on to fund Amazon, Apple, Cisco, Dropbox, Electronic Arts, Facebook, Genentech, Google, Instagram, Intuit, and LinkedIn — and that is just the first half of the alphabet.
This model of one generation succeeding and then turning around to offer the next generation of entrepreneurs financial support and managerial expertise is one of the most important and under-recognized secrets to Silicon Valley’s ongoing success. Robert Noyce called it “re-stocking the stream I fished from.” Steve Jobs, in his remarkable 2005 commencement address at Stanford, used the analogy of a baton being passed from one runner to another in an ongoing relay across time.
So that’s how Silicon Valley emerged. Why has it endured?

After all, if modern Silicon Valley was born in the 1950s, the region is now in its seventh decade. For roughly two-thirds of that time, Valley watchers have predicted its imminent demise, usually with an allusion to Detroit.

  • First, the oil shocks and energy crises of the 1970s were going to shut down the fabs (specialized factories) that build microchips.
  • In the 1980s, Japanese competition was the concern.
  • The bursting of the dot-com bubble,
  • the rise of formidable tech regions in other parts of the world, and
  • the Internet and mobile technologies that make it possible to work from anywhere

have all, in their turn, been proclaimed Silicon Valley’s death knell.

The Valley of Heart’s Delight, pre-technology. OSU Special Collections.
The Valley economy is notorious for its cyclicity, but it has indeed endured. Here we are in 2015, a year in which more patents, more IPOs, and a larger share of venture capital and angel investments have come from the Valley than ever before. As a recent report from Joint Venture Silicon Valley put it, “We’ve extended a four-year streak of job growth, we are among the highest income regions in the country, and we have the biggest share of the nation’s high-growth, high-wage sectors.” Would-be entrepreneurs continue to move to the Valley from all over the world. Even companies that are not started in Silicon Valley move there (witness Facebook).
Why? What is behind Silicon Valley’s staying power? The answer is that many of the factors that launched Silicon Valley in the 1950s continue to underpin its strength today even as the Valley economy has proven quite adaptable.
The Valley still glides in the long wake of the transistor, both in terms of technology and in terms of the infrastructure to support companies that rely on semiconductor technology. Remember the pearl. At the same time, when new industries not related directly to semiconductors have sprung up in the Valley — industries like biotechnology — they have taken advantage of the infrastructure and support structure already in place.
Venture capital has remained the dominant source of funding for young companies in Silicon Valley. In 2014, some $14.5 billion in venture capital was invested in the Valley, accounting for 43 percent of all venture capital investments in the country. More than half of Silicon Valley venture capital went to software investments, and the rise of software, too, helps to explain the recent migration of many tech companies to San Francisco. (San Francisco, it should be noted, accounted for nearly half of the $14.5 billion figure.) Building microchips or computers or specialized production equipment — things that used to happen in Silicon Valley — requires many people, huge fabrication operations and access to specialized chemicals and treatment facilities, often on large swaths of land. Building software requires none of these things; in fact, software engineers need little more than a computer and some server space in the cloud to do their jobs. It is thus easy for software companies to locate in cities like San Francisco, where many young techies want to live.
The Valley continues to be a magnet for young, educated people. The flood of intranational immigrants to Silicon Valley from other parts of the country in the second half of the twentieth century has become, in the twenty-first century, a flood of international immigrants from all over the world. It is impossible to overstate the importance of immigrants to the region and to the modern tech industry. Nearly 37 percent of the people in Silicon Valley today were born outside of the United States — of these, more than 60 percent were born in Asia and 20 percent in Mexico. Half of Silicon Valley households speak a language other than English in the home. Sixty-five percent of the people with bachelor’s degrees working in science and engineering in the Valley were born in another country. Let me say that again: roughly two-thirds of the college-educated people working in Valley sci-tech industries are foreign-born. (Nearly half the college graduates working in all industries in the Valley are foreign-born.)
Here’s another way to look at it: From 1995 to 2005, more than half of all Silicon Valley startups had at least one founder who was born outside the United States.[13] Their businesses — companies like Google and eBay — have created American jobs and billions of dollars in American market capitalization.
Silicon Valley, now, as in the past, is built and sustained by immigrants.
Gordon Moore and Robert Noyce at Intel in 1970. Photo courtesy Intel.
Stanford also remains at the center of the action. By one estimate, from 2012, companies formed by Stanford entrepreneurs generate world revenues of $2.7 trillion annually and have created 5.4 million jobs since the 1930s. This figure includes companies whose primary business is not tech: companies like Nike, Gap, and Trader Joe’s. But even if you just look at Silicon Valley companies that came out of Stanford, the list is impressive, including Cisco, Google, HP, IDEO, Instagram, MIPS, Netscape, NVIDIA, Silicon Graphics, Snapchat, Sun, Varian, VMware, and Yahoo. Indeed, some critics have complained that Stanford has become overly focused on student entrepreneurship in recent years — an allegation that I disagree with but that is neatly encapsulated in a 2012 New Yorker article that called the university “Get Rich U.”
The above represent important continuities, but change has also been vital to the region’s longevity. Silicon Valley has been re-inventing itself for decades, a trend that is evident with a quick look at the emerging or leading technologies in the area:
• 1940s: instrumentation
• 1950s/60s: microchips
• 1970s: biotech, consumer electronics using chips (PCs, video games, etc.)
• 1980s: software, networking
• 1990s: web, search
• 2000s: cloud, mobile, social networking
The overriding sense of what it means to be in Silicon Valley — the lionization of risk-taking, the David-versus-Goliath stories, the persistent belief that failure teaches important business lessons even when the data show otherwise — has not changed, but over the past few years, a new trope has appeared alongside the Western metaphors of Gold Rushes and Wild Wests: Disruption.
“Disruption” is the notion, roughly based on ideas first proposed by Joseph Schumpeter in 1942, that a little company can come in and — usually with technology — completely remake an industry that seemed established and largely impervious to change. So: Uber is disrupting the taxi industry. Airbnb is disrupting the hotel industry. The disruption story is, in its essentials, the same as the Western tale: a new approach comes out of nowhere to change the establishment world for the better. You can hear the same themes of adventure, anti-establishment thinking, opportunity and risk-taking. It’s the same song, with different lyrics.
The shift to the new language may reflect the key role that immigrants play in today’s Silicon Valley. Many educated, working adults in the region arrived with no cultural background that promoted cowboys or pioneers. These immigrants did not even travel west to get to Silicon Valley. They came east, or north. It will be interesting to see how long the Western metaphor survives this cultural shift. I’m betting that it’s on its way out.
Something else new has been happening in Silicon Valley culture in the past decade. The anti-establishment little guys have become the establishment big guys. Apple settled an anti-trust case. You are hearing about Silicon Valley companies like Facebook or Google collecting massive amounts of data on American citizens, some of which has ended up in the hands of the NSA. What happens when Silicon Valley companies start looking like the Big Brother from the famous 1984 Apple Macintosh commercial?
A Brief Feint at the Future
I opened these musings by defining Silicon Valley as a physical location. I’m often asked how or whether place will continue to matter in the age of mobile technologies, the Internet and connections that will only get faster. In other words, is region an outdated concept?
I believe that physical location will continue to be relevant when it comes to technological innovation. Proximity matters. Creativity cannot be scheduled for the particular half-hour block of time that everyone has free to teleconference. Important work can be done remotely, but the kinds of conversations that lead to real breakthroughs often happen serendipitously. People run into each other down the hall, or in a coffee shop, or at a religious service, or at the gym, or on the sidelines of a kid’s soccer game.
It is precisely because place will continue to matter that the biggest threats to Silicon Valley’s future have local and national parameters. Silicon Valley’s innovation economy depends on its being able to attract the brightest minds in the world; they act as a constant innovation “refresh” button. If Silicon Valley loses its allure for those people —

  • if the quality of public schools declines so that their children cannot receive good educations, 
  • if housing prices remain so astronomical that fewer than half of first-time buyers can afford the median-priced home, or 
  • if immigration policy makes it difficult for high-skilled immigrants who want to stay here to do so — 

the Valley’s status, and that of the United States economy, will be threatened. Also worrisome: ever-expanding gaps between the highest and lowest earners in Silicon Valley; stagnant wages for low- and middle-skilled workers; and the persistent reality that, as a group, men in Silicon Valley earn more than women at the same level of educational attainment. Moreover, today in Silicon Valley, the lowest-earning racial/ethnic group earns 70 percent less than the highest-earning group, according to the Joint Venture report. The stark reality, with apologies to George Orwell, is that even in the Valley’s vaunted egalitarian culture, some people are more equal than others.

Another threat is the continuing decline in federal support for basic research. Venture capital is important for developing products into companies, but the federal government still funds the great majority of basic research in this country. Silicon Valley is highly dependent on that basic research — “No Basic Research, No iPhone” is my favorite title from a recently released report on research and development in the United States. Today, the US occupies tenth place among OECD nations in overall R&D investment. That is investment as a percentage of GDP — somewhere between 2.5 and 3 percent. This represents a 13 percent drop below where we were ten years ago (again as a percentage of GDP). China is projected to outspend the United States in R&D within the next ten years, both in absolute terms and as a share of economic output.
People around the world have tried to reproduce Silicon Valley. No one has succeeded.
And no one will succeed because no place else — including Silicon Valley itself in its 2015 incarnation — could ever reproduce the unique concoction of academic research, technology, countercultural ideals and a California-specific type of Gold Rush reputation that attracts people with a high tolerance for risk and very little to lose. Partially through the passage of time, partially through deliberate effort by some entrepreneurs who tried to “give back” and others who tried to make a buck, this culture has become self-perpetuating.
The drive to build another Silicon Valley may be doomed to fail, but that is not necessarily bad news for regional planners elsewhere. The high-tech economy is not a zero-sum game. The twenty-first century global technology economy is large and complex enough for multiple regions to thrive for decades to come — including Silicon Valley, if the threats it faces are taken seriously.