Profile: Kira Radinsky, the Prophet of the Web

Using machine intelligence and data mining, this entrepreneur can make predictions about future events
Photo: Kira Radinsky
Kira Radinsky is a rising star in predictive analytics. She combines artificial intelligence and online data mining to predict likely futures for individuals, societies, and businesses. Radinsky made headlines two years ago for developing a series of algorithms that dissect words and phrases in traditional and social media, Web activity, and search trends to warn of possible disasters, geopolitical events, and disease outbreaks. Her system predicted Cuba’s first cholera epidemic in decades and early Arab Spring riots.
The algorithm—which grew out of Radinsky’s Ph.D. work at Technion–Israel Institute of Technology, in Haifa—looks for clues and historical patterns inferred from online behavior and news articles. “During an event, people search [for related topics] much more than usual,” she says. “The system looks for other times we saw a spike in that same topic and analyzes what was going on in the places that had this same spike.”
With the 2012 Cuba cholera outbreak, for example, the system “learned,” by perusing news items, that cholera outbreaks occurred after droughts in Angola in 2006 and large storms in Africa in 2007. The algorithm also noted other factors, such as location, water coverage, population density, and economy. When Cuba presented similar conditions, the system warned of an outbreak, and a few weeks later, cholera was reported.
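The matching step described above can be sketched as a similarity check between a region’s current conditions and the conditions that preceded past outbreaks. To be clear, the feature names, threshold, and toy records below are invented for illustration; they are not Radinsky’s actual system.

```python
# Hypothetical sketch of condition matching against historical events.
# Feature names and the 0.8 threshold are illustrative assumptions.

def similarity(a, b):
    """Fraction of shared feature values between two condition dicts."""
    keys = set(a) & set(b)
    return sum(a[k] == b[k] for k in keys) / len(keys)

HISTORICAL = [
    {"conditions": {"drought": True, "low_water_coverage": True,
                    "dense_population": True},
     "outcome": "cholera outbreak"},   # e.g. Angola 2006, per the article
    {"conditions": {"drought": False, "low_water_coverage": False,
                    "dense_population": True},
     "outcome": None},                 # a spike with no outbreak afterward
]

def warn(current, threshold=0.8):
    """Return outcomes of past events whose conditions resemble `current`."""
    return [h["outcome"] for h in HISTORICAL
            if h["outcome"] and similarity(current, h["conditions"]) >= threshold]

cuba = {"drought": True, "low_water_coverage": True, "dense_population": True}
print(warn(cuba))  # ['cholera outbreak']
```

A real system would weight features learned from news archives rather than count exact matches, but the shape of the inference is the same: similar antecedent conditions raise a warning before the event is reported.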
Radinsky has parlayed her research into a 3-year-old startup, SalesPredict, with 20 people on staff and offices near Tel Aviv and in San Francisco. SalesPredict focuses on helping businesses, including several Fortune 100 companies, find new opportunities for sales or retain existing customers and clients. The company has raised US $4 million from venture funding and has been doubling its revenue every quarter—except the first quarter of 2015, when it tripled.
“It’s big data meets game theory,” Radinsky says. “There was so much data at each company about their historical sales, but very few actions were taken to scientifically analyze it.”
As a youth, Radinsky wanted to be a scientist. But the summer before high school, she botched a research project at a science camp. “It was biology and computers together. I was in charge of feeding the cells. I was not good at that, and I killed those cells by mistake. After that, they had me process the data instead—and it turned out to be something I enjoyed.”
Radinsky enrolled at Technion in 2002 at just 15, earning extra cash by programming websites on the side. At 18, she did her mandatory Israel Defense Forces duty as a computer scientist in its intelligence division. She completed bachelor’s and master’s degrees in computer science in 2009, and a Ph.D. in machine learning and artificial intelligence in 2012, at 26.
Since graduating, she’s served full-time as SalesPredict’s chief technical officer. “Getting into predictive analytics usually requires master’s and Ph.D. degrees in data mining and machine learning. Data scientists have strong math and statistical backgrounds. It’s a very academically oriented field,” she says. “For my company, it helps if you’ve worked for a company like Google or have strong engineering skills on how to scale machine learning systems.”
Radinsky believes in developing this technology to benefit not only business but also society. “Mining patterns in data can lead humanity to a new era of faster scientific discoveries. We are working on several pro bono activities in the medical domain to apply machine learning algorithms to predict cancers and find the driving causes of diseases.”
This article originally appeared in print as “Kira Radinsky.”
By Susan Karlin
21 May 2015

Google a step closer to developing machines with human-like intelligence

ORIGINAL: The Guardian
Algorithms developed by Google to encode thoughts could lead to computers with ‘common sense’ within a decade, says leading AI scientist
Joaquin Phoenix and his virtual girlfriend in the film Her. Professor Hinton thinks that there’s no reason why computers couldn’t become our friends, or even flirt with us. Photograph: Allstar/Warner Bros/Sportsphoto Ltd.
Computers will have developed “common sense” within a decade and we could be counting them among our friends not long afterwards, one of the world’s leading AI scientists has predicted.
Professor Geoff Hinton, who was hired by Google two years ago to help develop intelligent operating systems, said that the company is on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.
The researcher told the Guardian that Google is working on a new type of algorithm designed to encode thoughts as sequences of numbers – something he described as “thought vectors.”
Although the work is at an early stage, he said there is a plausible path from the current software to a more sophisticated version that would have something approaching human-like capacity for reasoning and logic. “Basically, they’ll have common sense.”
The idea that thoughts can be captured and distilled down to cold sequences of digits is controversial, Hinton said. “There’ll be a lot of people who argue against it, who say you can’t capture a thought like that,” he added. “But there’s no reason why not. I think you can capture a thought by a vector.”
Hinton, who is due to give a talk at the Royal Society in London on Friday, believes that the “thought vector” approach will help crack two of the central challenges in artificial intelligence:

  • mastering natural, conversational language, and
  • the ability to make leaps of logic.
He painted a picture of the near-future in which people will chat with their computers, not only to extract information, but for fun – reminiscent of the film Her, in which Joaquin Phoenix falls in love with his intelligent operating system.
“It’s not that far-fetched,” Hinton said. “I don’t see why it shouldn’t be like a friend. I don’t see why you shouldn’t grow quite attached to them.”
In the past two years, scientists have already made significant progress in overcoming this challenge.
Richard Socher, an artificial intelligence scientist at Stanford University, recently developed a program called NaSent that he taught to recognise human sentiment by training it on 12,000 sentences taken from the film review website Rotten Tomatoes.
Part of the initial motivation for developing “thought vectors” was to improve translation software, such as Google Translate, which currently uses dictionaries to translate individual words and searches through previously translated documents to find typical translations for phrases. Although these methods often provide the rough meaning, they are also prone to delivering nonsense and dubious grammar.
Thought vectors, Hinton explained, work at a higher level by extracting something closer to actual meaning.
The technique works by ascribing each word a set of numbers (or vector) that define its position in a theoretical “meaning space” or cloud. A sentence can be looked at as a path between these words, which can in turn be distilled down to its own set of numbers, or thought vector.
The “thought” serves as the bridge between the two languages because it can be transferred into the French version of the meaning space and decoded back into a new path between words.
The key is working out which numbers to assign each word in a language – this is where deep learning comes in. Initially the positions of words within each cloud are ordered at random and the translation algorithm begins training on a dataset of translated sentences.
At first the translations it produces are nonsense, but a feedback loop provides an error signal that allows the position of each word to be refined until eventually the positions of words in the cloud capture the way humans use them – effectively a map of their meanings.
Hinton said that the idea that language can be deconstructed with almost mathematical precision is surprising, but true. “If you take the vector for Paris and subtract the vector for France and add Italy, you get Rome,” he said. “It’s quite remarkable.”
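The analogy Hinton cites can be illustrated with a toy embedding. The 2-D vectors below are hand-built so the arithmetic works exactly; real word vectors are learned from data, and the Paris − France + Italy ≈ Rome relationship holds only approximately in a high-dimensional space.

```python
# Toy illustration of vector arithmetic on word embeddings.
# The vectors are contrived so "capital-of" is a fixed offset.
import math

EMB = {
    "France": [1.0, 0.0],
    "Italy":  [0.0, 1.0],
    "Paris":  [1.5, 0.5],   # France + a "capital-of" offset [0.5, 0.5]
    "Rome":   [0.5, 1.5],   # Italy  + the same offset
}

def nearest(vec, exclude):
    """Word whose embedding is closest (Euclidean) to `vec`."""
    return min((w for w in EMB if w not in exclude),
               key=lambda w: math.dist(EMB[w], vec))

# Paris - France + Italy, computed component-wise
query = [p - f + i for p, f, i in zip(EMB["Paris"], EMB["France"], EMB["Italy"])]
print(nearest(query, exclude={"Paris", "France", "Italy"}))  # Rome
```

In a learned embedding the query vector would not land exactly on “Rome,” so the nearest-neighbour lookup (with the query words excluded) is what makes the analogy come out.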
Dr Hermann Hauser, a Cambridge computer scientist and entrepreneur, said that Hinton and others could be on the way to solving what programmers call the “genie problem”.
“With machines at the moment, you get exactly what you wished for,” Hauser said. “The problem is we’re not very good at wishing for the right thing. When you look at humans, the recognition of individual words isn’t particularly impressive, the important bit is figuring out what the guy wants.”
“Hinton is our number one guru in the world on this at the moment,” he added.
Some aspects of communication are likely to prove more challenging, Hinton predicted. “Irony is going to be hard to get,” he said. “You have to be master of the literal first. But then, Americans don’t get irony either. Computers are going to reach the level of Americans before Brits.”
A flirtatious program would “probably be quite simple” to create, however. “It probably wouldn’t be subtly flirtatious to begin with, but it would be capable of saying borderline politically incorrect phrases,” he said.
Many of the recent advances in AI have sprung from the field of deep learning, which Hinton has been working on since the 1980s. At its core is the idea that computer programs learn how to carry out tasks by training on huge datasets, rather than being taught a set of inflexible rules.
With the advent of huge datasets and powerful processors, the approach pioneered by Hinton decades ago has come into the ascendency and underpins the work of Google’s artificial intelligence arm, DeepMind, and similar programs of research at Facebook and Microsoft.
Hinton played down concerns about the dangers of AI raised by those such as the American entrepreneur Elon Musk, who has described the technologies under development as humanity’s greatest existential threat. “The risk of something seriously dangerous happening is in the five year timeframe. Ten years at most,” Musk warned last year.
“I’m more scared about the things that have already happened,” said Hinton in response. “The NSA is already bugging everything that everybody does. Each time there’s a new revelation from Snowden, you realise the extent of it.”
“I am scared that if you make the technology work better, you help the NSA misuse it more,” he added. “I’d be more worried about that than about autonomous killer robots.”

Machine-Learning Algorithm Mines Rap Lyrics, Then Writes Its Own

ORIGINAL: Tech Review

The ancient skill of creating and performing spoken rhyme is thriving today because of the inexorable rise in the popularity of rapping. This art form is distinct from ordinary spoken poetry because it is performed to a beat, often with background music.

And the performers have excelled. Adam Bradley, a professor of English at the University of Colorado, has described it in glowing terms. Rapping, he says, crafts “intricate structures of sound and rhyme, creating some of the most scrupulously formal poetry composed today.”

The highly structured nature of rap makes it particularly amenable to computer analysis. And that raises an interesting question: if computers can analyze rap lyrics, can they also generate them?

Today, we get an affirmative answer thanks to the work of Eric Malmi at the University of Aalto in Finland and a few pals. These guys have trained a machine-learning algorithm to recognize the salient features of a few lines of rap and then choose another line that rhymes in the same way on the same topic. The result is an algorithm that produces rap lyrics that rival human-generated ones for their complexity of rhyme.

Various forms of rhyme crop up in rap, but the most common, and the one that helps distinguish it from other forms of poetry, is called assonance rhyme. This is the repetition of similar vowel sounds, such as in the words “crazy” and “baby,” which share two similar vowel sounds. (That’s different from consonance, which uses similar consonant sounds, such as in “pitter patter,” and different from perfect rhyme, where words share the same ending sound, such as “slang” and “gang.”)
Because of its prevalence in rap, Malmi and co focus exclusively on the way assonance appears in rap lyrics. But they also assume a highly structured form of verse consisting of 16 lines, each of which equals one musical bar and so must be made up of four beats. The lines typically, but not necessarily, rhyme at the end.

To train their machine learning algorithm, they begin with a database of over 10,000 songs from more than 100 rap artists.

Spotting assonant rhymes is not hard. The words must first be converted into phonemes (assuming a typical American-English pronunciation). Finding rhymes is then simply a question of scanning the phonemes looking for similar vowel sounds while ignoring consonant sounds and spaces.

That immediately suggests a way of ranking the complexity of lyrics. Malmi and co look for sequences of matching vowel sounds in the previous two lines or so. They then define “rhyming density” as the average of all the longest sequences in the lyrics.
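The metric can be sketched in a few lines. As a loud simplification, the sketch below uses vowel letters in place of the vowel phonemes Malmi and co actually extract (real systems use a pronunciation dictionary such as CMUdict), and it compares each line only with its immediate predecessor rather than the previous two lines.

```python
# Crude rhyming-density sketch: vowel letters stand in for vowel phonemes.

def vowels(line):
    """Sequence of vowel letters in a line, ignoring consonants and spaces."""
    return [c for c in line.lower() if c in "aeiou"]

def longest_match(a, b):
    """Length of the longest common contiguous subsequence of two vowel lists."""
    best = 0
    for i in range(len(a)):
        for j in range(len(b)):
            k = 0
            while i + k < len(a) and j + k < len(b) and a[i + k] == b[j + k]:
                k += 1
            best = max(best, k)
    return best

def rhyming_density(lines):
    """Average longest vowel match between each line and its predecessor."""
    matches = [longest_match(vowels(p), vowels(q))
               for p, q in zip(lines, lines[1:])]
    return sum(matches) / len(matches)

lyric = ["for a chance at romance I would love to enhance",
         "but everything I love has turned to a tedious task"]
print(rhyming_density(lyric))
```

A longer shared vowel run between neighbouring lines means a denser multisyllabic rhyme, which is why this average can rank rappers by rhyming skill.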

This measure has allowed them to rank all the rap artists in their database according to their rhyming density. The three rappers who head the list are Inspectah Deck, Rakim, and Redrama. Rakim, in particular, is known for his multisyllabic rhymes.

Curiously, the rapper Eminem, who is also famous for his multisyllabic rhymes, comes surprisingly low on the list. That’s probably because Eminem often achieves his rhymes by “bending” words, a trick that this technique does not allow for.

Nevertheless, this metric is an interesting measure of a rapper’s rhyming skill and one that the team can use to compare their automated raps with human generated ones.

They next set their machine learning algorithm, called DeepBeat, a task. Having mined the database, its goal is to analyze a sequence of lines from a rap lyric and then choose the next line from a list that contains randomly chosen lines from other songs as well as the actual line.

This it can do surprisingly well. “An 82% accuracy was achieved for separating the true next line from a randomly chosen line,” say Malmi and co.

That’s not bad and immediately suggests a way to generate lyrics automatically. Malmi and co start with a line from one rap lyric and ask the computer to search through the database for another line on the same topic that best rhymes. It then repeats this process for the next line and so on.
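Under similarly loud simplifications (vowel letters standing in for phonemes, and a greedy ending-rhyme score standing in for DeepBeat’s learned next-line ranking), that generation loop might look like the sketch below; the pool lines and function names are invented for illustration.

```python
# Toy stand-in for DeepBeat's generation loop: greedily pick, from a
# candidate pool, the line whose ending vowels best match the last line.

def ending_vowels(line, n=4):
    """Last n vowel letters of a line (crude proxy for vowel phonemes)."""
    vs = [c for c in line.lower() if c in "aeiou"]
    return vs[-n:]

def rhyme_score(a, b):
    """Count of matching trailing vowels between two lines."""
    score = 0
    for x, y in zip(reversed(ending_vowels(a)), reversed(ending_vowels(b))):
        if x != y:
            break
        score += 1
    return score

def generate(seed, pool, length=3):
    """Build a lyric by repeatedly appending the best-rhyming pool line."""
    lyric, candidates = [seed], list(pool)
    for _ in range(length - 1):
        best = max(candidates, key=lambda c: rhyme_score(lyric[-1], c))
        lyric.append(best)
        candidates.remove(best)   # use each database line at most once
    return lyric

pool = ["I walk the streets at night alone",
        "my love is true and carved in stone",
        "the beat goes on until the dawn"]
song = generate("pick up the phone", pool, length=3)
```

The real system ranks candidates with a learned model rather than a hand-written score, which is how it also stays roughly on topic rather than just rhyming.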

The results are something of an eye opener. Here is one DeepBeat generated on the topic of love:

For a chance at romance I would love to enhance
But everything I love has turned to a tedious task
One day we gonna have to leave our love in the past
I love my fans but no one ever puts a grasp
I love you momma I love my momma – I love you momma
And I would love to have a thing like you on my team you take care
I love it when it’s sunny Sonny girl you could be my Cher
I’m in a love affair I can’t share it ain’t fair
Haha I’m just playin’ ladies you know I love you.
I know my love is true and I know you love me too
Girl I’m down for whatever cause my love is true
This one goes to my man old dirty one love we be swigging brew
My brother I love you Be encouraged man And just know
When you done let me know cause my love make you be like WHOA
If I can’t do it for the love then do it I won’t
All I know is I love you too much to walk away though

That’s impressive. Each of these lines is taken from another rap song—for example the final line is from Eminem’s

What’s more, this and other raps generated by DeepBeat have a rhyming density significantly higher than any human rapper. “DeepBeat outperforms the top human rappers by 21% in terms of length and frequency of the rhymes in the produced lyrics,” they point out.

Where DeepBeat falls down is in the coherence of its storytelling, which is unsurprising given that its focus is largely on rhyme. That’s clearly work for the future.

Ref: DopeLearning: A Computational Approach to Rap Lyrics Generation


Silicon Chips That See Are Going to Make Your Smartphone Brilliant

Many gadgets will be able to understand images and video thanks to chips designed to run powerful artificial-intelligence algorithms.
Many applications for mobile computers could be more powerful with advanced image recognition.
Many of the devices around us may soon acquire powerful new abilities to understand images and video, thanks to hardware designed for the machine-learning technique called deep learning.
Companies like Google have made breakthroughs in image and face recognition through deep learning, using giant data sets and powerful computers (see “10 Breakthrough Technologies 2013: Deep Learning”). Now two leading chip companies and the Chinese search giant Baidu say hardware is coming that will bring the technique to phones, cars, and more.
Chip manufacturers don’t typically disclose their new features in advance. But at a conference on computer vision Tuesday, Synopsys, a company that licenses software and intellectual property to the biggest names in chip making, showed off a new image-processor core tailored for deep learning. It is expected to be added to chips that power smartphones, cameras, and cars. The core would occupy about one square millimeter of space on a chip made with one of the most commonly used manufacturing technologies.
Pierre Paulin, a director of R&D at Synopsys, told MIT Technology Review that the new processor design will be made available to his company’s customers this summer. Many have expressed strong interest in getting hold of hardware to help deploy deep learning, he said.
Synopsys showed a demo in which the new design recognized speed-limit signs in footage from a car. Paulin also presented results from using the chip to run a deep-learning network trained to recognize faces. It didn’t hit the accuracy levels of the best research results, which have been achieved on powerful computers, but it came pretty close, he said. “For applications like video surveillance it performs very well,” he said. The specialized core uses significantly less power than a conventional chip would need to do the same task.
The new core could add a degree of visual intelligence to many kinds of devices, from phones to cheap security cameras. It wouldn’t allow devices to recognize tens of thousands of objects on their own, but Paulin said they might be able to recognize dozens.
That might lead to novel kinds of camera or photo apps. Paulin said the technology could also enhance car, traffic, and surveillance cameras. For example, a home security camera could start sending data over the Internet only when a human entered the frame. “You can do fancier things like detecting if someone has fallen on the subway,” he said.

Jeff Gehlhaar, vice president of technology at Qualcomm Research, spoke at the event about his company’s work on getting deep learning running on apps for existing phone hardware. He declined to discuss whether the company is planning to build support for deep learning into its chips. But speaking about the industry in general, he said that such chips are surely coming. Being able to use deep learning on mobile chips will be vital to helping robots navigate and interact with the world, he said, and to efforts to develop autonomous cars.
“I think you will see custom hardware emerge to solve these problems,” he said. “Our traditional approaches to silicon are going to run out of gas, and we’ll have to roll up our sleeves and do things differently.” Gehlhaar didn’t indicate how soon that might be. Qualcomm has said that its coming generation of mobile chips will include software designed to bring deep learning to camera and other apps (see “Smartphones Will Soon Learn to Recognize Faces and More”).
Ren Wu, a researcher at Chinese search company Baidu, also said chips that support deep learning are needed for powerful research computers in daily use. “You need to deploy that intelligence everywhere, at any place or any time,” he said.
Being able to do things like analyze images on a device without connecting to the Internet can make apps faster and more energy-efficient because it isn’t necessary to send data to and fro, said Wu. He and Qualcomm’s Gehlhaar both said that making mobile devices more intelligent could temper the privacy implications of some apps by reducing the volume of personal data such as photos transmitted off a device.
“You want the intelligence to filter out the raw data and only send the important information, the metadata, to the cloud,” said Wu.
ORIGINAL: Tech Review
May 14, 2015

Silicon Valley Then and Now: To Invent the Future, You Must Understand the Past

William Shockley’s employees toast him for his Nobel Prize, 1956. Photo courtesy Computer History Museum.
You can’t really understand what is going on now without understanding what came before.
Steve Jobs is explaining why, as a young man, he spent so much time with the Silicon Valley entrepreneurs a generation older, men like Robert Noyce, Andy Grove, and Regis McKenna.
It’s a beautiful Saturday morning in May, 2003, and I’m sitting next to Jobs on his living room sofa, interviewing him for a book I’m writing. I ask him to tell me more about why he wanted, as he put it, “to smell that second wonderful era of the valley, the semiconductor companies leading into the computer.” Why, I want to know, is it not enough to stand on the shoulders of giants? Why does he want to pick their brains?
“It’s like that Schopenhauer quote about the conjurer,” he says. When I look blank, he tells me to wait and then dashes upstairs. He comes down a minute later holding a book and reading aloud:
Steve Jobs and Robert Noyce.
Courtesy Leslie Berlin.
He who lives to see two or three generations is like a man who sits some time in the conjurer’s booth at a fair, and witnesses the performance twice or thrice in succession. The tricks were meant to be seen only once, and when they are no longer a novelty and cease to deceive, their effect is gone.
History, Jobs understood, gave him a chance to see — and see through — the conjurer’s tricks before they happened to him, so he would know how to handle them.
Flash forward eleven years. It’s 2014, and I am going to see Robert W. Taylor. In 1966, Taylor convinced the Department of Defense to build the ARPANET that eventually formed the core of the Internet. He went on to run the famous Xerox PARC Computer Science Lab that developed the first modern personal computer. For a finishing touch, he led one of the teams at DEC behind the world’s first blazingly fast search engine — three years before Google was founded.
Visiting Taylor is like driving into a Silicon Valley time machine. You zip past the venture capital firms on Sand Hill Road, over the 280 freeway, and down a twisty two-lane street that is nearly impassable on weekends, thanks to the packs of lycra-clad cyclists on multi-thousand-dollar bikes raising their cardio thresholds along the steep climbs. A sharp turn and you enter what seems to be another world, wooded and cool, the coastal redwoods dense along the hills. Cell phone signals fade in and out in this part of Woodside, far above Buck’s Restaurant where power deals are negotiated over early-morning cups of coffee. GPS tries valiantly to ascertain a location — and then gives up.
When I get to Taylor’s home on a hill overlooking the Valley, he tells me about another visitor who recently took that drive, apparently driven by the same curiosity that Steve Jobs had: Mark Zuckerberg, along with some colleagues at the company he founded, Facebook.
“Zuckerberg must have heard about me in some historical sense,” Taylor recalls in his Texas drawl. “He wanted to see what I was all about, I guess.”
To invent the future, you must understand the past.

I am a historian, and my subject matter is Silicon Valley. So I’m not surprised that Jobs and Zuckerberg both understood that the Valley’s past matters today and that the lessons of history can take innovation further. When I talk to other founders and participants in the area, they also want to hear what happened before. Their questions usually boil down to two:

  1. Why did Silicon Valley happen in the first place, and 
  2. why has it remained at the epicenter of the global tech economy for so long?
I think I can answer those questions.

First, a definition of terms. When I use the term “Silicon Valley,” I am referring quite specifically to the narrow stretch of the San Francisco Peninsula that is sandwiched between the bay to the east and the Coastal Range to the west. (Yes, Silicon Valley is a physical valley — there are hills on the far side of the bay.) Silicon Valley has traditionally comprised Santa Clara County and the southern tip of San Mateo County. In the past few years, parts of Alameda County and the city of San Francisco can also legitimately be considered satellites of Silicon Valley, or perhaps part of “Greater Silicon Valley.”

The name “Silicon Valley,” incidentally, was popularized in 1971 by a hard-drinking, story-chasing, gossip-mongering journalist named Don Hoefler, who wrote for a trade rag called Electronic News. Before, the region was called the “Valley of the Heart’s Delight,” renowned for its apricot, plum, cherry and almond orchards.
“This was down-home farming, three generations of tranquility, beauty, health, and productivity based on family farms of small acreage but bountiful production,” reminisced Wallace Stegner, the famed Western writer. To see what the Valley looked like then, watch the first few minutes of this wonderful 1948 promotional video for the “Valley of the Heart’s Delight.”
Three historical forces — technical, cultural, and financial — created Silicon Valley.
On the technical side, in some sense the Valley got lucky. In 1955, one of the inventors of the transistor, William Shockley, moved back to Palo Alto, where he had spent some of his childhood. Shockley was also a brilliant physicist — he would share the Nobel Prize in 1956 — an outstanding teacher, and a terrible entrepreneur and boss. Because he was a brilliant scientist and inventor, Shockley was able to recruit some of the brightest young researchers in the country — Shockley called them “hot minds” — to come work for him 3,000 miles from the research-intensive businesses and laboratories that lined the Eastern Seaboard from Boston to Bell Labs in New Jersey. Because Shockley was an outstanding teacher, he got these young scientists, all but one of whom had never built transistors, to the point that they not only understood the tiny devices but began innovating in the field of semiconductor electronics on their own.
And because Shockley was a terrible boss — the sort of boss who posted salaries and subjected his employees to lie-detector tests — many who came to work for him could not wait to get away and work for someone else. That someone else, it turned out, would be themselves. The move by eight of Shockley’s employees to launch their own semiconductor operation called Fairchild Semiconductor in 1957 marked the first significant modern startup company in Silicon Valley. After Fairchild Semiconductor blew apart in the late-1960s, employees launched dozens of new companies (including Intel, National and AMD) that are collectively called the Fairchildren.
The Fairchild 8: Gordon Moore, Sheldon Roberts, Eugene Kleiner, Robert Noyce, Victor Grinich, Julius Blank, Jean Hoerni, and Jay Last. Photo courtesy Wayne Miller/Magnum Photos.
Equally important for the Valley’s future was the technology that Shockley taught his employees to build: the transistor. Nearly everything that we associate with the modern technology revolution and Silicon Valley can be traced back to the tiny, tiny transistor.
Think of the transistor as the grain of sand at the core of the Silicon Valley pearl. The next layer of the pearl appeared when people strung together transistors, along with other discrete electronic components like resistors and capacitors, to make an entire electronic circuit on a single slice of silicon. This new device was called a microchip. Then someone came up with a specialized microchip that could be programmed: the microprocessor. The first pocket calculators were built around these microprocessors. Then someone figured out that it was possible to combine a microprocessor with other components and a screen — that was a computer. People wrote code for those computers to serve as operating systems and software on top of those systems. At some point people began connecting these computers to each other: networking. Then people realized it should be possible to “virtualize” these computers and store their contents off-site in a “cloud,” and it was also possible to search across the information stored in multiple computers. Then the networked computer was shrunk — keeping the key components of screen, keyboard, and pointing device (today a finger) — to build tablets and palm-sized machines called smart phones. Then people began writing apps for those mobile devices … .
You get the picture. These changes all kept pace to the metronomic tick-tock of Moore’s Law.
The skills learned through building and commercializing one layer of the pearl underpinned and supported the development of the next layer or developments in related industries. Apple, for instance, is a company that people often speak of as sui generis, but Apple Computer’s early key employees had worked at Intel, Atari, or Hewlett-Packard. Apple’s venture capital backers had either backed Fairchild or Intel or worked there. The famous Macintosh, with its user-friendly aspect, graphical-user interface, overlapping windows, and mouse was inspired by a 1979 visit Steve Jobs and a group of engineers paid to XEROX PARC, located in the Stanford Research Park. In other words, Apple was the product of its Silicon Valley environment and technological roots.
This brings us to the second force behind the birth of Silicon Valley: culture. When Shockley, his transistor and his recruits arrived in 1955, the valley was still largely agricultural, and the small local industry had a distinctly high-tech (or as they would have said then, “space age”) focus. The largest employer was defense contractor Lockheed. IBM was about to open a small research facility. Hewlett-Packard, one of the few homegrown tech companies in Silicon Valley before the 1950s, was more than a decade old.
Stanford, meanwhile, was actively trying to build up its physics and engineering departments. Professor (and Provost from 1955 to 1965) Frederick Terman worried about a “brain drain” of Stanford graduates to the East Coast, where jobs were plentiful. So he worked with President J.E. Wallace Sterling to create what Terman called “a community of technical scholars” in which the links between industry and academia were fluid. This meant that as the new transistor-cum-microchip companies began to grow, technically knowledgeable engineers were already there.
Woz and Jobs.
Photo courtesy Computer History Museum.
These trends only accelerated as the population exploded. Between 1950 and 1970, the population of Santa Clara County tripled, from roughly 300,000 residents to more than 1 million. It was as if a new person moved into Santa Clara County every 15 minutes for 20 years. The newcomers were, overall, younger and better educated than the people already in the area. The Valley changed from a community of aging farmers with high school diplomas to one filled with 20-something PhDs.
All these new people pouring into what had been an agricultural region meant that it was possible to create a business environment around the needs of new companies coming up, rather than adapting an existing business culture to accommodate the new industries. In what would become a self-perpetuating cycle, everything from specialized law firms, recruiting operations, and prototyping facilities to liberal stock-option plans, zoning laws, and community-college course offerings developed to support a tech-based business infrastructure.
Historian Richard White says that the modern American West was “born modern” because the population followed, rather than preceded, connections to national and international markets. Silicon Valley was born post-modern, with those connections not only in place but so taken for granted that people were comfortable experimenting with new types of business structures and approaches strikingly different from the traditional East Coast business practices with roots nearly two centuries old.
From the beginning, Silicon Valley entrepreneurs saw themselves in direct opposition to their East Coast counterparts. The westerners saw themselves as cowboys and pioneers, working on a “new frontier” where people dared greatly and failure was not shameful but just the quickest way to learn a hard lesson. In the 1970s, with the influence of the counterculture’s epicenter at the corner of Haight and Ashbury, only an easy drive up the freeway, Silicon Valley companies also became famous for their laid-back, dressed-down culture, and for their products, such as video games and personal computers, that brought advanced technology to “the rest of us.”

The third key component driving the birth of Silicon Valley, along with the right technology seed falling into a particularly rich and receptive cultural soil, was money. Again, timing was crucial. Silicon Valley was kick-started by federal dollars. Whether it was

  • the Department of Defense buying 100% of the earliest microchips, 
  • Hewlett-Packard and Lockheed selling products to military customers, or 
  • federal research money pouring into Stanford, 

Silicon Valley was the beneficiary of Cold War fears that translated to the Department of Defense being willing to spend almost anything on advanced electronics and electronic systems. The government, in effect, served as the Valley’s first venture capitalist.

The first significant wave of venture capital firms hit Silicon Valley in the 1970s. Both Sequoia Capital and Kleiner Perkins Caufield and Byers were founded by Fairchild alumni in 1972. Between them, these venture firms would go on to fund Amazon, Apple, Cisco, Dropbox, Electronic Arts, Facebook, Genentech, Google, Instagram, Intuit, and LinkedIn — and that is just the first half of the alphabet.
This model of one generation succeeding and then turning around to offer the next generation of entrepreneurs financial support and managerial expertise is one of the most important and under-recognized secrets to Silicon Valley’s ongoing success. Robert Noyce called it “re-stocking the stream I fished from.” Steve Jobs, in his remarkable 2005 commencement address at Stanford, used the analogy of a baton being passed from one runner to another in an ongoing relay across time.
So that’s how Silicon Valley emerged. Why has it endured?

After all, if modern Silicon Valley was born in the 1950s, the region is now in its seventh decade. For roughly two-thirds of that time, Valley watchers have predicted its imminent demise, usually with an allusion to Detroit.

  • the oil shocks and energy crises of the 1970s, which were supposed to shut down the fabs (the specialized factories that build microchips); 
  • Japanese competition in the 1980s; 
  • the bursting of the dot-com bubble; 
  • the rise of formidable tech regions in other parts of the world; and 
  • the Internet and mobile technologies that make it possible to work from anywhere. 

All have been sounded as Silicon Valley’s death knell.

The Valley of Heart’s Delight, pre-technology. OSU Special Collections.
The Valley economy is notoriously cyclical, but it has endured. Here we are in 2015, a year in which more patents, more IPOs, and a larger share of venture capital and angel investments have come from the Valley than ever before. As a recent report from Joint Venture Silicon Valley put it, “We’ve extended a four-year streak of job growth, we are among the highest income regions in the country, and we have the biggest share of the nation’s high-growth, high-wage sectors.” Would-be entrepreneurs continue to move to the Valley from all over the world. Even companies that are not started in Silicon Valley move there (witness Facebook).
Why? What is behind Silicon Valley’s staying power? The answer is that many of the factors that launched Silicon Valley in the 1950s continue to underpin its strength today even as the Valley economy has proven quite adaptable.
The Valley still glides in the long wake of the transistor, both in terms of technology and in terms of the infrastructure to support companies that rely on semiconductor technology. Remember the pearl. At the same time, when new industries not related directly to semiconductors have sprung up in the Valley — industries like biotechnology — they have taken advantage of the infrastructure and support structure already in place.
Venture capital has remained the dominant source of funding for young companies in Silicon Valley. In 2014, some $14.5 billion in venture capital was invested in the Valley, accounting for 43 percent of all venture capital investments in the country. More than half of Silicon Valley venture capital went to software investments, and the rise of software, too, helps to explain the recent migration of many tech companies to San Francisco. (San Francisco, it should be noted, accounted for nearly half of the $14.5 billion figure.) Building microchips or computers or specialized production equipment — things that used to happen in Silicon Valley — requires many people, huge fabrication operations and access to specialized chemicals and treatment facilities, often on large swaths of land. Building software requires none of these things; in fact, software engineers need little more than a computer and some server space in the cloud to do their jobs. It is thus easy for software companies to locate in cities like San Francisco, where many young techies want to live.
The Valley continues to be a magnet for young, educated people. The flood of internal migrants to Silicon Valley from other parts of the country in the second half of the twentieth century has become, in the twenty-first century, a flood of international immigrants from all over the world. It is impossible to overstate the importance of immigrants to the region and to the modern tech industry. Nearly 37 percent of the people in Silicon Valley today were born outside of the United States — of these, more than 60 percent were born in Asia and 20 percent in Mexico. Half of Silicon Valley households speak a language other than English in the home. Sixty-five percent of the people with bachelor’s degrees working in science and engineering in the Valley were born in another country. Let me say that again: nearly two-thirds of the college-educated people working in sci-tech Valley industries are foreign born. (Nearly half the college graduates working in all industries in the Valley are foreign-born.)
Here’s another way to look at it: From 1995 to 2005, more than half of all Silicon Valley startups had at least one founder who was born outside the United States.[13] Their businesses — companies like Google and eBay — have created American jobs and billions of dollars in American market capitalization.
Silicon Valley, now, as in the past, is built and sustained by immigrants.
Gordon Moore and Robert Noyce at Intel in 1970. Photo courtesy Intel.
Stanford also remains at the center of the action. By one estimate, from 2012, companies formed by Stanford entrepreneurs generate world revenues of $2.7 trillion annually and have created 5.4 million jobs since the 1930s. This figure includes companies whose primary business is not tech: companies like Nike, Gap, and Trader Joe’s. But even if you just look at Silicon Valley companies that came out of Stanford, the list is impressive, including Cisco, Google, HP, IDEO, Instagram, MIPS, Netscape, NVIDIA, Silicon Graphics, Snapchat, Sun, Varian, VMware, and Yahoo. Indeed, some critics have complained that Stanford has become overly focused on student entrepreneurship in recent years — an allegation that I disagree with but is neatly encapsulated in a 2012 New Yorker article that called the university “Get Rich U.”
The above represent important continuities, but change has also been vital to the region’s longevity. Silicon Valley has been re-inventing itself for decades, a trend that is evident with a quick look at the emerging or leading technologies in the area:
• 1940s: instrumentation
• 1950s/60s: microchips
• 1970s: biotech, consumer electronics using chips (PC, video game, etc)
• 1980s: software, networking
• 1990s: web, search
• 2000s: cloud, mobile, social networking
The overriding sense of what it means to be in Silicon Valley — the lionization of risk-taking, the David-versus-Goliath stories, the persistent belief that failure teaches important business lessons even when the data show otherwise — has not changed, but over the past few years, a new trope has appeared alongside the Western metaphors of Gold Rushes and Wild Wests: Disruption.
“Disruption” is the notion, roughly based on ideas first proposed by Joseph Schumpeter in 1942, that a little company can come in and — usually with technology — completely remake an industry that seemed established and largely impervious to change. So: Uber is disrupting the taxi industry. Airbnb is disrupting the hotel industry. The disruption story is, in its essentials, the same as the Western tale: a new approach comes out of nowhere to change the establishment world for the better. You can hear the same themes of adventure, anti-establishment thinking, opportunity and risk-taking. It’s the same song, with different lyrics.
The shift to the new language may reflect the key role that immigrants play in today’s Silicon Valley. Many educated, working adults in the region arrived with no cultural background that promoted cowboys or pioneers. These immigrants did not even travel west to get to Silicon Valley. They came east, or north. It will be interesting to see how long the Western metaphor survives this cultural shift. I’m betting that it’s on its way out.
Something else new has been happening in Silicon Valley culture in the past decade. The anti-establishment little guys have become the establishment big guys. Apple settled an antitrust case. Silicon Valley companies like Facebook and Google collect massive amounts of data on American citizens, some of which has ended up in the hands of the NSA. What happens when Silicon Valley companies start looking like Big Brother from the famous 1984 Apple Macintosh commercial?
A Brief Feint at the Future
I opened these musings by defining Silicon Valley as a physical location. I’m often asked how or whether place will continue to matter in the age of mobile technologies, the Internet and connections that will only get faster. In other words, is region an outdated concept?
I believe that physical location will continue to be relevant when it comes to technological innovation. Proximity matters. Creativity cannot be scheduled for the particular half-hour block of time that everyone has free to teleconference. Important work can be done remotely, but the kinds of conversations that lead to real breakthroughs often happen serendipitously. People run into each other down the hall, or in a coffee shop, or at a religious service, or at the gym, or on the sidelines of a kid’s soccer game.
It is precisely because place will continue to matter that the biggest threats to Silicon Valley’s future have local and national parameters. Silicon Valley’s innovation economy depends on its being able to attract the brightest minds in the world; they act as a constant innovation “refresh” button. If Silicon Valley loses its allure for those people —

  • if the quality of public schools declines so that their children cannot receive good educations, 
  • if housing prices remain so astronomical that fewer than half of first-time buyers can afford the median-priced home, or 
  • if immigration policy makes it difficult for high-skilled immigrants who want to stay here to do so — 

the Valley’s status, and that of the United States economy, will be threatened. Also worrisome: ever-expanding gaps between the highest and lowest earners in Silicon Valley; stagnant wages for low- and middle-skilled workers; and the persistent reality that as a group, men in Silicon Valley earn more than women at the same level of educational attainment. Moreover, today in Silicon Valley, the lowest-earning racial/ethnic group earns 70 percent less than the highest earning group, according to the Joint Venture report. The stark reality, with apologies to George Orwell, is that even in the Valley’s vaunted egalitarian culture, some people are more equal than others.

Another threat is the continuing decline in federal support for basic research. Venture capital is important for developing products into companies, but the federal government still funds the great majority of basic research in this country. Silicon Valley is highly dependent on that basic research — “No Basic Research, No iPhone” is my favorite title from a recently released report on research and development in the United States. Today, the US occupies tenth place among OECD nations in overall R&D investment as a percentage of GDP — somewhere between 2.5 and 3 percent. This represents a 13 percent drop from where we were ten years ago (again as a percentage of GDP). China is projected to outspend the United States in R&D within the next ten years, both in absolute terms and as a share of its economy.
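A quick check makes those percentages concrete. Note that the 13 percent is a relative decline in the GDP share, not 13 percentage points; the 2.7 percent figure below is an assumed midpoint of the "between 2.5 and 3 percent" range, used only for illustration.

```python
# Illustrative arithmetic only; 2.7% is an assumed midpoint of the
# "between 2.5 and 3 percent of GDP" range quoted above.
current_share = 2.7    # R&D as a percent of GDP today (assumption)
relative_drop = 0.13   # 13 percent below the level of ten years ago

# If today's share is 13% lower, the earlier share satisfies
# current = earlier * (1 - 0.13), so:
decade_ago = current_share / (1 - relative_drop)
print(f"Implied R&D share ten years ago: {decade_ago:.2f}% of GDP")
```

In other words, a 13 percent relative drop from roughly 3.1 percent of GDP lands at today's roughly 2.7 percent.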
People around the world have tried to reproduce Silicon Valley. No one has succeeded.
And no one will succeed because no place else — including Silicon Valley itself in its 2015 incarnation — could ever reproduce the unique concoction of academic research, technology, countercultural ideals and a California-specific type of Gold Rush reputation that attracts people with a high tolerance for risk and very little to lose. Partially through the passage of time, partially through deliberate effort by some entrepreneurs who tried to “give back” and others who tried to make a buck, this culture has become self-perpetuating.
The drive to build another Silicon Valley may be doomed to fail, but that is not necessarily bad news for regional planners elsewhere. The high-tech economy is not a zero-sum game. The twenty-first century global technology economy is large and complex enough for multiple regions to thrive for decades to come — including Silicon Valley, if the threats it faces are taken seriously.

Robert Reich: The Nightmarish Future for American Jobs and Incomes Is Here

Even knowledge-based jobs will disappear as wealth gets more concentrated at the top in the next 10 years.
Photo Credit: via YouTube
What will happen to American jobs, incomes, and wealth a decade from now?
Predictions are hazardous but survivable. In 1991, in my book The Work of Nations, I separated almost all work into three categories, and then predicted what would happen to each of them.
The first category I called “routine production services,” which entailed the kind of repetitive tasks performed by the old foot soldiers of American capitalism through most of the twentieth century — done over and over, on an assembly line or in an office.
I estimated that such work then constituted about one-quarter of all jobs in the United States, but would decline steadily as such jobs were replaced by
  • new labor-saving technologies and
  • workers in developing nations eager to do them for far lower wages.

I also assumed the pay of remaining routine production workers in America would drop, for similar reasons.

I was not far wrong.
The second category I called “in-person services.” This work had to be provided personally because the “human touch” was essential to it. It included retail sales workers, hotel and restaurant workers, nursing-home aides, realtors, childcare workers, home health-care aides, flight attendants, physical therapists, and security guards, among many others.
In 1990, by my estimate, such workers accounted for about 30 percent of all jobs in America, and I predicted their numbers would grow because — given that their services were delivered in person — neither advancing technologies nor foreign-based workers would be able to replace them.
I also predicted their pay would drop, because they would be competing with
  • a large number of former routine production workers, who could only find jobs in the “in-person” sector; 
  • labor-saving machinery such as automated tellers, computerized cashiers, automatic car washes, robotized vending machines, and self-service gas pumps; and 
  • “personal computers linked to television screens” through which “tomorrow’s consumers will be able to buy furniture, appliances, and all sorts of electronic toys from their living rooms — examining the merchandise from all angles, selecting whatever color, size, special features, and price seem most appealing, and then transmitting the order instantly to warehouses from which the selections will be shipped directly to their homes. So, too, with financial transactions, airline and hotel reservations, rental car agreements, and similar contracts, which will be executed between consumers in their homes and computer banks somewhere else on the globe.”

Here again, my predictions were not far off. But I didn’t foresee how quickly advanced technologies would begin to make inroads even on in-person services. Ten years from now I expect Amazon will have wiped out many of today’s retail jobs, and Google’s self-driving car will eliminate many bus drivers, truck drivers, sanitation workers, and even Uber drivers.

The third job category I named “symbolic-analytic services.” Here I included all the problem-solving, problem-identifying, and strategic thinking that go into the manipulation of symbols—data, words, oral and visual representations.
I estimated in 1990 that symbolic analysts accounted for 20 percent of all American jobs, and expected their share to continue to grow, as would their incomes, because the demand for people to do these jobs would continue to outrun the supply of people capable of doing them. This widening disconnect between symbolic-analytic jobs and the other two major categories of work would, I predicted, be the major force driving widening inequality.
Again, I wasn’t far off. But I didn’t anticipate how quickly or how wide the divide would become, or how great a toll inequality and economic insecurity would take. I would never have expected, for example, that the life expectancy of an American white woman without a high school degree would decrease by five years between 1990 and 2008.
We are now faced not just with labor-replacing technologies but with knowledge-replacing technologies. The combination of
  • advanced sensors,
  • voice recognition,
  • artificial intelligence,
  • big data,
  • text-mining, and
  • pattern-recognition algorithms

is generating smart robots capable of quickly learning human actions, and even learning from one another. A revolution in life sciences is also underway, allowing drugs to be tailored to a patient’s particular condition and genome.

If the current trend continues, many more symbolic analysts will be replaced in coming years. The two largest professionally intensive sectors of the United States — health care and education — will be particularly affected because of increasing pressures to hold down costs and, at the same time, the increasing accessibility of expert machines.
We are on the verge, for example, of a wave of mobile health applications that measure everything from calories to blood pressure, along with software capable of performing the same functions as costly medical devices, and diagnostic software that can tell you what it all means and what to do about it.
Schools and universities will likewise be reorganized around smart machines (although faculties will scream all the way). Many teachers and university professors are already on the way to being replaced by software — so-called “MOOCs” (Massive Open Online Courses) and interactive online textbooks — along with adjuncts who guide student learning.
As a result, income and wealth will become even more concentrated than they are today. Those who create or invest in blockbuster ideas will earn unprecedented sums and returns. The corollary is they will have enormous political power. But most people will not share in the monetary gains, and their political power will disappear. The middle class’s share of the total economic pie will continue to shrink, while the share going to the very top will continue to grow.
But the current trend is not preordained to last, and only the most rigid technological determinist would assume this to be our inevitable fate. We can — indeed, I believe we must — ignite a political movement to reorganize the economy for the benefit of the many, rather than for the lavish lifestyles of a precious few and their heirs. (I have more to say on this in my upcoming book, Saving Capitalism: For the Many, Not the Few, out at the end of September.)
Robert B. Reich has served in three national administrations, most recently as secretary of labor under President Bill Clinton. He also served on President Obama’s transition advisory board. His latest book is “Aftershock: The Next Economy and America’s Future.” His homepage is
May 7, 2015
ROBERT B. REICH, Chancellor’s Professor of Public Policy at the University of California at Berkeley and Senior Fellow at the Blum Center for Developing Economies, was Secretary of Labor in the Clinton administration. Time Magazine named him one of the ten most effective cabinet secretaries of the twentieth century. He has written thirteen books, including the best sellers “Aftershock” and “The Work of Nations.” His latest, “Beyond Outrage,” is now out in paperback. He is also a founding editor of the American Prospect magazine and chairman of Common Cause. His new film, “Inequality for All,” is now available on Netflix, iTunes, DVD, and On Demand.

Quantum boost for artificial intelligence

Quantum computers able to learn could attack larger sets of data than classical computers.

Peter Arnold/Stegerphoto/Getty Images



Programs running on future quantum computers could dramatically speed up complex tasks such as face recognition.
Quantum computers of the future will have the potential to give artificial intelligence a major boost, a series of studies suggests.
These computers, which encode information in ‘fuzzy’ quantum states that can be zero and one simultaneously, have the ability to someday solve problems, such as breaking encryption keys, that are beyond the reach of ‘classical’ computers.
Algorithms developed so far for quantum computers have typically focused on problems such as breaking encryption keys or searching a list — tasks that require speed but not a lot of intelligence. But in a series of papers posted this month on the arXiv preprint server [1, 2, 3], Seth Lloyd of the Massachusetts Institute of Technology in Cambridge and his collaborators have put a quantum twist on AI.
The team developed a quantum version of ‘machine learning’, a type of AI in which programs can learn from previous experience to become progressively better at finding patterns in data. Machine learning is popular in applications ranging from e-mail spam filters to online-shopping suggestions. The team’s invention would take advantage of quantum computations to speed up machine-learning tasks exponentially.
Quantum leap
At the heart of the scheme is a simpler algorithm that Lloyd and his colleagues developed in 2009 as a way of quickly solving systems of linear equations, each of which is a mathematical statement, such as x + y = 4. Conventional computers produce a solution through tedious number crunching, which becomes prohibitively difficult as the amount of data (and thus the number of equations) grows. A quantum computer can cheat by compressing the information and performing calculations on select features extracted from the data and mapped onto quantum bits, or qubits.
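For a sense of the classical baseline, here is a minimal sketch of solving such a system by Gaussian elimination. This is purely illustrative (it is not Lloyd's quantum algorithm); the point is that the classical cost grows roughly as the cube of the number of equations, which is the scaling the quantum approach aims to beat.

```python
def solve_linear(a, b):
    """Solve a.x = b by Gaussian elimination with partial pivoting.

    Classical work grows roughly as n**3 in the number of equations n,
    which becomes prohibitive as the data set (and equation count) grows.
    """
    n = len(b)
    # Build an augmented matrix so row operations update b as well.
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        # Pivot on the largest entry in this column for stability.
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        # Eliminate this variable from all rows below.
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= factor * m[col][c]
    # Back-substitution from the last row upward.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

# x + y = 4 and x - y = 2  ->  x = 3, y = 1
print(solve_linear([[1.0, 1.0], [1.0, -1.0]], [4.0, 2.0]))
```

Two equations are trivial either way; the contrast only bites at the scale of millions of equations, where the cubic classical cost dominates.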
Quantum machine learning takes the results of algebraic manipulations and puts them to good use. Data can be split into groups — a task that is at the core of handwriting- and speech-recognition software — or can be searched for patterns. Massive amounts of information could therefore be manipulated with a relatively small number of qubits.
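"Splitting data into groups" is the clustering task at the heart of much machine learning. As a classical point of reference (again, not the quantum routine itself), a bare-bones one-dimensional k-means clusterer can be sketched like this:

```python
def kmeans_1d(points, k, iterations=20):
    """Cluster 1-D points into k groups with plain Lloyd-style k-means.

    Classical clustering like this is the task the quantum scheme would
    accelerate by encoding the data into qubit amplitudes.
    Assumes the data contain at least k distinct values.
    """
    # Seed the centroids with the k smallest distinct values.
    centroids = sorted(set(points))[:k]
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            groups[nearest].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return sorted(centroids)

# Two obvious clusters, around 1 and around 10.
print(kmeans_1d([0.9, 1.0, 1.1, 9.9, 10.0, 10.1], k=2))
```

Each iteration touches every point; it is that per-point work, repeated over massive data sets, that a small number of qubits could in principle compress away.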
“We could map the whole Universe — all of the information that has existed since the Big Bang — onto 300 qubits,” Lloyd says.
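The arithmetic behind that striking claim is simple exponential bookkeeping: n qubits describe a superposition over 2**n basis states, and 2**300 is on the order of 10**90, which is in the neighborhood of common ballpark estimates for the information content of the observable universe.

```python
import math

n_qubits = 300
# n qubits span a superposition over 2**n basis states, each carrying
# its own complex amplitude.
states = 2 ** n_qubits

# Order of magnitude: log10(2**300) = 300 * log10(2) ~ 90.3
digits = n_qubits * math.log10(2)
print(f"2**300 is about 10^{digits:.0f} basis states")
```

Manipulating those amplitudes is, of course, very different from reading them all out; the exponential state space is a resource, not free storage.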
Such quantum AI techniques could dramatically speed up tasks such as image recognition for comparing photos on the web or for enabling cars to drive themselves — fields in which companies such as Google have invested considerable resources. (One of Lloyd’s collaborators, Masoud Mohseni, is in fact a Google researcher based in Venice, California.)
“It’s really interesting to see that there are new ways to use quantum computers coming up, after focusing mostly on factoring and quantum searches,” says Stefanie Barz at the University of Vienna, who recently demonstrated quantum equation-solving in action. Her team used a simple two-qubit quantum computer to work out a high-school-level maths problem: a system consisting of two equations [4]. Another group, led by Jian-Wei Pan at the University of Science and Technology of China in Hefei, did the same using four qubits [5].
Putting quantum machine learning into practice will be more difficult. Lloyd estimates that a dozen qubits would be needed for a small-scale demonstration.

Nature doi:10.1038/nature.2013.13453

26 July 2013