Category: Jobs


JPMorgan Software Does in Seconds What Took Lawyers 360,000 Hours

By Hugo Angel,

  • New software does in seconds what took staff 360,000 hours
  • Bank seeking to streamline systems, avoid redundancies

At JPMorgan Chase & Co., a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours.

The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of work each year by lawyers and loan officers. The software reviews documents in seconds, is less error-prone and never asks for vacation.

Attendees discuss software on Feb. 27, the eve of JPMorgan’s Investor Day.
Photographer: Kholood Eid/Bloomberg

While the financial industry has long touted its technological innovations, a new era of automation is now in overdrive as cheap computing power converges with fears of losing customers to startups. Made possible by investments in machine learning and a new private cloud network, COIN is just the start for the biggest U.S. bank. The firm recently set up technology hubs for teams specializing in big data, robotics and cloud infrastructure to find new sources of revenue, while reducing expenses and risks.

The push to automate mundane tasks and create new tools for bankers and clients — a growing part of the firm’s $9.6 billion technology budget — is a core theme as the company hosts its annual investor day on Tuesday.

Behind the strategy, overseen by Chief Operating Officer Matt Zames and Chief Information Officer Dana Deasy, is an undercurrent of anxiety: Though JPMorgan emerged from the financial crisis as one of few big winners, its dominance is at risk unless it aggressively pursues new technologies, according to interviews with a half-dozen bank executives.


Redundant Software

That was the message Zames had for Deasy when he joined the firm from BP Plc in late 2013. The New York-based bank’s internal systems, an amalgam from decades of mergers, had too many redundant software programs that didn’t work together seamlessly. “Matt said, ‘Remember one thing above all else: We absolutely need to be the leaders in technology across financial services,’” Deasy said last week in an interview. “Everything we’ve done from that day forward stems from that meeting.”

After visiting companies including Apple Inc. and Facebook Inc. three years ago to understand how their developers worked, the bank set out to create its own computing cloud called Gaia that went online last year. Machine learning and big-data efforts now reside on the private platform, which effectively has limitless capacity to support their thirst for processing power. The system already is helping the bank automate some coding activities and making its 20,000 developers more productive, saving money, Zames said. When needed, the firm can also tap into outside cloud services from Amazon.com Inc., Microsoft Corp. and International Business Machines Corp.

Tech Spending

JPMorgan will make some of its cloud-backed technology available to institutional clients later this year, allowing firms like BlackRock Inc. to access balances, research and trading tools. The move, which lets clients bypass salespeople and support staff for routine information, is similar to one Goldman Sachs Group Inc. announced in 2015.

JPMorgan’s total technology budget for this year amounts to 9 percent of its projected revenue — double the industry average, according to Morgan Stanley analyst Betsy Graseck. The dollar figure has inched higher as JPMorgan bolsters cyber defenses after a 2014 data breach, which exposed the information of 83 million customers.

“We have invested heavily in technology and marketing — and we are seeing strong returns,” JPMorgan said in a presentation Tuesday ahead of its investor day, noting that technology spending in its consumer bank totaled about $1 billion over the past two years.

Attendees inspect JPMorgan Markets software kiosk for Investors Day.
Photographer: Kholood Eid/Bloomberg

One-third of the company’s budget is for new initiatives, a figure Zames wants to take to 40 percent in a few years. He expects savings from automation and retiring old technology will let him plow even more money into new innovations.

Not all of those bets, several of which involve distributed-ledger technology such as blockchain, will pay off, and JPMorgan says that is OK. One example executives are fond of mentioning: The firm built an electronic platform to help trade credit-default swaps that sits unused.

‘Can’t Wait’

“We’re willing to invest to stay ahead of the curve, even if in the final analysis some of that money will go to a product or a service that wasn’t needed,” Marianne Lake, the lender’s finance chief, told a conference audience in June. That’s “because we can’t wait to know what the outcome, the endgame, really looks like, because the environment is moving so fast.”

As for COIN, the program has helped JPMorgan cut down on loan-servicing mistakes, most of which stemmed from human error in interpreting 12,000 new wholesale contracts per year, according to its designers.

JPMorgan is scouring for more ways to deploy the technology, which learns by ingesting data to identify patterns and relationships. The bank plans to use it for other types of complex legal filings like credit-default swaps and custody agreements. Someday, the firm may use it to help interpret regulations and analyze corporate communications.
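Bloomberg does not describe COIN’s internals, but the pattern it sketches (software that learns to label contract language from examples rather than from hand-written rules) can be illustrated with a minimal text-classification sketch in Python. The clause snippets, labels and choice of scikit-learn below are illustrative assumptions, not JPMorgan’s actual system.

# Illustrative sketch only: a toy clause classifier in the spirit of
# "learning from labeled examples" described above. It is NOT COIN.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: clause text paired with a clause type.
clauses = [
    "The Borrower shall maintain insurance on all collateral.",
    "Interest shall accrue at a rate of LIBOR plus 2.5 percent.",
    "The Borrower shall deliver audited financial statements annually.",
    "Default interest applies to any overdue principal amount.",
]
labels = ["covenant", "interest", "covenant", "interest"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(clauses, labels)

# The trained model labels an unseen clause in milliseconds.
print(model.predict(["Interest on the loan shall be 3 percent per annum."]))

The same pattern scales from these four toy clauses to the thousands of labeled agreements a bank would use in practice; only the data and the model size change.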

Another program called X-Connect, which went into use in January, examines e-mails to help employees find colleagues who have the closest relationships with potential prospects and can arrange introductions.
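The article gives no technical detail on X-Connect. A much simpler version of the same idea is to count how often each employee exchanges e-mail with a prospect’s domain and treat the counts as a proxy for relationship strength. The sketch below is a hypothetical illustration of that approach, not JPMorgan’s implementation; the addresses are invented.

# Hypothetical sketch: rank colleagues by how often they e-mail a prospect's
# domain, as a crude proxy for relationship strength. Not the X-Connect code.
from collections import Counter

emails = [  # (sender, recipient) metadata only; message contents are not needed
    ("alice@bank.example", "cfo@prospect.example"),
    ("alice@bank.example", "cfo@prospect.example"),
    ("bob@bank.example", "treasurer@prospect.example"),
    ("carol@bank.example", "analyst@other.example"),
]

def best_introducers(messages, prospect_domain, top_n=3):
    counts = Counter(
        sender for sender, recipient in messages
        if recipient.endswith("@" + prospect_domain)
    )
    return counts.most_common(top_n)

print(best_introducers(emails, "prospect.example"))
# [('alice@bank.example', 2), ('bob@bank.example', 1)]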

Creating Bots
For simpler tasks, the bank has created bots to perform functions like granting access to software systems and responding to IT requests, such as resetting an employee’s password, Zames said. Bots are expected to handle 1.7 million access requests this year, doing the work of 140 people.
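The article does not say how these bots are built. In practice this kind of automation is often just a dispatch table that routes structured requests to scripted handlers and escalates anything it does not recognize; the sketch below is a hypothetical illustration under that assumption, with invented handler names and request formats.

# Hypothetical sketch of a rule-driven IT bot: route each request to an
# automated handler when one exists, otherwise escalate to a person.
import secrets

def reset_password(user):
    temp = secrets.token_urlsafe(12)          # issue a temporary credential
    print(f"Temporary password issued for {user}")
    return temp

def grant_access(user, system):
    print(f"Access to {system} granted for {user}")
    return True

HANDLERS = {
    "password_reset": lambda req: reset_password(req["user"]),
    "access_request": lambda req: grant_access(req["user"], req["system"]),
}

def handle(request):
    handler = HANDLERS.get(request["type"])
    if handler is None:
        return "escalate to human IT staff"
    return handler(request)

handle({"type": "password_reset", "user": "jdoe"})
handle({"type": "access_request", "user": "jdoe", "system": "trading-portal"})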

Matt Zames
Photographer: Kholood Eid/Bloomberg

While growing numbers of people in the industry worry such advancements might someday take their jobs, many Wall Street personnel are more focused on the benefits. A survey of more than 3,200 financial professionals by recruiting firm Options Group last year found a majority expect new technology to improve their careers, for example by boosting workplace performance.

“Anything where you have back-office operations and humans kind of moving information from point A to point B that’s not automated is ripe for that,” Deasy said. “People always talk about this stuff as displacement. I talk about it as freeing people to work on higher-value things, which is why it’s such a terrific opportunity for the firm.”

To help spur internal disruption, the company keeps tabs on 2,000 technology ventures, using about 100 in pilot programs that will eventually join the firm’s growing ecosystem of partners. For instance, the bank’s machine-learning software was built with Cloudera Inc., a software firm that JPMorgan first encountered in 2009.

“We’re starting to see the real fruits of our labor,” Zames said. “This is not pie-in-the-sky stuff.”

ORIGINAL: Bloomberg
By Hugh Son
February 27, 2017

The Rise of Artificial Intelligence and the End of Code

By Hugo Angel,

EDWARD C. MONAGHAN
Soon We Won’t Program Computers. We’ll Train Them Like Dogs
Before the invention of the computer, most experimental psychologists thought the brain was an unknowable black box. You could analyze a subject’s behavior—ring bell, dog salivates—but thoughts, memories, emotions? That stuff was obscure and inscrutable, beyond the reach of science. So these behaviorists, as they called themselves, confined their work to the study of stimulus and response, feedback and reinforcement, bells and saliva. They gave up trying to understand the inner workings of the mind. They ruled their field for four decades.
Then, in the mid-1950s, a group of rebellious psychologists, linguists, information theorists, and early artificial-intelligence researchers came up with a different conception of the mind. People, they argued, were not just collections of conditioned responses. They absorbed information, processed it, and then acted upon it. They had systems for writing, storing, and recalling memories. They operated via a logical, formal syntax. The brain wasn’t a black box at all. It was more like a computer.
The so-called cognitive revolution started small, but as computers became standard equipment in psychology labs across the country, it gained broader acceptance. By the late 1970s, cognitive psychology had overthrown behaviorism, and with the new regime came a whole new language for talking about mental life. Psychologists began describing thoughts as programs, ordinary people talked about storing facts away in their memory banks, and business gurus fretted about the limits of mental bandwidth and processing power in the modern workplace. 
This story has repeated itself again and again. As the digital revolution wormed its way into every part of our lives, it also seeped into our language and our deep, basic theories about how things work. Technology always does this. During the Enlightenment, Newton and Descartes inspired people to think of the universe as an elaborate clock. In the industrial age, it was a machine with pistons. (Freud’s idea of psychodynamics borrowed from the thermodynamics of steam engines.) Now it’s a computer. Which is, when you think about it, a fundamentally empowering idea. Because if the world is a computer, then the world can be coded. 
Code is logical. Code is hackable. Code is destiny. These are the central tenets (and self-fulfilling prophecies) of life in the digital age. As software has eaten the world, to paraphrase venture capitalist Marc Andreessen, we have surrounded ourselves with machines that convert our actions, thoughts, and emotions into data—raw material for armies of code-wielding engineers to manipulate. We have come to see life itself as something ruled by a series of instructions that can be discovered, exploited, optimized, maybe even rewritten. Companies use code to understand our most intimate ties; Facebook’s Mark Zuckerberg has gone so far as to suggest there might be a “fundamental mathematical law underlying human relationships that governs the balance of who and what we all care about.” In 2013, Craig Venter announced that, a decade after the decoding of the human genome, he had begun to write code that would allow him to create synthetic organisms. “It is becoming clear,” he said, “that all living cells that we know of on this planet are DNA-software-driven biological machines.” Even self-help literature insists that you can hack your own source code, reprogramming your love life, your sleep routine, and your spending habits.
In this world, the ability to write code has become not just a desirable skill but a language that grants insider status to those who speak it. They have access to what in a more mechanical age would have been called the levers of power. “If you control the code, you control the world,” wrote futurist Marc Goodman. (In Bloomberg Businessweek, Paul Ford was slightly more circumspect: “If coders don’t run the world, they run the things that run the world.” Tomato, tomahto.)
But whether you like this state of affairs or hate it—whether you’re a member of the coding elite or someone who barely feels competent to futz with the settings on your phone—don’t get used to it. Our machines are starting to speak a different language now, one that even the best coders can’t fully understand. 
Over the past several years, the biggest tech companies in Silicon Valley have aggressively pursued an approach to computing called machine learning. In traditional programming, an engineer writes explicit, step-by-step instructions for the computer to follow. With machine learning, programmers don’t encode computers with instructions. They train them. If you want to teach a neural network to recognize a cat, for instance, you don’t tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands and thousands of photos of cats, and eventually it works things out. If it keeps misclassifying foxes as cats, you don’t rewrite the code. You just keep coaching it.
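That contrast (coaching a model with labeled examples rather than writing rules) is easy to show in miniature. The sketch below trains a tiny linear classifier with gradient descent on synthetic data standing in for photos; the data, model size and NumPy-only implementation are illustrative assumptions, not the systems described in this article.

# Minimal sketch of "training instead of programming": no hand-written
# cat rules, just labeled examples and repeated nudges to the weights.
# Synthetic vectors stand in for photos; this is an illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 64                        # 2,000 tiny fake "images", 64 features each
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)            # the unknown pattern hidden in the data
y = (X @ true_w > 0).astype(float)     # 1 = "cat", 0 = "not cat"

w = np.zeros(d)                        # the model starts knowing nothing
for _ in range(500):                   # "you just keep coaching it"
    p = 1.0 / (1.0 + np.exp(-(X @ w)))     # current guesses
    w -= 0.1 * X.T @ (p - y) / n           # nudge weights toward fewer mistakes

preds = (1.0 / (1.0 + np.exp(-(X @ w))) > 0.5).astype(float)
print("training accuracy:", (preds == y).mean())   # high, with no explicit rules

Printing w afterward yields only 64 numbers. The learned behavior is accurate, but nothing in those numbers reads like a rule, which is the opacity the rest of the piece describes.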
This approach is not new—it’s been around for decades—but it has recently become immensely more powerful, thanks in part to the rise of deep neural networks, massively distributed computational systems that mimic the multilayered connections of neurons in the brain. And already, whether you realize it or not, machine learning powers large swaths of our online activity. Facebook uses it to determine which stories show up in your News Feed, and Google Photos uses it to identify faces. Machine learning runs Microsoft’s Skype Translator, which converts speech to different languages in real time. Self-driving cars use machine learning to avoid accidents. Even Google’s search engine—for so many years a towering edifice of human-written rules—has begun to rely on these deep neural networks. In February the company replaced its longtime head of search with machine-learning expert John Giannandrea, and it has initiated a major program to retrain its engineers in these new techniques. “By building learning systems,” Giannandrea told reporters this fall, “we don’t have to write these rules anymore.”
 
But here’s the thing: With machine learning, the engineer never knows precisely how the computer accomplishes its tasks. The neural network’s operations are largely opaque and inscrutable. It is, in other words, a black box. And as these black boxes assume responsibility for more and more of our daily digital tasks, they are not only going to change our relationship to technology—they are going to change how we think about ourselves, our world, and our place within it.
If in the old view programmers were like gods, authoring the laws that govern computer systems, now they’re like parents or dog trainers. And as any parent or dog owner can tell you, that is a much more mysterious relationship to find yourself in.
Andy Rubin is an inveterate tinkerer and coder. The cocreator of the Android operating system, Rubin is notorious in Silicon Valley for filling his workplaces and home with robots. He programs them himself. “I got into computer science when I was very young, and I loved it because I could disappear in the world of the computer. It was a clean slate, a blank canvas, and I could create something from scratch,” he says. “It gave me full control of a world that I played in for many, many years.”
Now, he says, that world is coming to an end. Rubin is excited about the rise of machine learning—his new company, Playground Global, invests in machine-learning startups and is positioning itself to lead the spread of intelligent devices—but it saddens him a little too. Because machine learning changes what it means to be an engineer.
“People don’t linearly write the programs,” Rubin says. “After a neural network learns how to do speech recognition, a programmer can’t go in and look at it and see how that happened. It’s just like your brain. You can’t cut your head off and see what you’re thinking.” When engineers do peer into a deep neural network, what they see is an ocean of math: a massive, multilayer set of calculus problems that—by constantly deriving the relationship between billions of data points—generate guesses about the world.
Artificial intelligence wasn’t supposed to work this way. Until a few years ago, mainstream AI researchers assumed that to create intelligence, we just had to imbue a machine with the right logic. Write enough rules and eventually we’d create a system sophisticated enough to understand the world. They largely ignored, even vilified, early proponents of machine learning, who argued in favor of plying machines with data until they reached their own conclusions. For years computers weren’t powerful enough to really prove the merits of either approach, so the argument became a philosophical one. “Most of these debates were based on fixed beliefs about how the world had to be organized and how the brain worked,” says Sebastian Thrun, the former Stanford AI professor who created Google’s self-driving car. “Neural nets had no symbols or rules, just numbers. That alienated a lot of people.”
The implications of an unparsable machine language aren’t just philosophical. For the past two decades, learning to code has been one of the surest routes to reliable employment—a fact not lost on all those parents enrolling their kids in after-school code academies. But a world run by neurally networked deep-learning machines requires a different workforce. Analysts have already started worrying about the impact of AI on the job market, as machines render old skills irrelevant. Programmers might soon get a taste of what that feels like themselves.
“I was just having a conversation about that this morning,” says tech guru Tim O’Reilly when I ask him about this shift. “I was pointing out how different programming jobs would be by the time all these STEM-educated kids grow up.” Traditional coding won’t disappear completely—indeed, O’Reilly predicts that we’ll still need coders for a long time yet—but there will likely be less of it, and it will become a meta skill, a way of creating what Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, calls the “scaffolding” within which machine learning can operate. Just as Newtonian physics wasn’t obviated by the discovery of quantum mechanics, code will remain a powerful, if incomplete, tool set to explore the world. But when it comes to powering specific functions, machine learning will do the bulk of the work for us.
Of course, humans still have to train these systems. But for now, at least, that’s a rarefied skill. The job requires both a high-level grasp of mathematics and an intuition for pedagogical give-and-take. “It’s almost like an art form to get the best out of these systems,” says Demis Hassabis, who leads Google’s DeepMind AI team. “There’s only a few hundred people in the world that can do that really well.” But even that tiny number has been enough to transform the tech industry in just a couple of years.
Whatever the professional implications of this shift, the cultural consequences will be even bigger. If the rise of human-written software led to the cult of the engineer, and to the notion that human experience can ultimately be reduced to a series of comprehensible instructions, machine learning kicks the pendulum in the opposite direction. The code that runs the universe may defy human analysis. Right now Google, for example, is facing an antitrust investigation in Europe that accuses the company of exerting undue influence over its search results. Such a charge will be difficult to prove when even the company’s own engineers can’t say exactly how its search algorithms work in the first place.
This explosion of indeterminacy has been a long time coming. It’s not news that even simple algorithms can create unpredictable emergent behavior—an insight that goes back to chaos theory and random number generators. Over the past few years, as networks have grown more intertwined and their functions more complex, code has come to seem more like an alien force, the ghosts in the machine ever more elusive and ungovernable. Planes grounded for no reason. Seemingly unpreventable flash crashes in the stock market. Rolling blackouts.
These forces have led technologist Danny Hillis to declare the end of the age of Enlightenment, our centuries-long faith in logic, determinism, and control over nature. Hillis says we’re shifting to what he calls the age of Entanglement. “As our technological and institutional creations have become more complex, our relationship to them has changed,” he wrote in the Journal of Design and Science. “Instead of being masters of our creations, we have learned to bargain with them, cajoling and guiding them in the general direction of our goals. We have built our own jungle, and it has a life of its own.” The rise of machine learning is the latest—and perhaps the last—step in this journey.
This can all be pretty frightening. After all, coding was at least the kind of thing that a regular person could imagine picking up at a boot camp. Coders were at least human. Now the technological elite is even smaller, and their command over their creations has waned and become indirect. Already the companies that build this stuff find it behaving in ways that are hard to govern. Last summer, Google rushed to apologize when its photo recognition engine started tagging images of black people as gorillas. The company’s blunt first fix was to keep the system from labeling anything as a gorilla.

To nerds of a certain bent, this all suggests a coming era in which we forfeit authority over our machines. “One can imagine such technology 

  • outsmarting financial markets, 
  • out-inventing human researchers, 
  • out-manipulating human leaders, and 
  • developing weapons we cannot even understand,” 

wrote Stephen Hawking—sentiments echoed by Elon Musk and Bill Gates, among others. “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.” 

 
But don’t be too scared; this isn’t the dawn of Skynet. We’re just learning the rules of engagement with a new technology. Already, engineers are working out ways to visualize what’s going on under the hood of a deep-learning system. But even if we never fully understand how these new machines think, that doesn’t mean we’ll be powerless before them. In the future, we won’t concern ourselves as much with the underlying sources of their behavior; we’ll learn to focus on the behavior itself. The code will become less important than the data we use to train it.
If all this seems a little familiar, that’s because it looks a lot like good old 20th-century behaviorism. In fact, the process of training a machine-learning algorithm is often compared to the great behaviorist experiments of the early 1900s. Pavlov triggered his dog’s salivation not through a deep understanding of hunger but simply by repeating a sequence of events over and over. He provided data, again and again, until the code rewrote itself. And say what you will about the behaviorists, they did know how to control their subjects.
In the long run, Thrun says, machine learning will have a democratizing influence. In the same way that you don’t need to know HTML to build a website these days, you eventually won’t need a PhD to tap into the insane power of deep learning. Programming won’t be the sole domain of trained coders who have learned a series of arcane languages. It’ll be accessible to anyone who has ever taught a dog to roll over. “For me, it’s the coolest thing ever in programming,” Thrun says, “because now anyone can program.”
For much of computing history, we have taken an inside-out view of how machines work. First we write the code, then the machine expresses it. This worldview implied plasticity, but it also suggested a kind of rules-based determinism, a sense that things are the product of their underlying instructions. Machine learning suggests the opposite, an outside-in view in which code doesn’t just determine behavior, behavior also determines code. Machines are products of the world.
Ultimately we will come to appreciate both the power of handwritten linear code and the power of machine-learning algorithms to adjust it—the give-and-take of design and emergence. It’s possible that biologists have already started figuring this out. Gene-editing techniques like Crispr give them the kind of code-manipulating power that traditional software programmers have wielded. But discoveries in the field of epigenetics suggest that genetic material is not in fact an immutable set of instructions but rather a dynamic set of switches that adjusts depending on the environment and experiences of its host. Our code does not exist separate from the physical world; it is deeply influenced and transmogrified by it. Venter may believe cells are DNA-software-driven machines, but epigeneticist Steve Cole suggests a different formulation: “A cell is a machine for turning experience into biology.”
And now, 80 years after Alan Turing first sketched his designs for a problem-solving machine, computers are becoming devices for turning experience into technology. For decades we have sought the secret code that could explain and, with some adjustments, optimize our experience of the world. But our machines won’t work that way for much longer—and our world never really did. We’re about to have a more complicated but ultimately more rewarding relationship with technology. We will go from commanding our devices to parenting them.

Editor at large Jason Tanz (@jasontanz) wrote about Andy Rubin’s new company, Playground, in issue 24.03.
This article appears in the June issue.
ORIGINAL: Wired

Forward to the Future: Visions of 2045

By Hugo Angel,

DARPA asked the world and our own researchers what technologies they expect to see 30 years from now—and received insightful, sometimes funny predictions
Today—October 21, 2015—is famous in popular culture as the date 30 years in the future when Marty McFly and Doc Brown arrive in their time-traveling DeLorean in the movie “Back to the Future Part II.” The film got some things right about 2015, including in-home videoconferencing and devices that recognize people by their voices and fingerprints. But it also predicted trunk-sized fusion reactors, hoverboards and flying cars—game-changing technologies that, despite the advances we’ve seen in so many fields over the past three decades, still exist only in our imaginations.
A big part of DARPA’s mission is to envision the future and make the impossible possible. So ten days ago, as the “Back to the Future” day approached, we turned to social media and asked the world to predict: What technologies might actually surround us 30 years from now? We pointed people to presentations from DARPA’s Future Technologies Forum, held last month in St. Louis, for inspiration and a reality check before submitting their predictions.
Well, you rose to the challenge and the results are in. So in honor of Marty and Doc (little known fact: he is a DARPA alum) and all of the world’s innovators past and future, we present here some highlights from your responses, in roughly descending order by number of mentions for each class of futuristic capability:
  • Space: Interplanetary and interstellar travel, including faster-than-light travel; missions and permanent settlements on the Moon, Mars and the asteroid belt; space elevators
  • Transportation & Energy: Self-driving and electric vehicles; improved mass transit systems and intercontinental travel; flying cars and hoverboards; high-efficiency solar and other sustainable energy sources
  • Medicine & Health: Neurological devices for memory augmentation, storage and transfer, and perhaps to read people’s thoughts; life extension, including virtual immortality via uploading brains into computers; artificial cells and organs; “Star Trek”-style tricorder for home diagnostics and treatment; wearable technology, such as exoskeletons and augmented-reality glasses and contact lenses
  • Materials & Robotics: Ubiquitous nanotechnology, 3-D printing and robotics; invisibility and cloaking devices; energy shields; anti-gravity devices
  • Cyber & Big Data: Improved artificial intelligence; optical and quantum computing; faster, more secure Internet; better use of data analytics to improve use of resources
A few predictions inspired us to respond directly:
  • “Pizza delivery via teleportation”—DARPA took a close look at this a few years ago and decided there is plenty of incentive for the private sector to handle this challenge.
  • “Time travel technology will be close, but will be closely guarded by the military as a matter of national security”—We already did this tomorrow.
  • “Systems for controlling the weather”—Meteorologists told us it would be a job killer and we didn’t want to rain on their parade.
  • “Space colonies…and unlimited cellular data plans that won’t be slowed by your carrier when you go over a limit”—We appreciate the idea that these are equally difficult, but they are not. We think likable cell-phone data plans are beyond even DARPA and a total non-starter.
So seriously, as an adjunct to this crowd-sourced view of the future, we asked three DARPA researchers from various fields to share their visions of 2045, and why getting there will require a group effort with players not only from academia and industry but from forward-looking government laboratories and agencies:

Pam Melroy, an aerospace engineer, former astronaut and current deputy director of DARPA’s Tactical Technologies Office (TTO), foresees technologies that would enable machines to collaborate with humans as partners on tasks far more complex than those we can tackle today.
Justin Sanchez, a neuroscientist and program manager in DARPA’s Biological Technologies Office (BTO), imagines a world where neurotechnologies could enable users to interact with their environment and other people by thought alone.
Stefanie Tompkins, a geologist and director of DARPA’s Defense Sciences Office, envisions building substances from the atomic or molecular level up to create “impossible” materials with previously unattainable capabilities.
Check back with us in 2045—or sooner, if that time machine stuff works out—for an assessment of how things really turned out in 30 years.
# # #
Associated images posted on www.darpa.mil and video posted at www.youtube.com/darpatv may be reused according to the terms of the DARPA User Agreement, available here: http://www.darpa.mil/policy/usage-policy.
Tweet @darpa
ORIGINAL: DARPA
[email protected]
10/21/2015

Seven Emerging Technologies That Will Change the World Forever

By admin,

By Gray Scott
Sep 29, 2015

When someone asks me what I do and I tell them that I’m a futurist, the first thing they ask is “what is a futurist?” The short answer I give is “I use current scientific research in emerging technologies to imagine how we will live in the future.”
However, as you can imagine, the art of futurology and foresight is much more complex. I spend my days thinking, speaking and writing about the future and emerging technologies. On any given day I might be in Warsaw speaking at an Innovation Conference, in London speaking at a Global Leadership Summit, or being interviewed by the Discovery Channel. Whatever the situation, I have one singular mission. I want you to think about the future.


How will we live in the future? How will emerging technologies change our lives, our economy and our businesses? We should begin to think about the future now. It will be here faster than you think.


Let’s explore seven current emerging technologies that I am thinking about that are set to change the world forever.

1. Age Reversal
We will see the emergence of true biological age reversal by 2025.


It may be extraordinarily expensive, complex and risky, but for people who want to turn back the clock, it may be worth it. It may sound like science fiction but the science is real, and it has already begun. In fact, according to new research published in Nature’s Scientific Reports, Professor Jun-Ichi Hayashi from the University of Tsukuba in Japan has already reversed ageing in human cell lines by “turning on or off” mitochondrial function.


Another study published in CELL reports that Australian and US researchers have successfully reversed the aging process in the muscles of mice. They found that raising nuclear NAD+ in old mice reverses pseudohypoxia and metabolic dysfunction. Researchers gave the mice a compound called nicotinamide adenine dinucleotide or NAD for a week and found that the age indicators in two-year-old mice were restored to that of six-month-old mice. That would be like turning a 60-year-old human into a 20-year-old!


How will our culture deal with age reversal? Will we set limits on who can age-reverse? Do we ban criminals from this technology? These are the questions we will face in a very complex future. One thing is certain, age reversal will happen and when it does it will change our species and our world forever.


2. Artificial General Intelligence
The robots are coming and they are going to eat your job for lunch. Worldwide shipments of multipurpose industrial robots are forecast to exceed 207,000 units in 2015, and this is just the beginning. Robots like Care-o-bot 4 and Softbank’s Pepper may be in homes, offices and hotels within the next year. These robots will be our personal servants, assistants and caretakers.


Amazon has introduced a new AI assistant called ECHO that could replace the need for a human assistant altogether. We already have robots and automation that can make pizza, serve beer, write news articles, scan our faces for diseases, and drive cars. We will see AI in our factories, hospitals, restaurants and hotels around the world by 2020.

This “pinkhouse” at Caliber Biotherapeutics in Bryan, Texas, grows 2.2 million plants under the glow of blue and red LEDs.
Courtesy of Caliber Therapeutics


3. Vertical Pink Farms
We are entering the techno-agricultural era. Agricultural science is changing the way we harvest our food. Robots and automation are going to play a decisive role in the way we hunt and gather. The most important and disruptive idea is what I call “Vertical PinkFarms” and it is set to decentralise the food industry forever.


The United Nations (UN) predicts that by 2050, 80% of the Earth’s population will live in cities. Climate change will also make traditional food production more difficult and less productive in the future. We will need more efficient systems to feed these hungry urban areas. Thankfully, several companies around the world are already producing food grown in Vertical PinkFarms and the results are remarkable.

Vertical PinkFarms will use blue and red LED lighting to grow organic, pesticide-free, climate-controlled food in indoor environments. Vertical PinkFarms use less water and less energy and enable people to grow food underground or indoors year-round in any climate.


Traditional food grown on outdoor farms is exposed to the full visible light spectrum. This range includes Red, Orange, Yellow, Green, Blue and Violet. However, agricultural science is now showing us that O, Y, G and V are not necessary for plant growth. You only need R and B. LED lights are much more efficient and cooler than the indoor fluorescent grow lights used in most indoor greenhouses. LED lights are also becoming less expensive as more companies begin to invest in this technology. Just like the solar and electric car revolution, the change will be exponential. By 2025, we may see massive Vertical PinkFarms in most major cities around the world. We may even see small Vertical PinkFarm units in our homes in the future.


4. Transhumanism
By 2035, even if a majority of humans do not self-identify as Transhuman, technically they will be. If we define any bio-upgrade or human enhancement as Transhumanism, then the numbers are already quite high and growing exponentially. According to a UN Telecom Agency report, around 6 billion people have cell phones. This demonstrates the ubiquitous nature of technology that we keep on or around our body.


As human bio-enhancements become more affordable, billions of humans will become Transhuman. Digital implants, mind-controlled exoskeletal upgrades, age reversal pills, hyper-intelligence brain implants and bionic muscle upgrades. All of these technologies will continue our evolution as humans.


Reconstructive joint replacements, spinal implants, cardiovascular implants, dental implants, intraocular lens and breast implants are all part of our human techno-evolution into this new Transhuman species.


5. Wearables and Implantables  
Smartphones will fade into digital history as high-resolution smart contact lenses and corresponding in-ear audio plugs communicate with our wearable computers or “smart suits.” The digital world will be displayed directly on our eyes in stunning interactive augmented beauty. Ghent University’s Centre of Microsystems Technology in Belgium has recently developed a spherical curved LCD display that can be embedded in contact lenses. This enables the entire lens to display information.


The bridge to the smart contact starts with smart glasses, VR headsets and yes, the Apple watch. Wearable technologies are growing exponentially. New smart augmented glasses like 
  • Google Glass, 
  • RECON JET, 
  • METAPro, and 
  • Vuzix M100 Smart Glasses 
are just the beginning. In fact, CastAR augmented 3D glasses recently received over a million dollars in funding on Kickstarter. Their goal was only four hundred thousand. The market is ready for smart vision, and tech companies should move away from handheld devices if they want to compete.

The question of what is real and what is augmented will be irrelevant in the future. We will be able to create our own realities within clusters of information cults, whose members alone can see certain augmented information realities. All information will be instantaneously available in the augmented visual future.

Mist Water Canarias
Gray Scott, an IEET Advisory Board member, is a futurist, techno-philosopher, speaker, writer and artist. He is the founder and CEO of SeriousWonder.com and a professional member of The World Future Society.


6. Atmospheric Water Harvesting
California and parts of the south-west in the US are currently experiencing an unprecedented drought. If this drought continues, the global agricultural system could become unstable.


Consider this: California and Arizona account for about 98% of commercial lettuce production in the United States. Thankfully we live in a world filled with exponential innovation right now.


An emerging technology called Atmospheric Water Harvesting could save California and other arid parts of the world from severe drought and possibly change the techno-agricultural landscape forever.


Traditional agricultural farming methods consume 80% of the water in California. According to the California Agricultural Resource Directory of 2009, California grows 
  • 99% of the U.S. almonds, artichokes, and walnuts; 
  • 97% of the kiwis, apricots and plums; 
  • 96% of the figs, olives and nectarines; 
  • 95% of celery and garlic; 
  • 88% of strawberries and lemons; 
  • 74% of peaches; 
  • 69% of carrots; 
  • 62% of tangerines and 
  • the list goes on.
Several companies around the world are already using atmospheric water harvesting technologies to solve this problem. Each company has a different technological approach but all of them combined could help alleviate areas suffering from water shortages.


The most basic, and possibly the most accessible, form of atmospheric water harvesting technology works by collecting water and moisture from the atmosphere using micro netting. These micro nets collect water that drains down into a collection chamber. This fresh water can then be stored or channelled into homes and farms as needed.


A company called FogQuest is already successfully using micro netting or “fog collectors” to harvest atmospheric water in places like Ethiopia, Guatemala, Nepal, Chile and Morocco.
Will people use this technology or will we continue to drill for water that may not be there?


7. 3D Printing
Today we already have 3D printers that can print clothing, circuit boards, furniture, homes and chocolate. A company called BigRep has created a 3D printer called the BigRep ONE.2 that enables designers to create entire tables, chairs or coffee tables in one print. Did you get that?


You can now buy a 3D printer and print furniture!
Fashion designers like 
  • Iris van Herpen, 
  • Bryan Oknyansky, 
  • Francis Bitonti, 
  • Madeline Gannon, and 
  • Daniel Widrig 
have all broken serious ground in the 3D printed fashion movement. These avant-garde designs may not be functional for the average consumer, so what is one to do for a regular tee shirt? Thankfully a new Field Guided Fabrication 3D printer called ELECTROLOOM has arrived that can print fabric, and it may put a few major retail chains out of business. The ELECTROLOOM enables anyone to create seamless fabric items on demand.

So what is next? 3D printed cars. Yes, cars. Divergent Microfactories (DM) has recently created the first 3D printed high-performance car, called the Blade. This car is no joke. The Blade has a chassis weight of just 61 pounds, goes 0-60 MPH in 2.2 seconds and is powered by a 4-cylinder 700-horsepower bi-fuel internal combustion engine.


These are just seven emerging technologies on my radar. I have a list of hundreds of innovations that will change the world forever. Some sound like pure sci-fi but I assure you they are real. Are we ready for a world filled with abundance, age reversal and self-replicating AI robots? I hope so.


——

It’s No Myth: Robots and Artificial Intelligence Will Erase Jobs in Nearly Every Industry

By admin,

With the unemployment rate falling to 5.3 percent, the lowest in seven years, policy makers are heaving a sigh of relief. Indeed, with the technology boom in progress, there is a lot to be optimistic about.

  • Manufacturing will be returning to U.S. shores with robots doing the job of Chinese workers; 
  • American carmakers will be mass-producing self-driving electric vehicles; 
  • technology companies will develop medical devices that greatly improve health and longevity; 
  • we will have unlimited clean energy and 3D print our daily needs. 

The cost of all of these things will plummet and make it possible to provide for the basic needs of every human being.

I am talking about technology advances that are happening now, which will bear fruit in the 2020s.
But policy makers will have a big new problem to deal with: the disappearance of human jobs. Not only will there be fewer jobs for people doing manual work, the jobs of knowledge workers will also be replaced by computers. Almost every industry and profession will be impacted and this will create a new set of social problems — because most people can’t adapt to such dramatic change.
If we can develop the economic structures necessary to distribute the prosperity we are creating, most people will no longer have to work to sustain themselves. They will be free to pursue other creative endeavors. The problem, however, is that without jobs, they will not have the dignity, social engagement, and sense of fulfillment that comes from work. The life, liberty and pursuit of happiness that the constitution entitles us to won’t be through labor, it will have to be through other means.
It is imperative that we understand the changes that are happening and find ways to cushion the impacts.
The technology elite who are leading this revolution will reassure you that there is nothing to worry about because we will create new jobs just as we did in previous centuries when the economy transitioned from agrarian to industrial to knowledge-based. Tech mogul Marc Andreessen has called the notion of a jobless future a “Luddite fallacy,” referring to past fears that machines would take human jobs away. Those fears turned out to be unfounded because we created newer and better jobs and were much better off.
True, we are living better lives. But what is missing from these arguments is the timeframe over which the transitions occurred. The industrial revolution unfolded over centuries. Today’s technology revolutions are happening within years. We will surely create a few intellectually-challenging jobs, but we won’t be able to retrain the workers who lose today’s jobs. They will experience the same unemployment and despair that their forefathers did. It is they who we need to worry about.
The first large wave of unemployment will be caused by self-driving cars. These will provide tremendous benefit by eliminating traffic accidents and congestion, making commuting time more productive, and reducing energy usage. But they will eliminate the jobs of millions of taxi and truck drivers and delivery people. Fully-automated robotic cars are no longer in the realm of science fiction; you can see Google’s cars on the streets of Mountain View, Calif. There are also self-driving trucks on our highways and self-driving tractors on farms. Uber just hired away dozens of engineers from Carnegie Mellon University to build its own robotic cars. It will surely start replacing its human drivers as soon as its technology is ready — later in this decade. As Uber CEO Travis Kalanick reportedly said in an interview, “The reason Uber could be expensive is you’re paying for the other dude in the car. When there is no other dude in the car, the cost of taking an Uber anywhere is cheaper. Even on a road trip.”
The dude in the driver’s seat will go away.

Manufacturing will be the next industry to be transformed. Robots have, for many years, been able to perform surgery, milk cows, do military reconnaissance and combat, and assemble goods. But they weren’t dexterous enough to do the type of work that humans do in installing circuit boards. The latest generation of industrial robots by ABB of Switzerland and Rethink Robotics of Boston can do this, however. ABB’s robot, Yumi, can even thread a needle. It costs only $40,000.

China, fearing the demise of its industry, is setting up fully-automated robotic factories in the hope that by becoming more price-competitive, it can continue to be the manufacturing capital of the world. But its advantage only holds up as long as the supply chains are in China and shipping raw materials and finished goods over the oceans remains cost-effective. Don’t forget that our robots are as productive as theirs are; they too don’t join labor unions (yet) and will work around the clock without complaining. Supply chains will surely shift and the trickle of returning manufacturing will become a flood.

But there will be few jobs for humans once the new, local factories are built.
With advances in artificial intelligence, any job that requires the analysis of information can be done better by computers. This includes the jobs of physicians, lawyers, accountants, and stock brokers. We will still need some humans to interact with the ones who prefer human contact, but the grunt work will disappear. The machines will need very few humans to help them.
This jobless future will surely create social problems — but it may be an opportunity for humanity to uplift itself. Why do we need to work 40, 50, or 60 hours a week, after all? Just as we were better off leaving the long and hard agrarian and factory jobs behind, we may be better off without the mindless work at the office. What if we could be working 10 or 15 hours per week from anywhere we want and have the remaining time for leisure, social work, or attainment of knowledge?
Yes, there will be a booming tourism and recreation industry and new jobs will be created in these — for some people.
There are as many things to be excited about as to fear. If we are smart enough to develop technologies that solve the problems of disease, hunger, energy, and education, we can — and surely will — develop solutions to our social problems. But we need to start by understanding where we are headed and prepare for the changes. We need to get beyond the claims of a Luddite fallacy — to a discussion about the new future.
ORIGINAL: Singularity Hub

July 7, 2015

Vivek Wadhwa is a fellow at the Rock Center for Corporate Governance at Stanford University, director of research at the Center for Entrepreneurship and Research Commercialization at Duke, and distinguished fellow at Singularity University. His past appointments include Harvard Law School, University of California, Berkeley, and Emory University. Follow him on Twitter @wadhwa.

An executive’s guide to machine learning

By admin,


It’s no longer the preserve of artificial-intelligence researchers and born-digital companies like Amazon, Google, and Netflix.
Machine learning is based on algorithms that can learn from data without relying on rules-based programming. It came into its own as a scientific discipline in the late 1990s as steady advances in digitization and cheap computing power enabled data scientists to stop building finished models and instead train computers to do so. The unmanageable volume and complexity of the big data that the world is now swimming in have increased the potential of machine learning—and the need for it.
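As a concrete illustration of “learning from data without relying on rules-based programming,” the sketch below lets a small decision tree derive its own thresholds from a handful of labeled examples instead of having an analyst hand-code them. The toy loan data and the use of scikit-learn are assumptions for illustration only, not a reference to any system named in this article.

# Illustrative sketch: instead of hand-coding "if income < X then decline",
# let a model derive its own thresholds from labeled examples.
from sklearn.tree import DecisionTreeClassifier, export_text

# features: [annual_income_k, years_employed], label: 1 = repaid, 0 = defaulted
X = [[30, 1], [45, 3], [80, 10], [25, 0], [60, 7], [90, 12], [35, 2], [70, 8]]
y = [0, 0, 1, 0, 1, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["income_k", "years_employed"]))  # learned rules
print(tree.predict([[55, 6]]))   # score a new applicant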
Stanford’s Fei-Fei Li

In 2007 Fei-Fei Li, the head of Stanford’s Artificial Intelligence Lab, gave up trying to program computers to recognize objects and began labeling the millions of raw images that a child might encounter by age three and feeding them to computers. By being shown thousands and thousands of labeled data sets with instances of, say, a cat, the machine could shape its own rules for deciding whether a particular set of digital pixels was, in fact, a cat.1 Last November, Li’s team unveiled a program that identifies the visual elements of any picture with a high degree of accuracy. IBM’s Watson machine relied on a similar self-generated scoring system among hundreds of potential answers to crush the world’s best Jeopardy! players in 2011.

Dazzling as such feats are, machine learning is nothing like learning in the human sense (yet). But what it already does extraordinarily well—and will get better at—is relentlessly chewing through any amount of data and every combination of variables. Because machine learning’s emergence as a mainstream management tool is relatively recent, it often raises questions. In this article, we’ve posed some that we often hear and answered them in a way we hope will be useful for any executive. Now is the time to grapple with these issues, because the competitive significance of business models turbocharged by machine learning is poised to surge. Indeed, management author Ram Charan suggests that any organization that is not a math house now or is unable to become one soon is already a legacy company.2
1. How are traditional industries using machine learning to gather fresh business insights?
Well, let’s start with sports. This past spring, contenders for the US National Basketball Association championship relied on the analytics of Second Spectrum, a California machine-learning start-up. By digitizing the past few seasons’ games, it has created predictive models that allow a coach to distinguish between, as CEO Rajiv Maheswaran puts it, “a bad shooter who takes good shots and a good shooter who takes bad shots”—and to adjust his decisions accordingly.
You can’t get more venerable or traditional than General Electric, the only member of the original Dow Jones Industrial Average still around after 119 years. GE already makes hundreds of millions of dollars by crunching the data it collects from deep-sea oil wells or jet engines to optimize performance, anticipate breakdowns, and streamline maintenance. But Colin Parris, who joined GE Software from IBM late last year as vice president of software research, believes that continued advances in data-processing power, sensors, and predictive algorithms will soon give his company the same sharpness of insight into the individual vagaries of a jet engine that Google has into the online behavior of a 24-year-old netizen from West Hollywood.
2. What about outside North America?
In Europe, more than a dozen banks have replaced older statistical-modeling approaches with machine-learning techniques and, in some cases, experienced 10 percent increases in sales of new products, 20 percent savings in capital expenditures, 20 percent increases in cash collections, and 20 percent declines in churn. The banks have achieved these gains by devising new recommendation engines for clients in retailing and in small and medium-sized companies. They have also built microtargeted models that more accurately forecast who will cancel service or default on their loans, and how best to intervene.
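The banks’ actual models and features are not disclosed here. As a hedged illustration of the general churn-scoring pattern (fit a classifier on historical customers, then score current ones and intervene on the riskiest), here is a minimal sketch on invented data:

# Sketch only: invented customer features and an assumed gradient-boosting
# model, to show the fit-then-score pattern rather than any bank's system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 5000
tenure = rng.integers(1, 120, n)            # months as a customer
products = rng.integers(1, 6, n)            # number of products held
complaints = rng.poisson(0.3, n)            # service complaints last year
# invented ground truth: short tenure plus complaints drive churn
churn = ((complaints > 0) & (tenure < 24) & (rng.random(n) < 0.7)).astype(int)

X = np.column_stack([tenure, products, complaints])
model = GradientBoostingClassifier().fit(X, churn)

# score a current customer and decide whether to intervene
p = model.predict_proba([[6, 1, 2]])[0, 1]
print(f"churn probability: {p:.2f}", "-> offer retention call" if p > 0.5 else "")

The same skeleton covers the default-forecasting case mentioned above; only the label (churned vs. defaulted) and the features change.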
Closer to home, as a recent article in McKinsey Quarterly notes,3 our colleagues have been applying hard analytics to the soft stuff of talent management. Last fall, they tested the ability of three algorithms developed by external vendors and one built internally to forecast, solely by examining scanned résumés, which of more than 10,000 potential recruits the firm would have accepted. The predictions strongly correlated with the real-world results. Interestingly, the machines accepted a slightly higher percentage of female candidates, which holds promise for using analytics to unlock a more diverse range of profiles and counter hidden human bias.
As ever more of the analog world gets digitized, our ability to learn from data by developing and testing algorithms will only become more important for what are now seen as traditional businesses. Google chief economist Hal Varian calls this “computer kaizen.” For “just as mass production changed the way products were assembled and continuous improvement changed how manufacturing was done,” he says, “so continuous [and often automatic] experimentation will improve the way we optimize business processes in our organizations.”4
3. What were the early foundations of machine learning?
Machine learning is based on a number of earlier building blocks, starting with classical statistics. Statistical inference does form an important foundation for the current implementations of artificial intelligence. But it’s important to recognize that classical statistical techniques were developed between the 18th and early 20th centuries for much smaller data sets than the ones we now have at our disposal. Machine learning is unconstrained by the preset assumptions of statistics. As a result, it can yield insights that human analysts do not see on their own and make predictions with ever-higher degrees of accuracy.
More recently, in the 1930s and 1940s, the pioneers of computing (such as Alan Turing, who had a deep and abiding interest in artificial intelligence) began formulating and tinkering with the basic techniques such as neural networks that make today’s machine learning possible. But those techniques stayed in the laboratory longer than many technologies did and, for the most part, had to await the development and infrastructure of powerful computers, in the late 1970s and early 1980s. That’s probably the starting point for the machine-learning adoption curve. New technologies introduced into modern economies—the steam engine, electricity, the electric motor, and computers, for example—seem to take about 80 years to transition from the laboratory to what you might call cultural invisibility. The computer hasn’t faded from sight just yet, but it’s likely to by 2040. And it probably won’t take much longer for machine learning to recede into the background.
4. What does it take to get started?
C-level executives will best exploit machine learning if they see it as a tool to craft and implement a strategic vision. But that means putting strategy first. Without strategy as a starting point, machine learning risks becoming a tool buried inside a company’s routine operations: it will provide a useful service, but its long-term value will probably be limited to an endless repetition of “cookie cutter” applications such as models for acquiring, stimulating, and retaining customers.
We find the parallels with M&A instructive. That, after all, is a means to a well-defined end. No sensible business rushes into a flurry of acquisitions or mergers and then just sits back to see what happens. Companies embarking on machine learning should make the same three commitments companies make before embracing M&A. Those commitments are,

  • first, to investigate all feasible alternatives;
  • second, to pursue the strategy wholeheartedly at the C-suite level; and,
  • third, to use (or if necessary acquire) existing expertise and knowledge in the C-suite to guide the application of that strategy.
The people charged with creating the strategic vision may well be (or have been) data scientists. But as they define the problem and the desired outcome of the strategy, they will need guidance from C-level colleagues overseeing other crucial strategic initiatives. More broadly, companies must have two types of people to unleash the potential of machine learning.

  • “Quants” are schooled in its language and methods.
  • “Translators” can bridge the disciplines of data, machine learning, and decision making by reframing the quants’ complex results as actionable insights that generalist managers can execute.
Effective machine learning requires access to troves of useful and reliable data; think of Watson’s ability, in tests, to predict oncological outcomes better than physicians, or Facebook’s recent success in teaching computers to identify specific human faces nearly as accurately as humans do. A true data strategy starts with identifying gaps in the data, determining the time and money required to fill those gaps, and breaking down silos. Too often, departments hoard information and politicize access to it—one reason some companies have created the new role of chief data officer to pull together what’s required. Other elements include putting responsibility for generating data in the hands of frontline managers.
Start small—look for low-hanging fruit and trumpet any early success. This will help recruit grassroots support and reinforce the changes in individual behavior and the employee buy-in that ultimately determine whether an organization can apply machine learning effectively. Finally, evaluate the results in the light of clearly identified criteria for success.
5. What’s the role of top management?
Behavioral change will be critical, and one of top management’s key roles will be to influence and encourage it. Traditional managers, for example, will have to get comfortable with their own variations on A/B testing, the technique digital companies use to see what will and will not appeal to online consumers. Frontline managers, armed with insights from increasingly powerful computers, must learn to make more decisions on their own, with top management setting the overall direction and zeroing in only when exceptions surface. Democratizing the use of analytics—providing the front line with the necessary skills and setting appropriate incentives to encourage data sharing—will require time.
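For managers unfamiliar with the mechanics, here is a minimal sketch of the arithmetic behind a basic A/B test; the visitor and conversion counts are invented, and real programs add guardrails such as pre-registered sample sizes and stopping rules.

```python
# Minimal A/B test sketch with invented counts for two page variants.
from math import sqrt
from statistics import NormalDist

visitors_a, conversions_a = 5000, 400   # variant A: 8.0% conversion
visitors_b, conversions_b = 5000, 460   # variant B: 9.2% conversion

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)

# Standard two-proportion z-test: is the observed lift larger than chance?
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Lift: {p_b - p_a:+.3%}, z = {z:.2f}, two-sided p-value = {p_value:.3f}")
```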
C-level officers should think about applied machine learning in three stages: machine learning 1.0, 2.0, and 3.0—or, as we prefer to say,

  1. description, 
  2. prediction, and
  3. prescription. 

They probably don’t need to worry much about the description stage, which most companies have already been through. That was all about collecting data in databases (which had to be invented for the purpose), a development that gave managers new insights into the past. OLAP—online analytical processing—is now pretty routine and well established in most large organizations.

There’s a much more urgent need to embrace the prediction stage, which is happening right now. Today’s cutting-edge technology already allows businesses not only to look at their historical data but also to predict behavior or outcomes in the future—for example, by helping credit-risk officers at banks to assess which customers are most likely to default or by enabling telcos to anticipate which customers are especially prone to “churn” in the near term (exhibit).
Exhibit
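As a concrete, toy illustration of the prediction stage (not any particular telco's or bank's model), the sketch below fits a classifier to a handful of invented customer records and scores a new customer's churn risk; the feature choices are assumptions made for the example.

```python
# Toy "prediction stage" sketch: flag customers likely to churn.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Invented records; columns: monthly spend, support calls in last 90 days,
# months as a customer. Label: 1 = churned within the next quarter.
X = np.array([
    [45.0, 0, 36], [80.0, 5, 6], [30.0, 1, 48],
    [95.0, 7, 3],  [55.0, 2, 24], [70.0, 6, 5],
])
y = np.array([0, 1, 0, 1, 0, 1])

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Score a new (invented) customer.
new_customer = np.array([[85.0, 4, 4]])
print("Estimated churn probability:", round(model.predict_proba(new_customer)[0, 1], 2))
```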
 
A frequent concern for the C-suite when it embarks on the prediction stage is the quality of the data. That concern often paralyzes executives. In our experience, though, the last decade’s IT investments have equipped most companies with sufficient information to obtain new insights even from incomplete, messy data sets, provided of course that those companies choose the right algorithm. Adding exotic new data sources may be of only marginal benefit compared with what can be mined from existing data warehouses. Confronting that challenge is the task of the “chief data scientist.”
Prescription—the third and most advanced stage of machine learning—is the opportunity of the future and must therefore command strong C-suite attention. It is, after all, not enough just to predict what customers are going to do; only by understanding why they are going to do it can companies encourage or deter that behavior in the future. Technically, today’s machine-learning algorithms, aided by human translators, can already do this. For example, an international bank concerned about the scale of defaults in its retail business recently identified a group of customers who had suddenly switched from using credit cards during the day to using them in the middle of the night. That pattern was accompanied by a steep decrease in their savings rate. After consulting branch managers, the bank further discovered that the people behaving in this way were also coping with some recent stressful event. As a result, all customers tagged by the algorithm as members of that microsegment were automatically given a new limit on their credit cards and offered financial advice.
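The article does not describe the bank's actual method, but a behavioral microsegment like the one above could be surfaced with something as simple as clustering customers on a few engineered features; the sketch below uses invented data and invented feature definitions.

```python
# Illustrative sketch only: surface a behavioural microsegment by clustering.
import numpy as np
from sklearn.cluster import KMeans

# Invented features per customer: share of card transactions made between
# midnight and 5 a.m., and the change in savings rate over the last quarter.
features = np.array([
    [0.02, +0.01], [0.05, 0.00], [0.03, +0.02],   # typical behaviour
    [0.60, -0.30], [0.55, -0.25], [0.70, -0.40],  # night-time spending, savings falling
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)

# Flag the cluster whose centre combines heavy night-time use with a savings drop.
centers = kmeans.cluster_centers_
risk_cluster = int(np.argmax(centers[:, 0] - centers[:, 1]))
flagged = np.where(kmeans.labels_ == risk_cluster)[0]
print("Customers tagged for review (row indices):", flagged.tolist())
```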
 
The prescription stage of machine learning, ushering in a new era of man–machine collaboration, will require the biggest change in the way we work. While the machine identifies patterns, the human translator’s responsibility will be to interpret them for different microsegments and to recommend a course of action. Here the C-suite must be directly involved in the crafting and formulation of the objectives that such algorithms attempt to optimize.
6. This sounds awfully like automation replacing humans in the long run. Are we any nearer to knowing whether machines will replace managers?
It’s true that change is coming (and data are generated) so quickly that human-in-the-loop involvement in all decision making is rapidly becoming impractical. Looking three to five years out, we expect to see far higher levels of artificial intelligence, as well as the development of distributed autonomous corporations. These self-motivating, self-contained agents, formed as corporations, will be able to carry out set objectives autonomously, without any direct human supervision. Some DACs will certainly become self-programming.
One current of opinion sees distributed autonomous corporations as threatening and inimical to our culture. But by the time they fully evolve, machine learning will have become culturally invisible in the same way technological inventions of the 20th century disappeared into the background. The role of humans will be to direct and guide the algorithms as they attempt to achieve the objectives that they are given. That is one lesson of the automatic-trading algorithms which wreaked such damage during the financial crisis of 2008.
No matter what fresh insights computers unearth, only human managers can decide the essential questions, such as which critical business problems a company is really trying to solve. Just as human colleagues need regular reviews and assessments, so these “brilliant machines” and their works will also need to be regularly evaluated, refined—and, who knows, perhaps even fired or told to pursue entirely different paths—by executives with experience, judgment, and domain expertise.
The winners will be neither machines alone, nor humans alone, but the two working together effectively.
7. So in the long term there’s no need to worry?
It’s hard to be sure, but distributed autonomous corporations and machine learning should be high on the C-suite agenda. We anticipate a time when the philosophical discussion of what intelligence, artificial or otherwise, might be will end because there will be no such thing as intelligence—just processes. If distributed autonomous corporations act intelligently, perform intelligently, and respond intelligently, we will cease to debate whether high-level intelligence other than the human variety exists. In the meantime, we must all think about what we want these entities to do, the way we want them to behave, and how we are going to work with them.
About the authors
Dorian Pyle is a data expert in McKinsey’s Miami office, and Cristina San Jose is a principal in the Madrid office.
ORIGINAL: McKinsey
by Dorian Pyle and Cristina San Jose
June 2015

Meet Amelia, the AI Platform That Could Change the Future of IT

By admin,

Chetan Dube. Image credit: Photography by Jesse Dittmar

Her name is Amelia, and she is the complete package: smart, sophisticated, industrious and loyal. No wonder her boss, Chetan Dube, can’t get her out of his head.

“My wife is convinced I’m having an affair with Amelia,” Dube says, leaning forward conspiratorially. “I have a great deal of passion and infatuation with her.”

He’s not alone. Amelia beguiles everyone she meets, and those in the know can’t stop buzzing about her. The blue-eyed blonde’s star is rising so fast that if she were a Hollywood ingénue or fashion model, the tabloids would proclaim her an “It” girl, but the tag doesn’t really apply. Amelia is more of an IT girl, you see. In fact, she’s all IT.

Amelia is an artificial intelligence platform created by Dube’s managed IT services firm IPsoft, a virtual agent avatar poised to redefine how enterprises operate by automating and enhancing a wide range of business processes. The product of an obsessive and still-ongoing 16-year developmental cycle, she—yes, everyone at IPsoft speaks about Amelia using feminine pronouns—leverages cognitive technologies to interface with consumers and colleagues in astoundingly human terms,

  • parsing questions,
  • analyzing intent and
  • even sensing emotions to resolve issues more efficiently and effectively than flesh-and-blood customer service representatives.

Install Amelia in a call center, for example, and her patent-pending intelligence algorithms absorb in a matter of seconds the same instruction manuals and guidelines that human staffers spend weeks or even months memorizing. Instead of simply recognizing individual words, Amelia grasps the deeper implications of what she reads, applying logic and making connections between concepts. She relies on that baseline information to reply to customer email and answer phone calls; if she understands the query, she executes the steps necessary to resolve the issue, and if she doesn’t know the answer, she scans the web or the corporate intranet for clues. Only when Amelia cannot locate the relevant information does she escalate the case to a human expert, observing the response and filing it away for the next time the same scenario unfolds.
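IPsoft has not published Amelia's internals, but the escalation loop described above can be sketched generically: answer from a learned knowledge base when confident, fall back to a wider search, and otherwise hand off to a human whose answer is stored for next time. Everything in the sketch below (function names, thresholds, the toy knowledge base) is invented for illustration.

```python
# Generic sketch of a tiered answer-or-escalate loop; not IPsoft's implementation.
from difflib import SequenceMatcher
from typing import Optional

# Invented seed knowledge; in practice this would come from ingested manuals.
knowledge_base = {
    "how do i reset my password": "Use the self-service portal and follow the reset link.",
}

def search_intranet(question: str) -> Optional[str]:
    """Placeholder for a wider document search; returns None when nothing is found."""
    return None

def ask_human_expert(question: str) -> str:
    """Placeholder for routing the case to a human agent."""
    return input(f"[escalated to human] {question}\nAnswer: ")

def handle(question: str, threshold: float = 0.75) -> str:
    q = question.lower().strip("?! .")
    # 1) Try the learned knowledge base first.
    scored = [(SequenceMatcher(None, k, q).ratio(), k) for k in knowledge_base]
    if scored:
        score, best = max(scored)
        if score >= threshold:
            return knowledge_base[best]
    # 2) Fall back to a wider search.
    hit = search_intranet(q)
    if hit:
        return hit
    # 3) Escalate, then remember the human's answer for next time.
    answer = ask_human_expert(question)
    knowledge_base[q] = answer
    return answer

print(handle("How do I reset my password?"))
```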


Silicon Valley Then and Now: To Invent the Future, You Must Understand the Past

By admin,

William Shockley’s employees toast him for his Nobel Prize, 1956. Photo courtesy Computer History Museum.
“You can’t really understand what is going on now without understanding what came before.”
Steve Jobs is explaining why, as a young man, he spent so much time with the Silicon Valley entrepreneurs a generation older, men like Robert Noyce, Andy Grove, and Regis McKenna.
It’s a beautiful Saturday morning in May, 2003, and I’m sitting next to Jobs on his living room sofa, interviewing him for a book I’m writing. I ask him to tell me more about why he wanted, as he put it, “to smell that second wonderful era of the valley, the semiconductor companies leading into the computer.” Why, I want to know, is it not enough to stand on the shoulders of giants? Why does he want to pick their brains?
“It’s like that Schopenhauer quote about the conjurer,” he says. When I look blank, he tells me to wait and then dashes upstairs. He comes down a minute later holding a book and reading aloud:
Steve Jobs and Robert Noyce.
Courtesy Leslie Berlin.
He who lives to see two or three generations is like a man who sits some time in the conjurer’s booth at a fair, and witnesses the performance twice or thrice in succession. The tricks were meant to be seen only once, and when they are no longer a novelty and cease to deceive, their effect is gone.
History, Jobs understood, gave him a chance to see — and see through — the conjurer’s tricks before they happened to him, so he would know how to handle them.
Flash forward eleven years. It’s 2014, and I am going to see Robert W. Taylor. In 1966, Taylor convinced the Department of Defense to build the ARPANET that eventually formed the core of the Internet. He went on to run the famous Xerox PARC Computer Science Lab that developed the first modern personal computer. For a finishing touch, he led one of the teams at DEC behind the world’s first blazingly fast search engine — three years before Google was founded.
Visiting Taylor is like driving into a Silicon Valley time machine. You zip past the venture capital firms on Sand Hill Road, over the 280 freeway, and down a twisty two-lane street that is nearly impassable on weekends, thanks to the packs of lycra-clad cyclists on multi-thousand-dollar bikes raising their cardio thresholds along the steep climbs. A sharp turn and you enter what seems to be another world, wooded and cool, the coastal redwoods dense along the hills. Cell phone signals fade in and out in this part of Woodside, far above Buck’s Restaurant where power deals are negotiated over early-morning cups of coffee. GPS tries valiantly to ascertain a location — and then gives up.
When I get to Taylor’s home on a hill overlooking the Valley, he tells me about another visitor who recently took that drive, apparently driven by the same curiosity that Steve Jobs had: Mark Zuckerberg, along with some colleagues at the company he founded, Facebook.
“Zuckerberg must have heard about me in some historical sense,” Taylor recalls in his Texas drawl. “He wanted to see what I was all about, I guess.”
 
To invent the future, you must understand the past.

I am a historian, and my subject matter is Silicon Valley. So I’m not surprised that Jobs and Zuckerberg both understood that the Valley’s past matters today and that the lessons of history can take innovation further. When I talk to other founders and participants in the area, they also want to hear what happened before. Their questions usually boil down to two:

  1. Why did Silicon Valley happen in the first place, and 
  2. why has it remained at the epicenter of the global tech economy for so long?
I think I can answer those questions.

First, a definition of terms. When I use the term “Silicon Valley,” I am referring quite specifically to the narrow stretch of the San Francisco Peninsula that is sandwiched between the bay to the east and the Coastal Range to the west. (Yes, Silicon Valley is a physical valley — there are hills on the far side of the bay.) Silicon Valley has traditionally comprised Santa Clara County and the southern tip of San Mateo County. In the past few years, parts of Alameda County and the city of San Francisco can also legitimately be considered satellites of Silicon Valley, or perhaps part of “Greater Silicon Valley.”

The name “Silicon Valley,” incidentally, was popularized in 1971 by a hard-drinking, story-chasing, gossip-mongering journalist named Don Hoefler, who wrote for a trade rag called Electronic News. Before that, the region was called the “Valley of the Heart’s Delight,” renowned for its apricot, plum, cherry and almond orchards.
“This was down-home farming, three generations of tranquility, beauty, health, and productivity based on family farms of small acreage but bountiful production,” reminisced Wallace Stegner, the famed Western writer. To see what the Valley looked like then, watch the first few minutes of this wonderful 1948 promotional video for the “Valley of the Heart’s Delight.”

Three historical forces — technical, cultural, and financial — created Silicon Valley.
 
Technology
On the technical side, in some sense the Valley got lucky. In 1955, one of the inventors of the transistor, William Shockley, moved back to Palo Alto, where he had spent some of his childhood. Shockley was also a brilliant physicist — he would share the Nobel Prize in 1956 — an outstanding teacher, and a terrible entrepreneur and boss. Because he was a brilliant scientist and inventor, Shockley was able to recruit some of the brightest young researchers in the country — Shockley called them “hot minds” — to come work for him 3,000 miles from the research-intensive businesses and laboratories that lined the Eastern Seaboard from Boston to Bell Labs in New Jersey. Because Shockley was an outstanding teacher, he got these young scientists, all but one of whom had never built transistors, to the point that they not only understood the tiny devices but began innovating in the field of semiconductor electronics on their own.
And because Shockley was a terrible boss — the sort of boss who posted salaries and subjected his employees to lie-detector tests — many who came to work for him could not wait to get away and work for someone else. That someone else, it turned out, would be themselves. The move by eight of Shockley’s employees to launch their own semiconductor operation called Fairchild Semiconductor in 1957 marked the first significant modern startup company in Silicon Valley. After Fairchild Semiconductor blew apart in the late-1960s, employees launched dozens of new companies (including Intel, National and AMD) that are collectively called the Fairchildren.
The Fairchild 8: Gordon Moore, Sheldon Roberts, Eugene Kleiner, Robert Noyce, Victor Grinich, Julius Blank, Jean Hoerni, and Jay Last. Photo courtesy Wayne Miller/Magnum Photos.
Equally important for the Valley’s future was the technology that Shockley taught his employees to build: the transistor. Nearly everything that we associate with the modern technology revolution and Silicon Valley can be traced back to the tiny, tiny transistor.
 
Think of the transistor as the grain of sand at the core of the Silicon Valley pearl. The next layer of the pearl appeared when people strung together transistors, along with other discrete electronic components like resistors and capacitors, to make an entire electronic circuit on a single slice of silicon. This new device was called a microchip. Then someone came up with a specialized microchip that could be programmed: the microprocessor. The first pocket calculators were built around these microprocessors. Then someone figured out that it was possible to combine a microprocessor with other components and a screen — that was a computer. People wrote code for those computers to serve as operating systems and software on top of those systems. At some point people began connecting these computers to each other: networking. Then people realized it should be possible to “virtualize” these computers and store their contents off-site in a “cloud,” and it was also possible to search across the information stored in multiple computers. Then the networked computer was shrunk — keeping the key components of screen, keyboard, and pointing device (today a finger) — to build tablets and palm-sized machines called smart phones. Then people began writing apps for those mobile devices … .
You get the picture. These changes all kept pace to the metronomic tick-tock of Moore’s Law.
The skills learned through building and commercializing one layer of the pearl underpinned and supported the development of the next layer or developments in related industries. Apple, for instance, is a company that people often speak of as sui generis, but Apple Computer’s early key employees had worked at Intel, Atari, or Hewlett-Packard. Apple’s venture capital backers had either backed Fairchild or Intel or worked there. The famous Macintosh, with its user-friendly aspect, graphical user interface, overlapping windows, and mouse, was inspired by a 1979 visit that Steve Jobs and a group of engineers paid to Xerox PARC, located in the Stanford Research Park. In other words, Apple was the product of its Silicon Valley environment and technological roots.
Culture
This brings us to the second force behind the birth of Silicon Valley: culture. When Shockley, his transistor and his recruits arrived in 1955, the valley was still largely agricultural, and the small local industry had a distinctly high-tech (or as they would have said then, “space age”) focus. The largest employer was defense contractor Lockheed. IBM was about to open a small research facility. Hewlett-Packard, one of the few homegrown tech companies in Silicon Valley before the 1950s, was more than a decade old.
Stanford, meanwhile, was actively trying to build up its physics and engineering departments. Professor (and Provost from 1955 to 1965) Frederick Terman worried about a “brain drain” of Stanford graduates to the East Coast, where jobs were plentiful. So he worked with President J.E. Wallace Sterling to create what Terman called “a community of technical scholars” in which the links between industry and academia were fluid. This meant that as the new transistor-cum-microchip companies began to grow, technically knowledgeable engineers were already there.
Woz and Jobs.
Photo courtesy Computer History Museum.
These trends only accelerated as the population exploded. Between 1950 and 1970, the population of Santa Clara County tripled, from roughly 300,000 residents to more than 1 million. It was as if a new person moved into Santa Clara County every 15 minutes for 20 years. The newcomers were, overall, younger and better educated than the people already in the area. The Valley changed from a community of aging farmers with high school diplomas to one filled with 20-something PhDs.
All these new people pouring into what had been an agricultural region meant that it was possible to create a business environment around the needs of new companies coming up, rather than adapting an existing business culture to accommodate the new industries. In what would become a self-perpetuating cycle, everything from specialized law firms, recruiting operations and prototyping facilities; to liberal stock option plans; to zoning laws; to community college course offerings developed to support a tech-based business infrastructure.
Historian Richard White says that the modern American West was “born modern” because the population followed, rather than preceded, connections to national and international markets. Silicon Valley was born post-modern, with those connections not only in place but so taken for granted that people were comfortable experimenting with new types of business structures and approaches strikingly different from the traditional East Coast business practices with roots nearly two centuries old.
From the beginning, Silicon Valley entrepreneurs saw themselves in direct opposition to their East Coast counterparts. The westerners saw themselves as cowboys and pioneers, working on a “new frontier” where people dared greatly and failure was not shameful but just the quickest way to learn a hard lesson. In the 1970s, with the influence of the counterculture’s epicenter at the corner of Haight and Ashbury, only an easy drive up the freeway, Silicon Valley companies also became famous for their laid-back, dressed-down culture, and for their products, such as video games and personal computers, that brought advanced technology to “the rest of us.”
 
Money

The third key component driving the birth of Silicon Valley, along with the right technology seed falling into a particularly rich and receptive cultural soil, was money. Again, timing was crucial. Silicon Valley was kick-started by federal dollars. Whether it was

  • the Department of Defense buying 100% of the earliest microchips, 
  • Hewlett-Packard and Lockheed selling products to military customers, or 
  • federal research money pouring into Stanford, 

Silicon Valley was the beneficiary of Cold War fears that translated to the Department of Defense being willing to spend almost anything on advanced electronics and electronic systems. The government, in effect, served as the Valley’s first venture capitalist.

The first significant wave of venture capital firms hit Silicon Valley in the 1970s. Both Sequoia Capital and Kleiner Perkins Caufield and Byers were founded by Fairchild alumni in 1972. Between them, these venture firms would go on to fund Amazon, Apple, Cisco, Dropbox, Electronic Arts, Facebook, Genentech, Google, Instagram, Intuit, and LinkedIn — and that is just the first half of the alphabet.
This model of one generation succeeding and then turning around to offer the next generation of entrepreneurs financial support and managerial expertise is one of the most important and under-recognized secrets to Silicon Valley’s ongoing success. Robert Noyce called it “re-stocking the stream I fished from.” Steve Jobs, in his remarkable 2005 commencement address at Stanford, used the analogy of a baton being passed from one runner to another in an ongoing relay across time.
 
So that’s how Silicon Valley emerged. Why has it endured?

After all, if modern Silicon Valley was born in the 1950s, the region is now in its seventh decade. For roughly two-thirds of that time, Valley watchers have predicted its imminent demise, usually with an allusion to Detroit.

  • First, the oil shocks and energy crises of the 1970s were going to shut down the fabs (specialized factories) that build microchips. 
  • In the 1980s, Japanese competition was the concern. 
  • The bursting of the dot-com bubble
  • the rise of formidable tech regions in other parts of the world
  • the Internet and mobile technologies that make it possible to work from anywhere: 

all have been heard as Silicon Valley’s death knell.

The Valley of Heart’s Delight, pre-technology. OSU Special Collections.
The Valley economy is notorious for its cyclicity, but it has indeed endured. Here we are in 2015, a year in which more patents, more IPOs, and a larger share of venture capital and angel investments have come from the Valley than ever before. As a recent report from Joint Venture Silicon Valley put it, “We’ve extended a four-year streak of job growth, we are among the highest income regions in the country, and we have the biggest share of the nation’s high-growth, high-wage sectors.” Would-be entrepreneurs continue to move to the Valley from all over the world. Even companies that are not started in Silicon Valley move there (witness Facebook).
Why? What is behind Silicon Valley’s staying power? The answer is that many of the factors that launched Silicon Valley in the 1950s continue to underpin its strength today even as the Valley economy has proven quite adaptable.
Technology
The Valley still glides in the long wake of the transistor, both in terms of technology and in terms of the infrastructure to support companies that rely on semiconductor technology. Remember the pearl. At the same time, when new industries not related directly to semiconductors have sprung up in the Valley — industries like biotechnology — they have taken advantage of the infrastructure and support structure already in place.
Money
Venture capital has remained the dominant source of funding for young companies in Silicon Valley. In 2014, some $14.5 billion in venture capital was invested in the Valley, accounting for 43 percent of all venture capital investments in the country. More than half of Silicon Valley venture capital went to software investments, and the rise of software, too, helps to explain the recent migration of many tech companies to San Francisco. (San Francisco, it should be noted, accounted for nearly half of the $14.5 billion figure.) Building microchips or computers or specialized production equipment — things that used to happen in Silicon Valley — requires many people, huge fabrication operations and access to specialized chemicals and treatment facilities, often on large swaths of land. Building software requires none of these things; in fact, software engineers need little more than a computer and some server space in the cloud to do their jobs. It is thus easy for software companies to locate in cities like San Francisco, where many young techies want to live.
Culture
The Valley continues to be a magnet for young, educated people. The flood of intranational immigrants to Silicon Valley from other parts of the country in the second half of the twentieth century has become, in the twenty-first century, a flood of international immigrants from all over the world. It is impossible to overstate the importance of immigrants to the region and to the modern tech industry. Nearly 37 percent of the people in Silicon Valley today were born outside of the United States — of these, more than 60 percent were born in Asia and 20 percent in Mexico. Half of Silicon Valley households speak a language other than English in the home. Sixty-five percent of the people with bachelor’s degrees working in science and engineering in the Valley were born in another country. Let me say that again: roughly two-thirds of the college-educated people working in the Valley’s science and engineering industries are foreign-born. (Nearly half the college graduates working in all industries in the Valley are foreign-born.)
Here’s another way to look at it: From 1995 to 2005, more than half of all Silicon Valley startups had at least one founder who was born outside the United States.[13] Their businesses — companies like Google and eBay — have created American jobs and billions of dollars in American market capitalization.
Silicon Valley, now, as in the past, is built and sustained by immigrants.
Gordon Moore and Robert Noyce at Intel in 1970. Photo courtesy Intel.
Stanford also remains at the center of the action. By one estimate, from 2012, companies formed by Stanford entrepreneurs generate world revenues of $2.7 trillion annually and have created 5.4 million jobs since the 1930s. This figure includes companies whose primary business is not tech: companies like Nike, Gap, and Trader Joe’s. But even if you just look at Silicon Valley companies that came out of Stanford, the list is impressive, including Cisco, Google, HP, IDEO, Instagram, MIPS, Netscape, NVIDIA, Silicon Graphics, Snapchat, Sun, Varian, VMware, and Yahoo. Indeed, some critics have complained that Stanford has become overly focused on student entrepreneurship in recent years — an allegation that I disagree with but is neatly encapsulated in a 2012 New Yorker article that called the university “Get Rich U.”
 
Change
The above represent important continuities, but change has also been vital to the region’s longevity. Silicon Valley has been re-inventing itself for decades, a trend that is evident with a quick look at the emerging or leading technologies in the area:
• 1940s: instrumentation
• 1950s/60s: microchips
• 1970s: biotech, consumer electronics using chips (PC, video game, etc)
• 1980s: software, networking
• 1990s: web, search
• 2000s: cloud, mobile, social networking
The overriding sense of what it means to be in Silicon Valley — the lionization of risk-taking, the David-versus-Goliath stories, the persistent belief that failure teaches important business lessons even when the data show otherwise — has not changed, but over the past few years, a new trope has appeared alongside the Western metaphors of Gold Rushes and Wild Wests: Disruption.
“Disruption” is the notion, roughly based on ideas first proposed by Joseph Schumpeter in 1942, that a little company can come in and — usually with technology — completely remake an industry that seemed established and largely impervious to change. So: Uber is disrupting the taxi industry. Airbnb is disrupting the hotel industry. The disruption story is, in its essentials, the same as the Western tale: a new approach comes out of nowhere to change the establishment world for the better. You can hear the same themes of adventure, anti-establishment thinking, opportunity and risk-taking. It’s the same song, with different lyrics.
The shift to the new language may reflect the key role that immigrants play in today’s Silicon Valley. Many educated, working adults in the region arrived with no cultural background that promoted cowboys or pioneers. These immigrants did not even travel west to get to Silicon Valley. They came east, or north. It will be interesting to see how long the Western metaphor survives this cultural shift. I’m betting that it’s on its way out.
Something else new has been happening in Silicon Valley culture in the past decade. The anti-establishment little guys have become the establishment big guys. Apple settled an anti-trust case. You are hearing about Silicon Valley companies like Facebook or Google collecting massive amounts of data on American citizens, some of which has ended up in the hands of the NSA. What happens when Silicon Valley companies start looking like the Big Brother from the famous 1984 Apple Macintosh commercial?
A Brief Feint at the Future
I opened these musings by defining Silicon Valley as a physical location. I’m often asked how or whether place will continue to matter in the age of mobile technologies, the Internet and connections that will only get faster. In other words, is region an outdated concept?
I believe that physical location will continue to be relevant when it comes to technological innovation. Proximity matters. Creativity cannot be scheduled for the particular half-hour block of time that everyone has free to teleconference. Important work can be done remotely, but the kinds of conversations that lead to real breakthroughs often happen serendipitously. People run into each other down the hall, or in a coffee shop, or at a religious service, or at the gym, or on the sidelines of a kid’s soccer game.
It is precisely because place will continue to matter that the biggest threats to Silicon Valley’s future have local and national parameters. Silicon Valley’s innovation economy depends on its being able to attract the brightest minds in the world; they act as a constant innovation “refresh” button. If Silicon Valley loses its allure for those people —

  • if the quality of public schools declines so that their children cannot receive good educations, 
  • if housing prices remain so astronomical that fewer than half of first-time buyers can afford the median-priced home, or 
  • if immigration policy makes it difficult for high-skilled immigrants who want to stay here to do so — 

the Valley’s status, and that of the United States economy, will be threatened. Also worrisome: ever-expanding gaps between the highest and lowest earners in Silicon Valley; stagnant wages for low- and middle-skilled workers; and the persistent reality that as a group, men in Silicon Valley earn more than women at the same level of educational attainment. Moreover, today in Silicon Valley, the lowest-earning racial/ethnic group earns 70 percent less than the highest earning group, according to the Joint Venture report. The stark reality, with apologies to George Orwell, is that even in the Valley’s vaunted egalitarian culture, some people are more equal than others.

Another threat is the continuing decline in federal support for basic research. Venture capital is important for developing products into companies, but the federal government still funds the great majority of basic research in this country. Silicon Valley is highly dependent on that basic research — “No Basic Research, No iPhone” is my favorite title from a recently released report on research and development in the United States. Today, the US occupies tenth place among OECD nations in overall R&D investment. That is investment as a percentage of GDP — somewhere between 2.5 and 3 percent. This represents a 13 percent drop below where we were ten years ago (again as a percentage of GDP). China is projected to outspend the United States in R&D within the next ten years, both in absolute terms and as a fraction of economic development.
People around the world have tried to reproduce Silicon Valley. No one has succeeded.
And no one will succeed because no place else — including Silicon Valley itself in its 2015 incarnation — could ever reproduce the unique concoction of academic research, technology, countercultural ideals and a California-specific type of Gold Rush reputation that attracts people with a high tolerance for risk and very little to lose. Partially through the passage of time, partially through deliberate effort by some entrepreneurs who tried to “give back” and others who tried to make a buck, this culture has become self-perpetuating.
The drive to build another Silicon Valley may be doomed to fail, but that is not necessarily bad news for regional planners elsewhere. The high-tech economy is not a zero-sum game. The twenty-first century global technology economy is large and complex enough for multiple regions to thrive for decades to come — including Silicon Valley, if the threats it faces are taken seriously.

Robert Reich: The Nightmarish Future for American Jobs and Incomes Is Here

By admin,

Even knowledge-based jobs will disappear as wealth gets more concentrated at the top in the next 10 years.
Photo Credit: via YouTube
What will happen to American jobs, incomes, and wealth a decade from now?
Predictions are hazardous but survivable. In 1991, in my book The Work of Nations, I separated almost all work into three categories, and then predicted what would happen to each of them.
The first category I called “routine production services,” which entailed the kind of repetitive tasks performed by the old foot soldiers of American capitalism through most of the twentieth century — done over and over, on an assembly line or in an office.
I estimated that such work then constituted about one-quarter of all jobs in the United States, but would decline steadily as such jobs were replaced by
  • new labor-saving technologies and
  • by workers in developing nations eager to do them for far lower wages.

I also assumed the pay of remaining routine production workers in America would drop, for similar reasons.

I was not far wrong.
The second category I called “in-person services.” This work had to be provided personally because the “human touch” was essential to it. It included retail sales workers, hotel and restaurant workers, nursing-home aides, realtors, childcare workers, home health-care aides, flight attendants, physical therapists, and security guards, among many others.
In 1990, by my estimate, such workers accounted for about 30 percent of all jobs in America, and I predicted their numbers would grow because — given that their services were delivered in person — neither advancing technologies nor foreign-based workers would be able to replace them.
I also predicted their pay would drop. They would be competing with
  • a large number of former routine production workers, who could only find jobs in the “in-person” sector.
  • They would also be competing with labor-saving machinery such as automated tellers, computerized cashiers, automatic car washes, robotized vending machines, and self-service gas pumps —
  • as well as “personal computers linked to television screens” through which “tomorrow’s consumers will be able to buy furniture, appliances, and all sorts of electronic toys from their living rooms — examining the merchandise from all angles, selecting whatever color, size, special features, and price seem most appealing, and then transmitting the order instantly to warehouses from which the selections will be shipped directly to their homes. 
  • So, too, with financial transactions, airline and hotel reservations, rental car agreements, and similar contracts, which will be executed between consumers in their homes and computer banks somewhere else on the globe.”

Here again, my predictions were not far off. But I didn’t foresee how quickly advanced technologies would begin to make inroads even on in-person services. Ten years from now I expect Amazon will have wiped out many of today’s retail jobs, and Google‘s self-driving car will eliminate many bus drivers, truck drivers, sanitation workers, and even Uber drivers.

The third job category I named “symbolic-analytic services.” Here I included all the problem-solving, problem-identifying, and strategic thinking that go into the manipulation of symbols—data, words, oral and visual representations.
I estimated in 1990 that symbolic analysts accounted for 20 percent of all American jobs, and expected their share to continue to grow, as would their incomes, because the demand for people to do these jobs would continue to outrun the supply of people capable of doing them. This widening disconnect between symbolic-analytic jobs and the other two major categories of work would, I predicted, be the major force driving widening inequality.
Again, I wasn’t far off. But I didn’t anticipate how quickly or how wide the divide would become, or how great a toll inequality and economic insecurity would take. I would never have expected, for example, that the life expectancy of an American white woman without a high school degree would decrease by five years between 1990 and 2008.
We are now faced not just with labor-replacing technologies but with knowledge-replacing technologies. The combination of
  • advanced sensors,
  • voice recognition,
  • artificial intelligence,
  • big data,
  • text-mining, and
  • pattern-recognition algorithms,

is generating smart robots capable of quickly learning human actions, and even learning from one another. A revolution in life sciences is also underway, allowing drugs to be tailored to a patient’s particular condition and genome.

If the current trend continues, many more symbolic analysts will be replaced in coming years. The two largest professionally intensive sectors of the United States — health care and education — will be particularly affected because of increasing pressures to hold down costs and, at the same time, the increasing accessibility of expert machines.
We are on the verge of a wave of mobile health applications, for example, measuring everything from calories to blood pressure, along with software programs capable of performing the same functions as costly medical devices and diagnostic software that can tell you what it all means and what to do about it.
Schools and universities will likewise be reorganized around smart machines (although faculties will scream all the way). Many teachers and university professors are already on the way to being replaced by software — so-called “MOOCs” (Massive Open Online Courses) and interactive online textbooks — along with adjuncts that guide student learning.
As a result, income and wealth will become even more concentrated than they are today. Those who create or invest in blockbuster ideas will earn unprecedented sums and returns. The corollary is they will have enormous political power. But most people will not share in the monetary gains, and their political power will disappear. The middle class’s share of the total economic pie will continue to shrink, while the share going to the very top will continue to grow.
But the current trend is not preordained to last, and only the most rigid technological determinist would assume this to be our inevitable fate. We can — indeed, I believe we must — ignite a political movement to reorganize the economy for the benefit of the many, rather than for the lavish lifestyles of a precious few and their heirs. (I have more to say on this in my upcoming book, Saving Capitalism: For the Many, Not the Few, out at the end of September.)
Robert B. Reich has served in three national administrations, most recently as secretary of labor under President Bill Clinton. He also served on President Obama’s transition advisory board. His latest book is “Aftershock: The Next Economy and America’s Future.” His homepage is www.robertreich.org.
May 7, 2015
ROBERT B. REICH, Chancellor’s Professor of Public Policy at the University of California at Berkeley and Senior Fellow at the Blum Center for Developing Economies, was Secretary of Labor in the Clinton administration. Time Magazine named him one of the ten most effective cabinet secretaries of the twentieth century. He has written thirteen books, including the best sellers “Aftershock” and “The Work of Nations.” His latest, “Beyond Outrage,” is now out in paperback. He is also a founding editor of the American Prospect magazine and chairman of Common Cause. His new film, “Inequality for All,” is now available on Netflix, iTunes, DVD, and On Demand.

Rise of the Machines: The Future has Lots of Robots, Few Jobs for Humans

By admin,

ORIGINAL: Wired
Martin Ford
The robots haven’t just landed in the workplace—they’re expanding skills, moving up the corporate ladder, showing awesome productivity and retention rates, and increasingly shoving aside their human counterparts. One multi-tasker bot, from Momentum Machines, can make (and flip) a gourmet hamburger in 10 seconds and could soon replace an entire McDonald’s crew. A manufacturing device from Universal Robots doesn’t just solder, paint, screw, glue, and grasp—it builds new parts for itself on the fly when they wear out or bust. And just this week, Google won a patent to start building worker robots with personalities.
Fast Food Company Develops Robots

 

Universal Robots: UR3: The world’s most flexible, light-weight table-top robot to work alongside humans
 As intelligent machines begin their march on labor and become more sophisticated and specialized than first-generation cousins like Roomba or Siri, they have an outspoken champion in their corner: author and entrepreneur Martin Ford. In his new book, Rise of the Robots, he argues that AI and robotics will soon overhaul our economy. 
There’s some logic to the thesis, of course, and other economists such as Andrew (The Second Machine Age) McAfee have sided generally with Ford’s outlook. Oxford University researchers have estimated that 47 percent of U.S. jobs could be automated within the next two decades. And if even half that number is closer to the mark, workers are in for a rude awakening.
In Ford’s vision, a full-on worker revolt is on the horizon, followed by a radically new economic state whereby humans will live more productive and entrepreneurial lives, subsisting on guaranteed incomes generated by our amazing machines. (Don’t laugh — even some conservative influencers believe this may be the ultimate means of solving the wealth-inequality dilemma.)
Sound a little nuts? We thought so—we’re human, after all—so we invited Ford to defend his turf.
Rise of the Robots
Critics say your vision of a jobless future isn’t founded in good research or logic. What makes you so convinced this phenomenon is real? 
I see the advances happening in technology and it’s becoming evident that computers, machines, robots, and algorithms are going to be able to do most of the routine, repetitive types of jobs. That’s the essence of what machine learning is all about. What types of jobs are on some level fundamentally predictable? A lot of different skill levels fall into that category. It’s not just about lower-skilled jobs either. People with college degrees, even professional degrees, people like lawyers are doing things that ultimately are predictable. A lot of those jobs are going to be susceptible over time. 
Right now there’s still a lot of debate over it. There are economists who think it’s totally wrong, that problems really stem from things like globalization or the fact that we’ve wiped out unions or haven’t raised the minimum wage. Those are all important, but I tend to believe that technology is a bigger issue, especially as we look to the future. 
 
Eventually I think we’ll get to the point where there’s less debate about whether this is really happening or not. There will be more widespread agreement that it really is a problem and at that point we’ll have to figure out what to do about it. 
 
Aren’t you relying on some pretty radical and unlikely assumptions? 
People who are very skeptical tend to look at the historical record. It’s true that the economy has always adapted over time. It has created new kinds of jobs. The classic example of that is agriculture. In the 1800s, 80 percent of the U.S. labor force worked on farms. Today it’s 2 percent. Obviously mechanization didn’t destroy the economy; it made it better off. Food is now really cheap compared to what it was relative to income, and as a result people have money to spend on other things and they’ve transitioned to jobs in other areas. Skeptics say that will happen again. 
 
The agricultural revolution was about specialized technology that couldn’t be implemented in other industries. You couldn’t take the farm machinery and have it go flip hamburgers. Information technology is totally different. It’s a broad-based general purpose technology. There isn’t a new place for all these workers to move. 
 
You can imagine lots of new industries—nanotechnology and synthetic biology—but they won’t employ many people. They’ll use lots of technology, rely on big computing centers, and be heavily automated. 
So in the all-automated economy, what will ambitious 20-somethings choose to do with their lives and careers? 
My proposed solution is to have some kind of a guaranteed income that incentivizes education. We don’t want people to get halfway through high school and say, ‘Well if I drop out I’m still going to get the same income as everyone else.’ 
 
Then I believe that a guaranteed income would actually result in more entrepreneurship. A lot of people would start businesses just as they do today. The problem with these types of businesses you can start online today is it’s hard to put enough together to generate a middle-class income. 
 
If people had an income floor, and if the incentives were such that on top of that they could do other things and still keep that extra money, without having it all taxed away, then I think a lot of people would pursue those opportunities. 
 
There’s a phenomenon called the Peltzman Effect, based on research from an economist at the University of Chicago who studied auto accidents. He found that when you introduce more safety features like seat belts into cars, the number of fatalities and injuries doesn’t drop. The reason is that people compensate for it. When you have a safety net in place, people will take more risks. That probably is true of the economic arena as well. 
 
People say that having a guaranteed income will turn everyone into a slacker and destroy the economy. I think the opposite might be true, that it might push us toward more entrepreneurship and more risk-taking. 

‘Highly creative’ professionals won’t lose their jobs to robots, study finds

By admin,

ORIGINAL Fortune
APRIL 22, 2015
A University of Oxford study finds that there are some things that a robot won’t be able to do. Unfortunately, these gigs don’t pay all that well.
Many people are in “robot overlord denial,” according to a recent online poll run by jobs board Monster.com. They think computers could not replace them at work. Sadly, most are probably wrong.
University of Oxford researchers Carl Benedikt Frey and Michael Osborne estimated in 2013 that 47% of total U.S. jobs could be automated by 2033. The combination of robotics, automation, artificial intelligence, and machine learning is so powerful that some white collar workers are already being replaced — and we’re talking journalists, lawyers, doctors, and financial analysts, not the person who used to file all the incoming faxes.
But there’s hope, at least for some. According to an advanced copy of a new report that U.K. non-profit Nesta sent to Fortune, 21% of US employment requires people to be “highly creative.” Of them, 86% (18% of the total workforce) are at low or no risk from automation. In the U.K., 87% of those in creative fields are likewise at low or no risk.
“Artists, musicians, computer programmers, architects, advertising specialists … there’s a very wide range of creative occupations,” said report co-author Hasan Bakhshi, director of creative economy at Nesta, to Fortune. Some other types would be financial managers, judges, management consultants, and IT managers. “Those jobs have a very high degree of resistance to automation.”
The study is based on the work of Frey and Osborne, who are also co-authors of this new report. The three researchers fed 120 job descriptions from the US Department of Labor into a computer and analyzed them to see which were most likely to require extensive creativity, or the use of imagination or ideas to make something new.
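The researchers' actual methodology is more sophisticated than a keyword count, but as a toy illustration of scoring job descriptions for creative content, one could measure how much creativity-related language they contain; the term list and descriptions below are invented for this example.

```python
# Toy sketch only: crude keyword-density proxy for "creative" job descriptions.
import re

# Invented term list; the real study used a richer feature set and expert labels.
CREATIVE_TERMS = {"design", "compose", "imagine", "original", "invent",
                  "novel", "create", "concept", "aesthetic"}

def creativity_score(description: str) -> float:
    """Fraction of words in a job description drawn from the creative term list."""
    words = re.findall(r"[a-z]+", description.lower())
    return sum(w in CREATIVE_TERMS for w in words) / max(len(words), 1)

# Invented job descriptions for illustration only.
jobs = {
    "architect": "design original building concepts and create an aesthetic master plan",
    "data entry clerk": "enter invoice numbers into the billing system and file records daily",
}
for title, description in jobs.items():
    print(f"{title}: {creativity_score(description):.2f}")
```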
Creativity is one of the three classic bottlenecks to automating work, according to Bakhshi. “Tasks which involve a high degree of human manipulation and human perception — subtle tasks — other things being equal will be more difficult to automate,” he said. For instance, although goods can be manufactured in a robotic factory, real craft work still “requires the human touch.”
So will jobs that need social intelligence, such as your therapist or life insurance agent.
Of course, the degree of creativity matters. Financial journalists who rewrite financial statements are already beginning to be supplanted by software. The more repetitive and dependent on data the work is, the more easily a human can be pushed aside.
In addition, just because certain types of creative occupations can’t easily be replaced doesn’t mean that their industries won’t see disruption. Packing and shipping crafts can be automated, as can some aspects of the film industry other than directing, acting, and design. “These industries are going to be disrupted and are vulnerable,” Bakhshi said.
Also, not all these will necessarily provide a financial windfall. The study found an “inverse U-shape” relationship between the probability of an occupation being highly creative and the average income it might deliver. Musicians, actors, dancers, and artists might make relatively little, while people in technical, financial, and legal creative occupations can do quite well. So keeping that creative job may not seem much of a financial blessing in many cases.
Are you in a “creative” role that will be safe from automation? You can find out what these Oxford researchers think by taking their online quiz.

Apple co-founder on artificial intelligence: ‘The future is scary and very bad for people’

By admin,

Steve Wozniak speaks at the Worldwebforum in Zurich on March 10. (Steffen Schmidt/European Pressphoto Agency)

The Super Rich Technologists Making Dire Predictions About Artificial Intelligence club gained another fear-mongering member this week: Apple co-founder Steve Wozniak. In an interview with the Australian Financial Review, Wozniak joined original club members Bill Gates, Stephen Hawking and Elon Musk by making his own casually apocalyptic warning about machines superseding the human race.

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people,” Wozniak said. “If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.”

[Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’]

Doling out paralyzing chunks of fear like gumdrops to sweet-toothed children on Halloween, Woz continued: “Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that … But when I got that thinking in my head about if I’m going to be treated in the future as a pet to these smart machines … well I’m going to treat my own pet dog really nice.”

Seriously? Should we even get up tomorrow morning, or just order pizza, log onto Netflix and wait until we find ourselves looking through the bars of a dog crate? Help me out here, man!

Wozniak’s warning seemed to follow the exact same story arc as Season 1 Episode 2 of Adult Swim‘s “Rick and Morty Show.” Not accusing him of apocalyptic plagiarism or anything; just noting.

For what it’s worth, Wozniak did outline a scenario by which super-machines will be stopped in their human-enslaving tracks. Citing Moore’s Law — “the pattern whereby computer processing speeds double every two years” — Wozniak pointed out that at some point, the size of silicon transistors, which allow processing speeds to increase as they reduce size, will eventually reach the size of an atom, according to the Financial Review.

Any smaller than that, and scientists will need to figure out how to manipulate subatomic particles — a field commonly referred to as quantum computing — which has not yet been cracked, Quartz notes.

Wozniak’s predictions represent a bit of a turnaround, the Financial Review pointed out. While he previously rejected the predictions of futurists such as the pill-popping Ray Kurzweil, who argued that super machines will outpace human intelligence within several decades, Wozniak told the Financial Review that he came around after he realized the prognostication was coming true.

“Computers are going to take over from humans, no question,” Wozniak said, nearly prompting me to tender my resignation and start watching this cute puppies compilation video until forever.

“I hope it does come, and we should pursue it because it is about scientific exploring,” he added. “But in the end we just may have created the species that is above us.”

In January, during a Reddit AMA, Gates wrote: “I am in the camp that is concerned about super intelligence.” His comment came a month after Hawking said artificial intelligence “could spell the end of the human race.”

British inventor Clive Sinclair has also said he thinks artificial intelligence will doom humankind. “Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive,” he told the BBC. “It’s just an inevitability.”

Musk was among the earliest members of this club. Speaking at the MIT aeronautics and astronautics department’s Centennial Symposium in October, the Tesla founder said: “With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like, yeah, he’s sure he can control the demon. Didn’t work out.”



ORIGINAL: Washington Post

March 24, 2015