Google Builds An AI That Can Learn And Master Video Games

Google has built an artificial intelligence system that can learn – and become amazing at – video games all on its own, given no instructions beyond a simple directive to play the titles. The project, detailed by Bloomberg, is the result of research from DeepMind, the London-based AI startup Google acquired last year, and involves 49 Atari 2600 games that likely provided the first video game experience for many of those reading this.
While this is an amazing announcement for so many reasons, the most impressive part might be that the AI not only matched wits with human players in most cases, but actually went beyond the best scores of expert meat-based players in 29 of the 49 games it learned, and bested existing computer-based players in a whopping 43.
Google and DeepMind aren’t looking just to put their initials atop the high-score screens of arcades everywhere with this project – the long-term goal is to create the building blocks for optimal problem solving given a set of criteria, which is obviously useful anywhere Google might hope to use AI in the future, including in self-driving cars. Google is calling this the “first time anyone has built a single learning system that can learn directly from experience,” according to Bloomberg – a capability with potential in a virtually limitless number of applications.
It’s still an early step, however, and Google expects it’ll be decades before it achieves its goal of building general-purpose machines that have their own intelligence and can respond to a range of situations. Still, this is a system that doesn’t require arduous training and hand-holding to learn what it’s supposed to do, which is a big leap even from things like IBM’s Watson supercomputer.
Next up for the arcade AI is mastering the Doom-era 3D virtual worlds, which should help the AI edge closer to mastering similar tasks in the real world, like driving a car. And there’s one more detail here that may keep you up at night: Google trained the AI to get better at the Atari games it mastered using a virtual take on operant conditioning – ‘rewarding’ the computer for successful behavior the way you might a dog.
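Bloomberg’s report stays high-level, but the ‘operant conditioning’ described above is reinforcement learning: the agent tries actions, receives a numeric reward, and updates its estimate of how good each action is. DeepMind’s actual system is a deep Q-network learning from raw screen pixels; the toy tabular Q-learning sketch below (the environment, names and parameters are all illustrative inventions, not DeepMind’s code) shows the reward-driven core of the idea.

```python
import random

# Toy reinforcement-learning sketch: a tabular Q-learner on a 1-D "game"
# where the agent must walk right to reach a reward. Illustrative only --
# DeepMind's Atari agent is a deep Q-network learning from raw pixels.

N_STATES = 5          # positions 0..4; reaching state 4 scores a point
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: mostly exploit, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0   # the "dog treat"
        # Q-learning update: nudge the estimate toward
        # reward + discounted best future value
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the learned policy steps right (+1) from every state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```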
ORIGINAL: TechCrunch

Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter

Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents – systems that perceive and act in some environment. In this context, “intelligence” is related to statistical and economic notions of rationality – colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among  
  • AI, 
  • machine learning, 
  • statistics, 
  • control theory, 
  • neuroscience, and 
  • other fields.

The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as

  • speech recognition,
  • image classification,
  • autonomous vehicles,
  • machine translation,
  • legged locomotion, and
  • question-answering systems.

As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose.

Attendees at Asilomar, Pacific Grove, February 21–22, 2009 (left to right): Michael Wellman, Eric Horvitz, David Parkes, Milind Tambe, David Waltz, Thomas Dietterich, Edwina Rissland (front), Sebastian Thrun, David McAllester, Margaret Boden, Sheila McIlraith, Tom Dean, Greg Cooper, Bart Selman, Manuela Veloso, Craig Boutilier, Diana Spears (front), Tom Mitchell, Andrew Ng.

We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from

  • economics,
  • law and
  • philosophy to
  • computer security,
  • formal methods and, of course,
  • various branches of AI itself.

In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.

List of signatories


A DNA hard drive has been built that can store data for 1 MILLION years

ORIGINAL: Science Alert
FIONA MACDONALD
17 FEB 2015

Scientists have found a way to preserve the world’s data for millions of years, by storing it on a tiny strand of DNA preserved in glass. When you think of humanity’s legacy, the most powerful message for us to leave behind for future civilisations would surely be our billions of terabytes of data. But right now the hard drives and discs that we use to store all this information are frustratingly vulnerable, and unlikely to survive more than a couple of hundred years.

Fortunately scientists have built a DNA time capsule that’s capable of safely preserving all of our data for more than a million years. And we’re kind of freaking out over how huge the implications are.

Researchers already knew that DNA was ideal for data storage. In theory, just 1 gram of DNA is capable of holding 455 exabytes – one exabyte is the equivalent of a billion gigabytes – more than enough space to store all of Google, Facebook and pretty much everyone else’s data.

Storing information on DNA is also surprisingly simple – researchers just need to encode the A and C bases of DNA as a binary ‘0’, and the T and G bases as a ‘1’. But the researchers, led by Robert Grass from ETH Zürich in Switzerland, wanted to find out just how long this data would last.
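For a concrete feel of that mapping, here is a minimal Python sketch of the bit-to-base scheme described above. The choice between the two bases for each bit is made at random here; the real ETH encoding is considerably more sophisticated, adding error correction and avoiding problematic base runs.

```python
import random

# Minimal sketch of the binary-to-base mapping described above:
# A or C encodes a 0 bit, T or G encodes a 1 bit.
ZERO, ONE = "AC", "TG"

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(random.choice(ONE if b == "1" else ZERO) for b in bits)

def decode(strand: str) -> bytes:
    bits = "".join("1" if base in ONE else "0" for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")        # 8 bases per byte, so 16 bases for two bytes
assert decode(strand) == b"hi"
print(strand)
```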

DNA can definitely be durable – in 2013 scientists managed to sequence genetic code from 700,000-year-old horse bones – but it has to be preserved in pretty specific conditions, otherwise it can change and break down as it’s exposed to the environment. So Grass’s team decided to try to replicate a fossil, to see if it would help them create a long-lasting DNA hard drive.

Image: Philipp Stössel/ETH Zurich

“Similar to these bones, we wanted to protect the information-bearing DNA with a synthetic ‘fossil’ shell,” explained Grass in a press release.

In order to do that, the team encoded Switzerland’s Federal Charter of 1291 and The Method of Mechanical Theorems by Archimedes onto a DNA strand – a total of 83 kilobytes of data. They then encapsulated the DNA into tiny glass spheres, which were around 150 nanometres in diameter.

The researchers compared these glass spheres against other packaging methods by exposing them to temperatures of between 60 and 70 degrees Celsius - conditions that replicated the chemical degradation that would usually occur over hundreds of years, all crammed into a few destructive weeks.

They found that even after this sped-up degradation process, the DNA inside the glass spheres could easily be extracted using a fluoride solution, and the data on it could still be read. In fact, these glass casings seem to work much like fossilised bones.

Based on their results, which have been published in Angewandte Chemie, the team predicts that data stored on DNA could survive over a million years if it was stored in temperatures below -18 degrees Celsius, for example, in a facility like the Svalbard Global Seed Vault, which is also known as the ‘Doomsday Vault’. They say it could last 2,000 years if stored somewhere less secure at 10 degrees Celsius – a similar average temperature to central Europe.
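Those lifetimes come from accelerated-aging extrapolation: chemical decay rates rise roughly exponentially with temperature, following the Arrhenius equation, so a few hot weeks can stand in for centuries of cold storage. A back-of-the-envelope sketch of the extrapolation idea (the activation energy below is an assumed illustrative value, not the paper’s fitted number):

```python
import math

# Arrhenius extrapolation: rate(T) ~ exp(-Ea / (R * T)), so the speed-up
# factor between a hot test and cold storage is exp(Ea/R * (1/T_cold - 1/T_hot)).
R = 8.314       # gas constant, J/(mol*K)
Ea = 1.2e5      # ASSUMED activation energy for DNA decay, J/mol (illustrative)

def acceleration(t_hot_c: float, t_cold_c: float) -> float:
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15   # convert to kelvin
    return math.exp(Ea / R * (1 / t_cold - 1 / t_hot))

weeks_at_70 = 4
factor = acceleration(70, -18)
print(f"{weeks_at_70} weeks at 70 C ~ {weeks_at_70 * factor / 52:.0f} years at -18 C")
```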

The tricky part of this whole process is that the data stored in DNA needs to be read properly in order for future civilisations to be able to access it. And despite advances in sequencing technology, errors still arise from DNA sequencing.

The team overcame this by embedding a method for correcting any errors within the glass spheres, based on Reed-Solomon codes, which are also used to protect data transmitted over long distances. Basically, additional information is attached to the actual data, to help people read it correctly on the other end.
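Reed-Solomon codes are easy to experiment with in software. Here is a minimal sketch using the third-party Python reedsolo package (assuming it is installed; the parameters are illustrative, and the ETH team’s actual inner/outer code design differs):

```python
# pip install reedsolo -- third-party Reed-Solomon library. Parameters are
# illustrative; the ETH paper's actual code design differs.
from reedsolo import RSCodec

rsc = RSCodec(10)     # append 10 parity bytes: corrects up to 5 corrupted bytes
encoded = rsc.encode(b"The Method of Mechanical Theorems")

corrupted = bytearray(encoded)
corrupted[0] ^= 0xFF  # simulate storage damage in two places
corrupted[7] ^= 0xFF

# Recent reedsolo versions return (message, message+ecc, errata positions).
decoded = rsc.decode(bytes(corrupted))[0]
assert decoded == b"The Method of Mechanical Theorems"
print(decoded)
```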

This worked so well that even after the test DNA had been kept in scorching and degrading conditions for a month, the team could still read Switzerland’s Federal Charter and Archimedes’ wise words at the end of the study.

The other major problem, which is not so easy to overcome, is the fact that storing information on DNA is still extremely expensive – it cost around US$1,500 just to encode the 83 kilobytes of data used in this study. Hopefully this cost will go down as we get better at writing information onto DNA. Researchers out there are already storing books onto DNA, and the band OK Go are also writing their new album into genetic information.

The question is, what would Grass store, now that he’s developed this mind-blowing time capsule? The documents in Unesco’s Memory of the World Programme, and… Wikipedia, he says.

“Many entries are described in detail, others less so. This probably provides a good overview of what our society knows, what occupies it and to what extent,” said Grass in the release.

It’s ridiculously cool to think that even if we do wipe ourselves off the face of the Earth, our civilisation might still live on for millennia to come in the form of Wikipedia pages and Facebook updates.

We really are (almost) infinite.

Source: New Scientist


Microsoft’s Bill Gates insists AI is a threat

ORIGINAL: BBC
By Kevin Rawlinson BBC News
29 January 2015

Bill Gates said he could not understand why people were not concerned by AI

Humans should be worried about the threat posed by artificial intelligence, Bill Gates has said.

The Microsoft founder said he didn’t understand people who were not troubled by the possibility that AI could grow too strong for people to control.

Mr Gates contradicted one of Microsoft Research’s chiefs, Eric Horvitz, who has said he “fundamentally” did not see AI as a threat.

Mr Horvitz has said about a quarter of his team’s resources are focused on AI.

During an “ask me anything” question and answer session on Reddit, Mr Gates wrote: “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well.

“A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Watch: Stephen Hawking has warned of the threat AI poses

His view was backed up by the likes of Mr Musk and Professor Stephen Hawking, who have both warned about the possibility that AI could evolve to the point that it was beyond human control. Prof Hawking said he felt that machines with AI could “spell the end of the human race”.

Mr Horvitz has said: “There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences. I fundamentally don’t think that’s going to happen.”

He was giving an interview marking his acceptance of the AAAI Feigenbaum Prize for “outstanding advances” in AI research.

Ex Machina explores the relationship between humans and AI robots

“I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.”

Mr Horvitz runs Microsoft Research‘s lab at the parent company’s Redmond headquarters. His division’s work has already helped introduce Cortana, Microsoft’s virtual assistant.

Despite his own reservations, Mr Gates wrote on Reddit that, had Microsoft not worked out, he would probably be a researcher on AI.

“When I started Microsoft I was worried I would miss the chance to do basic work in that field,” he said.

Marvel’s latest Avengers film features an AI character named Ultron

He added that he believed the firm he founded would see “more progress… than ever” over the next three decades.

“Even in the next 10 [years], problems like vision and speech understanding and translation will be very good.”

He predicted that, in that time, robots would perform tasks such as picking fruit or moving hospital patients. “Once computers/robots get to a level of capability where seeing and moving is easy for them then they will be used very extensively.”

He said he was working on a project with Microsoft called “Personal Agent“, which he said would “remember everything and help you go back and find things and help you pick what things to pay attention to“.

He wrote: “The idea that you have to find applications and pick them and they each are trying to tell you what is new is just not the efficient model – the agent will help solve this. It will work across all your devices.”

Forthcoming film CHAPPiE will feature an AI robot that needs to find its place in the world

But he admitted that he felt “pretty stupid” because he could not speak any language other than English.

“I took Latin and Greek in High School and got As and I guess it helps my vocabulary but I wish I knew French or Arabic or Chinese.

“I keep hoping to get time to study one of these – probably French because it is the easiest… Mark Zuckerberg amazingly learned Mandarin and did a Q&A with Chinese students – incredible,” he wrote.


BrainCard, pattern recognition for ALL

ORIGINAL: IndieGogo
Embedded recognition for images, speech, sound, biosensors or any signal with zero programming. Petaluma, California, United States


Pattern & image recognition module with neuromorphic learning for all your maker projects.

Robotics fans, drone pilots, hackers & data-miners – rejoice!

The BrainCard is an open source hardware platform featuring the world’s only fully functional and field-tested neuromorphic chip, containing 1,024 silicon neurons. It is able to learn and recognize patterns within any dataset generated by any source, from the physical (sensors) to the virtual (data).

Offered here, for the first time, to makers in a format compatible with nearly all other popular electronics platforms — from Raspberry Pi to Arduino and Intel Edison — we aim to help you add cognitive perception to any electronics project.

Add a brain to robots, toys or an old GoPro and give them the ability to recognize and recall almost anything… You can also add a brain to any digital camera, including dash cams. Vision not your thing? The same technology can recognize patterns in data, like that packet of code you’re looking for in a sea of C++, a phrase in an eBook (regardless of the book’s length), even real-time data: build your own biosensors! Make any appliance you like “smart”, like a coffee pot that recognizes you and starts making your coffee the way you like best.

Simply put: make it think.

Cannot wait for technical details?

Before we carry on, for those of you that are quick studies and/or already know everything, we thought you might like to skip straight to the specs so here you go:

BrainCard Specifications (Hardware and API)

For everyone else – please read on…

Unfamiliar with Neural Networks or Neuromorphic Chips? Watch this:

(If you want some more background info, click here)

Now back to your project…

The BrainCard™ is a small electronics board with a NeuroMem® CM1K device plus an FPGA (Field Programmable Gate Array) chip to connect to platform buses and sensor inputs. There is even an optional image sensor featured on the BrainCard 1KIS (Image Sensor) version. It can be connected to almost any popular electronics platform, including Arduino, Raspberry Pi and Intel Edison, and enables users to massively boost any device’s capabilities by creating a brain-like system architecture – hence the name.

The CM1K chip(s) on the BrainCard essentially acts as a right-brain hemisphere ready to learn, recognize and recall patterns/images/sounds/inputs from any incoming data stream. This allows the accompanying MPU device to concentrate on what it’s good at — left-brain functions such as logic, procedural computing and as a communications and I/O interface.

The key to success is teaching the BrainCard as you would a child: teach it too conservatively and it will not generalize enough; too liberally and it could get confused. It is not like traditional programming, and we have found that part of the fun in building projects with the BrainCard is in this new learning paradigm.

It’s really quite simple: Show the BrainCard what it must recognize and assign the example a category. So: This face is John, that voice is Emma, this vibration is made by your cat purring and so on.
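Under the hood, the CM1K behaves roughly like a nearest-neighbour classifier whose neurons each cover an “influence field” around a stored pattern. The Python sketch below mimics that teach-then-recognize workflow; the class and method names are invented for illustration and are not the BrainCard API:

```python
# Toy nearest-neighbour "teach and recognize" loop in the spirit of the
# CM1K workflow. Names and parameters are illustrative, not the BrainCard
# API; the real chip stores patterns in silicon neurons.

def distance(a, b):
    # L1 (Manhattan) distance, one of the metrics the CM1K family uses
    return sum(abs(x - y) for x, y in zip(a, b))

class ToyBrain:
    def __init__(self, max_neurons=1024, influence=50):
        self.neurons = []           # (pattern, category) pairs
        self.max_neurons = max_neurons
        self.influence = influence  # how far a neuron's "field" reaches

    def learn(self, pattern, category):
        # Only commit a neuron if the example isn't already recognized correctly
        if self.recognize(pattern) != category and len(self.neurons) < self.max_neurons:
            self.neurons.append((pattern, category))

    def recognize(self, pattern):
        hits = [(distance(pattern, p), c) for p, c in self.neurons]
        hits = [(d, c) for d, c in hits if d <= self.influence]
        return min(hits)[1] if hits else None   # closest firing neuron wins

brain = ToyBrain()
brain.learn([10, 10, 10], "john")     # "this face is John"
brain.learn([200, 180, 90], "emma")   # "that voice is Emma"
print(brain.recognize([12, 9, 11]))   # -> 'john'
```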

Getting started:

The BrainCard is delivered with a default configuration which can communicate with any one of the proposed controllers (Arduino, Raspberry Pi or Edison) through the same communication protocol over their SPI lines. Access to the generic pattern learning and recognition functions of the CM1K chip is provided through a simple API delivered for the different IDEs (Arduino and Eclipse). More specific function libraries will be released shortly after, and we hope to start a repository of your libraries too!

  1. Install and connect the BrainCard to the MPU/Device of your choice. View the hardware datasheet
  2. Install the API in the IDE of your choice (Arduino, Eclipse). View the BrainCard API preliminary datasheet
  3. Now, you can program to teach the BrainCard using examples previously collected and saved to disk (waveforms, images, movies), or you can program some GPIOs to trigger teaching (push buttons, keyboard inputs and even voice control!). As illustrated in the following video, teaching amounts to selecting examples and sending one or more signatures of each example to the neurons of the BrainCard. The neurons will decide if the example is worth learning based on what they already know. If applicable, some neurons will autonomously correct themselves if they contradict the teacher, and never repeat the mistake again.
  4. Recognition is the same as learning except that this time, your program monitors the response of the neurons to the incoming signatures instead of sending them learning commands. Your program can then act based on what is recognized using the wealth of GPIOs available through Arduino Shields, as well as device-to-device or device-to-cloud communications, and more.

 

So what can it do?

This is a great question, as even we have not fully explored the full range of the BrainCard/CM1K’s capabilities. Almost every day we are coming up with new applications for the technology, which is one of our quandaries, and is where YOU come in. It’s also why we are choosing to announce ourselves to the world via Indiegogo.

A simple list of known capabilities

Object recognition
Using the 1KIS version, or an off-the-shelf image sensor of your own, teach your BrainCard to recognize shapes, colors, objects, signs, people and animals.

Stereoscopic vision
With two image sensors attached, along with a CPU, your project can work in stereoscopic vision! The processor can triangulate distance and the CM1K can recognize what it’s looking at. Add some motors to the image sensors and it can track things too.

 

Audio recognition
Attach a microphone and teach the BrainCard to recognize a noise, a voice, YOUR voice or other audio signals like a bird song or a dog bark.

Vibration and motion
Attach a MEMS (Micro-Electro-Mechanical Systems) device and teach the BrainCard to recognize vibrations or physical motion.

Bio signals
BrainCard can recognize data from any Bio-signal source – such as:

Electroencephalogram (EEG), Electrocardiogram (ECG), Electromyogram (EMG), Mechanomyogram (MMG), Electrooculography (EOG), Galvanic skin response (GSR), Magnetoencephalogram (MEG).

 

 

Text and Numbers

You can run your data through the BrainCard in any form — from text to binary to DNA sequences — and teach it to recognize patterns, which will allow it to detect anomalies, identify clusters and make predictions. There are MANY MORE applications we just haven’t tried yet…

Flexibility
If you go crazy while teaching and fill all 1,024 neurons on a chip, don’t panic: the BrainCard provides an expansion bus to stack more CM1K chips on boards of two, increasing your teaching capacity in increments of 2,048 neurons (each CM1K holds 1,024 neurons, subject to availability). This expansion can be done at any time, up to a maximum of 8,192 additional neurons (plus the original 1,024 on the BrainCard), and will not affect what you have already taught, allowing you to experiment to your heart’s content.

Maturity
The NeuroMem CM1K technology has already found many applications in industry and has been working in the real world since 2007 – so we know everything we’re claiming above is 100% true, because most of these applications have been built somewhere.

What we need, and what you get
This Indiegogo campaign has been launched with one aim: to generate the volume and revenue we need to manufacture the maker version of the CM1K technology — the BrainCard.

By supporting this Indiegogo project you will be a part of the first chapter of a much bigger story: We aim to change the way the world computes with neural network technology. We’re looking to raise at least $200k to start manufacturing in volume, which will make the BrainCard as cheap as possible.

We’re beginning with 1000 chips that we already have in inventory which were originally ordered by an industrial client. After that, we will aim to start manufacturing on a mass production line, and this will take approximately six months. So, those first 1000 purchasers will be the only ones able to experience the unique capabilities of the BrainCard until mid-2015.

The first 1,000 BrainCards will cost $199 and are what we call IWIN (I Want It Now), or $219 for a version including an image sensor (the IS version) – so 500 of each version.

If we don’t reach the goal, all the money raised will be aimed at manufacturing as many BrainCards as we can, so that it can be more affordable for the masses.

This is why we’re turning to the maker community — we’d like to crowdsource our research and development through YOU!

The impact
Neural networks should be everywhere by now: in your phone, in wearable technology. The NeuroMem technology is mature and the market needs exist. This project has the ability to propel neuromorphic technology into the mainstream consciousness by showing electronics manufacturers what can be done with it.

Risks and challenges

The core of the NeuroMem/NeuromorThings team has been in place for 16 years and has plenty of research and industrial customers already using the CM1K chip, so this is not a typical “prototype” project.

We have a full supply chain already in place for both the board and for mounting the chips. We also have a wealth of knowledge in developing board-level and semiconductor technologies — all of which makes the risks to you a bare minimum.

We just need your support to complete prototyping/testing and to begin volume manufacturing. Purchasers of the first 1,000 IWIN BrainCards will have exclusive access to the technology for the three months it takes us to make the new batch of chips.

Once we begin mass manufacturing the BrainCard, we will begin our long development roadmap on its successors and other neuromorthings.

After the first run of IWIN devices, the rest of the time will be dedicated to mounting the chips to the boards and testing them. With enough support we can get production runs up to very large numbers per month very quickly.

Shipping
Shipping a technology product is fraught with issues like export restrictions. We’ve tried to make it as simple as possible and built shipping as a perk.

In the US, Mexico and Canada? Included.

Rest of World? $30 Shipping & Packing

Due to the technical nature of the BrainCard, it can be liable to export restrictions in certain countries under United States law. If you are unsure whether you are affected, please contact us at info@neuromorthings.com and put “Export” in the subject line, and we’ll do everything we can to help.

Other Ways You Can Help
Can’t buy a BrainCard? How about giving us a High $5? High 5’ers will all feature on the website and be written into NeuromorThings lore… it’s a program for those interested in the technology who want to help but can’t spring for their own BrainCard.

Got no cash at all? No problem – simply SPREAD THE WORD! Tell everyone you know about us and help us that way instead, on Facebook, on Twitter – wherever.

Every little bit helps!

Export regulations:
It might occur, in certain rare cases, that your country is under an export embargo and we cannot ship because of the nature of the technology included in the BrainCard. If this exceptional situation occurs, your money will be fully refunded.


How It Works: IBM’s Concept Insights

Concept Insights is a new Web service available on the IBM Watson Developer Cloud, where developers can tap into the capability via our Bluemix development platform for Web services and mobile apps.

AI Won’t End the World, But It Might Take Your Job

Andrew Ng. Ariel Zambelich/WIRED

There’s been a lot of fear about the future of artificial intelligence.

Stephen Hawking and Elon Musk worry that AI-powered computers might one day become uncontrollable super-intelligent demons. So does Bill Gates.

But Baidu chief scientist Andrew Ng—one of the world’s best-known AI researchers and a guy who’s building out what is likely one of the world’s largest applied AI projects—says we really ought to worry more about robot truck drivers than the Terminator.

In fact, he’s irritated by the discussion about scientists somehow building an apocalyptic super-intelligence. “I think it’s a distraction from the conversation about…serious issues,” Ng said at an AI conference in San Francisco last week.

Ng isn’t alone in thinking this way. A select group of AI luminaries met recently at a closed door retreat in Puerto Rico to discuss ethics and AI. WIRED interviewed some of them, and the consensus was that there are short-term and long-term AI issues to worry about. But it’s the long-term questions getting all the press.
Artificial intelligence is likely to start having an important effect on society over the next five to 10 years, according to Murray Shanahan, a professor of cognitive robotics at Imperial College London. “It’s hard to predict exactly what’s going on,” he told WIRED a few weeks ago, “but we can be pretty sure that these technologies are going to impact society quite a bit.”

The way Ng sees it, it took the US about 200 years to switch from an agricultural economy, where 90 percent of the country worked on farms, to our current economy, where the number is closer to 2 percent. The AI switchover promises to come much faster, and that could make it a bigger problem.

That’s an idea echoed by two MIT academics, Erik Brynjolfsson and Andrew McAfee, who argue that we’re entering a “second machine age,” where the accelerating rate of change brought on by digital technologies could leave millions of medium- and low-skilled workers behind.

Some AI technologies, such as the self-driving car, could be extremely disruptive, but over a much shorter period of time than the industrial revolution. There are three million truck drivers in the US, according to the American Trucking Association. What happens if self-driving vehicles put them all out of a job in a matter of years?

With recent advances in perception, the range of things that machines can do is getting a boost. Computers are better at understanding what we say and analyzing data in a way that used to be the exclusive domain of humans.

Last month, Audi’s self-driving car took WIRED’s Alex Davies for a 500-mile ride. At the Aloft Hotel in Cupertino, California, a robot butler can deliver you a toothbrush. Paralegals are now finding their work performed by data-sifting computers. And just last year, Google told us about a group of workers who had been doing mundane image-recognition work for the search giant—jobs like figuring out the difference between telephone numbers and street addresses on building walls. Google figured out how to do this by machine, and so they’ve now moved on to other things.

Ng, who also co-founded the online learning company Coursera, says that if AI really starts taking jobs, retraining all of those workers could present a major challenge. When it comes to retraining workers, he said, “our education system has historically found it very difficult.”

ORIGINAL: Wired

By Robert McMillan
02.02.15