This 17-Year-Old Has Discovered DNA Mutations That Could Combat HIV And Meningitis

[Photo: Gio.tto via Shutterstock]

High schooler Andrew Jin is answering previously unasked questions in biology.


Like plenty of science-oriented high school kids, Andrew Jin is interested in human evolution. But Jin, one of three $150,000 first-place winners in this year’s Intel Science Talent Search, took that interest further than most. For his project, the high school senior came up with machine learning algorithms that detect mutations in the human genome—mutations that could one day be used to develop drugs to combat diseases like HIV and schizophrenia.

Initially, Jin wanted to investigate how humans have evolved over the past 10,000 years. “I was doing it out of curiosity,” he says. “I started thinking about natural selection and evolution, and that we understand so much about its theory, but we know nothing about reality. I was curious about what mutations help us be sophisticated human beings.”

Jin decided to examine 179 human DNA sequences from different parts of the world.
Each sequence consisted of 3 million base pairs of DNA—far too much to look at without help from an algorithm. So he set up a machine learning algorithm and found 130 potentially adaptive mutations, related to things like immune response and metabolism, that played a role in human evolution.

Working from a summer program at MIT, Jin refined his research and came up with a handful of mutations, including ones involved in resistance to meningitis and decreased susceptibility to viruses like influenza and HIV, that could potentially be used by pharmaceutical companies in new drug development. There have been other natural-selection studies in the past looking for adaptive mutations, but Jin says that many of his findings are new.
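Jin's specific pipeline isn't described in the article, so the following is only a minimal, hypothetical sketch of the general idea behind scanning variants for signatures of selection: compare allele frequencies between populations with a simple fixation-index (FST) statistic and flag the outliers. The file, column names, and threshold are assumptions made for illustration.

```python
# Minimal, illustrative sketch (not Jin's actual method): flag candidate
# adaptive variants by comparing allele frequencies between two populations
# with a simple fixation-index (FST) estimate. Column names and the
# threshold are hypothetical.
import pandas as pd

def fst(p1: float, p2: float) -> float:
    """Wright's FST for one biallelic site from two population frequencies."""
    p_bar = (p1 + p2) / 2.0
    h_total = 2.0 * p_bar * (1.0 - p_bar)          # expected heterozygosity, pooled
    h_within = (2.0 * p1 * (1.0 - p1) + 2.0 * p2 * (1.0 - p2)) / 2.0
    return 0.0 if h_total == 0 else (h_total - h_within) / h_total

# snps.csv is assumed to hold per-variant allele frequencies for two groups.
snps = pd.read_csv("snps.csv")   # columns: variant_id, freq_pop1, freq_pop2
snps["fst"] = [fst(a, b) for a, b in zip(snps["freq_pop1"], snps["freq_pop2"])]

# Variants in the extreme tail of the FST distribution are candidate targets of selection.
candidates = snps[snps["fst"] > snps["fst"].quantile(0.999)]
print(candidates.sort_values("fst", ascending=False).head())
```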
There’s still a long way to go before he starts chatting up Big Pharma, however. “There’s very, very strong evidence for these mutations playing a role in disease resistance, but in order to confirm, I would have to do biological experiments to study their protective mechanisms. That’s what I’m interested in doing now,” says Jin.

Once he gets to college (he’s not yet sure where that will be), Jin plans to pursue computer science or biology. But that’s not all he’s good at: The teen is a talented pianist who has played at Carnegie Hall. “I’m also an avid Boy Scout,” he says.


Artificial Intelligence Is Almost Ready for Business

Artificial Intelligence (AI) is an idea that has oscillated through many hype cycles over many years, as scientists and sci-fi visionaries have declared the imminent arrival of thinking machines. But it seems we’re now at an actual tipping point. AI, expert systems, and business intelligence have been with us for decades, but this time the reality almost matches the rhetoric, driven by

  • the exponential growth in technology capabilities (e.g., Moore’s Law),
  • smarter analytics engines, and
  • the surge in data.

Most people know the Big Data story by now: the proliferation of sensors (the “Internet of Things”) is accelerating exponential growth in “structured” data. And now, on top of that explosion, we can also analyze “unstructured” data, such as text and video, to pick up information on customer sentiment. Companies have been using analytics to mine insights within this newly available data to drive efficiency and effectiveness (a brief sketch of one such model follows the list below). For example, companies can now use analytics to decide

  • which sales representatives should get which leads,
  • what time of day to contact a customer, and
  • whether they should e-mail them, text them, or call them.
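To make that concrete, here is a minimal, hypothetical sketch of the kind of propensity model behind such decisions: fit a classifier on historical lead outcomes, then score and rank new leads. The file names, features, and outcome column are invented for illustration; a real system would also encode categorical features properly.

```python
# Hedged sketch: a simple lead-scoring model of the kind described above.
# The dataset, feature names, and outcome column are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

leads = pd.read_csv("historical_leads.csv")   # e.g. industry_code, company_size, visits, converted
X = leads[["industry_code", "company_size", "visits", "hour_of_first_contact"]]
y = leads["converted"]                        # 1 if the lead became a customer

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Score new leads and route the highest-propensity ones to the best reps first.
new_leads = pd.read_csv("new_leads.csv")
new_leads["score"] = model.predict_proba(new_leads[X.columns])[:, 1]
print(new_leads.sort_values("score", ascending=False).head())
```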

Such mining of digitized information has become more effective and powerful as more info is “tagged” and as analytics engines have gotten smarter. As Dario Gil, Director of Symbiotic Cognitive Systems at IBM Research, told me:

“Data is increasingly tagged and categorized on the Web – as people upload and use data they are also contributing to annotation through their comments and digital footprints. This annotated data is greatly facilitating the training of machine learning algorithms without demanding that the machine-learning experts manually catalogue and index the world. Thanks to computers with massive parallelism, we can use the equivalent of crowdsourcing to learn which algorithms create better answers. For example, when IBM’s Watson computer played ‘Jeopardy!,’ the system used hundreds of scoring engines, and all the hypotheses were fed through the different engines and scored in parallel. It then weighted the algorithms that did a better job to provide a final answer with precision and confidence.”
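Gil is describing, in essence, a weighted ensemble: many engines score each candidate answer in parallel, and engines with a better track record get more say in the final, confidence-ranked answer. The toy sketch below shows only that pattern; the scorers and weights are invented and this is not IBM's implementation.

```python
# Toy sketch of the pattern Gil describes: many scoring engines evaluate each
# candidate answer in parallel, and engines that historically did better get
# more weight in the final answer. All scorers and weights are invented.
from concurrent.futures import ThreadPoolExecutor

def keyword_overlap_scorer(hypothesis: str) -> float:
    return 0.6 if "paris" in hypothesis.lower() else 0.1

def length_prior_scorer(hypothesis: str) -> float:
    return 1.0 / (1.0 + abs(len(hypothesis) - 5))

SCORERS = [keyword_overlap_scorer, length_prior_scorer]
WEIGHTS = [0.8, 0.2]   # in a real system, learned from past performance

def score_hypothesis(hypothesis: str) -> float:
    # Run every engine (here, concurrently) and combine with the learned weights.
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda s: s(hypothesis), SCORERS))
    return sum(w * s for w, s in zip(WEIGHTS, scores))

candidates = ["Paris", "London", "Springfield"]
ranked = sorted(candidates, key=score_hypothesis, reverse=True)
print(ranked[0], "with confidence", round(score_hypothesis(ranked[0]), 3))
```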

Beyond the Quants

Interestingly, for a long time, doing detailed analytics has been quite labor- and people-intensive. You need “quants,” the statistically savvy mathematicians and engineers who build models that make sense of the data. As Babson professor and analytics expert Tom Davenport explained to me, humans are traditionally necessary to

  • create a hypothesis,
  • identify relevant variables,
  • build and run a model, and
  • then iterate it.

Quants can typically create one or two good models per week.

However, machine learning tools for quantitative data – perhaps the first line of AI – can create thousands of models a week. For example, in programmatic ad buying on the Web, computers decide which ads should run in which publishers’ locations. Massive volumes of digital ads and a never-ending flow of clickstream data depend on machine learning, not people, to decide which Web ads to place where. Firms like DataXu use machine learning to generate up to 5,000 different models a week, making decisions in under 15 milliseconds, so that they can more accurately place ads that you are likely to click on.
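The article doesn't detail DataXu's internals, but the pattern it implies (train many small models offline, then score each incoming ad request within a tight latency budget) can be sketched roughly as follows; the segments, features, labels, and latency check are illustrative assumptions, not DataXu's system.

```python
# Illustrative sketch of the pattern described above: train a small
# click-probability model per ad segment offline, then score an incoming
# ad request against it within a ~15 ms budget.
import time
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Offline: one lightweight model per (invented) segment, retrained frequently.
models = {}
for segment in ["sports", "news", "travel"]:
    X = rng.normal(size=(5000, 20))                 # stand-in clickstream features
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in click labels
    models[segment] = SGDClassifier(loss="log_loss").fit(X, y)

# Online: score one ad request and check that it fits inside the latency budget.
request_features = rng.normal(size=(1, 20))
start = time.perf_counter()
p_click = models["sports"].predict_proba(request_features)[0, 1]
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"p(click)={p_click:.3f}, scored in {elapsed_ms:.2f} ms")
```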

Tom Davenport:

“I initially thought that AI and machine learning would be great for augmenting the productivity of human quants. One of the things human quants do, that machine learning doesn’t do, is to understand what goes into a model and to make sense of it. That’s important for convincing managers to act on analytical insights. For example, an early analytics insight at Osco Pharmacy uncovered that people who bought beer also bought diapers. But because this insight was counter-intuitive and discovered by a machine, they didn’t do anything with it. But now companies have needs for greater productivity than human quants can address or fathom. They have models with 50,000 variables. These systems are moving from augmenting humans to automating decisions.”
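The beer-and-diapers finding Davenport mentions is the textbook example of association-rule mining. A minimal sketch of how such a rule surfaces from transaction data, using made-up baskets and the standard support, confidence, and lift measures, might look like this:

```python
# Minimal association-rule sketch of the beer-and-diapers kind of insight
# Davenport describes. Transactions are made up; support, confidence, and
# lift are the standard measures.
from itertools import combinations
from collections import Counter

transactions = [
    {"beer", "diapers", "chips"},
    {"beer", "diapers"},
    {"diapers", "wipes"},
    {"beer", "chips"},
    {"beer", "diapers", "milk"},
]
n = len(transactions)

item_counts = Counter(item for t in transactions for item in t)
pair_counts = Counter(frozenset(p) for t in transactions for p in combinations(sorted(t), 2))

for pair, count in pair_counts.most_common(3):
    a, b = tuple(pair)
    support = count / n
    confidence = count / item_counts[a]              # P(b | a)
    lift = confidence / (item_counts[b] / n)         # >1 means a and b co-occur more than chance
    print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}, lift={lift:.2f}")
```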

In business, the explosive growth of complex and time-sensitive data enables decisions that can give you a competitive advantage, but these decisions depend on analyzing at a speed, volume, and complexity that is too great for humans. AI is filling this gap as it becomes ingrained in the analytics technology infrastructure in industries like health care, financial services, and travel.

The Growing Use of AI

IBM is leading the integration of AI in industry. It has made a $1 billion investment in AI through the launch of its IBM Watson Group and has made many advancements and published research touting the rise of “cognitive computing” – the ability of computers like Watson to understand words (“natural language”), not just numbers. Rather than take the cutting edge capabilities developed in its research labs to market as a series of products, IBM has chosen to offer a platform of services under the Watson brand. It is working with an ecosystem of partners who are developing applications leveraging the dynamic learning and cloud computing capabilities of Watson.

The biggest application of Watson has been in health care. Watson excels in situations where you need to bridge between massive amounts of dynamic and complex text information (such as the constantly changing body of medical literature) and another mass of dynamic and complex text information (such as patient records or genomic data), to generate and evaluate hypotheses. With training, Watson can provide recommendations for treatments for specific patients. Many prestigious academic medical centers, such as The Cleveland Clinic, The Mayo Clinic, MD Anderson, and Memorial Sloan-Kettering, are working with IBM to develop systems that will help healthcare providers better understand patients’ diseases and recommend personalized courses of treatment. This has proven to be a challenging domain to automate, and most of the projects are behind schedule.

Another large application area for AI is in financial services. Mike Adler, Global Financial Services Leader at The Watson Group, told me they have 45 clients working mostly on three applications:

  • (1) a “digital virtual agent” that enables banks and insurance companies to engage their customers in a new, personalized way,
  • (2) a “wealth advisor” that enables financial planning and wealth management, either for self-service or in combination with a financial advisor, and
  • (3) risk and compliance management.

For example, USAA, the $20 billion provider of financial services to people who serve, or have served, in the United States military, is using Watson to help their members transition from the military to civilian life. Neff Hudson, vice president of emerging channels at USAA, told me, “We’re always looking to help our members, and there’s nothing more critical than helping the 150,000+ people leaving the military every year. Their financial security goes down when they leave the military. We’re trying to use a virtual agent to intervene to be more productive for them.” USAA also uses AI to enhance navigation on their popular mobile app. The Enhanced Virtual Assistant, or Eva, enables members to do 200 transactions by just talking, including transferring money and paying bills. “It makes search better and answers in a Siri-like voice. But this is a 1.0 version. Our next step is to create a virtual agent that is capable of learning. Most of our value is in moving money day-to-day for our members, but there are a lot of unique things we can do that happen less frequently with our 140 products. Our goal is to be our members’ personal financial agent for our full range of services.”

In addition to working with large, established companies, IBM is also providing Watson’s capabilities to startups. IBM has set aside $100 million for investments in startups. One of the startups that is leveraging Watson is WayBlazer, a new venture in travel planning that is led by Terry Jones, a founder of Travelocity and Kayak. He told me:

I’ve spent my whole career in travel and IT.

  • I started as a travel agent, and people would come in, and I’d send them a letter in a couple weeks with a plan for their trip.
  • The Sabre reservation system made the process better by automating the channel between travel agents and travel providers.
  • Then with Travelocity we connected travelers directly with travel providers through the Internet.
  • Then with Kayak we moved up the chain again, providing offers across travel systems.
  • Now with WayBlazer we have a system that deals with words. Nobody has helped people with a tool for dreaming and planning their travel.

Our mission is to make it easy and give people several personalized answers to a complicated trip, rather than the millions of clues that search provides today. This new technology can take data out of all the silos and dark wells that companies don’t even know they have and use it to provide personalized service.

What’s Next

As Moore’s Law marches on, we have more power in our smartphones than the most powerful supercomputers did 30 or 40 years ago. Ray Kurzweil has predicted that the computing power of a $4,000 computer will surpass that of a human brain in 2019 (20 quadrillion calculations per second).

What does it all mean for the future of AI?

To get a sense, I talked to some venture capitalists, whose profession it is to keep their eyes and minds trained on the future. Mark Gorenberg, Managing Director at Zetta Venture Partners, which is focused on investing in analytics and data startups, told me, “AI historically was not ingrained in the technology structure. Now we’re able to build on top of ideas and infrastructure that didn’t exist before. We’ve gone through the change of Big Data. Now we’re adding machine learning. AI is not the be-all and end-all; it’s an embedded technology. It’s like taking an application and putting a brain into it, using machine learning. It’s the use of cognitive computing as part of an application.” Another veteran venture capitalist, Promod Haque, senior managing partner at Norwest Venture Partners, explained to me, “If you can have machines automate the correlations and build the models, you save labor and increase speed. With tools like Watson, lots of companies can do different kinds of analytics automatically.”

Manoj Saxena, former head of IBM’s Watson efforts and now a venture capitalist, believes that analytics is moving to the “cognitive cloud,” where massive amounts of first- and third-party data will be fused to deliver real-time analysis and learning. Companies often find AI and analytics technology difficult to integrate, especially with the technology moving so fast; thus, he sees collaborations forming where companies will bring their people with domain knowledge, and emerging service providers will bring system and analytics people and technology. Cognitive Scale (a startup that Saxena has invested in) is one of the new service providers adding more intelligence into business processes and applications through a model they are calling “Cognitive Garages.” Using their “10-10-10 method,” they

  • deploy a cognitive cloud in 10 seconds,
  • build a live app in 10 hours, and
  • customize it using their client’s data in 10 days.

Saxena told me that the company is growing extremely rapidly.

I’ve been tracking AI and expert systems for years. What is most striking now is its genuine integration as an important strategic accelerator of Big Data and analytics. Applications such as USAA’s Eva, healthcare systems using IBM’s Watson, and WayBlazer, among others, are having a huge impact and are showing the way to the next generation of AI.
Brad Power has consulted and conducted research on process innovation and business transformation for the last 30 years. His latest research focuses on how top management creates breakthrough business models enabling today’s performance and tomorrow’s innovation, building on work with the Lean Enterprise Institute, Hammer and Company, and FCB Partners.


ORIGINAL: HBR
Brad Power, March 19, 2015

A Brain-Computer Interface That Lasts for Weeks

Photo: John Rogers/University of Illinois
Brain signals can be read using soft, flexible, wearable electrodes that stick onto and near the ear like a temporary tattoo and can stay on for more than two weeks even during highly demanding activities such as exercise and swimming, researchers say.
The invention could be used for a persistent brain-computer interface (BCI) to help people operate prosthetics, computers, and other machines using only their minds, scientists add.
For more than 80 years, scientists have analyzed human brain activity non-invasively by recording electroencephalograms (EEGs). Conventionally, this involves electrodes stuck onto the head with conductive gel. The electrodes typically cannot stay mounted to the skin for more than a few days, which limits widespread use of EEGs for applications such as BCIs.
Now materials scientist John Rogers at the University of Illinois at Urbana-Champaign and his colleagues have developed a wearable device that can help record EEGs uninterrupted for more than 14 days. Moreover, their invention survived despite showering, bathing, and sleeping. And it did so without irritating the skin. The two weeks might be “a rough upper limit, defined by the timescale for natural exfoliation of skin cells,” Rogers says. 
The device consists of a soft, foldable collection of gold electrodes only 300 nanometers thick and 30 micrometers wide mounted on a soft plastic film. This assemblage stays stuck to the body using electric forces known as van der Waals interactions—the same forces that help geckos cling to walls.
The electrodes are flexible enough to mold onto the ear and the mastoid process behind the ear. The researchers mounted the device onto three volunteers using tweezers. A spray-on bandage was applied once or twice a day to help the electrodes survive normal daily activities.
The electrodes on the mastoid process recorded brain activity while those on the ear were used as a ground wire. The electrodes were connected to a stretchable wire that could plug into monitoring devices. “Most of the experiments used devices mounted on just one side, but dual sides is certainly possible,” Rogers says.
The device helped record brain signals well enough for the volunteers to operate a text-speller by thought, albeit at a slow rate of 2.3 to 2.5 letters per minute.
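The paper's speller design isn't described here, but EEG-based interfaces generally start the same way: band-pass filter the raw electrode signal and extract a feature (such as band power) that a classifier can act on. The sketch below shows only that first step, on synthetic data; the sampling rate and band choices are assumptions.

```python
# Hedged sketch of the first stage of an EEG pipeline like the one described:
# band-pass filter a raw electrode signal and compute a simple band-power
# feature. The signal here is synthetic; real spellers add epoching and a
# classifier on top of features like this.
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
eeg = 10e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)  # fake 10 Hz rhythm + noise

# Band-pass 1-40 Hz to remove drift and high-frequency noise.
b, a = butter(4, [1, 40], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, eeg)

# Alpha-band (8-12 Hz) power, a common EEG feature.
freqs, psd = welch(filtered, fs=fs, nperseg=fs * 2)
band = (freqs >= 8) & (freqs <= 12)
alpha_power = np.trapz(psd[band], freqs[band])
print(f"alpha band power: {alpha_power:.3e} V^2")
```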
According to Rogers, this research: 
…could enable a persistent BCI that one could imagine might help disabled people, for whom mind control is an attractive option for operating prosthetics… It could also be useful for monitoring cognitive states—for instance, 

  • to see if people are paying attention while they’re driving a truck, 
  • flying an airplane, or 
  • operating complex machinery. 

It could also help monitor patterns of sleep to better understand sleep disorders such as sleep apnea, or for monitoring brain function during learning.

The scientists hope to improve the speed at which people can use this device to communicate mentally, which could expand its use into commercial wearable electronics. They also plan to explore devices that can operate wirelessly, Rogers says. The researchers detailed their findings online March 16 in the journal Proceedings of the National Academy of Sciences.
ORIGINAL: IEEE Spectrum
By Charles Q. Choi
16 Mar 2015 

Mitsubishi Quiets Car Noise With Machine Learning

ORIGINAL: IEEE Spectrum
By John Boyd
9 Mar 2015
Photo: John Boyd. A spectrogram of the sound the car’s microphone picks up when the driver is speaking [left]. A system developed using machine learning lets through only the person’s voice [right].
Mitsubishi Electric is claiming a breakthrough with its development of noise suppression technology to aid hands-free phone calls in the car and elsewhere. The technology improves the quality of the communication by filtering out almost all of the unwanted ambient sound that enters a far-field microphone while the user is speaking. Noises removed include rapidly changing sounds, which were until now difficult to deal with, such as passing cars, windshield wipers, and turn signals.

“Previously, only stationary noises such as road noise or the sound of the air conditioner were really dealt with, because the noise mixed with the speech could be easily predicted from past observations when the driver was not talking,” says Jonathan Le Roux, a principal researcher at Mitsubishi Electric Research Labs in Cambridge, Mass. “It is much harder to reduce noise when its characteristics are largely unpredictable.”

To better distinguish human speech from other sounds, the researchers are developing speech-enhancement systems that learn to exploit spectral and dynamic characteristics of human speech such as pitch and timbre.

These systems employ machine-learning methods based on deep neural networks. (Facebook’s AI chief, Yann LeCun, explained deep neural networks for us here.) The networks are trained to distinguish and suppress the noise and retain the clean speech using massive amounts of noise-contaminated speech data. The systems have millions of parameters that are optimized during training to reduce the difference between the output of the system and the original clean speech.

In order to reconstruct the clean speech, the neural networks construct special time-varying filters on the fly and apply them to the contaminated speech.

“The frequency contents of the speech and the noise can be intricately intermingled, and change abruptly,” says Le Roux. “Transient noises may last only tens of milliseconds, while speech changes from one phoneme to another every 100 to 200 milliseconds. So to effectively remove the noise, the filter needs to have a fine frequency resolution and be updated very rapidly.”
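What Le Roux describes corresponds to what the speech-enhancement literature calls time-frequency masking: for every short time frame and frequency bin, the network predicts how much of the mixture to keep, and that mask is applied to the noisy spectrogram. The sketch below shows only the mask-application step, with an "oracle" mask built from a known clean signal standing in for a trained network's output; the signals and frame sizes are illustrative.

```python
# Sketch of the time-varying filtering Le Roux describes: a per-frame,
# per-frequency mask is applied to the noisy signal's short-time spectrum
# and the result is resynthesized. An oracle mask built from the known clean
# signal stands in here for what a trained network would predict.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
clean = 0.5 * np.sin(2 * np.pi * 440 * t)              # stand-in for speech
noise = 0.3 * np.random.randn(t.size)                  # stand-in for cabin noise
noisy = clean + noise

# Short-time Fourier transform: ~32 ms frames, updated every 8 ms.
f, frames, S_noisy = stft(noisy, fs=fs, nperseg=512, noverlap=384)
_, _, S_clean = stft(clean, fs=fs, nperseg=512, noverlap=384)

# The "filter": a ratio mask in [0, 1] for every time-frequency cell.
mask = np.abs(S_clean) / (np.abs(S_clean) + np.abs(S_noisy - S_clean) + 1e-8)
S_enhanced = mask * S_noisy

_, enhanced = istft(S_enhanced, fs=fs, nperseg=512, noverlap=384)
print("residual noise energy:", float(np.mean((enhanced[: t.size] - clean) ** 2)))
```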

In tests, Le Roux says they were able to cancel out 96 percent of the ambient noise compared to just 78 percent achieved by conventional methods.

This technology fundamentally differs in approach and aim from active noise-cancellation methods such as those in anti-noise headphones, which try to physically remove ambient noise in a user’s environment. Examples of these methods applied in the car are Bose’s engine-noise cancellation and Harman’s road noise suppression.

Mitsubishi’s goal is to eliminate the noise picked up by the microphone while the user is speaking during telephone calls. Although active noise-cancellation methods could indirectly help with this problem by reducing noise in the cabin, Mitsubishi says they can only suppress low-frequency noise.

“We want to make the driver’s speech more clear and intelligible to the person on the other end of the call by cancelling as much noise as possible, not just low-frequency noise,” says Le Roux. “Our technology will also be useful for hands-free command and control situations, such as when using Apple’s Siri or Google’s Voice Search in smart phones, as well as in call centers that use speech recognition to handle common requests.”

Mitsubishi plans to launch the technology in 2018 in its line of automotive navigation and communication devices.

 


Nobel Prize-Winning Economist Reveals Why Robots Really Are Coming For Your Job

REUTERS/Robert Pratta
Nobel prize-winning economist Joe Stiglitz has a new NBER paper out that comes to a worrying conclusion — the robots really are coming for your job.
In economic theory, innovation should make workers more efficient (they can produce more for less), but it comes at the cost of lower-skilled jobs, as fewer people are required to produce the same amount of output.
However, again in theory, the gains made by workers who remain in employment should be greater than the losses incurred by those who lose their jobs, and their gains help drive more skilled job creation in other industries.
Unfortunately, theory doesn’t always fit neatly when confronted by reality. And Stiglitz claims this is exactly what has happened with innovation. As he puts it (emphasis added):
The statement that such skill-biased innovation could be welfare enhancing is usually taken to mean that the gains of the skilled workers are more than sufficient to compensate the losses of the unskilled workers. But while the skilled workers could compensate the unskilled workers, such compensation seldom occurs.
Stiglitz argues that truly disruptive innovations, of the kind that drove the Industrial Revolution, require widespread economic restructuring to allow those who are being pushed out of an industry to locate alternatives. Sadly, he says, “markets often do not manage such restructurings well” leading to long periods of high unemployment and increased inequality.
The economist claims that these market failures helped create the conditions for the Great Depression. Banks and businesses failed to anticipate the collapse of rural workers’ incomes driven by technological changes in the farming sector in the 1920s. The legacy of this was a debt overhang that prevented these workers from moving to cities in order to gain new skills and ultimately caused a crash in demand.
Photo: AP Images. Joseph Stiglitz
If left unchecked, the decline in manufacturing during the present era could be having a similarly worrying impact. Worse, without government introducing policy to counteract the impact of labour-saving innovation, low-income workers could end up being worse off “even in the longer run,” with lower wages and higher unemployment concentrated among some of the most vulnerable groups in society.
Indeed, this may help explain why wages of low-skilled workers in the US have stagnated for more than 40 years.
For more than a century, the Luddites, who believed that modern machines would lead to unemployment and impoverishment, have been held up as an example of how small-minded traditionalists hold back social and economic progress. In short, Stiglitz’s message is that they were in fact right.
NOV. 13, 2014, 2:00 PM

What will happen when the internet of things becomes artificially intelligent?

ORIGINAL: The Guardian
Stephen Balkam
Friday 20 February 2015
From Stephen Hawking to Spike Jonze, the existential threat posed by the onset of the ‘conscious web’ is fuelling much debate – but should we be afraid?

Who’s afraid of artificial intelligence? Quite a few notable figures, it turns out. Photograph: Alamy

When Stephen Hawking, Bill Gates and Elon Musk all agree on something, it’s worth paying attention.

All three have warned of the potential dangers that artificial intelligence, or AI, can bring. Hawking, the world’s foremost physicist, said that the full development of artificial intelligence could spell the end of the human race. Musk, the tech entrepreneur who brought us PayPal, Tesla and SpaceX, described artificial intelligence as “our biggest existential threat” and said that playing around with AI was like “summoning the demon”. Gates, who knows a thing or two about tech, puts himself in the concerned camp when it comes to machines becoming too intelligent for us humans to control.

What are these wise souls afraid of? AI is broadly described as the ability of computer systems to ape or mimic intelligent human behavior. This could be anything from recognizing speech and perceiving images to making decisions and translating languages. Examples run from Deep Blue, which beat chess champion Garry Kasparov, to the supercomputer Watson, which outguessed the world’s best Jeopardy! player. Fictionally, we have Her, Spike Jonze’s movie that depicts the protagonist, played by Joaquin Phoenix, falling in love with his operating system, seductively voiced by Scarlett Johansson. And coming soon, Chappie stars a stolen police robot that is reprogrammed to make conscious choices and to feel emotions.

An important component of AI, and a key element in the fears it engenders, is the ability of machines to take action on their own without human intervention. This could take the form of a computer reprogramming itself in the face of an obstacle or restriction. In other words, to think for itself and to take action accordingly.

Needless to say, there are those in the tech world who have a more sanguine view of AI and what it could bring. Kevin Kelly, the founding editor of Wired magazine, does not see the future inhabited by HALs – the homicidal computer on board the spaceship in 2001: A Space Odyssey. Kelly sees a more prosaic world that looks more like Amazon Web Services: a cheap, smart utility that is also exceedingly boring simply because it will run in the background of our lives. He says AI will enliven inert objects in the way that electricity did over 100 years ago. “Everything that we formerly electrified, we will now cognitize.” And he sees the business plans of the next 10,000 startups as easy to predict: “Take X and add AI.”

While he acknowledges the concerns about artificial intelligence, Kelly writes: “As AI develops, we might have to engineer ways to prevent consciousness in them – our most premium AI services will be advertised as consciousness-free.” (my emphasis).

Running parallel to the extraordinary advances in the field of AI is the even bigger development of what is loosely called the internet of things (IoT). This can be broadly described as the emergence of countless objects, animals and even people with uniquely identifiable, embedded devices that are wirelessly connected to the internet. These ‘nodes’ can send or receive information without the need for human intervention. There are estimates that there will be 50 billion connected devices by 2020. Current examples of these smart devices include Nest thermostats, wifi-enabled washing machines and increasingly connected cars with built-in sensors that can avoid accidents and even park for you.

The US Federal Trade Commission is sufficiently concerned about the security and privacy implications of the Internet of Things that it has conducted a public workshop and released a report urging companies to adopt best practices, “bake in” procedures to minimise data collection, and ensure consumer trust in the new networked environment.


Tim O’Reilly, coiner of the phrase “Web 2.0”, sees the internet of things as the most important online development yet. He thinks the name is misleading – that IoT is “really about human augmentation”. O’Reilly believes that we should “expect our devices to anticipate us in all sorts of ways”. He uses the “intelligent personal assistant”, Google Now, to make his point.

So what happens when these millions of embedded devices connect to artificially intelligent machines? What does AI + IoT = ? Will it mean the end of civilisation as we know it? Will our self-programming computers send out hostile orders to the chips we’ve added to our everyday objects? Or is this just another disruptive moment, similar to the harnessing of steam or the splitting of the atom? An important step in our own evolution as a species, but nothing to be too concerned about?

The answer may lie in some new thinking about consciousness. As a concept, as well as an experience, consciousness has proved remarkably hard to pin down. We all know that we have it (or at least we think we do), but scientists are unable to prove that we have it or, indeed, exactly what it is and how it arises.

Dictionaries describe consciousness as the state of being awake and aware of our own existence. It is an “internal knowledge” characterized by sensation, emotions and thought.

Just over 20 years ago, an obscure Australian philosopher named David Chalmers created controversy in philosophical circles by raising what became known as the Hard Problem of Consciousness. He asked how the grey matter inside our heads gave rise to the mysterious experience of being. What makes us different to, say, a very efficient robot, one with, perhaps, artificial intelligence? And are we humans the only ones with consciousness?

  • Some scientists propose that consciousness is an illusion, a trick of the brain.
  • Still others believe we will never solve the consciousness riddle.
  • But a few neuroscientists think we may finally figure it out, provided we accept the remarkable idea that computers or the internet might one day become conscious.

In an extensive Guardian article, the author Oliver Burkeman wrote how Chalmers and others put forth the notion that all things in the universe might be (or potentially be) conscious, “providing the information it contains is sufficiently interconnected and organized.” So could an iPhone or a thermostat be conscious? And, if so, could we be in the midst of a ‘Conscious Web’?

Back in the mid-1990s, the author Jennifer Cobb Kreisberg wrote an influential piece for Wired, A Globe, Clothing Itself with a Brain. In it she described the work of a little-known Jesuit priest and paleontologist, Teilhard de Chardin, who 50 years earlier described a global sphere of thought, the “living unity of a single tissue” containing our collective thoughts, experiences and consciousness.

Teilhard called it the “noosphere” (noo is Greek for mind). He saw it as the evolutionary step beyond our geosphere (physical world) and biosphere (biological world). The informational wiring of a being, whether it is made up of neurons or electronics, gives birth to consciousness. As the diversification of nervous connections increases, de Chardin argued, evolution is led towards greater consciousness. Or as John Perry Barlow, Grateful Dead lyricist, cyber advocate and Teilhard de Chardin fan, said: “With cyberspace, we are, in effect, hard-wiring the collective consciousness.”

So, perhaps we shouldn’t be so alarmed. Maybe we are on the cusp of a breakthrough not just in the field of artificial intelligence and the emerging internet of things, but also in our understanding of consciousness itself. If we can resolve the privacy, security and trust issues that both AI and the IoT present, we might make an evolutionary leap of historic proportions. And it’s just possible Teilhard’s remarkable vision of an interconnected “thinking layer” is what the web has been all along.

• Stephen Balkam is CEO of the Family Online Safety Institute in the US


Google Builds An AI That Can Learn And Master Video Games

Google has built an artificial intelligence system that can learn – and become amazing at – video games all on its own, given no commands but a simple instruction to play titles. The project, detailed by Bloomberg, is the result of research from the London-based DeepMind AI startup Google acquired in a deal last year, and involves 49 games from the Atari 2600 that likely provided the first video game experience for many of those reading this.
While this is an amazing announcement for so many reasons, the most impressive part might be that the AI not only matched wits with human players in most cases, but actually went above and beyond the best scores of expert meat-based players in 29 of the 49 games it learned, and bested existing computer-based players in a whopping 43.
Google and DeepMind aren’t looking to just put their initials atop the best score screens of arcades everywhere with this project – the long-term goal is to create the building blocks for optimal problem solving given a set of criteria, which is obviously useful in any place Google might hope to use AI in the future, including in self-driving cars. Google is calling this the “first time anyone has built a single learning system that can learn directly from experience,” according to Bloomberg, which has potential in a virtually limitless number of applications.
It’s still an early step, however, and Google expects it’ll be decades before it achieves its goal of building general-purpose machines that have their own intelligence and can respond to a range of situations. Still, it’s a system that doesn’t require the kind of arduous training and hand-holding to learn what it’s supposed to do, which is a big leap even from things like IBM’s Watson supercomputer.
Next up for the arcade AI is mastering the Doom-era 3D virtual worlds, which should help the AI edge closer to mastering similar tasks in the real world, like driving a car. And there’s one more detail here that may keep you up at night: Google trained the AI to get better at the Atari games it mastered using a virtual take on operant conditioning – ‘rewarding’ the computer for successful behavior the way you might a dog.
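DeepMind's actual system (a deep Q-network trained on raw pixels) is far more elaborate, but the reward-driven loop the article describes can be seen in a tiny tabular Q-learning toy; the corridor task, rewards, and hyperparameters below are invented for illustration.

```python
# Tiny tabular Q-learning sketch of the reward-driven loop described above.
# DeepMind's deep Q-network replaces this table with a neural network and the
# toy corridor with raw Atari pixels; this shows only the underlying idea.
import random

N_STATES, GOAL = 6, 5            # a 6-cell corridor; reaching the last cell pays off
ACTIONS = [-1, +1]               # move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore occasionally, otherwise take the currently best-valued action.
        action = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0          # the "treat" for success
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

print("learned policy:", ["right" if q[(s, 1)] >= q[(s, -1)] else "left" for s in range(GOAL)])
```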
ORIGINAL: Tech Crunch