That mythology, in turn, has spurred a reactionary, perpetual spasm from people who are horrified by what they hear. You’ll have a figure say, “The computers will take over the Earth, but that’s a good thing, because people had their chance and now we should give it to the machines.” Then you’ll have other people say, “Oh, that’s horrible, we must stop these computers.” Most recently, some of the most beloved and respected figures in the tech and science world, including Stephen Hawking and Elon Musk, have taken that position of: “Oh my God, these things are an existential threat. They must be stopped.”
LifeLearn Sofie is an intelligent treatment support tool for veterinarians of all backgrounds and levels of experience. Sofie is powered by IBM Watson™, the world’s leading cognitive computing system. She can understand and process natural language, enabling interactions that are more aligned with how humans think and interact.
Among the giants of tech, this is an increasingly common goal. Google’s Go programming language aims for a similar balance of power and simplicity, as does the Swift language that Apple recently unveiled. In the past, the programming world was split in two: low-level languages built for machine performance, and high-level languages built for programmer convenience.
But now, these two worlds are coming together. “D is similar to C++, but better,” says Brad Anderson, a longtime C++ programmer from Utah who has been using D as well. “It’s high performance, but it’s expressive. You can get a lot done without very much code.”
The Interpreted Language That Isn’t
Amazon Echo is a speaker that has a voice assistant built in. If you ask it a question, it’s got an answer. If you tell it to do stuff, it complies. Well, this is different.
Echo is an always-on speaker that you plop into a corner of your house, turning it into the futuristic home we’ve been dreaming about. It’s like Jarvis, or the assistant computer from Her.
When you say the wake word “Alexa,” it starts listening, and you can ask it for information or to perform any of a number of tasks. For example, you can ask it for the weather, to play a particular style of music, or to add something to your calendar.
Of course, voice assistants aren’t an entirely new concept, but building the technology into a home appliance rather than into a smartphone makes a lot of sense and gives the technology a more conversational and natural feel. To that end, it’s got what Amazon calls “far-field recognition,” which allows you to talk to it from across the room. That eliminates the clumsiness of assistants like Siri and Google Now, which you have to be right on top of.
Besides being an assistant, Echo is also a little Bluetooth speaker with 360-degree sound. It stands 9 inches tall and has a 2-inch tweeter and a 2.5-inch woofer.
If you’re not near the speaker, you can also access it using an app for Android and Fire OS as well as through web browsers on iOS.
Right now, Echo is available by invitation only. It costs $200 for regular people and $100 for people who have an Amazon Prime account. [Amazon]
ORIGINAL: IEEE Spectrum
By Evan Ackerman
Posted 4 Nov 2014
The WDS Virtual Agent taps into intelligence gleaned from terabytes of data that the company keeps about real customer interactions. Armed with this info, the virtual agent can more reliably solve problems itself, as it learns through experience. The more customer care data it is exposed to, the more effective it becomes in delivering relevant responses to real customer questions.
Of course, AI proponents have been saying this for decades, so the proof will be in how well it works.
It may be a long time before we get virtual AI companions like in the movie Her, where actor Joaquin Phoenix’s character falls in love with a Siri-like AI. But virtual assistants are becoming popular because, Xerox says, they cost about a fiftieth of what a human being costs.
Xerox has applied research in AI, machine learning, and natural language processing from PARC (formerly Palo Alto Research Center) and the Xerox Research Centre Europe. The AI can understand, diagnose, and solve customer problems — without being specifically programmed to give rote responses. It analyzes and learns from human agents.
“Because many first-generation virtual agents rely on basic keyword searches, they aren’t able to understand the context of a customer’s question like a human agent can,” said WDS’ Nick Gyles, chief technology officer, in a statement. “The WDS Virtual Agent has the confidence to solve problems itself because it learns just like we do, through experience. The more care data it’s exposed to, the more effective it becomes in delivering relevant and proven responses.”
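The contrast Gyles draws, between basic keyword search and an agent that learns from accumulated care data, can be sketched in a few lines. The toy agent below is purely an illustration, not the WDS system: it accumulates word counts from resolved transcripts and answers a new query with the resolution whose past queries best overlap it.

```python
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class ExperienceAgent:
    """Toy agent that 'learns through experience' from resolved transcripts."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # resolution -> word frequencies

    def learn(self, query, resolution):
        self.word_counts[resolution].update(tokenize(query))

    def answer(self, query):
        words = tokenize(query)
        # Score each known resolution by how often its past queries used these words.
        scores = {res: sum(counts[w] for w in words)
                  for res, counts in self.word_counts.items()}
        return max(scores, key=scores.get) if scores else None

agent = ExperienceAgent()
agent.learn("my phone will not turn on", "battery_reset")
agent.learn("phone battery drains fast", "battery_reset")
agent.learn("cannot connect to wifi at home", "network_setup")
agent.learn("wifi keeps dropping", "network_setup")

print(agent.answer("the wifi connection is dropping again"))  # network_setup
```

A pure keyword lookup with no training data would find nothing to match “dropping again” against; the learned profiles recover the right resolution because similar complaints were seen before. Real systems use far richer statistical models, but the more transcripts this toy sees, the better its answers get, which is the property the quote describes.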
Xerox captures data like customer sentiment, described symptoms, problem types, root causes, and the techniques agents use to resolve customer problems. That data has been there for a while; it just takes an AI smart enough to absorb it all.
“We’ve found a way for organizations to unlock that data potential to deliver benefit across their wider care channels,” Gyles said. “No other virtual agent technology is able to deliver this consistency and connect intelligence from multiple sources to ensure that the digital experience is as reliable and authentic as a human one.”
Xerox is delivering the WDS Virtual Agent as a cloud-based solution. It will be available in the fourth quarter.
“Our technology helps overcome one of the key barriers brands face in trying to deliver a truly omni-channel care experience: the ability to be consistent. Digital care tools often lag behind the intelligence that resides in the contact center, with outdated content or no awareness of new problems. Our research in artificial intelligence is changing this,” said Jean-Michel Renders, senior scientist at XRCE, in a statement. “With our machine learning technology, the WDS Virtual Agent has the ability to learn how to solve new problems as they arise across a company’s wider care channels.”
The overeager adoption of big data is likely to result in catastrophes of analysis comparable to a national epidemic of collapsing bridges. Hardware designers creating chips based on the human brain are engaged in a faith-based undertaking likely to prove a fool’s errand.
Despite recent claims to the contrary, we are no further along with computer vision than we were with physics when Isaac Newton sat under his apple tree.
The course aims to provide a foundation in artificial intelligence techniques for planning, with an overview of the wide spectrum of different problems and approaches, including their underlying theory and their applications. It will allow you to:
Planning is a fundamental part of intelligent systems. In this course, for example, you will learn the basic algorithms that are used in robots to deliberate over a course of actions to take. Simpler, reactive robots don’t need this, but if a robot is to act intelligently, this type of reasoning about actions is vital.
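As a toy illustration of the deliberation described above, here is a minimal forward-search planner. The states and actions are invented for the example; real planners use much richer representations (such as STRIPS/PDDL) and heuristic search, which the course covers.

```python
from collections import deque

# Hypothetical world model: state -> {action: resulting state}.
actions = {
    "at_door":   {"open_door": "door_open"},
    "door_open": {"enter_room": "in_room"},
    "in_room":   {"pick_up": "holding_item"},
}

def plan(start, goal):
    """Breadth-first search over states; returns a shortest action sequence."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for act, nxt in actions.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [act]))
    return None  # no plan exists

print(plan("at_door", "holding_item"))  # ['open_door', 'enter_room', 'pick_up']
```

Because the search is breadth-first, the first plan found is a shortest one; a reactive robot would instead hard-wire a response to each situation, with no such look-ahead over future states.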
Week 5: Plan Execution and Applications
The MOOC is based on a Master’s-level course at the University of Edinburgh but is designed to be accessible at several levels of engagement, from an “Awareness Level”, through the core “Foundation Level” requiring a basic knowledge of logic and mathematical reasoning, to a more involved “Performance Level” requiring programming and other assignments.
Five weeks of study comprising 10 hours of video lecture material and special features videos. Quizzes and assessments throughout the course will assist in learning. Some weeks will involve recommended readings. Discussion on the course forum and via other social media will be encouraged. A mid-course catch up break week and a final week for exams and completion of assignments allows for flexibility in study.
You can engage with the course at a number of levels to suit your interests and the time you have available:
Neuromorphic computer chips meant to mimic the neural network architecture of biological brains have generally fallen short of their wetware counterparts in efficiency—a crucial factor that has limited practical applications for such chips. That could be changing. At a power density of just 20 milliwatts per square centimeter, IBM’s new brain-inspired chip comes tantalizingly close to such wetware efficiency. The hope is that it could bring brainlike intelligence to the sensors of smartphones, smart cars, and—if IBM has its way—everything else.
The latest IBM neurosynaptic computer chip, called TrueNorth, consists of 1 million programmable neurons and 256 million programmable synapses conveying signals between the digital neurons. Each of the chip’s 4,096 neurosynaptic cores includes the entire computing package: memory, computation, and communication.
Such architecture helps to bypass the bottleneck in traditional von Neumann computing, where program instructions and operation data cannot pass through the same route simultaneously.
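As a rough illustration of colocated memory and compute (TrueNorth’s actual neuron model and parameters are more elaborate and not reproduced here), a leaky integrate-and-fire neuron keeps its state, a membrane potential, right beside the rule that updates it, rather than shuttling instructions and data over a shared bus:

```python
# Toy leaky integrate-and-fire neuron. The membrane potential (state) lives
# beside the update rule (compute), so nothing crosses a shared instruction/
# data path -- the bottleneck the neurosynaptic design avoids. The threshold
# and leak values here are invented for the example.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    v = 0.0          # membrane potential: the neuron's local memory
    spikes = []
    for step, current in enumerate(input_current):
        v = v * leak + current   # leak a little, then integrate local input
        if v >= threshold:       # fire when the threshold is crossed...
            spikes.append(step)
            v = 0.0              # ...and reset
    return spikes

print(simulate_lif([0.5, 0.5, 0.5, 0.0, 0.6, 0.6]))  # [2, 5]
```

The computation is also event-driven: a neuron only produces output (a spike) when its accumulated input crosses threshold, which is one reason such chips can run at milliwatt power levels.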
“This is literally a supercomputer the size of a postage stamp, light like a feather, and low power like a hearing aid,” says Dharmendra Modha, IBM fellow and chief scientist for brain-inspired computing at IBM Research-Almaden, in San Jose, Calif.
Such chips can emulate the human brain’s ability to recognize different objects in real time; TrueNorth showed it could distinguish among pedestrians, bicyclists, cars, and trucks. IBM envisions its new chips working together with traditional computing devices as hybrid machines, providing a dose of brainlike intelligence. The chip’s architecture, developed together by IBM and Cornell University, was first detailed in August in the journal Science.
In February 2011 an artificially intelligent computer system called IBM Watson astonished audiences worldwide by beating the two all-time greatest Jeopardy champions at their own game.
Thanks to its ability to apply
Watson represented an important milestone in the development of artificial intelligence, but the field has been progressing rapidly – particularly with regard to natural language processing and machine learning.
In 2012, Google used 16,000 computer processors to build a simulated brain that could correctly identify cats in YouTube videos; the Kinect, which provides a 3D body-motion interface for Microsoft’s Xbox, uses algorithms that emerged from artificial intelligence research, as does the iPhone’s Siri virtual personal assistant.
Today, a new artificial intelligence computing system has been unveiled, which promises to transform the global workforce. Named ‘Amelia’ after American aviator and pioneer Amelia Earhart, the system is able to shoulder the burden of often tedious and laborious tasks, allowing human co-workers to take on more creative roles.
“Watson is perhaps the best data analytics engine that exists on the planet; it is the best search engine that exists on the planet; but IBM did not set out to create a cognitive agent. It wanted to build a program that would win Jeopardy, and it did that,” said Chetan Dube, chief executive officer of IPsoft, the company behind Amelia.
“Amelia, on the other hand, started out not with the intention of winning Jeopardy, but with the pure intention of answering the question posed by Alan Turing in 1950 – can machines think?”
Amelia learns by following the same written instructions as her human colleagues, but is able to absorb information in a matter of seconds. She understands the full meaning of what she reads rather than simply recognising individual words. This involves
When exposed to the same information as any new employee in a company, Amelia can quickly apply her knowledge to solve queries in a wide range of business processes. Just like any smart worker she learns from her colleagues and, by observing their work, she continually builds her knowledge.
While most ‘smart machines’ require humans to adapt their behaviour in order to interact with them, Amelia is intelligent enough to interact like a human herself. She speaks more than 20 languages, and her core knowledge of a process needs only to be learned once for her to be able to communicate with customers in their language.
Independently, rather than through time-intensive programming, Amelia creates her own ‘process map’ of the information she is given so that she can work out for herself what actions to take depending on the problem she is solving.
“Intelligence is the ability to acquire and apply knowledge. If a system claims to be intelligent, it must be able to read and understand documents, and answer questions on the basis of that. It must be able to understand processes that it observes. It must be able to solve problems based on the knowledge it has acquired. And when it cannot solve a problem, it must be capable of learning the solution through noticing how a human did it,” said Dube.
IPsoft has been working on this technology for 15 years with the aim of developing a platform that does not simply mimic human thought processes but can comprehend the underlying meaning of what is communicated – just like a human.
Just as machines transformed agriculture and manufacturing, IPsoft believes that cognitive technologies will drive the next evolution of the global workforce, so that in the future companies will have digital workforces that comprise a mixture of human and virtual employees.
Amelia has already been trialled within a number of Fortune 1000 companies, in areas such as manning technology help desks, procurement processing, financial trading operations support and providing expert advice for field engineers.
In each of these environments, she has learnt not only from reading existing manuals and situational context but also by observing and working with her human colleagues and discerning for herself a map of the business processes being followed.
In a help desk situation, for example, Amelia can understand what a caller is looking for, ask questions to clarify the issue, find and access the required information and determine which steps to follow in order to solve the problem.
As a knowledge management advisor, she can help engineers working in remote locations who are unable to carry detailed manuals, by diagnosing the cause of failed machinery and guiding them towards the best steps to rectifying the problem.
During these trials, Amelia was able to go from solving very few queries independently to 42 per cent of the most common queries within one month. By the second month she could answer 64 per cent of those queries independently.
“That’s a true learning cognitive agent. Learning is the key to the kingdom, because humans learn from experience. A child may need to be told five times before they learn something, but Amelia needs to be told only once,” said Dube.
“Amelia is that Mensa kid, who personifies a major breakthrough in cognitive technologies.”
Analysts at Gartner predict that, by 2017, managed services offerings that make use of autonomics and cognitive platforms like Amelia will drive a 60 per cent reduction in the cost of services, enabling organisations to apply human talent to higher level tasks requiring creativity, curiosity and innovation.
IPsoft even has plans to start embedding Amelia into humanoid robots such as SoftBank’s Pepper, Honda’s Asimo or Rethink Robotics’ Baxter, allowing her to take advantage of their mechanical functions.
“The robots have got a fair degree of sophistication in all the mechanical functions – the ability to climb up stairs, the ability to run, the ability to play ping pong. What they don’t have is the brain, and we’ll be supplementing that brain part with Amelia,” said Dube.
“I am convinced that in the next decade you’ll pass someone in the corridor and not be able to discern if it’s a human or an android.”
Given the premise of IPsoft’s artificial intelligence system, it seems logical that the ultimate measure of Amelia’s success would be passing the Turing Test – which sets out to see whether humans can discern whether they are interacting with a human or a machine.
Earlier this year, a chatbot named Eugene Goostman became the first machine to pass the Turing Test by convincingly imitating a 13-year-old boy. In a five-minute keyboard conversation with a panel of human judges, Eugene managed to convince 33 per cent that it was human.
Interestingly, however, IPsoft believes that the Turing Test needs reframing, to redefine what it means to ‘think’. While Eugene was able to imitate natural language, he was only mimicking understanding. He did not learn from the interaction, nor did he demonstrate problem solving skills.
“Natural language understanding is a big step up from parsing. Parsing is syntactic, understanding is semantic, and there’s a big cavern between the two,” said Dube.
“The aim of Amelia is not just to get an accolade for managing to fool one in three people on a panel. The assertion is to create something that can answer to the fundamental need of human beings – particularly after a certain age – of companionship. That is our intent.”
In the past year or two, big companies have been locked in a land grab for talent, paying big bucks for startups and even hiring away deep learning experts from each other. But this event, focused mostly on startups, including several that demonstrated their products before the panel, also revealed there’s still a lot of entrepreneurial activity. In particular, several companies aim to democratize deep learning by offering it as a service or coming up with cheaper hardware to make it more accessible to businesses.
Jurvetson explained why deep learning has pushed the boundaries of AI so much further recently.
Adam Berenzweig, cofounder and CTO of image recognition firm Clarifai and former engineer at Google for 10 years, made the case that deep learning is “adding a new primary sense to computing” in the form of useful computer vision. “Deep learning is forming that bridge between the physical world and the world of computing,” he said.
And it’s allowing that to happen in real time. “Now we’re getting into a world where we can take measurements of the physical world, like pixels in a picture, and turn them into symbols that we can sort,” he said. Clarifai has been working on taking an image and, in as little as 80 milliseconds, producing a meaningful description and showing very similar images.
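The pixels-to-symbols pipeline Berenzweig describes typically ends with an embedding vector per image, and “very similar images” are the nearest neighbors under a similarity measure such as cosine similarity. The sketch below uses made-up three-dimensional vectors in place of real deep-network features:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: in practice each vector comes from a deep
# network (e.g. its penultimate layer) and has hundreds of dimensions.
catalog = {
    "beach_sunset.jpg":  [0.9, 0.1, 0.0],
    "mountain_hike.jpg": [0.1, 0.9, 0.2],
    "ocean_waves.jpg":   [0.5, 0.5, 0.5],
}
query = [0.85, 0.15, 0.05]  # embedding of a newly uploaded photo

ranked = sorted(catalog, key=lambda name: cosine(query, catalog[name]),
                reverse=True)
print(ranked[0])  # beach_sunset.jpg
```

Once images live in such a vector space, similarity search reduces to fast nearest-neighbor lookup, which is how a description and visually similar results can come back in tens of milliseconds.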
One interesting application relevant to advertising and marketing, he noted: Once you can recognize key objects in images, you can target ads not just on keywords but on objects in an image.
Even more sweeping, said Naveen Rao, cofounder and CEO of deep-learning hardware and software startup Nervana Systems and former researcher in neuromorphic computing at Qualcomm, deep learning is “that missing link between computing and what the brain does.” Instead of doing specific computations very fast, as conventional computers do, “we can start building new hardware to take computer processing in a whole new direction,” assessing probabilities, like the brain does. “Now there’s actually a business case for this kind of computing,” he said.
And not just for big businesses. Elliot Turner, founder and CEO of AlchemyAPI, a deep-learning platform in the cloud, said his company’s mission is to “democratize deep learning.” The company is working in 10 industries from advertising to business intelligence, helping companies apply it to their businesses. “I look forward to the day that people actually stop talking about deep learning, because that will be when it has really succeeded,” he added.
Despite the obvious advantages of large companies such as Google, which have untold amounts of both data and computer power that deep learning requires to be useful, startups can still have a big impact, a couple of the panelists said. “There’s data in a lot of places. There’s a lot of nooks and crannies that Google doesn’t have access to,” Berenzweig said hopefully. “Also, you can trade expertise for data. There’s also a question of how much data is enough.”
Turner agreed. “It’s not just a matter of stockpiling data,” he said. “Better algorithms can help an application perform better.” He noted that even Facebook, despite its wealth of personal data, found this in its work on image recognition.
Those algorithms may have broad applicability, too. Even if they’re initially developed for specific applications such as speech recognition, it looks like they can be used on a wide variety of applications. “These algorithms are extremely fungible,” said Rao. And he said companies such as Google aren’t keeping them as secret as expected, often publishing them in academic journals and at conferences, though Berenzweig noted that “it takes more than what they publish to do what they do well.”
For all that, it’s not yet clear how closely deep learning systems will actually emulate the brain, even if they behave intelligently. But Ilya Sutskever, research scientist at Google Brain and a protege of Geoffrey Hinton, the University of Toronto deep learning guru who has championed the field since the 1980s and now works part-time at Google, said it almost doesn’t matter. “You can still do useful predictions” with such systems. And while the learning principles for dealing with all the unlabeled data out there remain primitive, he said he and many others are working on this and will likely make even more progress.
Rao said he’s unworried that we’ll end up creating some kind of alien intelligence that could run amok if only because advances will be driven by market needs. Besides, he said, “I think a lot of the similarities we’re seeing in computation and brain functions is coincidental. It’s driven that way because we constrain it that way.”
OK, so how are these companies planning to make money on this stuff? Jurvetson wondered. Of course, we’ve already seen improvements in speech and image recognition that make smartphones and apps more useful, leading more people to buy them. “Speech recognition is useful enough that I use it,” said Sutskever. “I’d be happy if I didn’t press a button ever again. And language translation could have a very large impact.”
Beyond that, Berenzweig said, “we’re looking for the low-hanging fruit,” common use cases such as visual search for shopping, organizing your personal photos, and various business niches such as security.
[2014-09-01] Neurons in human skin perform advanced calculations previously believed possible only in the brain, according to a study from Umeå University in Sweden published in the journal Nature Neuroscience.