Category: IBM


Why Apple Joined Rivals Amazon, Google, Microsoft In AI Partnership

By Hugo Angel,

Apple CEO Tim Cook (Photo credit: David Paul Morris/Bloomberg)

Apple is pushing past its famous secrecy for the sake of artificial intelligence.

In December, the Cupertino tech giant quietly published its first AI research paper. Now, it’s joining the Partnership on AI to Benefit People and Society, an industry nonprofit group founded by some of its biggest rivals, including Microsoft, Google and Amazon.

On Friday, the partnership announced that Apple’s head of advanced development for Siri, Tom Gruber, is joining its board. Gruber has been at Apple since 2010, when the iPhone maker bought Siri, the company he cofounded and where he served as CTO.

“We’re glad to see the industry engaging on some of the larger opportunities and concerns created with the advance of machine learning and AI,” wrote Gruber in a statement on the nonprofit’s website. “We believe it’s beneficial to Apple, our customers, and the industry to play an active role in its development and look forward to collaborating with the group to help drive discussion on how to advance AI while protecting the privacy and security of consumers.”

Other members of the board include

  • Greg Corrado from Google,
  • Ralf Herbrich from Amazon,
  • Eric Horvitz from Microsoft,
  • Yann LeCun from Facebook, and
  • Francesca Rossi from IBM.

Outside of large companies, the group announced it’s also adding members from the

  • American Civil Liberties Union,
  • OpenAI,
  • MacArthur Foundation,
  • Peterson Institute for International Economics,
  • Arizona State University and the
  • University of California, Berkeley.

The group was formally announced in September.

Board member Horvitz, who is director of Microsoft Research, said the members of the group started meeting with each other at various AI conferences. They were already close colleagues in the field and they thought they could start working together to discuss emerging challenges and opportunities in AI.

“We believed there were a lot of things companies could do together on issues and challenges in the realm of AI and society,” Horvitz said in an interview. “We don’t see these as areas for competition but for rich cooperation.”

The organization will work to develop best practices and educate the public around AI. Horvitz said the group will tackle, for example, critical areas like health care and transportation. The group will look at the potential for bias in AI, since experiments have shown that the way researchers train AI algorithms can lead to gender and racial bias. The nonprofit will also try to develop standards around human-machine collaboration, for example to deal with questions like when a self-driving car should hand off control to the driver.

“I think there’s a realization that AI will touch society quite deeply in the coming years in powerful and nuanced ways,” Horvitz said. “We think it’s really important to involve the public as well as experts. Some of these directions have no simple answer. It can’t come from a company. We need to have multiple constituents checking in.”

The AI community has been critical of Apple’s secrecy for several years; that secrecy has hurt the company’s recruiting efforts for AI talent. The company has been falling behind in some of the major advancements in AI, especially as intelligent voice assistants from Amazon and Google have started taking off with consumers.

Horvitz said the group had been in discussions with Apple since before its launch in September. But Apple wasn’t ready to formally join the group until now. “My own sense is that Apple was in the middle of their iOS 10 and iPhone 7 launches” and wasn’t ready to announce, he said. “We’ve always treated Apple as a founding member of the group.”

“I think Apple had a realization that to do the best AI research and to have access to the top minds in the field, the expectation is that you engage openly with academic research communities,” Horvitz said. “Other companies like Microsoft have discovered this over the years. We can be quite competitive and be open to sharing ideas when it comes to the core foundational science.”

“It’s my hope that this partnership with Apple shows that the company has a rich engagement with people, society and stakeholders,” he said.

ORIGINAL: Forbes
Aaron Tilley, Forbes Staff
Jan 27, 2017

Partnership on Artificial Intelligence to Benefit People and Society

By Hugo Angel,

Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

INDUSTRY LEADERS ESTABLISH PARTNERSHIP ON AI BEST PRACTICES

Press Release, September 28, 2016, NEW YORK — IBM, DeepMind/Google, Microsoft, Amazon, and Facebook today announced that they will create a non-profit organization that will work to advance public understanding of artificial intelligence technologies (AI) and formulate best practices on the challenges and opportunities within the field. Academics, non-profits, and specialists in policy and ethics will be invited to join the Board of the organization, named the Partnership on Artificial Intelligence to Benefit People and Society (Partnership on AI).

The objective of the Partnership on AI is to address opportunities and challenges with AI technologies to benefit people and society. Together, the organization’s members will conduct research, recommend best practices, and publish research under an open license in areas such as ethics, fairness, and inclusivity; transparency, privacy, and interoperability; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology. It does not intend to lobby government or other policymaking bodies.

The organization’s founding members will each contribute financial and research resources to the partnership and will share leadership with independent third-parties, including academics, user group advocates, and industry domain experts. There will be equal representation of corporate and non-corporate members on the board of this new organization. The Partnership is in discussions with professional and scientific organizations, such as the Association for the Advancement of Artificial Intelligence (AAAI), as well as non-profit research groups including the Allen Institute for Artificial Intelligence (AI2), and anticipates announcements regarding additional participants in the near future.

AI technologies hold tremendous potential to improve many aspects of life, ranging from healthcare, education, and manufacturing to home automation and transportation. Through rigorous research, the development of best practices, and an open and transparent dialogue, the founding members of the Partnership on AI hope to maximize this potential and ensure it benefits as many people as possible.

… Continue reading

IBM, Local Motors debut Olli, the first Watson-powered self-driving vehicle

By Hugo Angel,

Olli hits the road in the Washington, D.C. area and later this year in Miami-Dade County and Las Vegas.
Local Motors CEO and co-founder John B. Rogers, Jr. with “Olli” & IBM, June 15, 2016. (Rich Riggins/Feature Photo Service for IBM)

IBM, along with the Arizona-based manufacturer Local Motors, debuted the first-ever driverless vehicle to use the Watson cognitive computing platform. Dubbed “Olli,” the electric vehicle was unveiled at Local Motors’ new facility in National Harbor, Maryland, just outside of Washington, D.C.

Olli, which can carry up to 12 passengers, taps into four Watson APIs (

  • Speech to Text, 
  • Natural Language Classifier, 
  • Entity Extraction and 
  • Text to Speech

) to interact with its riders. It can answer questions like “Can I bring my children on board?” and respond to basic operational commands like, “Take me to the closest Mexican restaurant.” Olli can also give vehicle diagnostics, answering questions like, “Why are you stopping?”
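The interaction loop described above maps naturally onto a chain of Watson REST calls: transcribe the rider’s speech, classify the question’s intent, and synthesize a spoken reply. The Python sketch below is illustrative only; the endpoints, credentials, and classifier ID are Bluemix-era placeholders and assumptions, not Local Motors’ actual integration, and the Entity Extraction step is omitted for brevity.

```python
import requests

# Placeholder Bluemix-era endpoints and credentials -- assumptions for
# illustration only; real values come from the bound Watson service instances.
STT_URL = "https://stream.watsonplatform.net/speech-to-text/api/v1/recognize"
NLC_URL = "https://gateway.watsonplatform.net/natural-language-classifier/api/v1/classifiers/{cid}/classify"
TTS_URL = "https://stream.watsonplatform.net/text-to-speech/api/v1/synthesize"
AUTH = ("username", "password")          # per-service credentials
CLASSIFIER_ID = "classifier-id"          # a trained Natural Language Classifier

def transcribe(audio_path):
    """Speech to Text: turn a rider's recorded question into text."""
    with open(audio_path, "rb") as audio:
        resp = requests.post(STT_URL, auth=AUTH, data=audio,
                             headers={"Content-Type": "audio/wav"})
    resp.raise_for_status()
    return resp.json()["results"][0]["alternatives"][0]["transcript"]

def classify(text):
    """Natural Language Classifier: map the question to an intent label."""
    resp = requests.get(NLC_URL.format(cid=CLASSIFIER_ID), auth=AUTH,
                        params={"text": text})
    resp.raise_for_status()
    return resp.json()["top_class"]

def speak(text, out_path="reply.wav"):
    """Text to Speech: synthesize the vehicle's spoken reply."""
    resp = requests.post(TTS_URL, auth=AUTH, json={"text": text},
                         headers={"Accept": "audio/wav"})
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)
    return out_path

# Illustrative dispatch: a real system would also consult vehicle state,
# diagnostics, and a routing service before answering.
REPLIES = {
    "bring_children": "Yes, children are welcome on board.",
    "why_stopping": "I am stopping for a passenger pickup ahead.",
}

if __name__ == "__main__":
    question = transcribe("rider_question.wav")
    intent = classify(question)
    speak(REPLIES.get(intent, "Let me check that for you."))
```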

Olli learns from data produced by more than 30 sensors embedded throughout the vehicle, which will be added to and adjusted to meet passenger needs and local preferences.
While Olli is the first self-driving vehicle to use IBM Watson Internet of Things (IoT), this isn’t Watson’s first foray into the automotive industry. IBM launched its IoT for Automotive unit in September of last year, and in March, IBM and Honda announced a deal for Watson technology and analytics to be used in the automaker’s Formula One (F1) cars and pits.
IBM demonstrated its commitment to IoT in March of last year, when it announced it was spending $3B over four years to establish a separate IoT business unit, which later became the Watson IoT business unit.
IBM says that starting Thursday, Olli will be used on public roads locally in Washington, D.C. and will be used in Miami-Dade County and Las Vegas later this year. Miami-Dade County is exploring a pilot program that would deploy several autonomous vehicles to shuttle people around Miami.
ORIGINAL: ZDNet
By Stephanie Condon for Between the Lines
June 16, 2016

A Scale-up Synaptic Supercomputer (NS16e): Four Perspectives

By Hugo Angel,

Today, Lawrence Livermore National Lab (LLNL) and IBM announce the development of a new Scale-up Synaptic Supercomputer (NS16e) that highly integrates 16 TrueNorth Chips in a 4×4 array to deliver 16 million neurons and 256 million synapses. LLNL will also receive an end-to-end software ecosystem that consists of a simulator; a programming language; an integrated programming environment; a library of algorithms as well as applications; firmware; tools for composing neural networks for deep learning; a teaching curriculum; and cloud enablement. Also, don’t miss the story in The Wall Street Journal (sign-in required) and the perspective and a video by LLNL’s Brian Van Essen.
To provide insights into what it took to achieve this significant milestone in the history of our project, following are four intertwined perspectives from my colleagues:

  • Filipp Akopyan — First Steps to an Efficient Scalable NeuroSynaptic Supercomputer.
  • Bill Risk and Ben Shaw — Creating an Iconic Enclosure for the NS16e.
  • Jun Sawada — NS16e System as a Neural Network Development Workstation.
  • Brian Taba — How to Program a Synaptic Supercomputer.
The following timeline provides context for today’s milestone in terms of the continued evolution of our project.
Illustration Credit: William Risk

IBM’s SystemML machine learning system becomes Apache Incubator project

By Hugo Angel,

There’s a race between tech giants to open source machine learning systems and become a dominant platform. Apache SystemML has a clear enterprise spin.
IBM on Monday said its machine learning system, dubbed SystemML, has been accepted as an open source project by the Apache Incubator.
The Apache Incubator is an entry to becoming a project of The Apache Software Foundation. The general idea behind the incubator is to ensure code donations adhere to Apache’s legal guidelines and communities follow guiding principles.
IBM said it would donate SystemML as an open source project in June.
What’s notable about IBM’s SystemML milestone is that open sourcing machine learning systems is becoming a trend.
For enterprises, the upshot is that there will be a bevy of open source machine learning code bases to consider. Google TensorFlow and Facebook Torch are tools to train neural networks. SystemML is aimed at broadening the ecosystem to business use.
Why are tech giants going open source with their machine learning tools?
The machine learning platform that gets the most data will learn faster and then become more powerful. That cycle will just result in more data to ingest. IBM is looking to work the enterprise angle on machine learning. Microsoft may be another entry on the enterprise side, but may not go the Apache route.
In addition, there are precedents for how open sourcing big analytics ideas can pay off. MapReduce and Hadoop started as open source projects and would be cousins of whatever Apache machine learning system wins out.
IBM’s SystemML, which is now Apache SystemML, is used to create industry specific machine learning algorithms for enterprise data analysis. IBM created SystemML so it could write one codebase that could apply to multiple industries and platforms. If SystemML can scale, IBM’s Apache move could provide a gateway to its other analytics wares.
The Apache SystemML project has so far included more than 320 patches covering everything from APIs to data ingestion and documentation, more than 90 contributions to Apache Spark, and 15 additional organizations contributing to the SystemML engine.
Here’s the full definition of the Apache SystemML project:
SystemML provides declarative large-scale machine learning (ML) that aims at flexible specification of ML algorithms and automatic generation of hybrid runtime plans ranging from single-node, in-memory computations to distributed computations on Apache Hadoop and Apache Spark. ML algorithms are expressed in an R-like or Python-like syntax that includes linear algebra primitives, statistical functions, and ML-specific constructs. This high-level language significantly increases the productivity of data scientists, as it provides (1) full flexibility in expressing custom analytics, and (2) data independence from the underlying input formats and physical data representations. Automatic optimization according to data characteristics, such as distribution on the disk file system and sparsity, as well as processing characteristics of the distributed environment, such as the number of nodes, CPU, and memory per node, ensures both efficiency and scalability.
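To make “declarative, R-like or Python-like linear algebra” concrete, here is a minimal NumPy analogue of the kind of script SystemML executes: ordinary least squares written directly as matrix algebra. This is plain Python for illustration, not DML itself; SystemML would compile an equivalent script into single-node or Spark/Hadoop execution plans automatically.

```python
import numpy as np

# Toy data: 1,000 rows, 5 features, and a noisy linear response.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.array([1.5, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=1000)

# Declarative core of linear regression: solve (X'X) w = X'y.
# In SystemML's DML the same statement is written with R-like primitives
# (t(X) %*% X, solve(...)) and the runtime decides where it executes.
w = np.linalg.solve(X.T @ X, X.T @ y)

print("estimated coefficients:", np.round(w, 3))
```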
ORIGINAL: ZDNet
November 23, 2015

IBM’s Watson Personality Insights service

By admin,

How it works
Personality Insights extracts and analyzes a spectrum of personality attributes to help discover actionable insights about people and entities, and in turn guides end users to highly personalized interactions.
The service outputs personality characteristics that are divided into three dimensions: Big Five personality traits, Needs, and Values.
While some services are contextually specific, depending on the domain model and content, Personality Insights requires only a minimum of about 3,500 words of any text.
Intended Use
Personality Insights is great for brand analytics and can help measure a brand’s personality and compare and contrast it with your customers’ personalities. It can also help with market segmentation and with individualizing marketing campaigns or promotions. Personality Insights can also be used to help recruiters or university admissions offices match candidates to companies or universities. Overall, Personality Insights individualizes customer care and infers personality traits to drive a more tailored response.
YOU INPUT: JSON, text, or HTML (such as social media, emails, blogs, or other communication) written by one individual
SERVICE OUTPUT: A tree of cognitive and social characteristics in JSON or CSV format
The IBM Watson™ Personality Insights service provides an Application Programming Interface (API) that enables applications to derive insights from social media, enterprise data, or other digital communications. The service uses linguistic analytics to infer personality and social characteristics, including Big Five traits, Needs, and Values, from text.
These insights help businesses to understand their clients’ preferences and improve customer satisfaction by anticipating customer needs and recommending future actions. Businesses can use these insights to improve client acquisition, retention, and engagement, and to strengthen relations with their clients.
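As a concrete illustration of the input/output contract above, the following Python sketch posts plain text to the service’s REST profile endpoint and prints the returned JSON tree. The endpoint path, API version, and Basic Auth credentials are assumptions standing in for the values issued when the service is bound in Bluemix; check the service documentation for the current ones.

```python
import json
import requests

# Assumed Bluemix-era endpoint and credentials -- placeholders for illustration.
PI_URL = "https://gateway.watsonplatform.net/personality-insights/api/v2/profile"
AUTH = ("username", "password")

def personality_profile(text):
    """Send plain text (ideally 3,500+ words) and return the profile tree."""
    resp = requests.post(
        PI_URL,
        auth=AUTH,
        headers={"Content-Type": "text/plain", "Accept": "application/json"},
        data=text.encode("utf-8"),
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    with open("sample_author_text.txt") as f:
        profile = personality_profile(f.read())
    # The profile is a tree of characteristics; print the first part of it.
    print(json.dumps(profile, indent=2)[:2000])
```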
You can see a quick demo of the Personality Insights service in action. The demo lets you analyze input text to develop a personality portrait for the author of the text. The applications 
  • Speak Up, 
  • NYC School Finder, and 
  • Your Celebrity Match 

on the Watson Developer Cloud App Gallery also demonstrate the Personality Insights service.

We are always looking to improve and learn from your experience with our services. You can submit comments or ask questions about Personality Insights in the Watson forum. You can also read posts about Watson services that are written by IBM researchers, developers, and other experts on the Watson blog.
The Personality Insights service is generally available (GA). For information about the pricing plans available for the service, see the Personality Insights service in Bluemix.
Developing a Personality Insights application
  • To begin working with the Personality Insights service by creating and running applications that communicate with the service, see the following sections:
  • To create and run an example Node.js application that works with the service from the command line, see Watson Quick Start for Node.js.
  • To create and run a sample Node.js application that works with the service from a web browser, see Developing a Watson application in Node.js. You need the link to the source code for the Node.js application at the personality-insights-nodejs repository in the watson-developer-cloud namespace on GitHub.
  • To create and run a sample Java application that works with the service from a web browser, see Developing a Watson application in Java. You need the link to the source code for the Java application at the personality-insights-java repository in the watson-developer-cloud namespace on GitHub.
  • For the sample applications available from GitHub, you can download a .zip file that contains the source code or, if you are familiar with Git, fork the repository into your Git namespace or clone it to your local system. To learn about Git or to download Git for your operating system, see git-scm.com/documentation.
  • For a language-independent introduction to working with Watson Developer Cloud services and Bluemix, see Developing Watson applications with Bluemix. That page provides an overview of working with Watson services with the Bluemix web interface, the Eclipse IDE, or the Cloud Foundry command-line tool.

IBM’S ‘Rodent Brain’ Chip Could Make Our Phones Hyper-Smart

By admin,

At a lab near San Jose, IBM has built the digital equivalent of a rodent brain—roughly speaking. It spans 48 of the company’s experimental TrueNorth chips, a new breed of processor that mimics the brain’s biological building blocks. IBM
DHARMENDRA MODHA WALKS me to the front of the room so I can see it up close. About the size of a bathroom medicine cabinet, it rests on a table against the wall, and thanks to the translucent plastic on the outside, I can see the computer chips and the circuit boards and the multi-colored lights on the inside. It looks like a prop from a ’70s sci-fi movie, but Modha describes it differently. “You’re looking at a small rodent,” he says.
He means the brain of a small rodent—or, at least, the digital equivalent. The chips on the inside are designed to behave like neurons—the basic building blocks of biological brains. Modha says the system in front of us spans 48 million of these artificial nerve cells, roughly the number of neurons packed into the head of a rodent.
Modha oversees the cognitive computing group at IBM, the company that created these “neuromorphic” chips. For the first time, he and his team are sharing their unusual creations with the outside world, running a three-week “boot camp” for academics and government researchers at an IBM R&D lab on the far side of Silicon Valley. Plugging their laptops into the digital rodent brain at the front of the room, this eclectic group of computer scientists is exploring the particulars of IBM’s architecture and beginning to build software for the chip dubbed TrueNorth.
‘We want to get as close to the brain as possible while maintaining flexibility.’ – Dharmendra Modha, IBM
Some researchers who got their hands on the chip at an engineering workshop in Colorado the previous month have already fashioned software that can identify images, recognize spoken words, and understand natural language. Basically, they’re using the chip to run “deep learning” algorithms, the same algorithms that drive the internet’s latest AI services, including the face recognition on Facebook and the instant language translation on Microsoft’s Skype. But the promise is that IBM’s chip can run these algorithms in smaller spaces with considerably less electrical power, letting us shoehorn more AI onto phones and other tiny devices, including hearing aids and, well, wristwatches.
“What does a neuro-synaptic architecture give us? It lets us do things like image classification at a very, very low power consumption,” says Brian Van Essen, a computer scientist at the Lawrence Livermore National Laboratory who’s exploring how deep learning could be applied to national security. “It lets us tackle new problems in new environments.”
The TrueNorth is part of a widespread movement to refine the hardware that drives deep learning and other AI services. Companies like Google and Facebook and Microsoft are now running their algorithms on machines backed with GPUs (chips originally built to render computer graphics), and they’re moving towards FPGAs (chips you can program for particular tasks). For Peter Diehl, a PhD student in the cortical computation group at ETH Zurich and the University of Zurich, TrueNorth outperforms GPUs and FPGAs in certain situations because it consumes so little power.
The main difference, says Jason Mars, a professor of computer science at the University of Michigan, is that the TrueNorth dovetails so well with deep-learning algorithms. These algorithms mimic neural networks in much the same way IBM’s chips do, recreating the neurons and synapses in the brain. One maps well onto the other. “The chip gives you a highly efficient way of executing neural networks,” says Mars, who declined an invitation to this month’s boot camp but has closely followed the progress of the chip.
That said, the TrueNorth suits only part of the deep learning process—at least as the chip exists today—and some question how big an impact it will have. Though IBM is now sharing the chips with outside researchers, it’s years away from the market. For Modha, however, this is as it should be. As he puts it: “We’re trying to lay the foundation for significant change.”
The Brain on a Phone
Peter Diehl recently took a trip to China, where his smartphone didn’t have access to the ’net, an experience that cast the limitations of today’s AI in sharp relief. Without the internet, he couldn’t use a service like Google Now, which applies deep learning to speech recognition and natural language processing, because most of the computing takes place not on the phone but on Google’s distant servers. “The whole system breaks down,” he says.
Deep learning, you see, requires enormous amounts of processing power—processing power that’s typically provided by the massive data centers that your phone connects to over the ’net rather than locally on an individual device. The idea behind TrueNorth is that it can help move at least some of this processing power onto the phone and other personal devices, something that can significantly expand the AI available to everyday people.
To understand this, you have to understand how deep learning works. It operates in two stages. 
  • First, companies like Google and Facebook must train a neural network to perform a particular task. If they want to automatically identify cat photos, for instance, they must feed the neural net lots and lots of cat photos. 
  • Then, once the model is trained, another neural network must actually execute the task. You provide a photo and the system tells you whether it includes a cat. The TrueNorth, as it exists today, aims to facilitate that second stage.
Once a model is trained in a massive computer data center, the chip helps you execute the model. And because it’s small and uses so little power, it can fit onto a handheld device. This lets you do more at a faster speed, since you don’t have to send data over a network. If it becomes widely used, it could take much of the burden off data centers. “This is the future,” Mars says. “We’re going to see more of the processing on the devices.”
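A toy Python illustration of that two-stage split: training runs once with full floating-point math (the data-center stage), while inference is a small, fixed forward pass, the kind of computation a low-power chip is meant to execute on the device. This is a schematic sketch, not IBM’s workflow.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Stage 1: training (done once, in the data center) -------------------
# Tiny logistic-regression "cat detector" on 2-D toy features.
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # synthetic labels

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # sigmoid
    grad_w = X.T @ (p - y) / len(y)
    grad_b = float(np.mean(p - y))
    w -= lr * grad_w
    b -= lr * grad_b

# --- Stage 2: inference (the part that could live on the device) ---------
def predict(features, weights=w, bias=b):
    """Fixed forward pass: no gradients, no training data required."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias))) > 0.5

print(predict(np.array([1.0, 0.3])))   # True  -> "cat"
print(predict(np.array([-1.2, 0.1])))  # False -> "not a cat"
```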
Neurons, Axons, Synapses, Spikes
Google recently discussed its efforts to run neural networks on phones, but for Diehl, the TrueNorth could take this concept several steps further. The difference, he explains, is that the chip dovetails so well with deep learning algorithms. Each chip mimics about a million neurons, and these can communicate with each other via something similar to a synapse, the connections between neurons in the brain.
‘Silicon operates in a very different way than the stuff our brains are made of.’
The setup is quite different from what you find in chips on the market today, including GPUs and FPGAs. Whereas these chips are wired to execute particular “instructions,” the TrueNorth juggles “spikes,” much simpler pieces of information analogous to the pulses of electricity in the brain. Spikes, for instance, can show the changes in someone’s voice as they speak—or changes in color from pixel to pixel in a photo. “You can think of it as a one-bit message sent from one neuron to another,” says Rodrigo Alvarez-Icaza, one of the chip’s chief designers.
The upshot is a much simpler architecture that consumes less power. Though the chip contains 5.4 billion transistors, it draws about 70 milliwatts of power. A standard Intel computer processor, by comparison, includes 1.4 billion transistors and consumes about 35 to 140 watts. Even the ARM chips that drive smartphones consume several times more power than the TrueNorth.
Of course, using such a chip also requires a new breed of software. That’s what researchers like Diehl are exploring at the TrueNorth boot camp, which began in early August and runs for another week at IBM’s research lab in San Jose, California. In some cases, researchers are translating existing code into the “spikes” that the chip can read (and back again). But they’re also working to build native code for the chip.
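To give a rough feel for what “translating existing code into spikes” involves, the sketch below delta-encodes a continuous signal into one-bit events, emitting a spike whenever the signal has drifted by more than a threshold since the last event. This is a generic encoding written for illustration; TrueNorth’s actual toolchain and spike semantics are more involved.

```python
import numpy as np

def delta_encode(signal, threshold=0.1):
    """Emit a one-bit 'spike' each time the signal drifts by >= threshold
    from the level at the previous spike (the sign gives the direction)."""
    spikes = []                 # list of (time_step, +1 or -1)
    last = signal[0]
    for t, x in enumerate(signal):
        while x - last >= threshold:
            spikes.append((t, +1))
            last += threshold
        while last - x >= threshold:
            spikes.append((t, -1))
            last -= threshold
    return spikes

# Example: a slow sine wave becomes a sparse stream of events.
t = np.linspace(0, 2 * np.pi, 200)
events = delta_encode(np.sin(t), threshold=0.2)
print(f"{len(events)} spikes for 200 samples, e.g. {events[:5]}")
```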
Parting Gift
Like these researchers, Modha discusses the TrueNorth mainly in biological terms. Neurons. Axons. Synapses. Spikes. And certainly, the chip mirrors such wetware in some ways. But the analogy has its limits. “That kind of talk always puts up warning flags,” says Chris Nicholson, the co-founder of deep learning startup Skymind. “Silicon operates in a very different way than the stuff our brains are made of.”
Modha admits as much. When he started the project in 2008, backed by $53.5M in funding from Darpa, the research arm for the Department of Defense, the aim was to mimic the brain in a more complete way using an entirely different breed of chip material. But at one point, he realized this wasn’t going to happen anytime soon. “Ambitions must be balanced with reality,” he says.
In 2010, while laid up in bed with the swine flu, he realized that the best way forward was a chip architecture that loosely mimicked the brain—an architecture that could eventually recreate the brain in more complete ways as new hardware materials were developed. “You don’t need to model the fundamental physics and chemistry and biology of the neurons to elicit useful computation,” he says. “We want to get as close to the brain as possible while maintaining flexibility.”
This is TrueNorth. It’s not a digital brain. But it is a step toward a digital brain. And with IBM’s boot camp, the project is accelerating. The machine at the front of the room is really 48 separate machines, each built around its own TrueNorth processor. Next week, as the boot camp comes to a close, Modha and his team will separate them and let all those academics and researchers carry them back to their own labs, which span over 30 institutions on five continents. “Humans use technology to transform society,” Modha says, pointing to the room of researchers. “These are the humans.”
ORIGINAL: Wired
08.17.15

IBM Announces Computer Chips More Powerful Than Any in Existence

By admin,

A wafer made up of seven-nanometer chips. IBM said it made the advance by using silicon-germanium instead of pure silicon. Credit: Darryl Bautista/IBM
IBM said on Thursday that it had made working versions of ultradense computer chips, with roughly four times the capacity of today’s most powerful chips.
The announcement, made on behalf of an international consortium led by IBM, the giant computer company, is part of an effort to manufacture the most advanced computer chips in New York’s Hudson Valley, where IBM is investing $3 billion in a private-public partnership with New York State, GlobalFoundries, Samsung and equipment vendors.
The development lifts a bit of the cloud that has fallen over the semiconductor industry, which has struggled to maintain its legendary pace of doubling transistor density every two years.
Intel, which for decades has been the industry leader, has faced technical challenges in recent years. Moreover, technologists have begun to question whether the longstanding pace of chip improvement, known as Moore’s Law, would continue past the current 14-nanometer generation of chips.
Each generation of chip technology is defined by the minimum size of fundamental components that switch current at nanosecond intervals. Today the industry is making the commercial transition from what the industry generally describes as 14-nanometer manufacturing to 10-nanometer manufacturing.
Michael Liehr of the SUNY College of Nanoscale Science and Engineering, left, and Bala Haranand of IBM examine a wafer comprised of the new chips. They are not yet ready for commercial manufacturing. Credit: Darryl Bautista/IBM
Each generation brings roughly a 50 percent reduction in the area required by a given amount of circuitry. IBM’s new chips, though still in a research phase, suggest that semiconductor technology will continue to shrink at least through 2018.
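The compounding effect of that “roughly 50 percent” figure is easy to tabulate. The short calculation below takes the article’s per-generation reduction at face value and is illustrative only, since real node-to-node scaling varies; two such steps, from 14 nanometers to 7 nanometers, give the “roughly four times the capacity” quoted above.

```python
# Illustrative compounding of the ~50% area reduction per generation
# quoted in the article; actual scaling differs by foundry and node.
nodes = ["14 nm", "10 nm", "7 nm"]
relative_area = 1.0
for node in nodes:
    print(f"{node}: relative area {relative_area:.2f}, "
          f"relative density {1.0 / relative_area:.1f}x vs 14 nm")
    relative_area *= 0.5
```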
The company said on Thursday that it had working samples of chips with seven-nanometer transistors. It made the research advance by using silicon-germanium instead of pure silicon in key regions of the molecular-size switches.
The new material makes possible faster transistor switching and lower power requirements. The tiny size of these transistors suggests that further advances will require new materials and new manufacturing techniques.
As points of comparison to the size of the seven-nanometer transistors, a strand of DNA is about 2.5 nanometers in diameter and a red blood cell is roughly 7,500 nanometers in diameter. IBM said that would make it possible to build microprocessors with more than 20 billion transistors.
“I’m not surprised, because this is exactly what the road map predicted, but this is fantastic,” said Subhashish Mitra, director of the Robust Systems Group in the Electrical Engineering Department at Stanford University.
Even though IBM has shed much of its computer and semiconductor manufacturing capacity, the announcement indicates that the company remains interested in supporting the nation’s high technology manufacturing base.
“This puts IBM in the position of being a gentleman gambler as opposed to being a horse owner,” said Richard Doherty, president of Envisioneering, a Seaford, N.Y., consulting firm, referring to the fact that IBM’s chip manufacturing facility was acquired by GlobalFoundries effective last week.
IBM’s seven-nanometer node transistors. A strand of DNA is about 2.5 nanometers in diameter and a red blood cell is roughly 7,500 nanometers in diameter. Credit: IBM Research
“They still want to be in the race,” he added.
IBM now licenses the technology it is developing to a number of manufacturers, and GlobalFoundries, owned by the Emirate of Abu Dhabi, will make chips for companies including Broadcom, Qualcomm and Advanced Micro Devices.
The semiconductor industry must now decide if IBM’s bet on silicon-germanium is the best way forward.
It must also grapple with the shift to using extreme ultraviolet, or EUV, light to etch patterns on chips at a resolution that approaches the diameter of individual atoms. In the past, Intel said it could see its way toward seven-nanometer manufacturing. But it has not said when that generation of chip making might arrive.
IBM also declined to speculate on when it might begin commercial manufacturing of this technology generation. This year, Taiwan Semiconductor Manufacturing Company said that it planned to begin pilot production of seven-nanometer chips in 2017. Unlike IBM, however, it has not demonstrated working chips to meet that goal.
It is uncertain whether the longer exposure times required by the new generation of EUV photolithographic stepper machines would make high-speed manufacturing operations impossible. Even the slightest vibration can undermine the precision of the optics necessary to etch lines of molecular thicknesses, and the semiconductor industry has been forced to build specialized stabilized buildings to try to isolate equipment from vibration.
An IBM official said that the consortium now sees a way to use EUV light in commercial manufacturing operations.
“EUV is another game changer,” said Mukesh Khare, vice president for semiconductor research at IBM. To date, he noted, the demonstration has taken place in a research lab, not in a manufacturing plant. Ultimately the goal is to create circuits that have been reduced in area by another 50 percent over the industry’s 10-nanometer technology generation scheduled to be introduced next year.
ORIGINAL: NYTimes
JULY 9, 2015

IBM Watson Language Translation and Speech Services – General Availability

By admin,

As part of the Watson development platform’s continued expansion, IBM is today introducing the latest set of cognitive services to move into General Availability (GA) that will drive new Watson powered applications. They include the GA release of IBM Watson Language Translation (a merger of Language Identification and Machine Translation), IBM Speech to Text, and IBM Text to Speech.

These cognitive speech and language services are open to anyone, enabling application developers and IBM’s growing ecosystem to develop and commercialize new cognitive computing solutions that can do the following:
  • Translate news, patents, or conversational documents across several languages (Language Translation)
  • Produce transcripts from speech in multi-media files or conversational streams, capturing vast information for a myriad of business uses. This Watson cognitive service also benefits from a recent IBM conversational speech transcription breakthrough to advance the accuracy of speech recognition (Speech to Text)
  • Make their web, mobile, and Internet of Things applications speak with a consistent voice across all Representational State Transfer (REST)-compatible platforms (Text to Speech)
There are already organizations building applications with these services, since IBM opened them up in beta mode over the past year on the Watson Developer Cloud on IBM Bluemix. Developers have used these APIs to quickly build prototype applications in only two days at IBM hack-a-thons, demonstrating the versatility and ease of use of the services. A minimal sketch of one such call follows.
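The sketch below shows a bare translation request in Python, assuming the Bluemix-era REST endpoint, Basic Auth credentials from a bound service instance, and English-to-Spanish translation. The endpoint, auth scheme, and parameter names are assumptions based on the v2 API of the time and should be checked against the current documentation.

```python
import requests

# Assumed Bluemix-era endpoint and credentials -- placeholders for illustration.
LT_URL = "https://gateway.watsonplatform.net/language-translation/api/v2/translate"
AUTH = ("username", "password")

def translate(text, source="en", target="es"):
    """Translate text between the given source and target languages.
    A specific domain model (news, conversational, patent) could instead
    be selected by model id, per the service documentation."""
    resp = requests.post(
        LT_URL,
        auth=AUTH,
        headers={"Accept": "application/json"},
        json={"text": text, "source": source, "target": target},
    )
    resp.raise_for_status()
    return resp.json()["translations"][0]["translation"]

if __name__ == "__main__":
    print(translate("Watson services are now generally available."))
```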
Supported Capabilities
We have made several updates since the beta releases, inspired by feedback from our user community.
Language Translation now supports:
  • Language Identification – identifies the language of the textual input if it is one of 62 supported languages
  • The News domain – targeted at news articles and transcripts, it translates English to and from French, Spanish, Portuguese or Arabic
  • The Conversational domain – targeted at conversational colloquialisms, it translates English to and from French, Spanish, Portuguese, or Arabic
  • The Patent domain – targeted at technical and legal terminology, it translates Spanish, Portuguese, Chinese, or Korean to English
Speech to Text now supports:
  • New wideband and narrowband telephony language support – U.S. English, Spanish, and Japanese
  • Broader vocabulary coverage, and improved accuracy for U.S. English
Text to Speech now supports:
  • U.S. English, UK English, Spanish, French, Italian, and German
  • A subset of SSML (Speech Synthesis Markup Language) for U.S. English, U.K. English, French, and German (see the documentation for more details)
  • Improved programming support for applications stored outside of Bluemix
Pricing and Freemium Tiers
Trial Bluemix accounts remain free. Please visit www.bluemix.net to register, and get free instant access to a 30-day trial without a credit card. Use of the Speech to Text, Text to Speech, and Language Translation services is free during this trial period. (A worked example of the post-trial tiers follows the three price lists below.)
After the trial period, pricing for Language Translation will be:
  • $0.02 per thousand characters. The first million characters per month are free.
  • An add-on charge of $3.00 per thousand characters for usage of the Patent model in Language Translation.
After the trial period, pricing for Speech to Text will be:
  • $0.02 per minute. The first thousand minutes per month are free.
  • An add-on charge of $0.02 per minute for usage of narrowband (telephony) models. The first thousand minutes per month are free.
After the trial period, pricing for Text to Speech will be:
  • $0.02 per thousand characters. The first million characters per month are free.
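Putting the published tiers together, the short calculation below shows what a month of post-trial usage would cost. The usage figures are hypothetical; only the prices and free allowances listed above are taken from the announcement.

```python
def tiered_cost(units, free_units, price_per_unit):
    """Charge only for usage beyond the free monthly allowance."""
    return max(units - free_units, 0) * price_per_unit

# Hypothetical monthly usage, priced with the tiers listed above.
translation_chars = 5_000_000          # characters translated
stt_minutes = 2_500                    # wideband minutes transcribed
tts_chars = 800_000                    # characters synthesized

costs = {
    "Language Translation": tiered_cost(translation_chars, 1_000_000, 0.02 / 1_000),
    "Speech to Text":       tiered_cost(stt_minutes, 1_000, 0.02),
    "Text to Speech":       tiered_cost(tts_chars, 1_000_000, 0.02 / 1_000),
}
for service, cost in costs.items():
    print(f"{service}: ${cost:.2f}")
print(f"Total: ${sum(costs.values()):.2f}")   # $80 + $30 + $0 = $110
```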
Transition Plan
We look forward to continuing our partnership with the many clients, business partners, and creative developers who have built innovative applications using the beta version of the four services: Speech to Text, Text to Speech, Machine Translation and Language Identification. If you have used these beta services, please migrate your applications to use the GA services by August 10, 2015. After this date the beta plans for these services will no longer be available. For details about upgrading, see the documentation for each service.
IBM is placing the power of Watson in the hands of developers and an ecosystem of partners, entrepreneurs, tech enthusiasts and students with a growing platform of Watson services (APIs) to create an entirely new class of apps and businesses that make cognitive computing systems the new computing standard.
ORIGINAL: IBM
JULY 6, 2015

IBM starts testing AI software that mimics the human brain

By admin,

We haven’t talked about Numenta since an HP exec left to join the company in 2011, because, well, it’s been keeping a pretty low-profile existence. Now, a big name tech corp is reigniting interest in the company and its artificial intelligence software. According to MIT’s Technology Review, IBM has recently started testing Numenta’s algorithms for practical tasks, such as analyzing satellite imagery of crops and spotting early signs of malfunctioning field machinery. Numenta’s technology caught IBM’s eye, because it works more similarly to the human brain than other AI software. The 100-person IBM team that’s testing the algorithms is led by veteran researcher Winfried Wilcke, who had great things to say about the technology during a conference talk back in February.
Tech Review says he praised Numenta for “being closer to biological reality than other machine learning software” — in other words, it’s more brain-like compared to its rivals. For instance, it can make sense of data more quickly than competitors, which have to be fed tons of examples, before they can see patterns and handle their jobs. As such, Numenta’s algorithms can potentially give rise to more intelligent software.
The company has its share of critics, however. Gary Marcus, a New York University psychology professor and a co-founder of another AI startup, told Tech Review that while Numenta’s creation is pretty brain-like, it’s oversimplified. So far, he’s yet to see it “try to handle natural language understanding or even produce state-of-the-art results in image recognition.” It would be interesting to see IBM use the technology to develop, for example, speech-to-text software head and shoulders above the rest or a voice assistant that can understand any accent, as part of its tests. At the moment, though, Numenta’s employees are focusing on teaching the software to control physical equipment to be used in future robots.
[Image credit: Petrovich9/Getty]
ORIGINAL: Engadget

Artificial Intelligence Is Almost Ready for Business

By admin,

Artificial Intelligence (AI) is an idea that has oscillated through many hype cycles over many years, as scientists and sci-fi visionaries have declared the imminent arrival of thinking machines. But it seems we’re now at an actual tipping point. AI, expert systems, and business intelligence have been with us for decades, but this time the reality almost matches the rhetoric, driven by

  • the exponential growth in technology capabilities (e.g., Moore’s Law),
  • smarter analytics engines, and
  • the surge in data.

Most people know the Big Data story by now: the proliferation of sensors (the “Internet of Things”) is accelerating exponential growth in “structured” data. And now on top of that explosion, we can also analyze “unstructured” data, such as text and video, to pick up information on customer sentiment. Companies have been using analytics to mine insights within this newly available data to drive efficiency and effectiveness. For example, companies can now use analytics to decide

  • which sales representatives should get which leads,
  • what time of day to contact a customer, and
  • whether they should e-mail them, text them, or call them.

Such mining of digitized information has become more effective and powerful as more info is “tagged” and as analytics engines have gotten smarter. As Dario Gil, Director of Symbiotic Cognitive Systems at IBM Research, told me:

“Data is increasingly tagged and categorized on the Web – as people upload and use data they are also contributing to annotation through their comments and digital footprints. This annotated data is greatly facilitating the training of machine learning algorithms without demanding that the machine-learning experts manually catalogue and index the world. Thanks to computers with massive parallelism, we can use the equivalent of crowdsourcing to learn which algorithms create better answers. For example, when IBM’s Watson computer played ‘Jeopardy!,’ the system used hundreds of scoring engines, and all the hypotheses were fed through the different engines and scored in parallel. It then weighted the algorithms that did a better job to provide a final answer with precision and confidence.”
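The “hundreds of scoring engines” description follows a familiar ensemble pattern: score each candidate answer with many independent evidence models, then combine the scores with learned weights to rank answers and attach a confidence. The toy Python sketch below illustrates only that general pattern; it is not Watson’s implementation, and the engines and weights here are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(42)

# Candidate answers (hypotheses) to a question, and a bank of "scoring
# engines" -- here just random stand-ins for independent evidence scorers.
hypotheses = ["Toronto", "Chicago", "Springfield"]
n_engines = 12
engine_scores = rng.random((len(hypotheses), n_engines))  # rows = hypotheses

# Weights reflecting how reliable each engine has proven to be
# (in Watson these came from training; here they are made up).
weights = rng.random(n_engines)
weights /= weights.sum()

combined = engine_scores @ weights                       # weighted evidence
confidence = np.exp(combined) / np.exp(combined).sum()   # softmax -> confidence

best = int(np.argmax(confidence))
print(f"answer: {hypotheses[best]} (confidence {confidence[best]:.2f})")
```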

Beyond the Quants

Interestingly, for a long time, doing detailed analytics has been quite labor- and people-intensive. You need “quants,” the statistically savvy mathematicians and engineers who build models that make sense of the data. As Babson professor and analytics expert Tom Davenport explained to me, humans are traditionally necessary to

  • create a hypothesis,
  • identify relevant variables,
  • build and run a model, and
  • then iterate it.

Quants can typically create one or two good models per week.

However, machine learning tools for quantitative data – perhaps the first line of AI – can create thousands of models a week. For example, in programmatic ad buying on the Web, computers decide which ads should run in which publishers’ locations. Massive volumes of digital ads and a never-ending flow of clickstream data depend on machine learning, not people, to decide which Web ads to place where. Firms like DataXu use machine learning to generate up to 5,000 different models a week, making decisions in under 15 milliseconds, so that they can more accurately place ads that you are likely to click on.

Tom Davenport:

“I initially thought that AI and machine learning would be great for augmenting the productivity of human quants. One of the things human quants do, that machine learning doesn’t do, is to understand what goes into a model and to make sense of it. That’s important for convincing managers to act on analytical insights. For example, an early analytics insight at Osco Pharmacy uncovered that people who bought beer also bought diapers. But because this insight was counter-intuitive and discovered by a machine, they didn’t do anything with it. But now companies have needs for greater productivity than human quants can address or fathom. They have models with 50,000 variables. These systems are moving from augmenting humans to automating decisions.”

In business, the explosive growth of complex and time-sensitive data enables decisions that can give you a competitive advantage, but these decisions depend on analyzing at a speed, volume, and complexity that is too great for humans. AI is filling this gap as it becomes ingrained in the analytics technology infrastructure in industries like health care, financial services, and travel.

The Growing Use of AI

IBM is leading the integration of AI in industry. It has made a $1 billion investment in AI through the launch of its IBM Watson Group and has made many advancements and published research touting the rise of “cognitive computing” – the ability of computers like Watson to understand words (“natural language”), not just numbers. Rather than take the cutting edge capabilities developed in its research labs to market as a series of products, IBM has chosen to offer a platform of services under the Watson brand. It is working with an ecosystem of partners who are developing applications leveraging the dynamic learning and cloud computing capabilities of Watson.

The biggest application of Watson has been in health care. Watson excels in situations where you need to bridge between massive amounts of dynamic and complex text information (such as the constantly changing body of medical literature) and another mass of dynamic and complex text information (such as patient records or genomic data), to generate and evaluate hypotheses. With training, Watson can provide recommendations for treatments for specific patients. Many prestigious academic medical centers, such as The Cleveland Clinic, The Mayo Clinic, MD Anderson, and Memorial Sloan-Kettering, are working with IBM to develop systems that will help healthcare providers better understand patients’ diseases and recommend personalized courses of treatment. This has proven to be a challenging domain to automate, and most of the projects are behind schedule.

Another large application area for AI is in financial services. Mike Adler, Global Financial Services Leader at The Watson Group, told me they have 45 clients working mostly on three applications:

  • (1) a “digital virtual agent” that enables banks and insurance companies to engage their customers in a new, personalized way,
  • (2) a “wealth advisor” that enables financial planning and wealth management, either for self-service or in combination with a financial advisor, and
  • (3) risk and compliance management.

For example, USAA, the $20 billion provider of financial services to people who serve, or have served, in the United States military, is using Watson to help their members transition from the military to civilian life. Neff Hudson, vice president of emerging channels at USAA, told me, “We’re always looking to help our members, and there’s nothing more critical than helping the 150,000+ people leaving the military every year. Their financial security goes down when they leave the military. We’re trying to use a virtual agent to intervene to be more productive for them.” USAA also uses AI to enhance navigation on their popular mobile app. The Enhanced Virtual Assistant, or Eva, enables members to do 200 transactions by just talking, including transferring money and paying bills. “It makes search better and answers in a Siri-like voice. But this is a 1.0 version. Our next step is to create a virtual agent that is capable of learning. Most of our value is in moving money day-to-day for our members, but there are a lot of unique things we can do that happen less frequently with our 140 products. Our goal is to be our members’ personal financial agent for our full range of services.”

In addition to working with large, established companies, IBM is also providing Watson’s capabilities to startups. IBM has set aside $100 million for investments in startups. One of the startups that is leveraging Watson is WayBlazer, a new venture in travel planning that is led by Terry Jones, a founder of Travelocity and Kayak. He told me:

“I’ve spent my whole career in travel and IT.

  • I started as a travel agent, and people would come in, and I’d send them a letter in a couple weeks with a plan for their trip. 
  • The Sabre reservation system made the process better by automating the channel between travel agents and travel providers.
  • Then with Travelocity we connected travelers directly with travel providers through the Internet. 
  • Then with Kayak we moved up the chain again, providing offers across travel systems.
  • Now with WayBlazer we have a system that deals with words. Nobody has helped people with a tool for dreaming and planning their travel. 

Our mission is to make it easy and give people several personalized answers to a complicated trip, rather than the millions of clues that search provides today. This new technology can take data out of all the silos and dark wells that companies don’t even know they have and use it to provide personalized service.”
What’s Next

As Moore’s Law marches on, we have more power in our smartphones than the most powerful supercomputers did 30 or 40 years ago. Ray Kurzweil has predicted that the computing power of a $4,000 computer will surpass that of a human brain in 2019 (20 quadrillion calculations per second).

What does it all mean for the future of AI?

To get a sense, I talked to some venture capitalists, whose profession it is to keep their eyes and minds trained on the future. Mark Gorenberg, Managing Director at Zetta Venture Partners, which is focused on investing in analytics and data startups, told me, “AI historically was not ingrained in the technology structure. Now we’re able to build on top of ideas and infrastructure that didn’t exist before. We’ve gone through the change of Big Data. Now we’re adding machine learning. AI is not the be-all and end-all; it’s an embedded technology. It’s like taking an application and putting a brain into it, using machine learning. It’s the use of cognitive computing as part of an application.” Another veteran venture capitalist, Promod Haque, senior managing partner at Norwest Venture Partners, explained to me, “If you can have machines automate the correlations and build the models, you save labor and increase speed. With tools like Watson, lots of companies can do different kinds of analytics automatically.”

Manoj Saxena, former head of IBM’s Watson efforts and now a venture capitalist, believes that analytics is moving to the “cognitive cloud” where massive amounts of first- and third-party data will be fused to deliver real-time analysis and learning. Companies often find AI and analytics technology difficult to integrate, especially with the technology moving so fast; thus, he sees collaborations forming where companies will bring their people with domain knowledge, and emerging service providers will bring system and analytics people and technology. Cognitive Scale (a startup that Saxena has invested in) is one of the new service providers adding more intelligence into business processes and applications through a model they are calling “Cognitive Garages.” Using their “10-10-10 method”: they

  • deploy a cognitive cloud in 10 seconds,
  • build a live app in 10 hours, and
  • customize it using their client’s data in 10 days.

Saxena told me that the company is growing extremely rapidly.

I’ve been tracking AI and expert systems for years. What is most striking now is its genuine integration as an important strategic accelerator of Big Data and analytics. Applications such as USAA’s Eva, healthcare systems using IBM’s Watson, and WayBlazer, among others, are having a huge impact and are showing the way to the next generation of AI.
Brad Power has consulted and conducted research on process innovation and business transformation for the last 30 years. His latest research focuses on how top management creates breakthrough business models enabling today’s performance and tomorrow’s innovation, building on work with the Lean Enterprise Institute, Hammer and Company, and FCB Partners.


ORIGINAL: HBR
Brad Power, March 19, 2015

What will happen when the internet of things becomes artificially intelligent?

By admin,

ORIGINAL: The Guardian
Stephen Balkam
Friday 20 February 2015
From Stephen Hawking to Spike Jonze, the existential threat posed by the onset of the ‘conscious web’ is fuelling much debate – but should we be afraid?

Who’s afraid of artificial intelligence? Quite a few notable figures, it turns out. Photograph: Alamy

When Stephen Hawking, Bill Gates and Elon Musk all agree on something, it’s worth paying attention.

All three have warned of the potential dangers that artificial intelligence or AI can bring. The world’s foremost physicist, Hawking, said that the full development of artificial intelligence (AI) could spell the end of the human race. Musk, the tech entrepreneur who brought us PayPal, Tesla and SpaceX, described artificial intelligence as our biggest existential threat and said that playing around with AI was like “summoning the demon”. Gates, who knows a thing or two about tech, puts himself in the concerned camp when it comes to machines becoming too intelligent for us humans to control.

What are these wise souls afraid of? AI is broadly described as the ability of computer systems to ape or mimic human intelligent behavior. This could be anything from recognizing speech, to visual perception, making decisions and translating languages. Examples run from Deep Blue, which beat chess champion Garry Kasparov, to the supercomputer Watson, which outguessed the world’s best Jeopardy player. Fictionally, we have Her, Spike Jonze’s movie that depicts the protagonist, played by Joaquin Phoenix, falling in love with his operating system, seductively voiced by Scarlett Johansson. And coming soon, Chappie stars a stolen police robot who is reprogrammed to make conscious choices and to feel emotions.

An important component of AI, and a key element in the fears it engenders, is the ability of machines to take action on their own without human intervention. This could take the form of a computer reprogramming itself in the face of an obstacle or restriction. In other words, to think for itself and to take action accordingly.

Needless to say, there are those in the tech world who have a more sanguine view of AI and what it could bring. Kevin Kelly, the founding editor of Wired magazine, does not see the future inhabited by HALs – the homicidal computer on board the spaceship in 2001: A Space Odyssey. Kelly sees a more prosaic world that looks more like Amazon Web Services: a cheap, smart, utility which is also exceedingly boring simply because it will run in the background of our lives. He says AI will enliven inert objects in the way that electricity did over 100 years ago. “Everything that we formerly electrified, we will now cognitize.” And he sees the business plans of the next 10,000 startups as easy to predict: “Take X and add AI.”

While he acknowledges the concerns about artificial intelligence, Kelly writes: “As AI develops, we might have to engineer ways to prevent consciousness in them – our most premium AI services will be advertised as consciousness-free.” (my emphasis).

Running parallel to the extraordinary advances in the field of AI is the even bigger development of what is loosely called, the internet of things (IoT). This can be broadly described as the emergence of countless objects, animals and even people with uniquely identifiable, embedded devices that are wirelessly connected to the internet. These ‘nodes’ can send or receive information without the need for human intervention. There are estimates that there will be 50 billion connected devices by 2020. Current examples of these smart devices include Nest thermostats, wifi-enabled washing machines and the increasingly connected cars with their built-in sensors that can avoid accidents and even park for you.

The US Federal Trade Commission is sufficiently concerned about the security and privacy implications of the Internet of Things that it has conducted a public workshop and released a report urging companies to adopt best practices and “bake in” procedures to minimise data collection and to ensure consumer trust in the new networked environment.


Tim O’Reilly, coiner of the phrase “Web 2.0”, sees the internet of things as the most important online development yet. He thinks the name is misleading – that IoT is “really about human augmentation”. O’Reilly believes that we should “expect our devices to anticipate us in all sorts of ways”. He uses the “intelligent personal assistant”, Google Now, to make his point.

So what happens when these millions of embedded devices connect to artificially intelligent machines? What does AI + IoT = ? Will it mean the end of civilisation as we know it? Will our self-programming computers send out hostile orders to the chips we’ve added to our everyday objects? Or is this just another disruptive moment, similar to the harnessing of steam or the splitting of the atom? An important step in our own evolution as a species, but nothing to be too concerned about?

The answer may lie in some new thinking about consciousness. As a concept, as well as an experience, consciousness has proved remarkably hard to pin down. We all know that we have it (or at least we think we do), but scientists are unable to prove that we have it or, indeed, exactly what it is and how it arises.

Dictionaries describe consciousness as the state of being awake and aware of our own existence. It is an “internal knowledge” characterized by sensation, emotions and thought.

Just over 20 years ago, an obscure Australian philosopher named David Chalmers created controversy in philosophical circles by raising what became known as the Hard Problem of Consciousness. He asked how the grey matter inside our heads gave rise to the mysterious experience of being. What makes us different to, say, a very efficient robot, one with, perhaps, artificial intelligence? And are we humans the only ones with consciousness?

  • Some scientists propose that consciousness is an illusion, a trick of the brain.
  • Still others believe we will never solve the consciousness riddle.
  • But a few neuroscientists think we may finally figure it out, provided we accept the remarkable idea that soon computers or the internet might one day become conscious.

In an extensive Guardian article, the author Oliver Burkeman wrote how Chalmers and others put forth a notion that all things in the universe might be (or potentially be) conscious, “providing the information it contains is sufficiently interconnected and organized.” So could an iPhone or a thermostat be conscious? And, if so, could we be in the midst of a ‘Conscious Web’?

Back in the mid-1990s, the author Jennifer Cobb Kreisberg wrote an influential piece for Wired, A Globe, Clothing Itself with a Brain. In it she described the work of a little-known Jesuit priest and paleontologist, Teilhard de Chardin, who 50 years earlier described a global sphere of thought, the “living unity of a single tissue” containing our collective thoughts, experiences and consciousness.

Teilhard called it the “noosphere” (noo is Greek for mind). He saw it as the evolutionary step beyond our geosphere (physical world) and biosphere (biological world). The informational wiring of a being, whether it is made up of neurons or electronics, gives birth to consciousness. As the diversification of nervous connections increases, de Chardin argued, evolution is led towards greater consciousness. Or as John Perry Barlow, Grateful Dead lyricist, cyber advocate and Teilhard de Chardin fan said: “With cyberspace, we are, in effect, hard-wiring the collective consciousness.”

So, perhaps we shouldn’t be so alarmed. Maybe we are on the cusp of a breakthrough not just in the field of artificial intelligence and the emerging internet of things, but also in our understanding of consciousness itself. If we can resolve the privacy, security and trust issues that both AI and the IoT present, we might make an evolutionary leap of historic proportions. And it’s just possible Teilhard’s remarkable vision of an interconnected “thinking layer” is what the web has been all along.

• Stephen Balkam is CEO of the Family Online Safety Institute in the US