DARPA Project Starts Building Human Memory Prosthetics

ORIGINAL: IEEE Spectrum
By Eliza Strickland
Posted 27 Aug 2014
The first memory-enhancing devices could be implanted within four years
Remember This? Lawrence Livermore engineer Vanessa Tolosa holds up a silicon wafer containing micromachined implantable neural devices for use in experimental memory prostheses. Photo: Lawrence Livermore National Laboratory
“They’re trying to do 20 years of research in 4 years,” says Michael Kahana in a tone that’s a mixture of excitement and disbelief. Kahana, director of the Computational Memory Lab at the University of Pennsylvania, is mulling over the tall order from the U.S. Defense Advanced Research Projects Agency (DARPA). In the next four years, he and other researchers are charged with understanding the neuroscience of memory and then building a prosthetic memory device that’s ready for implantation in a human brain.
DARPA’s first contracts under its Restoring Active Memory (RAM) program challenge two research groups to construct implants for veterans with traumatic brain injuries that have impaired their memories. Over 270,000 U.S. military service members have suffered such injuries since 2000, according to DARPA, and there are no truly effective drug treatments. This program builds on an earlier DARPA initiative focused on building a memory prosthesis, under which a different group of researchers had dramatic success in improving recall in mice and monkeys.
Kahana’s team will start by searching for biological markers of memory formation and retrieval. For this early research, the test subjects will be hospitalized epilepsy patients who have already had electrodes implanted to allow doctors to study their seizures. Kahana will record the electrical activity in these patients’ brains while they take memory tests.
“The memory is like a search engine,” Kahana says. “In the initial memory encoding, each event has to be tagged. Then in retrieval, you need to be able to search effectively using those tags.” He hopes to find the electrical signals associated with these two operations.
Once they’ve found the signals, researchers will try amplifying them using sophisticated neural stimulation devices. Here Kahana is working with the medical device maker Medtronic, in Minneapolis, which has already developed one experimental implant that can both record neural activity and stimulate the brain. Researchers have long wanted such a “closed-loop” device, as it can use real-time signals from the brain to define the stimulation parameters.
Kahana notes that designing such closed-loop systems poses a major engineering challenge. Recording natural neural activity is difficult when stimulation introduces new electrical signals, so the device must have special circuitry that allows it to quickly switch between the two functions. What’s more, the recorded information must be interpreted with blistering speed so it can be translated into a stimulation command. “We need to take analyses that used to occupy a personal computer for several hours and boil them down to a 10-millisecond algorithm,” he says.
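Neither team has published its algorithms, but the constraint Kahana describes can be sketched in miniature: keep a rolling window of recorded samples, reduce it to a biomarker, and issue a stimulate-or-not verdict every 10 milliseconds. Everything below (the sampling rate, the theta-band marker, the threshold) is an invented stand-in, not the RAM program’s actual pipeline.

import numpy as np
from collections import deque

FS = 1000        # sampling rate in Hz (invented; real implants vary)
BUFFER_MS = 500  # rolling window the biomarker is computed over
STEP_MS = 10     # decision cadence: one verdict per 10 ms, per Kahana

def theta_power(samples, fs=FS):
    """Toy biomarker: fraction of signal power in the 4-8 Hz theta band."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    band = (freqs >= 4.0) & (freqs <= 8.0)
    return spectrum[band].sum() / spectrum.sum()

buffer = deque(maxlen=FS * BUFFER_MS // 1000)

def step(new_samples, threshold=0.012):
    """One 10 ms cycle: ingest fresh samples, decide whether to stimulate.
    The threshold is arbitrary, set near the white-noise level so this demo
    fires roughly half the time; stimulate when the marker looks weak."""
    buffer.extend(new_samples)
    if len(buffer) < buffer.maxlen:
        return False  # still filling the analysis window
    return theta_power(np.array(buffer)) < threshold

stim_count = 0
for _ in range(100):                     # 100 cycles = 1 s of simulated data
    chunk = np.random.randn(FS * STEP_MS // 1000)
    if step(chunk):
        stim_count += 1                  # a real device would pulse here
print(stim_count, "stimulation commands issued")

The real difficulty, as Kahana says, is latency: the marker computation must finish inside the 10-millisecond budget on implanted hardware, and the amplifier has to switch between recording and stimulating without corrupting its own input.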
In four years’ time, Kahana hopes his team can show that such systems reliably improve memory in patients who are already undergoing brain surgery for epilepsy or Parkinson’s. That, he says, will lay the groundwork for future experiments in which medical researchers can try out the hardware in people with traumatic brain injuries—people who would not normally receive invasive neurosurgery.
The second research team is led by Itzhak Fried, director of the Cognitive Neurophysiology Laboratory at the University of California, Los Angeles. Fried’s team will focus on a part of the brain called the entorhinal cortex, which is the gateway to the hippocampus, the primary brain region associated with memory formation and storage. “Our approach to the RAM program is homing in on this circuit, which is really the golden circuit of memory,” Fried says. In a 2012 experiment, he showed that stimulating the entorhinal regions of patients while they were learning memory tasks improved their performance.
Fried’s group is working with Lawrence Livermore National Laboratory, in California, to develop more closed-loop hardware. At Livermore’s Center for Bioengineering, researchers are leveraging semiconductor manufacturing techniques to make tiny implantable systems. They first print microelectrodes on a polymer that sits atop a silicon wafer, then peel the polymer off and mold it into flexible cylinders about 1 millimeter in diameter. The memory prosthesis will have two of these cylindrical arrays, each studded with up to 64 hair-thin electrodes, which will be capable of both recording the activity of individual neurons and stimulating them. Fried believes his team’s device will be ready for tryout in patients with traumatic brain injuries within the four-year span of the RAM program.
Outside observers say the program’s goals are remarkably ambitious. Yet Steven Hyman, director of psychiatric research at the Broad Institute of MIT and Harvard, applauds its reach. “The kind of hardware that DARPA is interested in developing would be an extraordinary advance for the whole field,” he says. Hyman says DARPA’s funding for device development fills a gap in existing research. Pharmaceutical companies have found few new approaches to treating psychiatric and neurodegenerative disorders in recent years, he notes, and have therefore scaled back drug discovery efforts. “I think that approaches that involve devices and neuromodulation have greater near-term promise,” he says.
This article originally appeared in print as “Making a Human Memory Chip.”

Everybody Relax: An MIT Economist Explains Why Robots Won’t Steal Our Jobs

Living together in harmony. Photo by Oli Scarff/Getty Images
If you’ve ever found yourself fretting about the possibility that software and robotics are on the verge of thieving away all our jobs, renowned MIT labor economist David Autor is out with a new paper that might ease your nerves. Presented Friday at the Federal Reserve Bank of Kansas City’s big annual conference in Jackson Hole, Wyoming, the paper argues that humanity still has two big points in its favor: People have “common sense,” and they’re “flexible.”
Neil Irwin already has a lovely writeup of the paper at the New York Times, but let’s run down the basics. There’s no question machines are getting smarter, and quickly acquiring the ability to perform work that once seemed uniquely human. Think self-driving cars that might one day threaten cabbies, or computer programs that can handle the basics of legal research.
But artificial intelligence is still just that: artificial. We haven’t untangled all the mysteries of human judgment, and programmers definitely can’t translate the way we think entirely into code. Instead, scientists at the forefront of AI have found workarounds like machine-learning algorithms. As Autor points out, a computer might not have any abstract concept of a chair, but show it enough Ikea catalogs, and it can eventually suss out the physical properties statistically associated with a seat. Fortunately for you and me, this approach still has its limits.
For example, both a toilet and a traffic cone look somewhat like a chair, but a bit of reasoning about their shapes vis-à-vis the human anatomy suggests that a traffic cone is unlikely to make a comfortable seat. Drawing this inference, however, requires reasoning about what an object is “for” not simply what it looks like. Contemporary object recognition programs do not, for the most part, take this reasoning-based approach to identifying objects, likely because the task of developing and generalizing the approach to a large set of objects would be extremely challenging.
That’s what Autor means when he says machines lack common sense. They don’t think. They just do math.
And that leaves lots of room for human workers in the future.
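Autor’s cone-versus-chair point is easy to make concrete. In this toy sketch (all data invented), a nearest-neighbor classifier trained on crude shape features calls a traffic cone a chair, because nothing it measures captures what an object is for:

from sklearn.neighbors import KNeighborsClassifier

# Crude shape features: [height in meters, base width in meters].
objects = [
    ([0.90, 0.45], "chair"),      # kitchen chair
    ([0.85, 0.40], "chair"),      # office chair
    ([1.00, 0.50], "chair"),      # armchair
    ([1.80, 0.25], "not chair"),  # floor lamp
    ([1.70, 0.30], "not chair"),  # coat rack
    ([0.75, 1.20], "not chair"),  # coffee table
    ([0.70, 1.50], "not chair"),  # dining table
]
X = [features for features, label in objects]
y = [label for features, label in objects]

model = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# A traffic cone is about chair-sized, so by shape statistics alone it
# lands among the chairs; the model has no notion of "sittable."
print(model.predict([[0.75, 0.40]]))  # -> ['chair']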
Technology has already whittled away at middle class jobs, from factory workers replaced by robotic arms to secretaries made redundant by Outlook, over the past few decades. But Autor argues that plenty of today’s middle-skill occupations, such as construction trades and medical technicians, will stick around, because “many of the tasks currently bundled into these jobs cannot readily be unbundled … without a substantial drop in quality.”
These aren’t jobs that require performing a single task over and over again, but instead demand that employees handle some technical work while dealing with other human beings and improvising their way through unexpected problems. Machine learning algorithms can’t handle all of that. Human beings, Swiss-army knives that we are, can. We’re flexible.
Just like the dystopian arguments that machines are about to replace a vast swath of the workforce, Autor’s paper is very much speculative. It’s worth highlighting, though, because it cuts through the silly sense of inevitability that sometimes clouds this subject. Predictions about the future of technology and the economy are made to be dashed. And while Noah Smith makes a good point that we might want to be prepared for mass, technology-driven unemployment even if there’s just a slim chance of it happening, there’s also no reason to take it for granted.
Jordan Weissmann is Slate’s senior business and economics correspondent.
ORIGINAL: Slate

It’s Time to Take Artificial Intelligence Seriously

By CHRISTOPHER MIMS
Aug. 24, 2014
No Longer an Academic Curiosity, It Now Has Measurable Impact on Our Lives
A still from “2001: A Space Odyssey” with Keir Dullea reflected in the lens of HAL’s “eye.” Photo: MGM/Polaris/Stanley Kubrick
 
The age of intelligent machines has arrived—only they don’t look at all like we expected. Forget what you’ve seen in movies; this is no HAL from “2001: A Space Odyssey,” and it’s certainly not Scarlett Johansson’s disembodied voice in “Her.” It’s more akin to what insects, or even fungi, do when they “think.” (What, you didn’t know that slime molds can solve mazes?)
Artificial intelligence has lately been transformed from an academic curiosity to something that has measurable impact on our lives. Google Inc. used it to increase the accuracy of voice recognition in Android by 25%. The Associated Press is printing business stories written by it. Facebook Inc. is toying with it as a way to improve the relevance of the posts it shows you.
What is especially interesting about this point in the history of AI is that it’s no longer just for technology companies. Startups are beginning to adapt it to problems where, at least to me, its applicability is genuinely surprising.
Take advertising copywriting. Could the “Mad Men” of Don Draper’s day have predicted that by the beginning of the next century, they would be replaced by machines? Yet a company called Persado aims to do just that.
Persado does one thing, and judging by its client list, which includes Citigroup Inc. and Motorola Mobility, it does it well. It writes advertising emails and “landing pages” (where you end up if you click on a link in one of those emails, or an ad).
Here’s an example: Persado’s engine is being used across all of the types of emails a top U.S. wireless carrier sends out when it wants to convince its customers to renew their contracts, upgrade to a better plan or otherwise spend money.
Traditionally, an advertising copywriter would pen these emails; perhaps the company would test a few variants on a subset of its customers, to see which is best.
But Persado’s software deconstructs advertisements into five components, including emotion words, characteristics of the product, the “call to action” and even the position of text and the images accompanying it. By recombining them in millions of ways and then distilling their essential characteristics into eight or more test emails that are sent to some customers, Persado says it can effectively determine the best possible come-on.
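Persado’s ontology and selection model are proprietary, but the recombine-then-test mechanics described above fit in a few lines. The component lists here are invented, and a real system would have dozens of options per slot plus a response model for scoring the winners:

import itertools
import random

# Four of the component slots the article mentions, with toy options.
components = {
    "emotion":   ["Don't miss out:", "Good news:", "You've earned this:"],
    "product":   ["an upgraded plan", "double the data", "a loyalty rate"],
    "call":      ["Renew today.", "Claim it now.", "See your offer."],
    "placement": ["text-above-image", "text-below-image"],
}

# Exhaustive recombination; with dozens of options per slot, this is
# where the "millions of ways" come from.
variants = list(itertools.product(*components.values()))
print(len(variants), "possible emails")   # 3 * 3 * 3 * 2 = 54 here

# Distill the space into a small panel of test emails, as Persado does
# with the eight or more variants sent to a subset of customers.
for emotion, product, call, placement in random.sample(variants, 8):
    print(f"[{placement}] {emotion} {product}. {call}")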
“A creative person is good but random,” says Lawrence Whittle, head of sales at Persado. “We’ve taken the randomness out by building an ontology of language.”
The results speak for themselves: In the case of emails intended to convince mobile subscribers to renew their plans, initial trials with Persado increased click-through rates by 195%, the company says.
Here’s another example of AI becoming genuinely useful: X.ai is a startup aimed, like Persado, at doing one thing exceptionally well. In this case, it’s scheduling meetings. X.ai’s virtual assistant, Amy, isn’t a website or an app; she’s simply a “person” whom you cc: on emails to anyone with whom you’d like to schedule a meeting. Her sole “interface” is emails she sends and receives—just like a real assistant. Thus, you don’t have to bother with back-and-forth emails trying to find a convenient time and available place for lunch. Amy can correspond fluidly with anyone, but only on the subject of his or her calendar. This sounds like a simple problem to crack, but it isn’t, because Amy must communicate with a human being who might not even know she’s an AI, and she must do it flawlessly, says X.ai founder Dennis Mortensen.
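X.ai hasn’t published how Amy works, but one of her sub-problems can be sketched: pull candidate slots out of free text and intersect them with the host’s availability. The calendar, the regular expression, and the reply strings below are all invented stand-ins:

import re

# The host's free slots as (weekday, 24-hour) pairs; a stand-in calendar.
HOST_FREE = {("tuesday", 12), ("tuesday", 15), ("thursday", 10)}

def candidate_slots(email_text):
    """Find '<weekday> at <hour>[am|pm]' mentions, e.g. 'Tuesday at 3pm'."""
    pattern = r"(monday|tuesday|wednesday|thursday|friday)\s+at\s+(\d{1,2})\s*(am|pm)?"
    for day, hour, ampm in re.findall(pattern, email_text.lower()):
        yield day, int(hour) % 12 + (12 if ampm == "pm" else 0)

def propose(email_text):
    """Reply with the first mentioned slot the host actually has free."""
    for slot in candidate_slots(email_text):
        if slot in HOST_FREE:
            return "How about %s at %d:00?" % (slot[0].title(), slot[1])
    return "None of those work. Could you suggest other times?"

print(propose("I could do Monday at 3pm or Tuesday at 3pm."))
# -> How about Tuesday at 15:00?

The hard part is everything around this core: resolving “early next week,” handling time zones, and writing replies fluent enough that the other party never suspects Amy is software, which is why unparsed messages currently fall back to humans.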
E-mail conversations with Amy are already quite smooth. Mr. Mortensen used her to schedule our meeting, naturally, and it worked even though I purposely threw in some ambiguous language about the times I was available. But that is in part because Amy is still in the “training” stage, where anything she doesn’t understand gets handed to humans employed by X.ai.
It sounds like cheating, but every artificially intelligent system needs a body of data on which to “train” initially. For Persado, that body of data was text messages sent to prepaid cellphone customers in Europe, urging them to re-up their minutes or opt into special plans. For Amy, it’s a race to get a body of 100,000 email meeting requests. Amusingly, engineers at X.ai thought about using one of the biggest public databases of emails available, the Enron emails, but there is too much scheming in them to be a good sample.
Both of these systems, and others like them, work precisely because their makers have decided to tackle problems that are as narrowly defined as possible. Amy doesn’t have to have a conversation about the weather—just when and where you’d like to schedule a meeting. And Persado’s system isn’t going to come up with the next “Just Do It” campaign.
This is where some might object that the commercialized vision for AI isn’t intelligent at all. But academics can’t even agree on where the cutoff for “intelligence” is in living things, so the fact that these first steps toward economically useful artificial intelligence lie somewhere near the bottom of the spectrum of things that think shouldn’t bother us.
We’re also at a time when it seems that advances in the sheer power of computers will lead to AI that becomes progressively smarter. So-called deep-learning algorithms allow machines to learn unsupervised, whereas both Persado and X.ai’s systems require training guided by humans.
Last year Google showed that its own deep-learning systems could learn to recognize a cat from millions of images scraped from the Internet, without ever being told what a cat was in the first place. It’s a parlor trick, but it isn’t hard to see where this is going—the enhancement of the effectiveness of knowledge workers. Mr. Mortensen estimates there are 87 million of them in the world already, and they schedule 10 billion meetings a year. As more tools tackling specific portions of their job become available, their days could be filled with the things that only humans can do, like creativity.
“I think the next Siri is not Siri; it’s 100 companies like ours mashed into one,” says Mr. Mortensen.
—Follow Christopher Mims on Twitter @Mims or write to him at christopher.mims@wsj.com.

Ray Kurzweil: Get ready for hybrid thinking

ORIGINAL: TED
Jun 2, 2014
Two hundred million years ago, our mammal ancestors developed a new brain feature: the neocortex. This stamp-sized piece of tissue (wrapped around a brain the size of a walnut) is the key to what humanity has become. Now, futurist Ray Kurzweil suggests, we should get ready for the next big leap in brain power, as we tap into the computing power in the cloud.

 


Why a deep-learning genius left Google & joined Chinese tech shop Baidu (interview)

ORIGINAL: VentureBeat
July 30, 2014 8:03 AM
Image Credit: Jordan Novet/VentureBeat
SUNNYVALE, California — Chinese tech company Baidu has yet to make its popular search engine and other web services available in English. But consider yourself warned: Baidu could someday wind up becoming a favorite among consumers.
The strength of Baidu lies not in youth-friendly marketing or an enterprise-focused sales team. It lives instead in Baidu’s data centers, where servers run complex algorithms on huge volumes of data and gradually make its applications smarter, including not just Web search but also Baidu’s tools for music, news, pictures, video, and speech recognition.
Despite lacking the visibility (in the U.S., at least) of Google and Microsoft, Baidu has in recent years done a lot of work on deep learning, one of the most promising areas of artificial intelligence (AI) research. This work involves training systems called artificial neural networks on lots of information derived from audio, images, and other inputs, and then presenting the systems with new information and receiving inferences about it in response.
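That train-then-infer loop can be shown at toy scale with an off-the-shelf library (an illustration of the pattern, not Baidu’s stack): fit a small neural network on labeled images, then present images it has never seen and read off its inferences.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 1,797 labeled 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Training: show the network lots of labeled examples.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# Inference: present new information, receive the network's inferences.
print(net.predict(X_test[:5]))       # predicted digits for unseen images
print(net.score(X_test, y_test))     # accuracy on the held-out set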
Two months ago, Baidu hired Andrew Ng away from Google, where he started and led the so-called Google Brain project. Ng, whose move to Baidu follows Hugo Barra’s jump from Google to Chinese company Xiaomi last year, is one of the world’s handful of deep-learning rock stars.
Ng has taught classes on machine learning, robotics, and other topics at Stanford University. He also co-founded massive open online course startup Coursera.
He makes a strong argument for why a person like him would leave Google and join a company with a lower public profile. His argument can leave you feeling like you really ought to keep an eye on Baidu in the next few years.
“I thought the best place to advance the AI mission is at Baidu,” Ng said in an interview with VentureBeat.
Baidu’s search engine only runs in a few countries, including China, Brazil, Egypt, and Thailand. The Brazil service was announced just last week. Google’s search engine is far more popular than Baidu’s around the globe, although Baidu has already beaten out Yahoo and Microsoft’s Bing in global popularity, according to comScore figures.
And Baidu co-founder and chief executive Robin Li, a frequent speaker on Stanford’s campus, has said he wants Baidu to become a brand name in more than half of all the world’s countries. Presumably, then, Baidu will one day become something Americans can use.
Above: Baidu co-founder and chief executive Robin Li.
Image Credit: Baidu

 

Now that Ng leads Baidu’s research arm as the company’s chief scientist out of the company’s U.S. R&D Center here, it’s not hard to imagine that Baidu’s tools in English, if and when they become available, will be quite brainy — perhaps even eclipsing similar services from Apple and other tech giants. (Just think of how many people are less than happy with Siri.)

A stable full of AI talent

But this isn’t a story about the difference a single person will make. Baidu has a history in deep learning.
A couple years ago, Baidu hired Kai Yu, an engineer skilled in artificial intelligence. Based in Beijing, he has kept busy.
“I think Kai ships deep learning to an incredible number of products across Baidu,” Ng said. Yu also developed a system for providing infrastructure that enables deep learning for different kinds of applications.
“That way, Kai personally didn’t have to work on every single application,” Ng said.
In a sense, then, Ng joined a company that had already built momentum in deep learning. He wasn’t starting from scratch.
Above: Baidu’s Kai Yu.
Image Credit: Kai Yu
Only a few companies could have appealed to Ng, given his desire to push artificial intelligence forward. It’s capital-intensive, as it requires lots of data and computation. Baidu, he said, can provide those things.
Baidu is nimble, too. Unlike Silicon Valley’s tech giants, which measure activity in terms of monthly active users, Chinese Internet companies prefer to track usage by the day, Ng said.
“It’s a symptom of cadence,” he said. “What are you doing today?” And product cycles in China are short; iteration happens very fast, Ng said.
Plus, Baidu is willing to get infrastructure ready to use on the spot.
“Frankly, Kai just made decisions, and it just happened without a lot of committee meetings,” Ng said. “The ability of individuals in the company to make decisions like that and move infrastructure quickly is something I really appreciate about this company.”
That might sound like a kind deference to Ng’s new employer, but he was alluding to a clear advantage Baidu has over Google.
“He ordered 1,000 GPUs [graphics processing units] and got them within 24 hours,” Adam Gibson, co-founder of deep-learning startup Skymind, told VentureBeat. “At Google, it would have taken him weeks or months to get that.”
Not that Baidu is buying this type of hardware for the first time. Baidu was the first company to build a GPU cluster for deep learning, Ng said — a few other companies, like Netflix, have found GPUs useful for deep learning — and Baidu also maintains a fleet of servers packing ARM-based chips.
Above: Baidu headquarters in Beijing.
Image Credit: Baidu
Now the Silicon Valley researchers are using the GPU cluster and also looking to add to it and thereby create still bigger artificial neural networks.
But the efforts have long since begun to weigh on Baidu’s books and impact products. “We deepened our investment in advanced technologies like deep learning, which is already yielding near term enhancements in user experience and customer ROI and is expected to drive transformational change over the longer term,” Li said in a statement on the company’s earnings for the second quarter of 2014.
Next step: Improving accuracy
What will Ng do at Baidu? The answer will not be limited to any one of the company’s services. Baidu’s neural networks can work behind the scenes for a wide variety of applications, including those that handle text, spoken words, images, and videos. Surely core functions of Baidu like Web search and advertising will benefit, too.
“All of these are domains Baidu is looking at using deep learning, actually,” Ng said.
Ng’s focus now might best be summed up by one word: accuracy.
That makes sense from a corporate perspective. Google has the brain trust on image analysis, and Microsoft has the brain trust on speech, said Naveen Rao, co-founder and chief executive of deep-learning startup Nervana. Accuracy could potentially be the area where Ng and his colleagues will make the most substantive progress at Baidu, Rao said.
Matthew Zeiler, founder and chief executive of another deep-learning startup, Clarifai, was more certain. “I think you’re going to see a huge boost in accuracy,” said Zeiler, who has worked with Geoff Hinton and Yann LeCun and spent two summers on the Google Brain project.
One thing is for sure: Accuracy is on Ng’s mind.
Above: The lobby at Baidu’s office in Sunnyvale, Calif.
Image Credit: Jordan Novet/VentureBeat
“Here’s the thing. Sometimes changes in accuracy of a system will cause changes in the way you interact with the device,” Ng said. For instance, more accurate speech recognition could translate into people relying on it much more frequently. Think “Her”-level reliance, where you just talk to your computer as a matter of course rather than using speech recognition in special cases.
“Speech recognition today doesn’t really work in noisy environments,” Ng said. But that could change if Baidu’s neural networks become more accurate under Ng.
Ng picked up his smartphone, opened the Baidu Translate app, and told it that he needed a taxi. A female voice said that in Mandarin and displayed Chinese characters on screen. But it wasn’t a difficult test, in some ways: This was no crowded street in Beijing. This was a quiet conference room in a quiet office.
“There’s still work to do,” Ng said.
‘The future heroes of deep learning’
Meanwhile, researchers at companies and universities have been hard at work on deep learning for decades.
Google has built up a hefty reputation for applying deep learning to images from YouTube videos, data center energy use, and other areas, partly thanks to Ng’s contributions. And recently Microsoft made headlines for deep-learning advancements with its Project Adam work, although Li Deng of Microsoft Research has been working with neural networks for more than 20 years.
In academia, deep-learning research groups are at work all over North America and Europe. Key figures in the past few years include Yoshua Bengio at the University of Montreal, Geoff Hinton of the University of Toronto (Google grabbed him last year through its DNNresearch acquisition), Yann LeCun from New York University (Facebook pulled him aboard late last year), and Ng.
But Ng’s strong points differ from those of his contemporaries. Whereas Bengio made strides in training neural networks, LeCun developed convolutional neural networks, and Hinton popularized restricted Boltzmann machines, Ng takes the best, implements it, and makes improvements.
“Andrew is neutral in that he’s just going to use what works,” Gibson said. “He’s very practical, and he’s neutral about the stamp on it.”
Not that Ng intends to go it alone. To create larger and more accurate neural networks, Ng needs to look around and find like-minded engineers.
“He’s going to be able to bring a lot of talent over,” Dave Sullivan, co-founder and chief executive of deep-learning startup Ersatz Labs, told VentureBeat. “This guy is not sitting down and writing mountains of code every day.”
And truth be told, Ng has had no trouble building his team.
“Hiring for Baidu has been easier than I’d expected,” he said.
“A lot of engineers have always wanted to work on AI. … My job is providing the team with the best possible environment for them to do AI, for them to be the future heroes of deep learning.”


How Watson Changed IBM

ORIGINAL: HBR
by Brad Power
August 22, 2014

Remember when IBM’s “Watson” computer competed on the TV game show “Jeopardy” and won? Most people probably thought “Wow, that’s cool,” or perhaps were briefly reminded of the legend of John Henry and the ongoing contest between man and machine. Beyond the media splash it caused, though, the event was viewed as a breakthrough on many fronts. Watson demonstrated that machines could understand and interact in a natural language, question-and-answer format and learn from their mistakes. This meant that machines could deal with the exploding growth of non-numeric information that is getting hard for humans to keep track of: to name two prominent and crucially important examples,

  • keeping up with all of the knowledge coming out of human genome research, or 
  • keeping track of all the medical information in patient records.
So IBM asked the question: How could the fullest potential of this breakthrough be realized, and how could IBM create and capture a significant portion of that value? They knew the answer was not by relying on traditional internal processes and practices for R&D and innovation. Advances in technology — especially digital technology and the increasing role of software in products and services — are demanding that large, successful organizations increase their pace of innovation and make greater use of resources outside their boundaries. This means internal R&D activities must increasingly shift towards becoming crowdsourced, taking advantage of the wider ecosystem of customers, suppliers, and entrepreneurs.
IBM, a company with a long and successful tradition of internally-focused R&D activities, is adapting to this new world of creating platforms and enabling open innovation. Case in point, rather than keep Watson locked up in their research labs, they decided to release it to the world as a platform, to run experiments with a variety of organizations to accelerate development of natural language applications and services. In January 2014 IBM announced they were spending $1 billion to launch the Watson Group, including a $100 million venture fund to support start-ups and businesses that are building Watson-powered apps using the “Watson Developers Cloud.” More than 2,500 developers and start-ups have reached out to the IBM Watson Group since the Watson Developers Cloud was launched in November 2013.

So how does it work? First, with multiple business models. Mike Rhodin, IBM’s senior vice president responsible for Watson, told me, “There are three core business models that we will run in parallel. 

  • The first is around industries that we think will go through a big change in “cognitive” [natural language] computing, such as financial services and healthcare. For example, in healthcare we’re working with The Cleveland Clinic on how medical knowledge is taught. 
  • The second is where we see similar patterns across industries, such as how people discover and engage with organizations and how organizations make different kinds of decisions. 
  • The third business model is creating an ecosystem of entrepreneurs. We’re always looking for companies with brilliant ideas that we can partner with or acquire. With the entrepreneur ecosystem, we are behaving more like a Silicon Valley startup. We can provide the entrepreneurs with access to early adopter customers in the 170 countries in which we operate. If entrepreneurs are successful, we keep a piece of the action.”
IBM also had to make some bold structural moves in order to create an organization that could both function as a platform and collaborate with outsiders for open innovation. They carved out The Watson Group as a new, semi-autonomous, vertically integrated unit, reporting to the CEO. They brought in 2000 people, a dozen projects, a couple of Big Data and content analytics tools, and a consulting unit (outside of IBM Global Services). IBM’s traditional annual budget cycle and business unit financial measures weren’t right for Watson’s fast pace, so, as Mike Rhodin told me, “I threw out the annual planning cycle and replaced it with a looser, more agile management system. In monthly meetings with CEO Ginni Rometty, we’ll talk one time about technology, and another time about customer innovations. I have to balance between strategic intent and tactical, short-term decision-making. Even though we’re able to take the long view, we still have to make tactical decisions.”

More and more, organizations will need to make choices in their R&D activities to either create platforms or take advantage of them. 

Those with deep technical and infrastructure skills, like IBM, can shift the focus of their internal R&D activities toward building platforms that can connect with ecosystems of outsiders to collaborate on innovation.

The second and more likely option for most companies is to use platforms like IBM’s or Amazon’s to create their own apps and offerings for customers and partners. In either case, new, semi-autonomous agile units, like IBM’s Watson Group, can help to create and capture huge value from these new customer and entrepreneur ecosystems.


“Brain” In A Dish Acts As Autopilot Living Computer

ORIGINAL: U of Florida
by Jennifer Viegas
Nov 27, 2012
A glass dish contains a “brain” — a living network of 25,000 rat brain cells connected to an array of 60 electrodes. Photo: University of Florida/Ray Carson


A University of Florida scientist has grown a living “brain” that can fly a simulated plane, giving scientists a novel way to observe how brain cells function as a network.

The “brain” — a collection of 25,000 living neurons, or nerve cells, taken from a rat’s brain and cultured inside a glass dish — gives scientists a unique real-time window into the brain at the cellular level. By watching the brain cells interact, scientists hope to understand what causes neural disorders such as epilepsy and to determine noninvasive ways to intervene.



Thomas DeMarse holds a glass dish containing a living network of 25,000 rat brain cells connected to an array of 60 electrodes that can interact with a computer to fly a simulated F-22 fighter plane.

“As living computers, they may someday be used to fly small unmanned airplanes or handle tasks that are dangerous for humans, such as search-and-rescue missions or bomb damage assessments.”

“We’re interested in studying how brains compute,” said Thomas DeMarse, the UF assistant professor of biomedical engineering who designed the study. “If you think about your brain, and learning and the memory process, I can ask you questions about when you were 5 years old and you can retrieve information. That’s a tremendous capacity for memory. In fact, you perform fairly simple tasks that you would think a computer would easily be able to accomplish, but in fact it can’t.”
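The article doesn’t reproduce the team’s control code, but the setup it describes maps onto a simple read-decode-stimulate cycle. In this toy sketch, the channel split, baseline rate, and feedback encoding are all invented:

import numpy as np

N_ELECTRODES = 60   # matches the array described above

def read_firing_rates():
    """Stand-in for the electrode array: spikes per second per channel."""
    return np.random.poisson(lam=5.0, size=N_ELECTRODES)

def decode_control(rates):
    """Toy decoder: half the array drives pitch, the other half roll."""
    pitch = rates[:30].mean() - 5.0   # deviation from the baseline rate
    roll = rates[30:].mean() - 5.0
    return pitch, roll

def encode_feedback(pitch_error, roll_error):
    """Toy encoder: bigger flight error, stronger stimulation pulses."""
    return np.clip([abs(pitch_error), abs(roll_error)], 0.0, 10.0)

# One control cycle: the neurons steer the simulated plane, and the
# plane's deviation from level flight returns to the dish as stimulation.
rates = read_firing_rates()
pitch, roll = decode_control(rates)
stimulus = encode_feedback(0.0 - pitch, 0.0 - roll)
print(pitch, roll, stimulus)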


Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask

Viv was named after the Latin root meaning “live.” Its San Jose, California, offices are decorated with tchotchkes bearing the numbers six and five (VI and V in Roman numerals). Photo: Ariel Zambelich

When Apple announced the iPhone 4S on October 4, 2011, the headlines were not about its speedy A5 chip or improved camera. Instead they focused on an unusual new feature: an intelligent assistant, dubbed Siri. At first Siri, endowed with a female voice, seemed almost human in the way she understood what you said to her and responded, an advance in artificial intelligence that seemed to place us on a fast track to the Singularity. She was brilliant at fulfilling certain requests, like “Can you set the alarm for 6:30?” or “Call Diane’s mobile phone.” And she had a personality: If you asked her if there was a God, she would demur with deft wisdom. “My policy is the separation of spirit and silicon,” she’d say.

Over the next few months, however, Siri’s limitations became apparent. Ask her to book a plane trip and she would point to travel websites—but she wouldn’t give flight options, let alone secure you a seat. Ask her to buy a copy of Lee Child’s new book and she would draw a blank, despite the fact that Apple sells it. Though Apple has since extended Siri’s powers—to make an OpenTable restaurant reservation, for example—she still can’t do something as simple as booking a table on the next available night in your schedule. She knows how to check your calendar and she knows how to use OpenTable. But putting those things together is, at the moment, beyond her.
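Once both services expose programmatic interfaces, the calendar-plus-OpenTable composition is a few lines of glue; what Siri lacks is the ability to assemble that glue from a spoken request on the fly. A sketch with invented stand-ins (the busy dates, the restaurant, and book_table are all hypothetical):

import datetime

BUSY = {datetime.date(2014, 8, 25), datetime.date(2014, 8, 26)}

def next_free_evening(start):
    """Walk the calendar until a night with no commitments appears."""
    day = start
    while day in BUSY:
        day += datetime.timedelta(days=1)
    return day

def book_table(restaurant, date):
    """Stand-in for an OpenTable-style reservation call."""
    return f"Booked {restaurant} for {date:%A, %B %d}"

# "Book me a table on the next available night in my schedule."
night = next_free_evening(datetime.date(2014, 8, 25))
print(book_table("Quince", night))  # -> Booked Quince for Wednesday, August 27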


Joi Ito: Want to innovate? Become a “now-ist”

“Remember before the internet?” asks Joi Ito. “Remember when people used to try to predict the future?” In this engaging talk, the head of the MIT Media Lab skips the future predictions and instead shares a new approach to creating in the moment: building quickly and improving constantly, without waiting for permission or for proof that you have the right idea. This kind of bottom-up innovation is seen in the most fascinating, futuristic projects emerging today, and it starts, he says, with being open and alert to what’s going on around you right now. Don’t be a futurist, he suggests: be a now-ist.

Preparing Your Students for the Challenges of Tomorrow

ORIGINAL: Edutopia
August 20, 2014

Right now, you have students. Eventually, those students will become the citizens of the 21st century — employers, employees, professionals, educators, and caretakers of our planet. Beyond mastery of standards, what can you do to help prepare them? What can you promote to be sure they are equipped with the skill sets they will need to take on challenges and opportunities that we can’t yet even imagine?

Following are six tips to guide you in preparing your students for what they’re likely to face in the years and decades to come.

1. Teach Collaboration as a Value and Skill Set
Students of today need new skills for the coming century that will make them ready to collaborate with others on a global level. Whatever they do, we can expect their work to include finding creative solutions to emerging challenges.
2. Evaluate Information Accuracy 
New information is being discovered and disseminated at a phenomenal rate. It is predicted that 50 percent of the facts students are memorizing today will no longer be accurate or complete in the near future. Students need to know
  • how to find accurate information, and
  • how to use critical analysis to assess the veracity or bias, and the current or potential uses, of new information.
These are the executive functions that they need to develop and practice in the home and at school today, because without them, students will be unprepared to find, analyze, and use the information of tomorrow.
3. Teach Tolerance 
In order for collaboration to happen within a global community, job applicants of the future will be evaluated by their ability for communication with, openness to, and tolerance for unfamiliar cultures and ideas. To foster these critical skills, today’s students will need open discussions and experiences that can help them learn about and feel comfortable communicating with people of other cultures.
4. Help Students Learn Through Their Strengths 
Children are born with brains that want to learn. They’re also born with different strengths — and they grow best through those strengths. One size does not fit all in assessment and instruction. The current testing system and the curriculum that it has spawned leave behind the majority of students who might not be doing their best with the linear, sequential instruction required for this kind of testing. Look ahead on the curriculum map and help promote each student’s interest in the topic beforehand. Use clever “front-loading” techniques that will pique their curiosity.
5. Use Learning Beyond the Classroom
New “learning” does not become permanent memory unless there is repeated stimulation of the new memory circuits in the brain pathways. This is the “practice makes permanent” aspect of neuroplasticity where neural networks that are the most stimulated develop more dendrites, synapses, and thicker myelin for more efficient information transmission. These stronger networks are less susceptible to pruning, and they become long-term memory holders. Students need to use what they learn repeatedly and in different, personally meaningful ways for short-term memory to become permanent knowledge that can be retrieved and used in the future. Help your students make memories permanent by providing opportunities for them to “transfer” school learning to real-life situations.
6. Teach Students to Use Their Brain Owner’s Manual
The most important manual that you can share with your students is the owner’s manual to their own brains. When they understand how their brains take in and store information (PDF, 139KB), they hold the keys to successfully operating the most powerful tool they’ll ever own. When your students understand that, through neuroplasticity, they can change their own brains and intelligence, together you can build their resilience and willingness to persevere through the challenges that they will undoubtedly face in the future.

How are you preparing your students to thrive in the world they’ll inhabit as adults?
