Why a deep-learning genius left Google & joined Chinese tech shop Baidu (interview)

ORIGINAL: VentureBeat
July 30, 2014 8:03 AM
Image Credit: Jordan Novet/VentureBeat
SUNNYVALE, California — Chinese tech company Baidu has yet to make its popular search engine and other web services available in English. But consider yourself warned: Baidu could someday wind up becoming a favorite among consumers.
The strength of Baidu lies not in youth-friendly marketing or an enterprise-focused sales team. It lives instead in Baidu’s data centers, where servers run complex algorithms on huge volumes of data and gradually make its applications smarter, including not just Web search but also Baidu’s tools for music, news, pictures, video, and speech recognition.
Despite lacking the visibility (in the U.S., at least) of Google and Microsoft, Baidu has in recent years done a lot of work on deep learning, one of the most promising areas of artificial intelligence (AI) research. This work involves training systems called artificial neural networks on large volumes of audio, images, and other inputs, then presenting the systems with new information and receiving inferences about it in response.
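The train-then-infer workflow that paragraph describes can be sketched in a few lines. This toy uses a single logistic "neuron" rather than the deep, many-layered networks companies like Baidu actually run; the names (`train`, `predict`) and all parameters are illustrative, not drawn from any real system.

```python
import math
import random

def predict(weights, bias, x):
    """Weighted sum passed through a sigmoid: the neuron's output in (0, 1)."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=200):
    """Stochastic gradient descent on the logistic loss."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = predict(weights, bias, x)
            err = p - y  # gradient of the loss with respect to the pre-activation
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias

random.seed(0)
# Training data: two clusters, label 0 near (0, 0) and label 1 near (2, 2).
samples = [(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(50)] + \
          [(random.gauss(2, 0.3), random.gauss(2, 0.3)) for _ in range(50)]
labels = [0] * 50 + [1] * 50

weights, bias = train(samples, labels)

# Inference on points the network has never seen.
print(predict(weights, bias, (0.1, -0.2)))  # close to 0
print(predict(weights, bias, (1.9, 2.1)))   # close to 1
```

Scaling this same loop up to millions of units and billions of examples is what makes the work capital-intensive, as Ng notes later in the piece.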
Two months ago, Baidu hired Andrew Ng away from Google, where he started and led the so-called Google Brain project. Ng, whose move to Baidu follows Hugo Barra’s jump from Google to Chinese company Xiaomi last year, is one of the world’s handful of deep-learning rock stars.
Ng has taught classes on machine learning, robotics, and other topics at Stanford University. He also co-founded the massive open online course startup Coursera.
He makes a strong argument for why a person like him would leave Google and join a company with a lower public profile. His argument can leave you feeling like you really ought to keep an eye on Baidu in the next few years.
“I thought the best place to advance the AI mission is at Baidu,” Ng said in an interview with VentureBeat.
Baidu’s search engine only runs in a few countries, including China, Brazil, Egypt, and Thailand. The Brazil service was announced just last week. Google’s search engine is far more popular than Baidu’s around the globe, although Baidu has already beaten out Yahoo and Microsoft’s Bing in global popularity, according to comScore figures.
And Baidu co-founder and chief executive Robin Li, a frequent speaker on Stanford’s campus, has said he wants Baidu to become a brand name in more than half of all the world’s countries. Presumably, then, Baidu will one day become something Americans can use.
Above: Baidu co-founder and chief executive Robin Li.
Image Credit: Baidu

 

Now that Ng leads Baidu’s research arm as the company’s chief scientist out of the company’s U.S. R&D Center here, it’s not hard to imagine that Baidu’s tools in English, if and when they become available, will be quite brainy — perhaps even eclipsing similar services from Apple and other tech giants. (Just think of how many people are less than happy with Siri.)

A stable full of AI talent

But this isn’t a story about the difference a single person will make. Baidu has a history in deep learning.
A couple of years ago, Baidu hired Kai Yu, an engineer skilled in artificial intelligence. Based in Beijing, he has kept busy.
“I think Kai ships deep learning to an incredible number of products across Baidu,” Ng said. Yu also developed a system for providing infrastructure that enables deep learning for different kinds of applications.
“That way, Kai personally didn’t have to work on every single application,” Ng said.
In a sense, then, Ng joined a company that had already built momentum in deep learning. He wasn’t starting from scratch.
Above: Baidu’s Kai Yu.
Image Credit: Kai Yu
Only a few companies could have appealed to Ng, given his desire to push artificial intelligence forward. It’s capital-intensive, as it requires lots of data and computation. Baidu, he said, can provide those things.
Baidu is nimble, too. Unlike Silicon Valley’s tech giants, which measure activity in terms of monthly active users, Chinese Internet companies prefer to track usage by the day, Ng said.
“It’s a symptom of cadence,” he said. “What are you doing today?” And product cycles in China are short; iteration happens very fast, Ng said.
Plus, Baidu is willing to get infrastructure ready to use on the spot.
“Frankly, Kai just made decisions, and it just happened without a lot of committee meetings,” Ng said. “The ability of individuals in the company to make decisions like that and move infrastructure quickly is something I really appreciate about this company.”
That might sound like a kind deference to Ng’s new employer, but he was alluding to a clear advantage Baidu has over Google.
“He ordered 1,000 GPUs [graphics processing units] and got them within 24 hours,” Adam Gibson, co-founder of deep-learning startup Skymind, told VentureBeat. “At Google, it would have taken him weeks or months to get that.”
Not that Baidu is buying this type of hardware for the first time. Baidu was the first company to build a GPU cluster for deep learning, Ng said — a few other companies, like Netflix, have found GPUs useful for deep learning — and Baidu also maintains a fleet of servers packing ARM-based chips.
Above: Baidu headquarters in Beijing.
Image Credit: Baidu
Now Baidu’s Silicon Valley researchers are using the GPU cluster, and they’re looking to expand it to create still bigger artificial neural networks.
The efforts have already begun to show up on Baidu’s books and in its products. “We deepened our investment in advanced technologies like deep learning, which is already yielding near term enhancements in user experience and customer ROI and is expected to drive transformational change over the longer term,” Li said in a statement on the company’s earnings for the second quarter of 2014.
Next step: Improving accuracy
What will Ng do at Baidu? The answer will not be limited to any one of the company’s services. Baidu’s neural networks can work behind the scenes for a wide variety of applications, including those that handle text, spoken words, images, and videos. Surely core functions of Baidu like Web search and advertising will benefit, too.
“All of these are domains Baidu is looking at using deep learning, actually,” Ng said.
Ng’s focus now might best be summed up by one word: accuracy.
That makes sense from a corporate perspective. Google has the brain trust on image analysis, and Microsoft has the brain trust on speech, said Naveen Rao, co-founder and chief executive of deep-learning startup Nervana. Accuracy could potentially be the area where Ng and his colleagues will make the most substantive progress at Baidu, Rao said.
Matthew Zeiler, founder and chief executive of another deep-learning startup, Clarifai, was more certain. “I think you’re going to see a huge boost in accuracy,” said Zeiler, who has worked with Geoff Hinton and Yann LeCun and spent two summers on the Google Brain project.
One thing is for sure: Accuracy is on Ng’s mind.
Above: The lobby at Baidu’s office in Sunnyvale, Calif.
Image Credit: Jordan Novet/VentureBeat
“Here’s the thing. Sometimes changes in accuracy of a system will cause changes in the way you interact with the device,” Ng said. For instance, more accurate speech recognition could translate into people relying on it much more frequently. Think “Her”-level reliance, where you just talk to your computer as a matter of course rather than using speech recognition in special cases.
“Speech recognition today doesn’t really work in noisy environments,” Ng said. But that could change if Baidu’s neural networks become more accurate under Ng.
Ng picked up his smartphone, opened the Baidu Translate app, and told it that he needed a taxi. A female voice said that in Mandarin and displayed Chinese characters on screen. But it wasn’t a difficult test, in some ways: This was no crowded street in Beijing. This was a quiet conference room in a quiet office.
“There’s still work to do,” Ng said.
‘The future heroes of deep learning’
Meanwhile, researchers at companies and universities have been hard at work on deep learning for decades.
Google has built up a hefty reputation for applying deep learning to images from YouTube videos, data center energy use, and other areas, partly thanks to Ng’s contributions. And recently Microsoft made headlines for deep-learning advancements with its Project Adam work, although Li Deng of Microsoft Research has been working with neural networks for more than 20 years.
In academia, deep learning research groups have sprung up all over North America and Europe. Key figures in the past few years include Yoshua Bengio at the University of Montreal, Geoff Hinton of the University of Toronto (Google grabbed him last year through its DNNresearch acquisition), Yann LeCun from New York University (Facebook pulled him aboard late last year), and Ng.
But Ng’s strong points differ from those of his contemporaries. Whereas Bengio made strides in training neural networks, LeCun developed convolutional neural networks, and Hinton popularized restricted Boltzmann machines, Ng takes the best, implements it, and makes improvements.
“Andrew is neutral in that he’s just going to use what works,” Gibson said. “He’s very practical, and he’s neutral about the stamp on it.”
Not that Ng intends to go it alone. To create larger and more accurate neural networks, Ng needs to look around and find like-minded engineers.
“He’s going to be able to bring a lot of talent over,” Dave Sullivan, co-founder and chief executive of deep-learning startup Ersatz Labs, told VentureBeat. “This guy is not sitting down and writing mountains of code every day.”
And truth be told, Ng has had no trouble building his team.
“Hiring for Baidu has been easier than I’d expected,” he said.
“A lot of engineers have always wanted to work on AI. … My job is providing the team with the best possible environment for them to do AI, for them to be the future heroes of deep learning.”


How Watson Changed IBM

ORIGINAL: HBR
by Brad Power
August 22, 2014

Remember when IBM’s “Watson” computer competed on the TV game show “Jeopardy” and won? Most people probably thought “Wow, that’s cool,” or perhaps were briefly reminded of the legend of John Henry and the ongoing contest between man and machine. Beyond the media splash it caused, though, the event was viewed as a breakthrough on many fronts. Watson demonstrated that machines could understand and interact in a natural language, question-and-answer format and learn from their mistakes. This meant that machines could deal with the exploding growth of non-numeric information that is getting hard for humans to keep track of: to name two prominent and crucially important examples,

  • keeping up with all of the knowledge coming out of human genome research, or 
  • keeping track of all the medical information in patient records.
So IBM asked the question: How could the fullest potential of this breakthrough be realized, and how could IBM create and capture a significant portion of that value? They knew the answer was not by relying on traditional internal processes and practices for R&D and innovation. Advances in technology — especially digital technology and the increasing role of software in products and services — are demanding that large, successful organizations increase their pace of innovation and make greater use of resources outside their boundaries. This means internal R&D activities must increasingly shift towards becoming crowdsourced, taking advantage of the wider ecosystem of customers, suppliers, and entrepreneurs.
IBM, a company with a long and successful tradition of internally focused R&D activities, is adapting to this new world of creating platforms and enabling open innovation. Case in point: rather than keep Watson locked up in its research labs, IBM decided to release it to the world as a platform, to run experiments with a variety of organizations to accelerate development of natural language applications and services. In January 2014 IBM announced it was spending $1 billion to launch the Watson Group, including a $100 million venture fund to support start-ups and businesses that are building Watson-powered apps using the “Watson Developers Cloud.” More than 2,500 developers and start-ups have reached out to the IBM Watson Group since the Watson Developers Cloud was launched in November 2013.

So how does it work? First, with multiple business models. Mike Rhodin, IBM’s senior vice president responsible for Watson, told me, “There are three core business models that we will run in parallel. 

  • The first is around industries that we think will go through a big change in “cognitive” [natural language] computing, such as financial services and healthcare. For example, in healthcare we’re working with The Cleveland Clinic on how medical knowledge is taught. 
  • The second is where we see similar patterns across industries, such as how people discover and engage with organizations and how organizations make different kinds of decisions. 
  • The third business model is creating an ecosystem of entrepreneurs. We’re always looking for companies with brilliant ideas that we can partner with or acquire. With the entrepreneur ecosystem, we are behaving more like a Silicon Valley startup. We can provide the entrepreneurs with access to early adopter customers in the 170 countries in which we operate. If entrepreneurs are successful, we keep a piece of the action.”
IBM also had to make some bold structural moves in order to create an organization that could both function as a platform and collaborate with outsiders for open innovation. They carved out The Watson Group as a new, semi-autonomous, vertically integrated unit, reporting to the CEO. They brought in 2,000 people, a dozen projects, a couple of Big Data and content analytics tools, and a consulting unit (outside of IBM Global Services). IBM’s traditional annual budget cycle and business unit financial measures weren’t right for Watson’s fast pace, so, as Mike Rhodin told me, “I threw out the annual planning cycle and replaced it with a looser, more agile management system. In monthly meetings with CEO Ginni Rometty, we’ll talk one time about technology, and another time about customer innovations. I have to balance between strategic intent and tactical, short-term decision-making. Even though we’re able to take the long view, we still have to make tactical decisions.”

More and more, organizations will need to make choices in their R&D activities to either create platforms or take advantage of them. 

Those with deep technical and infrastructure skills, like IBM, can shift the focus of their internal R&D activities toward building platforms that can connect with ecosystems of outsiders to collaborate on innovation.

The second and more likely option for most companies is to use platforms like IBM’s or Amazon’s to create their own apps and offerings for customers and partners. In either case, new, semi-autonomous agile units, like IBM’s Watson Group, can help to create and capture huge value from these new customer and entrepreneur ecosystems.

More blog posts by Brad Power

“Brain” In A Dish Acts As Autopilot Living Computer

ORIGINAL: U of Florida
by Jennifer Viegas
Nov 27, 2012
A glass dish contains a “brain” — a living network of 25,000 rat brain cells connected to an array of 60 electrodes.
Image Credit: University of Florida/Ray Carson


A University of Florida scientist has grown a living “brain” that can fly a simulated plane, giving scientists a novel way to observe how brain cells function as a network. The “brain” — a collection of 25,000 living neurons, or nerve cells, taken from a rat’s brain and cultured inside a glass dish — gives scientists a unique real-time window into the brain at the cellular level. By watching the brain cells interact, scientists hope to understand what causes neural disorders such as epilepsy and to determine noninvasive ways to intervene.



Thomas DeMarse holds a glass dish containing a living network of 25,000 rat brain cells connected to an array of 60 electrodes that can interact with a computer to fly a simulated F-22 fighter plane.

“As living computers, they may someday be used to fly small unmanned airplanes or handle tasks that are dangerous for humans, such as search-and-rescue missions or bomb damage assessments.”

“We’re interested in studying how brains compute,” said Thomas DeMarse, the UF assistant professor of biomedical engineering who designed the study. “If you think about your brain, and learning and the memory process, I can ask you questions about when you were 5 years old and you can retrieve information. That’s a tremendous capacity for memory. In fact, you perform fairly simple tasks that you would think a computer would easily be able to accomplish, but in fact it can’t.”


Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask

Viv was named after the Latin root meaning “live.” Its San Jose, California, offices are decorated with tchotchkes bearing the numbers six and five (VI and V in Roman numerals).
Image Credit: Ariel Zambelich

When Apple announced the iPhone 4S on October 4, 2011, the headlines were not about its speedy A5 chip or improved camera. Instead they focused on an unusual new feature: an intelligent assistant, dubbed Siri. At first Siri, endowed with a female voice, seemed almost human in the way she understood what you said to her and responded, an advance in artificial intelligence that seemed to place us on a fast track to the Singularity. She was brilliant at fulfilling certain requests, like “Can you set the alarm for 6:30?” or “Call Diane’s mobile phone.” And she had a personality: If you asked her if there was a God, she would demur with deft wisdom. “My policy is the separation of spirit and silicon,” she’d say.

Over the next few months, however, Siri’s limitations became apparent. Ask her to book a plane trip and she would point to travel websites—but she wouldn’t give flight options, let alone secure you a seat. Ask her to buy a copy of Lee Child’s new book and she would draw a blank, despite the fact that Apple sells it. Though Apple has since extended Siri’s powers—to make an OpenTable restaurant reservation, for example—she still can’t do something as simple as booking a table on the next available night in your schedule. She knows how to check your calendar and she knows how to use OpenTable. But putting those things together is, at the moment, beyond her.


Joi Ito: Want to innovate? Become a “now-ist”

“Remember before the internet?” asks Joi Ito. “Remember when people used to try to predict the future?” In this engaging talk, the head of the MIT Media Lab skips the future predictions and instead shares a new approach to creating in the moment: building quickly and improving constantly, without waiting for permission or for proof that you have the right idea. This kind of bottom-up innovation is seen in the most fascinating, futuristic projects emerging today, and it starts, he says, with being open and alert to what’s going on around you right now. Don’t be a futurist, he suggests: be a now-ist.

Preparing Your Students for the Challenges of Tomorrow

ORIGINAL: Edutopia
August 20, 2014

Right now, you have students. Eventually, those students will become the citizens — employers, employees, professionals, educators, and caretakers of our planet in the 21st century. Beyond mastery of standards, what can you do to help prepare them? What can you promote to be sure they are equipped with the skill sets they will need to take on challenges and opportunities that we can’t yet even imagine?

Following are six tips to guide you in preparing your students for what they’re likely to face in the years and decades to come.

1. Teach Collaboration as a Value and Skill Set
Students of today need new skills for the coming century that will make them ready to collaborate with others on a global level. Whatever they do, we can expect their work to include finding creative solutions to emerging challenges.
2. Evaluate Information Accuracy 
New information is being discovered and disseminated at a phenomenal rate. It is predicted that 50 percent of the facts students are memorizing today will no longer be accurate or complete in the near future. Students need to know
  • how to find accurate information, and
  • how to use critical analysis to assess the veracity or bias of new information, as well as its current or potential uses.
These are the executive functions that they need to develop and practice in the home and at school today, because without them, students will be unprepared to find, analyze, and use the information of tomorrow.
3. Teach Tolerance 
In order for collaboration to happen within a global community, job applicants of the future will be evaluated on their ability to communicate with, their openness to, and their tolerance for unfamiliar cultures and ideas. To foster these critical skills, today’s students will need open discussions and experiences that can help them learn about and feel comfortable communicating with people of other cultures.
4. Help Students Learn Through Their Strengths 
Children are born with brains that want to learn. They’re also born with different strengths — and they grow best through those strengths. One size does not fit all in assessment and instruction. The current testing system and the curriculum that it has spawned leave behind the majority of students who might not be doing their best with the linear, sequential instruction required for this kind of testing. Look ahead on the curriculum map and help promote each student’s interest in the topic beforehand. Use clever “front-loading” techniques that will pique their curiosity.
5. Use Learning Beyond the Classroom
New “learning” does not become permanent memory unless there is repeated stimulation of the new memory circuits in the brain pathways. This is the “practice makes permanent” aspect of neuroplasticity, where the neural networks that are the most stimulated develop more dendrites, synapses, and thicker myelin for more efficient information transmission. These stronger networks are less susceptible to pruning, and they become long-term memory holders. Students need to use what they learn repeatedly and in different, personally meaningful ways for short-term memory to become permanent knowledge that can be retrieved and used in the future. Help your students make memories permanent by providing opportunities for them to “transfer” school learning to real-life situations.
6. Teach Students to Use Their Brain Owner’s Manual
The most important manual that you can share with your students is the owner’s manual to their own brains. When they understand how their brains take in and store information (PDF, 139KB), they hold the keys to successfully operating the most powerful tool they’ll ever own. When your students understand that, through neuroplasticity, they can change their own brains and intelligence, together you can build their resilience and willingness to persevere through the challenges that they will undoubtedly face in the future.

How are you preparing your students to thrive in the world they’ll inhabit as adults?


Brainstorming Doesn’t Work; Try This Technique Instead

ORIGINAL: FastCompany
Ever been in a meeting where one loudmouth’s mediocre idea dominates?
Then you know brainstorming needs an overhaul.

 

Brainstorming, in its current form and by many metrics, doesn’t work as well as the frequency of “team brainstorming meetings” would suggest it does.
Sharing ideas in groups isn’t the problem; it’s the “out-loud” part that, ironically, leads to groupthink instead of unique ideas. “As sexy as brainstorming is, with people popping like champagne with ideas, what actually happens is when one person is talking you’re not thinking of your own ideas,” Leigh Thompson, a management professor at the Kellogg School, told Fast Company. “Sub-consciously you’re already assimilating to my ideas.”
That process is called “anchoring,” and it crushes originality. “Early ideas tend to have disproportionate influence over the rest of the conversation,” Loran Nordgren, also a professor at Kellogg, explained. “They establish the kinds of norms, or cement the idea of what are appropriate examples or potential solutions for the problem.”



A Thousand Kilobots Self-Assemble Into Complex Shapes

ORIGINAL: IEEE Spectrum
By Evan Ackerman
14 Aug 2014
Photo: Michael Rubenstein/Harvard University

When Harvard roboticists first introduced their Kilobots in 2011, they’d only made 25 of them. When we next saw the robots in 2013, they’d made 100. Now the researchers have built one thousand of them. That’s a whole kilo of Kilobots, and probably the most robots that have ever been in the same place at the same time.

The researchers—Michael Rubenstein, Alejandro Cornejo, and Professor Radhika Nagpal of Harvard’s Self-Organizing Systems Research Group—describe their thousand-robot swarm in a paper published today in Science (they actually built 1024 robots, apparently following the computer science definition of “kilo”).

Despite their menacing name (KILL-O-BOTS!) and the robot-swarm nightmares they may induce in some people, these little guys are harmless. Each Kilobot [pictured below] is a small, cheap-ish ($14) device that can move around by vibrating its legs and communicate with other robots through infrared transmitters and receivers.



IBM Chip Processes Data Similar to the Way Your Brain Does

A chip that uses a million digital neurons and 256 million synapses may signal the beginning of a new era of more intelligent computers.
WHY IT MATTERS

Computers that can comprehend messy data such as images could revolutionize what technology can do for us.

New thinking: IBM has built a processor designed using principles at work in your brain.
A new kind of computer chip, unveiled by IBM today, takes design cues from the wrinkled outer layer of the human brain. Though it is no match for a conventional microprocessor at crunching numbers, the chip consumes significantly less power, and is vastly better suited to processing images, sound, and other sensory data.
IBM’s SyNapse chip, as it is called, processes information using a network of just over one million “neurons,” which communicate with one another using electrical spikes—as actual neurons do. The chip uses the same basic components as today’s commercial chips—silicon transistors. But its transistors are configured to mimic the behavior of both neurons and the connections—synapses—between them.
The SyNapse chip breaks with a design known as the von Neumann architecture that has underpinned computer chips for decades. Although researchers have been experimenting with chips modeled on brains—known as neuromorphic chips—since the late 1980s, until now all have been many times less complex, and not powerful enough to be practical (see “Thinking in Silicon”). Details of the chip were published today in the journal Science.
The new chip is not yet a product, but it is powerful enough to work on real-world problems. In a demonstration at IBM’s Almaden research center, MIT Technology Review saw one recognize cars, people, and bicycles in video of a road intersection. A nearby laptop that had been programmed to do the same task processed the footage 100 times slower than real time, and it consumed 100,000 times as much power as the IBM chip. IBM researchers are now experimenting with connecting multiple SyNapse chips together, and they hope to build a supercomputer using thousands.
When data is fed into a SyNapse chip it causes a stream of spikes, and its neurons react with a storm of further spikes. The just over one million neurons on the chip are organized into 4,096 identical blocks of 250, an arrangement inspired by the structure of mammalian brains, which appear to be built out of repeating circuits of 100 to 250 neurons, says Dharmendra Modha, chief scientist for brain-inspired computing at IBM. Programming the chip involves choosing which neurons are connected, and how strongly they influence one another. To recognize cars in video, for example, a programmer would work out the necessary settings on a simulated version of the chip, which would then be transferred over to the real thing.
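The spike-driven behavior described above can be sketched with a toy, event-driven integrate-and-fire model: neurons accumulate incoming spikes, fire only when a threshold is crossed, and the network's behavior is "programmed" purely by the choice of connections and weights. This is a deliberate simplification for illustration and does not reflect IBM's actual SyNapse design; every name, weight, and threshold here is invented.

```python
THRESHOLD = 1.0  # potential at which a neuron fires (arbitrary units)

def step(potentials, weights, incoming_spikes):
    """Advance the network one tick: deliver spikes, fire, reset."""
    fired = []
    for target, amount in incoming_spikes:
        potentials[target] += amount
    for i, v in enumerate(potentials):
        if v >= THRESHOLD:
            fired.append(i)
            potentials[i] = 0.0  # reset after firing, like a real spiking neuron
    # Spikes emitted this tick, routed along weighted connections.
    out = [(dst, w) for src in fired for dst, w in weights.get(src, [])]
    return fired, out

# Three neurons; the "program" is the wiring: neuron 0 excites
# neuron 1 weakly (0.4) and neuron 2 strongly (1.0).
weights = {0: [(1, 0.4), (2, 1.0)]}
potentials = [0.0, 0.0, 0.0]

# Drive neuron 0 with one external spike of 0.4 per tick and record
# which neurons fire at each tick.
spikes = [(0, 0.4)]
history = []
for _ in range(6):
    fired, spikes = step(potentials, weights, spikes + [(0, 0.4)])
    history.append(fired)
print(history)  # → [[], [0], [2], [], [0], [2]]
```

Note that computation here is event-driven: nothing happens except when spikes arrive, which is the property that lets such hardware sit at very low power between events.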
In recent years, major breakthroughs in image analysis and speech recognition have come from using large, simulated neural networks to work on data (see “Deep Learning”). But those networks require giant clusters of conventional computers. As an example, Google’s famous neural network capable of recognizing cat and human faces required 1,000 computers with 16 processors apiece (see “Self-Taught Software”).
With over five billion transistors, the new SyNapse chip has more than most desktop processors, or any chip IBM has ever made, yet it consumes strikingly little power. When running the traffic video recognition demo, it consumed just 63 milliwatts of power. Server chips with similar numbers of transistors consume tens of watts of power—around 10,000 times more.
The efficiency of conventional computers is limited because they store data and program instructions in a block of memory that’s separate from the processor that carries out instructions. As the processor works through its instructions in a linear sequence, it has to constantly shuttle information back and forth from the memory store—a bottleneck that slows things down and wastes energy.
IBM’s new chip doesn’t have separate memory and processing blocks, because its neurons and synapses intertwine the two functions. And it doesn’t work on data in a linear sequence of operations; individual neurons simply fire when the spikes they receive from other neurons cause them to.
Horst Simon, the deputy director of Lawrence Berkeley National Lab and an expert in supercomputing, says that until now the industry has focused on tinkering with the von Neumann approach rather than replacing it, for example by using multiple processors in parallel, or using graphics processors to speed up certain types of calculations. The new chip “may be a historic development,” he says. “The very low power consumption and scalability of this architecture are really unique.”
One downside is that IBM’s chip requires an entirely new approach to programming. Although the company announced a suite of tools geared toward writing code for its forthcoming chip last year (see “IBM Scientists Show Blueprints for Brainlike Computing”), even the best programmers find learning to work with the chip bruising, says Modha: “It’s almost always a frustrating experience.” His team is working to create a library of ready-made blocks of code to make the process easier.
Asking the industry to adopt an entirely new kind of chip and way of coding may seem audacious. But IBM may find a receptive audience because it is becoming clear that current computers won’t be able to deliver much more in the way of performance gains. “This chip is coming at the right time,” says Simon.
ORIGINAL: Tech Review
August 7, 2014

Google buys city guides app Jetpac, support to end on September 15

ORIGINAL: The Next Web
By Josh Ong

Google has acquired the team behind Jetpac, an iPhone app for crowdsourcing city guides from public Instagram photos. The app will be pulled from the App Store in coming days, and support for the service will be discontinued on September 15.

Jetpac’s deep-learning software used a nifty trick of scanning our photos to evaluate businesses and venues around town. As MIT Technology Review notes, the app could tell whether visitors were tourists, whether a bar was dog-friendly, and how fancy a place was.

It even employed humans to find hipster spots by training the system to count the number of mustaches and plaid shirts.

Interestingly, Jetpac’s technology was inspired by Google researcher Geoffrey Hinton, so it makes perfect sense for Google to bring the startup into its fold. If this means that Google Now will gain the ability to automatically alert me when I’m entering a hipster-infested area, then I’m an instant fan.

Jetpac also built two iOS apps that tapped into its Deep Belief neural network to offer users object recognition.

“Imagine all photos tagged automatically, the ability to search the world by knowing what is in the world’s shared photos, and robots that can see like humans,” the App Store description for its Spotter app reads. If that’s not a Googly description, I don’t know what is.


(h/t Ouriel Ohayon)

Thumbnail image credit: GEORGES GOBET/AFP/Getty Images
