The Challenge at Hand


ORIGINAL: Stanford
By Erin Biba
January/February 2014

Erin Biba is a Wired magazine correspondent and Popular Science columnist.

Brawny robots are nice; brainy ones would be better. Stanford engineers are on the case.

Images by Morgan Rockhill

Consider for a moment the complexities involved in buying a cup of coffee. It’s the type of errand most people do on autopilot. But for a robot—or, at least, for the engineer designing one—it presents a multitude of hardware and software challenges.

  • First the robot must visually identify the door.
  • Then locate the handle and grasp it.
  • Next pull the door open and maneuver through it.
  • Then navigate a path to the cafe, scanning for obstacles along the way (particularly humans) and avoiding collisions.
  • Finally, the bot must give its beverage order to a barista who has no experience or training with its ilk.

The deceptively simple task requires coordination of multiple sensory, navigation, motion and communication systems. And there are countless ways it can go utterly wrong. Which means not only does the machine have to know how to perform all the steps correctly, it has to detect when it has failed.
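To make that point concrete, here is a toy sketch (with entirely hypothetical step functions, not any real PR2 code) of the coffee run as a pipeline where each step can fail, and the robot must stop and report the first failure rather than plow ahead:

```python
# Hypothetical sketch: the coffee errand as an ordered pipeline of steps.
# Each step returns True on success; the runner halts at the first failure.

def find_door(state):        # vision: locate the door in the camera frame
    state["door"] = "located"
    return True

def grasp_handle(state):     # manipulation: locate the handle and grip it
    state["gripper"] = "closed"
    return True

def open_and_pass(state):    # pull the door open and maneuver through it
    state["location"] = "hallway"
    return True

def navigate_to_cafe(state): # plan a path, avoiding obstacles and humans
    state["location"] = "cafe"
    return True

def place_order(state):      # speech: give the beverage order to the barista
    state["order"] = "coffee"
    return True

STEPS = [find_door, grasp_handle, open_and_pass,
         navigate_to_cafe, place_order]

def run_errand():
    """Run each step in order; stop and report the first failure."""
    state = {}
    for step in STEPS:
        if not step(state):
            return f"failed at {step.__name__}", state
    return "success", state
```

The design choice the paragraph hints at is that failure detection is part of every step's contract, not an afterthought bolted onto the end.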

Last spring, researchers working in Ken Salisbury’s lab at Stanford sent a stocky, wheeled robot known as the PR2 on such a coffee run. The PR2’s success in fetching a cup of joe from the Peet’s on the third floor of the Clark Center was a demonstration of more than a decade of work involving at least a dozen collaborators on and off campus. Salisbury, ’74, MS ’78, PhD ’82, a professor of computer science and of surgery, started the University’s personal robotics program. He initiated development of the PR1 prototype with Keenan Wyrobek, MS ’05, and Eric Berger, ’04, MS ’05, who went on to continue the work at Willow Garage.


With

  • a telescoping spine,
  • articulated arms with swappable attachments,
  • an omnidirectional rolling base and
  • sensors from head to toe,

the PR2 is pretty advanced. Still, it’s hardly the shiny metal sidekick science fiction has conditioned us to expect. (Think: less ILM, more MST3K.) What those creators of futuristic fantasies didn’t quite appreciate—and any roboticist will tell you—is that getting a machine to perform even the most rudimentary of skills involves years, sometimes decades, of false starts and setbacks. The highest of high-tech bots are only now mastering 

  • how to move through the world without bumping into things,
  • manipulate or retrieve objects and
  • work near humans without endangering them.

The level of difficulty only underscores the elegance of biology’s solutions to the same problems. “We have a lot of things to learn from nature about how to make structures that are compliant, inherently stable and relatively easy to control, so they can deal with what the real world throws at them,” says Mark Cutkosky, a professor in the mechanical engineering design group.

Take locomotion, for example. Getting around on wheels, as the PR2 does, is fine so long as the ground is fairly smooth and even. Walking—either upright or on all fours—is more versatile, and there are research groups perfecting bi- and quadrupedal robots that can clamber over tricky terrain.

But why remain earthbound at all when you could fly?

Assistant professor David Lentink is a bit unusual in his field. As both a biologist and an engineer, he is interested in applying insights gained from observing avian flight to building more efficient flying robots. In his lab, trained hummingbirds and parrotlets flit from point A to point B as members of Lentink’s team film them with a high-speed camera at up to 3,000 frames per second. Additionally, they use a pair of lasers that emit 10,000 flashes per second to image how air moves around the birds’ wings.

“They move their wings really fast,” explains Lentink. “To see how they manipulate the air we need to have many recordings within a single wingbeat. With this system we can easily make 50 recordings within one wingbeat.” Thus far, their observations have inspired a prototype morphing wing that will mimic the overlapping feathers of bird wings, which can change shape during flight. Currently, they are testing the dynamics of how it will fold in the air.
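The numbers in that quote imply a wingbeat rate, which is worth checking with back-of-envelope arithmetic (the frequency here is our inference, not a figure from the lab):

```python
# A 3,000 frame-per-second camera capturing ~50 frames within one wingbeat
# implies a wingbeat frequency of about 3000 / 50 = 60 beats per second --
# in the right range for a hummingbird.
camera_fps = 3000
frames_per_wingbeat = 50
wingbeat_hz = camera_fps / frames_per_wingbeat
```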

Next, Lentink plans to investigate how birds use their eyes to navigate in order to improve visual guidance systems for robots. By studying optical flow, or the way images move across the retina, his team is hoping to determine how different image intensities aid birds in deciding which direction to go. “It’s fundamental research,” he says, “but it’s essential if we want to fly robots like birds can”—in turbulent conditions, for example, or through narrow gaps.
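The core idea of optical flow can be shown in a few lines. This is a toy NumPy sketch in the spirit of the classic Lucas-Kanade method (not Lentink's actual pipeline): recover a global image motion from spatial and temporal intensity gradients.

```python
import numpy as np

# Synthetic "retina": a smooth Gaussian blob on a 64x64 grid.
y, x = np.mgrid[0:64, 0:64]
frame1 = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 8.0 ** 2))
frame2 = np.roll(frame1, 1, axis=1)  # the scene shifts 1 pixel to the right

# Spatial gradients of the first frame, temporal gradient between frames.
grad_y, grad_x = np.gradient(frame1)
grad_t = frame2 - frame1

# Least-squares solve for the single motion (u, v) that best explains the
# brightness change at every pixel:  grad_x*u + grad_y*v = -grad_t.
A = np.stack([grad_x.ravel(), grad_y.ravel()], axis=1)
b = -grad_t.ravel()
(u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
# u comes out near +1 (one pixel of rightward motion), v near 0.
```

A bird, or a robot, doing this densely across the visual field gets a motion map it can steer by: flow expanding on one side means an obstacle looming there.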

Landing is also a challenge. “Small, unmanned flying vehicles have become increasingly popular for applications such as surveillance or environmental monitoring,” notes Cutkosky. “But they don’t interact physically with the world. They’re all about flying and not bumping into anything.” Keeping the machines aloft is also a huge power suck. “If instead you have these things that can perch like a bird or bat or insect and land on walls or ceilings, then you can shut down the propeller and they can hang out there for days gathering info.”

The trick is getting them to alight securely, while still being able to detach easily. Cutkosky’s team looked to the gecko for a biological template. The properties of the lizard’s grip have long been targeted for replication in the lab. In 2010, Cutkosky used a scanning electron microscope to visualize how nanoscale structures on the gecko’s toes deform so they can reversibly adhere to virtually any surface.

His team was able to create a synthetic material with similar properties, albeit on a larger scale. Since then, Cutkosky has attached that adhesive material to a flying robot, which can now perch on the sides of buildings or even upside down. The UAV isn’t “sticky” until it lands, because the momentum of landing is what engages the adhesive. When it’s ready to take off, it releases a latch, which relaxes the internal forces and causes it to pop off.

Another theme driving robotics research is improving human-robot collaboration. Since the first industrial robot, the Unimate arm, was put into service at a General Motors plant in New Jersey in the 1960s, machines have largely supplanted people on production lines. They can be programmed to perform the same task, with precision, over and over again. They never get tired or bored, or develop repetitive strain injuries. But they can’t think on their feet, make decisions or learn from mistakes.

Photo courtesy University of California, Berkeley. NO R2-D2: The state-of-the-art PR2 can perform simple tasks such as buying coffee, folding laundry and tying knots.

On average, there are now about 58 robots operating for every 10,000 human employees in the manufacturing sector worldwide, according to the International Federation of Robotics. Technological improvements aside, though, they’re still mindless automatons with no sense of what’s going on around them. “On a current line, all robots are in cages,” a safety measure to prevent them from injuring people working in their vicinity, notes Aaron Edsinger.

With a couple of robotics startups already under his belt, Edsinger, ’94, aims to change that through his latest venture, Redwood Robotics (acquired by Google). Still somewhat in stealth mode, Redwood is working on a lower-cost, roughly human-sized replacement for the “dumb” arms that exist now. Equipped with sensors and software, “our new robot can detect when it makes contact with a person and react—so you can put it shoulder to shoulder with a human,” Edsinger says.

In addition to reducing the robots’ physical footprint and overhead costs, the developers want to make it safe for people to work in close proximity to, and perhaps even collaborate with, these new, smarter arms. “Manufacturers want to have people do more interesting work that takes advantage of their skills rather than just pulling something off a line to inspect it.”

Medicine also stands to benefit from improved integration between robotic arms and the physicians who operate them. According to the Wall Street Journal, 450,000 robot-assisted surgeries were performed in 2012—a 450-fold increase from the year 2000, when the FDA approved the da Vinci Surgical System.

For patients, the upsides can include

  • smaller incisions,
  • less blood loss and
  • shorter recovery time.

For doctors, there’s

  • better visualization of the surgical field,
  • finer control over instruments and
  • less fatigue.

Unfortunately, these machines aren’t yet as good as they might be. As the same WSJ article points out, injuries and deaths resulting from robotic surgeries occurred at a rate of 50 per 100,000 procedures in 2012. (It’s unclear how that figure compares to the rate of adverse events resulting from standard surgeries.) “It’s a bit of a mystery” why that number is as high as it is, says associate professor of mechanical engineering Allison Okamura, MS ’96, PhD ’00. “We think it might have to do with humans learning to use the robot.”

There is no universal training protocol or agreed-upon criteria for determining when a doctor is ready to use a robotic surgical device on a patient. One of the projects Okamura is pursuing looks at how people adapt their movements when watching a surgical robot react to their commands. A better understanding of how doctors learn to work with the machines could lead to better training methods or even improved designs that make surgical robots more user-friendly.

A second avenue of inquiry Okamura is exploring has to do with giving the physician operating the robot near-real-time haptic feedback—whether he or she is sitting in the OR with the patient or controlling the device remotely. Her team is developing interfaces that simulate kinesthetic (force, position) and cutaneous (tactile) sensations. When you hold a pencil, for example, you feel the force feedback of the solid, cylindrical object and the tactile feedback of your skin stretching over its smooth surface. And when you press down, the changes in these sensations tell you whether the surface is hard or soft.
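The kinesthetic half of that idea reduces to a simple relationship: if the instrument reports force and displacement as it presses into tissue, the slope of force versus displacement estimates stiffness, which separates hard tissue from soft. Here's a minimal sketch; the data and the threshold are illustrative inventions, not clinical values.

```python
def estimate_stiffness(displacements_mm, forces_n):
    """Least-squares slope of force vs. displacement (N/mm), fit through
    the origin: k = sum(d*f) / sum(d*d)."""
    num = sum(d * f for d, f in zip(displacements_mm, forces_n))
    den = sum(d * d for d in displacements_mm)
    return num / den

def classify(stiffness_n_per_mm, threshold=1.0):
    """Illustrative cutoff between 'soft' and 'hard' material."""
    return "hard" if stiffness_n_per_mm > threshold else "soft"

# Two simulated probes: pressing into soft tissue vs. something rigid.
soft = estimate_stiffness([1, 2, 3, 4], [0.2, 0.4, 0.6, 0.8])  # ~0.2 N/mm
hard = estimate_stiffness([1, 2, 3, 4], [3.0, 6.0, 9.0, 12.0])  # ~3.0 N/mm
```

Rendering that stiffness back to the surgeon's hand, rather than just displaying a number, is the hard interface problem Okamura's team is working on.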

For doctors controlling surgical robots, such feedback would allow them to use touch as well as visual information to differentiate among tissue textures. “We want to make it feel like their hands are actually inside the [person’s] body,” Okamura says.

There may be a ways to go yet before we have fully autonomous, general-purpose robots walking (or rolling, or flying) among us. But across the Bay, Pieter Abbeel, MS ’02, PhD ’08, is taking another big step in the right direction. An assistant professor of electrical engineering and computer sciences at UC-Berkeley, he is teaching the PR2 to adapt to unexpected situations.

“Applications well beyond the reach of current capabilities are our target,” he says. “We pick things that are pretty far out and see what happens if we use current techniques to see how far we can get.” The goal is to create a robot that won’t have to be told how to buy a latte, but will learn how to do so on its own, either by trial and error, or by watching someone first.

In Abbeel’s Robot Learning Lab, the PR2 watches a human demonstrate how to tie a knot, and then replicates the sequence of movements on its own. At the moment, the bot can successfully perform the task when the rope has been moved up to two inches outside of its original position. However incremental, such advances are far from trivial. They are essential if we ever want robots to fight our wars, protect and serve our cities, tend to our housework or care for our infirm.
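One ingredient of that trick can be sketched very simply: replay the demonstrated gripper trajectory after shifting it to match the rope's new position. A rigid translation, as below, is the toy version; the real knot-tying work uses a more sophisticated non-rigid warp, and all names and coordinates here are made up for illustration.

```python
def adapt_trajectory(demo_waypoints, demo_rope_start, new_rope_start):
    """Translate every demonstrated waypoint by the rope's displacement."""
    dx = new_rope_start[0] - demo_rope_start[0]
    dy = new_rope_start[1] - demo_rope_start[1]
    return [(x + dx, y + dy) for x, y in demo_waypoints]

# Gripper path (meters) recorded while a human demonstrated the motion.
demo = [(0.0, 0.0), (0.1, 0.05), (0.2, 0.0)]

# The rope has since been moved two inches (~0.05 m) to the right.
adapted = adapt_trajectory(demo,
                           demo_rope_start=(0.0, 0.0),
                           new_rope_start=(0.05, 0.0))
```

The reason small displacements are the current limit is visible even in this toy: a rigid shift only stays valid while the rope's shape relative to its start point is unchanged.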

After all, R2-D2 wasn’t built in a day.
