Inside Google’s Rebooted Robotics Program

In 2013, the company started an ambitious, flashy effort to create robots. Now, its goals are more modest, but the technology is subtly more advanced.

By Cade Metz

Videos by Brian Dawson and Meg Felling

March 26, 2019

MOUNTAIN VIEW, Calif. — Google has quietly been retooling an ambitious but troubled robotics program that was once led by an executive who left the company amid accusations of sexual harassment.

Starting in 2013, the internet company spent tens of millions of dollars buying six robotics start-ups in the United States and Japan. The project included two teams specializing in machines that looked and moved like humans. In a nod to Google’s grand ambitions, Andy Rubin, the vice president of engineering who ran the effort, called it Replicant. (The term was originally used in the science-fiction movie “Blade Runner.”)

Little came of it. Over the next few years, Google either sold off the companies it had acquired or shut them down. The best known of the bunch, Boston Dynamics, was bought by the Japanese conglomerate SoftBank and is still working on robots that move like humans or animals. Mr. Rubin left Google in 2014 after the harassment allegations.

Google regrouped, and reconsidered its focus on the mechanics of complex robots. It has been rebuilding its program for the last few years, with robots that are much simpler than the humanoid-shaped machines that hung on the walls inside Mr. Rubin’s lab.

The new effort is called Robotics at Google. It includes many of the engineers and researchers who worked under Mr. Rubin, and it is led by Vincent Vanhoucke, a principal scientist at Google. Mr. Vanhoucke, a French-born researcher, was a key figure in the development of Google Brain, the company’s central artificial intelligence lab. His team recently moved into a new lab on Google’s main campus in Mountain View.

The New York Times was recently provided a first look at some of the technology the company has been working on.

While the machines may not be as eye-catching as humanoid robots, Google researchers believe that the subtly more advanced technology inside them gives them more potential in the real world. The company is developing ways for these robots to learn skills on their own, like sorting through a bin of unfamiliar objects or navigating a warehouse filled with unexpected obstacles.

Google’s new lab is indicative of a broader effort to bring so-called machine learning to robotics. Researchers are exploring similar techniques at places like the University of California, Berkeley, and OpenAI, the artificial intelligence lab founded by the Silicon Valley kingpins Elon Musk and Sam Altman. In recent months, both places have spawned start-ups trying to commercialize their work.

Many believe that machine learning — not extravagant new devices — will be the key to developing robotics for manufacturing, warehouse automation, transportation and many other tasks.

“Robotics has long held the popular imagination, but what is easily the most important change is the application of machine learning,” said Sunil Dhaliwal, a general partner with Amplify Partners, a Silicon Valley venture capital firm. “The utility is in the software.”

Robots are already used in warehouses and on factory floors, but they can handle only specific tasks, like picking up a particular object or turning a screw. Google wants the machines it is working with to learn on their own.

On a recent afternoon inside Google’s new lab, a robotic arm hovered over a bin filled with Ping-Pong balls, wooden blocks, plastic bananas and other random objects. Reaching into this pile of clutter, the arm grabbed a banana between two fingers and, with a gentle flick of the wrist, tossed it into a smaller bin several feet away.

For a robot, it was a remarkable trick. When first presented with that pile of clutter, the arm did not know how to pick up a single object. But equipped with a camera that looked down into the bin, the Google system analyzed its own progress during about 14 hours of trial and error.

The arm eventually learned to toss items into the right bins about 85 percent of the time. When the researchers tried the same task, their accuracy rate was about 80 percent.

It may sound simple enough, but writing computer code to tell a machine how to do that would be extremely difficult. “It is learning more complicated things than I could ever think about,” said Shuran Song, one of the primary researchers on the project.

Researchers believe these machines could work in warehouses and distribution centers run by companies like Amazon and UPS. Today, humans sort through items that move in and out of distribution centers. A system like Google’s could automate at least part of the process, though it is unclear when it will be ready for commercial use. Amazon, which has already deployed other kinds of robotics in its distribution centers, is interested in this kind of technology.

But many robotics experts warn that moving this kind of machine learning into the real world will be difficult. Technology that does well in the lab often breaks down inside a distribution center because it can’t deal with unexpected objects it hasn’t seen before or tasks that require movements it has never tried.

“This is not the right solution for all problems,” said Leif Jentoft, the chief executive of the Massachusetts company RightHand Robotics and a seasoned robotics researcher. “These technologies can sometimes seem more powerful than they are.”

In another corner of Google’s lab, researchers are training robotic hands to manipulate objects — push, pull and spin them in subtle ways.

The three-fingered hands are hardly complex, at least in the physical sense. The software helping them learn is the breakthrough, and researchers hope the hands can eventually learn to use tools and other equipment.

Google is taking a similar approach with all its robotic hardware. The arm that tosses objects into a bin is not an elaborate machine designed by Google engineers. Built by Universal Robots, it is commonly used for manufacturing and other tasks. Google is training it to do things it couldn’t otherwise do.

“Learning is actually helping us overcome the challenges of low-cost robots,” said Vikash Kumar, the Google researcher who oversees this project.

In a third part of the lab, researchers are training a mobile robot sold by a Silicon Valley start-up, Fetch. This rolling machine is learning to navigate unfamiliar spaces, which can help in places like warehouses and factories.

Google is tight-lipped about how it hopes to deploy the technologies it is working on, but as with other forms of automation, there is an obvious question of whether it will take away jobs.

“It is hard to imagine a future where that is not the case,” said Michael Chui, a partner with the McKinsey Global Institute, a business research organization.

But other researchers believe the robots will complement human labor instead of replacing it.

“There are still so many jobs in the warehouse that robots can’t do,” said Ken Goldberg, a robotics professor at Berkeley and one of the researchers behind Ambidextrous Robotics, a new start-up. “What they can do is assist with some of the drudgery.”
