Robots that can learn? Watson professor is working on it

AI will help service robots gain knowledge and develop reasoning abilities.

Assistant Professor Shiqi Zhang, right, runs the Computer Science Department's Autonomous Intelligent Robotics Lab at the Watson School. Pictured with Zhang is PhD student Kishan Chandan. Image Credit: Jonathan Cohen.

When Shiqi Zhang wants to test his latest theories about artificial intelligence (AI), he and his students dispatch their army of robots to explore the Engineering Building.

Knee-high, with laptop computers mounted on top, the tiny automatons may not look that smart. Slowly but surely, however, they are learning to wheel their way around the building’s third floor and figure out how to get back to Zhang’s lab. When they arrive, they sit patiently at the door like obedient dogs waiting to be let inside.

Zhang — an assistant professor of computer science — wants to develop AI algorithms that will help service robots in everyday environments to gain knowledge from the humans around them and develop their own reasoning abilities: “The whole lab is about robots that can communicate with people, provide services to people and learn from this experience. We don’t want the robot to make the same mistake again and again.”

Robotics programming, he says, has evolved in roughly three stages. First came predetermined tasks, such as a robotic arm in a car factory repeatedly welding the same section of different vehicle frames. Simple, logical and unchangeable.

Next came probabilistic robotics, which relies on statistical techniques for representing information and making decisions — but that also has its limitations.
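The core idea of probabilistic robotics can be sketched in a few lines: the robot maintains a probability distribution (a "belief") over possible states, and updates it with each sensor reading using Bayes' rule. The hallway map, sensor likelihoods, and numbers below are illustrative assumptions, not details from Zhang's lab.

```python
# Minimal sketch of a discrete Bayes filter, the basic update step in
# probabilistic robotics. All positions and likelihood values here are
# made up for illustration.

def bayes_update(belief, likelihood):
    """Multiply the prior belief by the sensor likelihood, then normalize."""
    posterior = [b * l for b, l in zip(belief, likelihood)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Four possible hallway positions, initially equally likely.
belief = [0.25, 0.25, 0.25, 0.25]

# A hypothetical door sensor fires; doors sit at positions 0 and 2,
# so those positions are more likely to have produced the reading.
door_likelihood = [0.6, 0.2, 0.6, 0.2]

belief = bayes_update(belief, door_likelihood)
print(belief)  # belief concentrates on positions 0 and 2
```

Each new sensor reading repeats this update, so the belief sharpens over time — but when sensors are noisy or the model is wrong, the distribution can stay spread out or converge on the wrong state, which is one of the limitations the article alludes to.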

In the last 10 years, researchers like Zhang have been taking advantage of the new possibilities for “deep learning” and artificial neural networks. Because the AI is making connections on its own, though, how the decisions are made sometimes puzzles even the programmers. “If the AI is making a wrong choice,” Zhang says, “we want to know what’s going on and how we can avoid such mistakes in the future. We definitely need to know what’s happening under the hood so that we can justify things and we can have humans in the loop, but it’s challenging.”

If you ever dreamed about having your own Rosie the robot maid like on The Jetsons, this is where it starts — but Zhang believes that time is still a ways off.

“We see delivery robots in hospitals, in airports, these kinds of places. But it’s not happening in our homes or offices,” he says. “We can see that the mobility capabilities, the interaction capabilities, learning capabilities, all of these pieces are there already, but to put them together into one robot and let it be robust enough and with good learning capabilities, there’s still a lot of work to do.”