UW News

October 5, 2006

Thinking can make it so: Computer scientist talks about ‘The Mind-Body Connection’

News and Information

The idea that a person could move an object just by thinking about moving it used to be the province of science fiction or tracts on extrasensory perception.

But that’s no longer true. Computer scientists and engineers, in collaboration with neuroscientists, have proven that, with a little bit of training and what they call “machine learning,” a human can remotely control the motions of a robot, directing it to perform a series of tasks that would be mundane for a human but are nonetheless remarkable for a machine.

Associate Professor of Computer Science and Engineering Rajesh Rao will discuss the potential and challenges of this field in a lecture, “The Mind-Body Connection,” which kicks off the Engineering Lecture Series at 7 p.m. Thursday, Oct. 12, in 110 Kane. To register or for more information, go to www.UWalum.com or call 206-543-0540.

Scientists have demonstrated in animals and in a handful of studies involving disabled humans that, if sensors are connected to certain parts of the brain, the animal or human can be trained to use thoughts to move a cursor to specific locations on a computer screen — and, in at least one case, to control the movements of a prosthetic limb.

Rao’s own work has focused on interpreting the brain’s signals — extracting the “meaning” from an individual’s thoughts — to control robotic devices. Two types of brain signals are being investigated. The first type involves signals from the surface of the brain in patients being monitored prior to brain surgery, a collaborative effort with neurosurgeon Jeff Ojemann, associate professor of neurological surgery, who works at Harborview Medical Center. The second type, which is non-invasive and poses minimal health risks, involves “noisy” signals obtained in much the same way that “brain waves” are measured by connecting sensors to the scalp.

“We are using signals from the scalp to allow an individual to give an autonomous robot high-level commands through thought, such as ‘go to this location’ or ‘pick up this object and return it to me,’” Rao says.
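A minimal sketch of how such a pipeline might look, assuming band-power features extracted from the scalp channels and a simple linear classifier; the command names, sampling rate, frequency band, and classifier below are illustrative assumptions rather than details of Rao’s system:

```python
# Sketch: map scalp (EEG-style) signals to discrete high-level robot commands.
# Command names, sampling rate, band, and classifier are illustrative assumptions.
import numpy as np
from scipy.signal import butter, lfilter
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

COMMANDS = ["go_to_location", "pick_up_object", "return_to_user"]  # hypothetical

def bandpower_features(eeg, fs=256.0, lo=8.0, hi=30.0):
    """Band-limit each channel and use log power as one feature per channel."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = lfilter(b, a, eeg, axis=1)          # eeg: (channels, samples)
    return np.log(np.mean(filtered ** 2, axis=1))

def train_decoder(train_epochs, train_labels):
    """train_epochs: list of (channels, samples) arrays recorded while the user
    imagines each command; train_labels: indices into COMMANDS."""
    X = np.vstack([bandpower_features(e) for e in train_epochs])
    return LinearDiscriminantAnalysis().fit(X, train_labels)

def decode_command(clf, epoch):
    """Classify one new epoch of scalp signal into a high-level robot command."""
    return COMMANDS[int(clf.predict(bandpower_features(epoch)[None, :])[0])]
```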

As society moves toward more routine use of robots in this way, they could become the eyes, ears and fingers of people with disabilities. Rao envisions a time when people disabled by diseases such as ALS (Lou Gehrig’s disease), who can no longer reliably control their own movements, are fitted with scalp sensors and use robots to retrieve objects, dispense medications, and perhaps do light cleaning.

Rao cautions that we’re a long way from the Jetsons’ do-everything robot maid. Precision control of robots with thought will require major advances in physics and engineering, in particular the ability to detect, precisely and non-invasively, signals that originate deep in the brain. But major advances in brain imaging and in the number-crunching ability of computers, coupled with advances in signal processing, have made such capabilities a realistic prospect for the future.

The other major development that has moved the field forward, Rao says, is machine learning. “This is one of the hottest areas in computer science,” he says. “It is used, for example, in data mining, in which scientists use statistical methods to extract useful information from masses of data.” (One of the more celebrated but also controversial recent uses of data mining has been to detect hints of terrorist activity from masses of telephone calls.)

One use of machine learning is to teach a robot how to perform desired actions. It’s second nature for adults to maintain their balance when they lift a leg off the ground. A robot, told to undertake the same movement for the first time, is likely to fall over. “With machine learning, we train robots to imitate a human teacher, in much the same way as a child learns by imitating adults,” Rao says.

Scientists put an individual in what is called a motion-capture suit, which contains an array of reflective markers, and, using cameras, recover a map of the entire body and how it moves as the person performs an action. This map is then projected onto the robot’s skeleton and used to guide its motions. Gradually, through repetition and feedback from its sensors, the robot learns to imitate the movement of the human.
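A rough sketch of that projection-and-refinement step, assuming the captured human motion has already been reduced to joint angles; the robot’s joint names, limits, and the feedback rule are illustrative assumptions, not details of any particular robot:

```python
# Sketch: retarget human joint angles onto a robot skeleton, then refine the
# commanded trajectory using the error reported by the robot's own sensors.
import numpy as np

ROBOT_JOINT_LIMITS = {            # hypothetical robot, angles in radians
    "hip_pitch": (-0.8, 0.8),
    "knee_pitch": (0.0, 2.0),
    "ankle_pitch": (-0.6, 0.6),
}

def retarget_frame(human_angles):
    """Clamp each captured joint angle into the corresponding robot joint's range."""
    return {j: float(np.clip(human_angles[j], *lim))
            for j, lim in ROBOT_JOINT_LIMITS.items()}

def refine_trajectory(reference, executed, gain=0.2):
    """One round of repetition and feedback: nudge each commanded frame toward
    the reference using the error measured on the robot's last attempt."""
    return [{j: ref[j] + gain * (ref[j] - exe[j]) for j in ref}
            for ref, exe in zip(reference, executed)]

# Usage idea: retarget every mocap frame, play the trajectory on the robot,
# read back what the joints actually did, refine, and repeat.
```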

“It’s much easier to program a robot to learn an action from demonstration than it is to solve complex physics-based equations to precisely determine each step for achieving the same movement,” Rao says. Over time, the robot can be trained to learn a repertoire of movements, which then can be linked together to solve complex tasks in response to thought-commands.
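A minimal sketch of how such a repertoire might be linked to decoded thought-commands; the primitive names and the task table are illustrative assumptions:

```python
# Sketch: expand one decoded high-level command into a sequence of learned
# movement primitives. Names and the task table are illustrative assumptions.
PRIMITIVES = {
    "walk_to": lambda target: f"executing learned walk toward {target}",
    "reach":   lambda target: f"executing learned reach for {target}",
    "grasp":   lambda target: f"executing learned grasp of {target}",
}

TASKS = {  # decoded command -> sequence of (primitive, argument)
    "pick_up_object": [("walk_to", "object"), ("reach", "object"), ("grasp", "object")],
    "return_to_user": [("walk_to", "user")],
}

def execute(command):
    """Run the learned primitives that make up one high-level command."""
    for name, arg in TASKS.get(command, []):
        print(PRIMITIVES[name](arg))

execute("pick_up_object")
```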

Rao, who has studied neuroscience as well as computer science, believes that the learning experiments with robots may tell scientists something about how humans learn, and vice versa.

Today’s generation of robots is fragile, he says, and too expensive for everyday consumer use. “Companies are starting to look for the ‘killer application,'” he says. “It’s possible that the first consumer use could be as companion robots for the elderly or disabled, but more likely we’ll see the first applications of these technologies in the military or in the entertainment industry, as part of video games and robotic toys.”

The Engineering Lecture Series, “Engineering Our Quality of Life,” continues on Oct. 26 with “Making the Right Choices” by Joyce Cooper, associate professor of mechanical engineering. It concludes with Professor Emeritus of Electrical Engineering Sinclair Yee talking about “Not a Drop to Drink” on Nov. 9.