
Friday, August 1, 2008

TEACHING AI TO BE SOCIABLE


Humans can already form social bonds with robots, but the real trick may be getting AI equally interested in us


In the recent superhero film Iron Man, there’s a scene where Robert Downey Jr.’s character struggles to reach a device to power his failing heart. He stretches an arm up to the device, but collapses before he can grab it. Lucky for him, his trusty robot is nearby—it manages to anticipate what he wants and hand him the device just in time.

In the real world, we’ve yet to create artificial intelligences that can respond so intuitively to our needs. The quest to do so has pushed two groups of researchers in nearly opposite directions. One group, at Rensselaer Polytechnic Institute (RPI), in Troy, N.Y., has built Eddie, an AI that resides in the virtual world of Second Life and harnesses the power of a supercomputer to analyze a library of rules about human thinking. The other, MIT Media Lab’s Personal Robots Group, has built Leonardo, a furry, animatronic robot that learns as a child does, by interacting with people in the physical world. Within the last two years both Eddie and Leonardo have demonstrated a basic social ability that is the first step toward AI that understands how humans think.

“We’re not there yet, but a major turning point for AI is working out logic that can do justice to your views of another person’s mind,” says Selmer Bringsjord, an AI expert who heads the cognitive science department at RPI. For an artificial intelligence to fully interact and cooperate with people, it has to understand the concept of a mind separate from its own, he explains. Bringsjord and his team created Eddie with this goal in mind, and in March 2008, showed off some of its social skills in Second Life.

Eddie’s avatar met two other avatars, CrispyNoodle and BinDistrib, both controlled by humans. A red briefcase and a green briefcase lay open on a table, with the red briefcase containing a gun. While Eddie watched, CrispyNoodle asked BinDistrib to leave, then moved the gun from the red briefcase to the green one, and closed them both. When BinDistrib returned, CrispyNoodle asked Eddie to predict where BinDistrib would look for the gun. Eddie was able to correctly predict that BinDistrib would look for the gun in the red briefcase, even though it was no longer there.

The correct answer may seem obvious, but most children under 5 years old get it wrong, because they don’t understand how the other person can believe something that is untrue. Cognitive scientists use such false-belief tests to determine if a child can understand another person’s point of view—the beginning of social awareness.

Bringsjord’s team helps Eddie understand other people by translating human mental states into logic-based rules and theorems—“If Bob appears happy at a particular time, and nothing happens to change that, then he will still be happy at a later time”—in what researchers refer to as a top-down approach. This type of AI can only reason about human mental states insofar as Bringsjord’s team has included them in its knowledge database.
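To make the top-down approach concrete, here is a minimal sketch (my own illustration, not RPI's actual system) of the core idea behind a false-belief test: the AI keeps the true state of the world separate from each agent's believed state, and an agent's beliefs update only when that agent actually observes an event.

```python
class BeliefModel:
    def __init__(self, initial_state):
        self.world = dict(initial_state)   # ground truth
        self.beliefs = {}                  # agent name -> believed state

    def add_agent(self, name):
        # a new observer starts out believing the current world state
        self.beliefs[name] = dict(self.world)

    def event(self, key, value, observers):
        # update the world; only agents who witnessed the event revise beliefs
        self.world[key] = value
        for agent in observers:
            self.beliefs[agent][key] = value

    def predict(self, agent, key):
        # what this agent would say or do, given its (possibly false) belief
        return self.beliefs[agent][key]


# The Second Life scenario:
model = BeliefModel({"gun": "red briefcase"})
model.add_agent("CrispyNoodle")
model.add_agent("BinDistrib")

# BinDistrib leaves; the gun is moved while only CrispyNoodle watches.
model.event("gun", "green briefcase", observers=["CrispyNoodle"])

print(model.predict("BinDistrib", "gun"))  # "red briefcase" (false belief)
print(model.world["gun"])                  # "green briefcase" (reality)
```

Keeping belief states distinct from the world state is what lets the system answer correctly where BinDistrib will look, rather than where the gun actually is — the step most children under 5 miss.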

At MIT, Cynthia Breazeal’s Personal Robots Group has created social AI through the opposite approach—nurturing robotic intelligence through bottom-up learning, where simple imitation behaviors lead to social interaction. In a 2007 demonstration, Leonardo—which can’t walk but has 32 degrees of freedom in his expressive face alone—watched Matt Berlin, an MIT researcher, struggling to open a box that he mistakenly believed contained potato chips (it was really full of cookies). The robot responded by pulling a lever that opened the box that actually held chips.

Unlike Eddie, Leonardo has no preprogrammed knowledge of human thoughts. The robot started with a set of basic learning abilities and built-in social skills, and gradually, through imitation, learned to map certain human facial expressions or gestures to rudimentary intentions and goals. “The core systems and learning algorithms are very well known and nothing fancy, but then we get more bang for your buck,” said Berlin. Compared with Eddie, Leonardo requires relatively simple programming and little computing power to perform similar tasks.

But the RPI group has ambitious plans—in the fall they want to attempt the holy grail of AI research: a Turing test. Right now, Bringsjord believes that his team has created AI capable of second- and third-order beliefs about other minds—in other words the AI can consider what a second intelligence believes about a third mind’s beliefs. Bringsjord’s team plans to combine Eddie’s AI program with the biographical background of a grad student and the power of IBM’s Blue Gene supercomputer to carry on a conversation with a human avatar. If a human judge can’t tell the difference between another human and the AI through online conversation, then the system will have passed the test.

Futurologist predicts brain-to-computer transfer by mid-21st century


British futurologist Ian Pearson, head of the futurology unit at BT, predicts humans will be able to download the contents of their brains into computers by the mid-21st century. Pearson also believes machines will be capable of feeling emotion in the future, and that the next computing goal is replicating consciousness.

"If you draw the timelines, realistically by 2050 we would expect to be able to download your mind into a machine, so when you die it's not a major career problem," Pearson told the Observer newspaper in an interview.

"If you're rich enough then by 2050 it's feasible. If you're poor you'll probably have to wait until 2075 or 2080 when it's routine.

"We are very serious about it. That's how fast this technology is moving: 45 years is a hell of a long time in IT."

Extrapolating computing power is not a difficult task: transistor density on semiconductors has increased at the rate predicted by Intel co-founder Gordon Moore.
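As a rough illustration of how such timelines are extrapolated (my own numbers, not Pearson's), assume transistor density doubles roughly every two years, per Moore's law, and compound that over the interval he cites:

```python
def projected_factor(years, doubling_period=2.0):
    """How many times denser chips become after `years`,
    assuming one doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)


# Pearson's 45-year horizon (roughly 2005 to 2050):
factor = projected_factor(45)
print(f"~{factor:,.0f}x increase")  # a several-million-fold increase
```

Whether raw transistor counts translate into anything like mind uploading is, of course, exactly the leap skeptics question.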
