Wednesday, May 09, 2007

The New Human Body...

Today, I spent my morning over at a conference at MIT called "Human 2.0," or h2.0. It was hosted by the MIT Media Lab. I had hoped to do more stuff like this while I was out here. But it hasn't worked out that way, and that's fine. Still, I really wanted to check out the Media Lab. I've interviewed lots of folks there over the years and have always been fascinated by the work they do.

When I called the press office, they invited me to attend this conference, which frankly I didn't know much about. But the two speakers I saw were both fascinating.

The conference looks at how to seamlessly move "technology into our bodies and minds in ways that will truly expand human capability." So what does that mean?

The first speaker I saw was Rosalind Picard, who works on "affective computing." This includes things like trying to break down how people express emotions and teaching computers how to interpret that. It was a fascinating speech, in particular because it turns out that one of the potential applications is for people with autism. Picard's group is developing wearable computers that help autistic people respond to other people's emotions. It's called an "Emotional Social Intelligence Prosthesis."

I couldn't help but think of Liam during the speech. While he's not autistic, he faces many of the same social challenges. He can't make eye contact with people, which makes it hard for him to receive and process all the signals they send. Inevitably, with other kids, this means he ends up walking away or getting left behind.

Picard made an analogy to chess. There are 20 possible opening moves in a game of chess. By the fifth move, there are almost 5 million possible combinations of moves. Compare that to the human face, which can register 44 expressions. In just a simple back-and-forth, there are already millions of combinations. And rather than having minutes to process each move, as in chess, a person has milliseconds. If someone takes just a few milliseconds too long to process, like an autistic person, or like Liam, they get lost very quickly.
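
Just to put rough numbers on that explosion, here's my own back-of-the-envelope sketch in Python. It's illustrative only, not anything from Picard's slides:

# A rough, illustrative calculation of combinatorial growth.
# The numbers here are simplifications, not Picard's figures.

def sequences(options_per_step: int, steps: int) -> int:
    """Possible sequences when every step offers the same number of options."""
    return options_per_step ** steps

# Chess: roughly 20 options per half-move early on. (The real branching
# factor grows as the game opens up, which is why the true count after
# five half-moves is closer to 5 million than 20**5.)
for steps in range(1, 6):
    print(f"chess, {steps} half-moves: about {sequences(20, steps):,}")

# Faces: with 44 possible expressions per "move," a short exchange of
# four expressions already allows 44**4, roughly 3.7 million, combinations.
# A chess player gets minutes per move; a person in conversation gets milliseconds.
print(f"faces, 4-expression exchange: {sequences(44, 4):,}")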

I'm not sure if wearable computers are the answer. But it's an interesting emerging field.

The second speaker was Deb Roy, who works with the Cognitive Machines group. What does that mean? "Our goal is to create machines that learn to communicate in human-like ways by grasping the meaning of words in context and to understand how children acquire language through physical and social interaction."

How are they doing that? By trying to understand how kids acquire language. And how is Roy doing that? Well, he happens to have a son who is just under two years old. Since the kid was born, Roy and his wife have had video cameras installed in every room of their house and have recorded virtually every moment of their son's life. Now they are trying to analyze the hundreds of thousands of hours of digital video and audio to better understand the mechanics of language acquisition. He calls it the "Human Speechome Project."

At the conference, Roy showed video collages of every time his son said "ball" over the course of a year. It was both sweet and a bit disturbing. I wonder how it will affect that kid when he's older.
