Johnny Chung Lee is one of the world’s top experts on gesture interfaces in video games. His YouTube videos, in which he pushes the Wii’s gesture interface to its limits, have earned him something of a cult following.
Now he is a researcher in the Applied Sciences group at Microsoft, working on Project Natal, and he writes a blog on which he reveals all sorts of great inside information. Here is an extract:
Speaking as someone who has been working in interface and sensing technology for nearly 10 years, this is an astonishing combination of hardware and software. The few times I’ve been able to show researchers the underlying components, their jaws drop with amazement… and with good reason.
The 3D sensor itself is a pretty incredible piece of equipment, providing detailed 3D information about the environment similar to very expensive laser range-finding systems but at a tiny fraction of the cost. Depth cameras provide you with a point cloud of the surface of objects that is fairly insensitive to lighting conditions, allowing you to do things that are simply impossible with a normal camera.
But once you have the 3D information, you then have to interpret that cloud of points as “people”. This is where the researcher jaws stay dropped. The human tracking algorithms that the teams have developed are well ahead of the state of the art in computer vision in this domain. The sophistication and performance of the algorithms rival or exceed anything that I’ve seen in academic research, never mind a consumer product. At times, working on this project has felt like a miniature “Manhattan project” with developers and researchers from around the world coming together to make this happen.
We would all love to one day have our own personal holodeck. This is a pretty measurable step in that direction.
I can’t wait to get my hands on this!
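To make the extract’s point about depth cameras concrete: a depth image becomes a point cloud by back-projecting each pixel through a pinhole-camera model. The sketch below is illustrative only; the intrinsics (`FX`, `FY`, `CX`, `CY`) are made-up values, not those of the actual Natal sensor.

```python
import numpy as np

# Illustrative pinhole-camera intrinsics (assumed values, not the real sensor's)
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 160.0, 120.0   # principal point in pixels

def depth_to_point_cloud(depth):
    """Back-project a depth image (metres) into an (N, 3) point cloud.

    Each pixel (u, v) with depth z maps to camera coordinates
    x = (u - CX) * z / FX,  y = (v - CY) * z / FY.
    Pixels with zero depth (no sensor reading) are dropped.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only valid readings

# Toy 2x2 depth frame: one missing reading, three valid ones
frame = np.array([[0.0, 1.0],
                  [2.0, 2.0]])
cloud = depth_to_point_cloud(frame)
print(cloud.shape)  # (3, 3): three valid pixels, each an (x, y, z) point
```

It is this kind of lighting-independent (x, y, z) data, rather than raw pixel colours, that the body-tracking algorithms described above consume.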