I’d first noticed this book was being written a few months ago, when (like everyone else, it seems) there was quite a lot of activity creating various Kinect mash-ups. You’ve probably already seen, for example, videos on YouTube of people controlling Bing Maps or SQL Server using gestures recognised by Kinect. I’d fiddled around with Kinect programming a bit, but then more pressing matters over recent months caused my interest to fall by the wayside somewhat. I’d actually forgotten about this book completely, but I’m hoping that its arrival will prompt me to find some time to look at the SDK again.
I don’t imagine I’m going to come up with anything that will push the boundaries of what’s already been done with the Kinect by others, but I’m hoping that this book will at least teach me how to emulate (i.e. blatantly rip off) some of the existing Kinect mash-ups. A quick flick-through seems encouraging – I’ll let you know when I find the time to work through the examples in more detail. (Incidentally, if anybody has any suggestions for a novel natural user interface that I could try to build for a spatial application, I’d love to hear them.)
Currently, I’m probably thinking about something using the Bing Maps WPF control, but I don’t want to just re-use the common “Minority Report” gestures as in the video above – I’ve got an idea that would require full skeletal tracking, but I need to think it through a bit more. The other thing I’m wondering about is making use of speech recognition – I don’t know how large a vocabulary the Kinect speech engine can handle, but could you, for example, load (a subset of) the geonames database to create a speech-navigable map of the world?
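To make that last idea a bit more concrete: the Kinect SDK exposes its audio stream to the Microsoft.Speech recognition engine, and rather than relying on an open dictation vocabulary you can load a constrained grammar of your own phrases. A rough, untested sketch of what a geonames-backed grammar might look like (the place names, the “go to” phrase, and the map hand-off are all my own illustrative assumptions, not anything from the book):

```csharp
using Microsoft.Speech.Recognition;

class SpeechMapSketch
{
    static void Main()
    {
        // Hypothetical subset of the geonames database; in practice you'd
        // load these from a file and would probably need to cap the list size.
        var placeNames = new Choices("London", "Paris", "Sydney", "Nairobi");

        // Constrain recognition to "go to <place>" rather than free dictation.
        var builder = new GrammarBuilder("go to");
        builder.Append(placeNames);

        // In a real Kinect app you'd construct this with the Kinect recogniser's
        // RecognizerInfo and feed it the Kinect audio stream.
        var engine = new SpeechRecognitionEngine();
        engine.LoadGrammar(new Grammar(builder));

        engine.SpeechRecognized += (sender, e) =>
        {
            // Here you'd look up the recognised place name's lat/long
            // and pan the Bing Maps WPF control to it.
            System.Console.WriteLine("Heard: " + e.Result.Text);
        };
        // engine.SetInputToAudioStream(...); engine.RecognizeAsync(...);
    }
}
```

Whether a grammar built from tens of thousands of geonames entries would still recognise reliably is exactly the open question – constrained grammars usually degrade as the choice list grows, so some geographic filtering would presumably be needed.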
Full Disclosure: You don’t get paid much for being an author, but you do benefit from heavily subsidised rates on your publisher’s other books. I have neither been asked nor paid by Apress to mention this book, but I didn’t pay full retail price for it either. There you go.