Thursday, October 20, 2011

Interpolative Dance




I wrote a paper about six months ago on, ahem, "RBF Interpolation as a General Architecture for Expressive Middleware". It was written in great haste -- I had 12 days to 1) write it, 2) write the software demoing the concept, 3) find a dancer to work with and 4) move house from New Zealand to Canada. Oh, and I had a bad cold as well. And all for nought: the paper was rejected -- a rookie effort, after all, and, as it turned out, I could only offer a tantalizing but null-ish result. Basically, if you have good mocap data, the approach works a treat; but the OpenNI pose tracking isn't quite ready for this type of application (there has been another release of the SDK since I wrote the paper, and it's probably worth taking another kick at the can). I was invited to resubmit it as a two-pager, but I ran out of time.

But whatever. I still think it's a very good idea: someone should build a system that works like this, because RBF interpolation is an intuitive and powerful tool for media arts applications. If anyone is interested in the source code, drop me a line. If I have time to clean it up a bit (not likely any time soon, since I'm up to my eyeballs in a very tricky consulting gig), I'll post it here on the blog.

So here's the paper. Let me know what you think. I'll write more later to explain the concept, which is simple and elegant. I can say so since it's not particularly original -- it just adapts J.P. Lewis' notion of pose space deformation to media arts applications. The pun in the title is mine, though. I'm not sure JP would want to take credit for that one.

I should point out what this approach is not: it is not simply mapping individual inputs to individual outputs. That is exactly the sort of crabbed, awkward, inflexible approach I'm proposing an elegant alternative to. Instead, it keyframes an arbitrary number of inputs to an arbitrary number of outputs all at once, as in the sketch below. Read the paper for details.
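
To make that concrete, here's a minimal sketch of the idea in Python with numpy. The Gaussian basis, the sigma value, the toy keyframe data, and names like fit_rbf_weights are all illustrative choices of mine, not the code from the paper:

    # A minimal sketch of RBF interpolation for pose-to-output mapping.
    # Keyframes pair an n-dimensional input pose with an m-dimensional
    # output; at runtime, a live pose blends all m outputs at once.

    import numpy as np

    def gaussian_rbf(r, sigma=1.0):
        # Gaussian radial basis function of the distance r.
        return np.exp(-(r / sigma) ** 2)

    def fit_rbf_weights(poses, outputs, sigma=1.0):
        # poses: (k, n) array of k keyframe poses in n input dimensions.
        # outputs: (k, m) array of the m-dimensional output at each keyframe.
        # Solve Phi @ W = outputs for the (k, m) weight matrix W, where
        # Phi[i, j] is the basis function of the distance between keyframes.
        dists = np.linalg.norm(poses[:, None, :] - poses[None, :, :], axis=-1)
        phi = gaussian_rbf(dists, sigma)
        return np.linalg.solve(phi, outputs)

    def interpolate(pose, poses, weights, sigma=1.0):
        # Blend all m outputs from a single n-dimensional input pose.
        r = np.linalg.norm(poses - pose, axis=-1)
        return gaussian_rbf(r, sigma) @ weights

    # Example: 3 keyframes in a 2-D input space, each mapped to 4 outputs.
    poses = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    outputs = np.array([[0, 0, 0, 1],
                        [1, 0, 1, 0],
                        [0, 1, 0, 0]], dtype=float)
    W = fit_rbf_weights(poses, outputs)
    print(interpolate(np.array([0.5, 0.5]), poses, W))

The point is that the interpolation reproduces every keyframe exactly and blends smoothly in between, and nothing in it cares how many inputs or outputs you throw at it, or what they mean.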