Skinning Roger Rabbit AKA Are we there yet?
I was lucky enough to grow up in the era when home computers and consoles were new, and movies using CG were something worth talking about if anybody actually managed to use it (I'm looking at you, Max Headroom and Tron, both designed to look like CG but mostly made with old-fashioned effects, as computers weren't fast enough). One of my favourite movies didn't (AFAIK) use CG but would use it a lot today: Who Framed Roger Rabbit.
Roger Rabbit is a clever comedy/whodunit about a murder with the main suspect being the eponymous Roger Rabbit. The twist is that Roger Rabbit is a rabbit, a cartoon rabbit. The world is set up so that Hollywood has a section called Toon Town, where cartoons actually exist. Bugs Bunny and Mickey Mouse actually exist and act in TV shows and movies just like other actors. The star (apart from Roger) is Eddie Valiant, played by Bob Hoskins, a classic 20s/30s grizzled private eye who hates Toons after one kills his partner.
The part that stands out for me (even today) was the way the Toons were able to exist in our world, and we could go into their world. So a movie set has Dumbo walking around the real world, just doing what a cartoon elephant would do if placed in reality. Mixing cartoons with real footage is not a new technique, but the world Roger Rabbit presents feels so much more interesting than our normal boring all-reality world.
As a 13 year old and a computer nerd, I realised that one day we should be able to do that: capture reality, edit it, and show it to the player/viewer in real-time, bringing Toon Town to our real world. Of course, technology and my ability to execute were nowhere near the quality of the film. However, now nearly 30 years on, I feel it might be possible.
Today we would call this AR - Augmented Reality, mixing reality with CG in a realtime view of the world. Microsoft's HoloLens appears to be the obvious machine to do it, as it comes with depth cameras, visual cameras and a display that mixes reality and CG together.
It would be an interesting mix of advanced realtime CG and AI, but there are several stages to pass through before a complete 'Roger Rabbit'-like world could exist.
The simplest path towards 'Toon Town' is skinning the real world: visually morphing the real world view into a non-reality-based view. This is probably the easiest part, as it's mostly a visual problem and less about understanding the physical properties of the world. Whilst it would be cool to walk down the street seeing it as a cartoon world, or in some other art style, it would be just that, an illusion, without the other parts that can understand the world and insert action into it.
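As a sketch of what that skinning pass might look like, here is a minimal toon-style filter over a grayscale camera frame using only NumPy. The function name, thresholds, and toy frame are my own invention for illustration, not anything a real AR headset ships with: quantise brightness into flat bands, then ink dark outlines where the image gradient is strong.

```python
import numpy as np

def toon_skin(frame, levels=4, edge_thresh=0.2):
    """Very rough 'cartoon' filter: posterise brightness into flat
    bands and paint black outlines where the gradient is strong.
    `frame` is a float grayscale image with values in [0, 1]."""
    # Posterise: snap each pixel to one of `levels` flat shades
    bands = np.floor(frame * levels) / levels
    # Cheap edge detection: finite-difference gradient magnitude
    gy, gx = np.gradient(frame)
    edges = np.hypot(gx, gy) > edge_thresh
    # Draw outlines in black, like cartoon ink lines
    bands[edges] = 0.0
    return bands

# Toy 4x4 'camera frame' with a hard vertical edge down the middle
frame = np.zeros((4, 4), dtype=np.float32)
frame[:, 2:] = 1.0
out = toon_skin(frame)
```

A real version would run per-frame on the GPU and work in colour, but the idea is the same: flatten shading and emphasise edges so reality reads as a cartoon.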
The next phase is to use a Toon avatar to represent someone you are remote chatting with. It should be animated to match the other person's emotional state and facial expressions, and also intelligently place itself into the real world; having someone stuck inside a wall would destroy the illusion.
Additional complexity comes from dealing with the dynamism of the real world; luckily Toons don't have to respond in a normal manner. If a door opens into a Toon, then it's okay to play a comedic reaction of a flattened Toon that pops back to 3D in a volume that can hold it. In terms of programming complexity, most of the hard work is in understanding 3D space and giving the Toon body logic enough brains to create the illusion of interaction.
Understanding the physical world in real-time is going to be a tricky and complex problem that games haven't really had to tackle before. The physics sims we are used to are about applying semi-realistic motion to CG worlds; for good AR we need to understand the physical properties gathered through the cameras and inject them into the physics sim first, then run the sim, then show the results.
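To make that pipeline concrete, here is a rough sketch of its first half, assuming a simple pinhole camera model (the intrinsics, voxel size, and toy depth image are made-up example values, not real HoloLens data): back-project a depth image into 3D points, then voxelise them into an occupancy set that a physics sim could treat as static colliders.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into camera-space 3D
    points using a pinhole model; fx/fy/cx/cy are the intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def voxelize(points, cell=0.25):
    """Bucket points into a set of occupied voxel indices; each
    occupied voxel could become a static box collider in the sim."""
    return {tuple(idx) for idx in np.floor(points / cell).astype(int)}

# Toy 8x8 depth image: a flat wall 2 m in front of the camera
depth = np.full((8, 8), 2.0)
pts = depth_to_points(depth, fx=8.0, fy=8.0, cx=4.0, cy=4.0)
colliders = voxelize(pts)
```

The second half, feeding those colliders into the sim every frame and keeping them stable as the cameras move, is where I suspect most of the really hard research lives.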
Another stage would add basic AI Toons to the world: find free space and insert avatars into it. It's an approach I think of as "3D Clippy". Clippy, the bane of a thousand jokes, was a cartoon paper clip that popped up when using Office 97 and gave you 'handy' advice on doing things.
It would also need to identify things in the real world and react to them. 3D Clippy might be a helper like its 2D counterpart and might suggest you eat more fruit, so it would have to identify fruit in the world, point out a banana (for example) and tell you that you haven't had your '5 a day'.
Beyond this we start getting into philosophy, Turing-test AIs and all that: will it be possible to run an AI Toon 24/7, existing in 3D space, only visible if you use the AR set? At this level we start to blur the lines between reality, AR and VR. If you throw AI-controlled robots into the real world, real and virtual become more and more arbitrary labels rather than useful distinctions.
Toon Town, and the connected technologies and ideas you can derive from it, seem to be the AR killer app. Stuck somewhere in the middle of winter? Simply load up a 'Summer in California' app and boom, instant sunny days are here. Need to do some plumbing? Load up a Plumber Toon app and get advice and instruction on how to do it, whilst being entertained as your assistant goes through all the fun a Toon would have working around water pipes! :)
It's clear that good AR is going to be about extracting data from the real world. I've been reading a lot on AI and data mining, trying to figure out how the current state of the art can be converted into the form I want. Unfortunately I don't have access to a HoloLens yet to start really exploring the idea.
The early efforts will just use basic collision with the real world, but expect the next decade (or longer) of SIGGRAPH papers to have a lot about turning data from depth and visual cameras into physics sims. We'll also see a lot more 'AI' coming into rendering and physics simulation. Our physics and rendering sims are about to get a lot fuzzier, if AR takes off :)