Paris GDC - Part 5 - Camera Based Gaming
Written by Christophor "SuperGuido" Rick   
Saturday, 28 June 2008

[OpEd] [PS3] [PS2]  

This was by far one of the most interesting sessions on interfaces. If you read the Future of 3D article, you know I'm rather keen on new interfaces for games. The EyeToy for the PlayStation is a unique controller, and while not new, it still looks to have some tricks to propel it into the future. Mr. Campbell put more time into the physical setup of this presentation than I think I have ever seen: he had a PC, a PS3, several cameras, and lighting akin to your living room, all so that he could demonstrate to us what he believes is possible. It was so interesting that on a scale of 1-5 I gave it a 7 in feedback.

The thing about cameras is that they can generate a lot of information. The EyeToy captures 640x480 RGB at 60 frames per second, which translates into about 17 megabytes per second of data. The major difference between camera-based visuals and computer-generated visuals is that the edges are softer in the camera image, and there is a lot of background 'noise', shown by the constant variation in the brightness of pixels, similar to hiss on an audio recording.
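As a back-of-the-envelope sanity check on those numbers (the quoted figure works out if you count one byte per pixel; full three-channel RGB would roughly triple it):

```python
# Rough data rate for a 640x480 camera at 60 frames per second.
# Assuming one byte per pixel; three-channel RGB would triple this.
width, height, fps = 640, 480, 60
bytes_per_second = width * height * fps          # 18,432,000 bytes
megabytes_per_second = bytes_per_second / 2**20  # ~17.6 MB/s
print(round(megabytes_per_second, 1))
```

Compare that with a gamepad, which sends a handful of bytes per frame; the camera delivers several orders of magnitude more raw signal to mine for input.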

Mr. Campbell (who reminded me of a taller, cooler Wil Wheaton) went over various techniques for using the camera input to detect motion and use it as an input interface for gaming. While I'm not sure everyone will find it interesting, I certainly did, so I will attempt to compress the info as much as I can here.

In relation to 3D geometry, lighting, and optics, we basically need to focus on the information we can extract from the input images. The first technique he described is background subtraction: you take a reference frame without the target (in this case the gamer), then have the target enter; everything that differs from the reference is the target. Makes sense, right? Well, there are some limitations to this process. There is no way to discern color; a lighting change alters the reference, so the whole image may become the target, forcing you to capture a new reference frame; and if the camera moves, the reference becomes useless. In an action game where jumping or other movement might jar the camera out of position, this can be a real issue.
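The background-subtraction step can be sketched in a few lines (a simplified illustration, not Sony's implementation; the `threshold` value is an arbitrary tuning choice):

```python
import numpy as np

def foreground_mask(reference, frame, threshold=30):
    """Mark pixels that differ from the reference frame by more than `threshold`.

    Both arguments are 8-bit grayscale frames as numpy arrays. Anything the
    mask flags is treated as the target (the player); everything else is
    assumed to be static background.
    """
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return diff > threshold
```

Note how the limitations above fall straight out of this: a lighting change raises `diff` everywhere at once, and a nudged camera misaligns every pixel against the stored reference.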

A second approach is what he called 'motion buffering', which analyzes each frame and averages the difference in pixels from frame to frame. This is good for user-interface motion buttons, you know, the ones where you have to shake your hand until the button fills up and activates. The accumulated movement is the trigger for the button, so it is not accidentally activated while playing the game; this also means the buttons must be placed around the edges of the screen to keep them out of the way. It basically trades responsiveness for robustness, so the buttons function only when you want them to. Another type of button is the vector button, which is activated based on motion vectoring. This takes into account optical flow (where the pixels are going from frame to frame) and the Lucas-Kanade method (see Wikipedia for more). Some games that utilize this are Creature Feature, which Diarmid played much to the amusement of the audience, and The Trials of Topoq, which I have personally played and enjoyed. Your movement interacts with the game elements and causes them to change; in Topoq you must push a ball around the screen.
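A motion button of that sort might accumulate frame-to-frame differences inside its screen region and trigger only once enough movement has built up. A hypothetical sketch (the gain, decay, and trigger values are made-up tuning numbers, not anything from the talk):

```python
import numpy as np

class MotionButton:
    """Fills up as the player waves inside its region; drains when they stop."""

    def __init__(self, region, gain=0.005, decay=0.9, trigger_level=1.0):
        self.region = region                # (top, bottom, left, right) in pixels
        self.gain = gain                    # how fast motion fills the button
        self.decay = decay                  # how fast the fill drains each frame
        self.trigger_level = trigger_level
        self.level = 0.0

    def update(self, prev_frame, frame):
        top, bottom, left, right = self.region
        a = prev_frame[top:bottom, left:right].astype(np.int16)
        b = frame[top:bottom, left:right].astype(np.int16)
        motion = float(np.abs(b - a).mean())  # average per-pixel change this frame
        self.level = self.level * self.decay + motion * self.gain
        return self.level >= self.trigger_level  # True once filled
```

The decay is what keeps a single stray gesture from firing the button: a brief pass of the hand drains away, while sustained shaking pushes the level over the trigger.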

Mr. Campbell believes cameras are great input devices because their many independent parameters create flexibility and richness. The SIXAXIS, for example, has only 25 independent parameters, while a 640x480 camera has around 300,000, one per pixel. Of course we can't control every single pixel, but you can see the difference in possibilities.

He went on to demonstrate a series of things that are possible with a camera-based input system. In one case he had a 'light pen', which was essentially a pocket LED lamp with a Styrofoam ball on it. The camera was able to track it, and he was able to 'draw' with it. The demonstration was a ball falling from the top of the screen; in Line Rider fashion, he was able to draw lines that the ball would then bounce off of and interact with. A very cool demo, to say the least.
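Tracking a single bright marker like that light pen can be as simple as finding the brightest pixel each frame. A toy sketch (a real tracker would smooth the result over time and handle multiple bright blobs; `min_brightness` is an assumed cutoff):

```python
import numpy as np

def find_light_pen(gray_frame, min_brightness=200):
    """Return the (x, y) position of the brightest pixel in an 8-bit
    grayscale frame, or None if nothing is bright enough to be the LED."""
    flat_index = int(np.argmax(gray_frame))
    y, x = np.unravel_index(flat_index, gray_frame.shape)
    if gray_frame[y, x] < min_brightness:
        return None
    return int(x), int(y)
```

Feed the returned positions into the drawing code each frame and you have the Line Rider-style demo: the pen's path becomes line segments the falling ball can collide with.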

Another demo was the 'sketch tech' demo, which showed that the camera can be shown a drawing, and the drawing then becomes the elements of a game. He drew a rudimentary (nothing against his artistic skills) spaceship and landscape and then basically played Lunar Lander with the elements he drew. How freaking cool does that sound? It was truly inspiring. Think of the possibilities... you could change that spaceship into, say, a cow (which he did), and then you have Lunar Lander with cattle! OK, that's the most basic of possibilities, but something like this could let children draw their own games. It could build their confidence and creativity and become a whole new genre of game.
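One plausible way to turn a photographed drawing into playable geometry is to threshold the ink and downsample it into a coarse collision grid. This is purely an assumption on my part about how something like the sketch tech demo could work, not how Sony actually does it:

```python
import numpy as np

def drawing_to_collision_grid(gray, cell=8, ink_threshold=100, ink_fraction=0.25):
    """Turn dark pen strokes on light paper into solid cells of a coarse
    grid that a game like Lunar Lander could collide against.

    `gray` is an 8-bit grayscale photo of the drawing. A cell becomes solid
    when enough of its pixels are darker than `ink_threshold`.
    """
    rows, cols = gray.shape[0] // cell, gray.shape[1] // cell
    grid = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            block = gray[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            grid[r, c] = (block < ink_threshold).mean() >= ink_fraction
    return grid
```

The downsampling also conveniently soaks up the camera noise mentioned earlier: a few flickering pixels can't flip a whole cell.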

He also demonstrated head tracking and its capabilities and possibilities, showing a demo in which he had to balance an inverted pendulum on his nose by moving his head within the camera's field of view. It was rather interesting, to say the least.

One of his final demonstrations was the augmented reality demo. In this case the PS3 was able to discern a particular pattern on a card and replace that card in the display with a multi-colored 3D cube. I instantly thought of having two cards, one being a shield and another being a sword, and doing battle with virtual opponents by actually swinging the 'sword' and blocking with the 'shield.' This would go beyond even the present capabilities of the Wii remote. You could 'wear armor' in a game, or physically switch weapons by dropping one card and pulling another. You could conjure 'magical creatures' by displaying a card that 'summons' them. You can think of this as sort of a reverse Eye of Judgment: in that game the card triggers a virtual response, while here it augments the reality you see and allows you to interact with the virtual environment in new ways.
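The pendulum-balancing demo is essentially the classic inverted-pendulum control problem: gravity tips the pole over, and sliding the pivot (his nose) back underneath it rights it. A minimal physics step, with made-up constants (this is my own sketch of the idea, not the demo's code):

```python
import math

def pendulum_step(angle, angular_vel, pivot_accel, dt=1 / 60, g=9.8, length=0.5):
    """Advance an inverted pendulum one frame.

    `angle` is measured from vertical (0 = perfectly balanced), and
    `pivot_accel` is how hard the head is accelerating sideways. Pushing
    the pivot toward the lean counteracts gravity's torque.
    """
    angular_acc = (g * math.sin(angle) - pivot_accel * math.cos(angle)) / length
    angular_vel += angular_acc * dt
    angle += angular_vel * dt
    return angle, angular_vel
```

With no head motion, any tiny lean grows until the pendulum falls over; the game loop would read the head's x position from the tracker each frame and convert it into `pivot_accel`.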

Finally, Mr. Campbell concluded his talk to a rousing round of applause and several questions. The most memorable quote from him was that "the physical gaming space isn't getting crowded, just bigger." I love innovation in gaming and feel it is what will move us into the future (see the Future of 3D article for more of my thoughts on input devices). Camera-based games are going to be a major part of that future, as everyone carries a camera with them at all times in their mobiles, gaming consoles, and PCs.

I will be in the UK in July and again later in the year, and I'm hoping to arrange a tour of the Sony studios there and have more time to chat with him about his work. Diarmid Campbell works in the EyeToy Research & Development Group for Sony in London. Hopefully a game of Pompon Party won't be on the agenda then...


© 2008 Generation: Gamerz