Motion Capture Tests with One Kinect and Two Kinects
There are many factors to consider when investing in a motion capture system for animation or game development. The basic objective, of course, is to get great-looking animation quickly, easily, and at the lowest possible cost.
Technology is being developed to replicate what the human eye and brain do easily: identifying objects and tracking their motion. Markerless motion capture does not need any special suit or markers on the body for tracking. Its further advantages are that subjects are easily available and simple to capture and track, which means animators and game developers can capture motion in less time and at lower cost. The main challenge is applying the captured data to a 3D character.
One of the most notable motion capture setups combines iPi MoCap software with the Microsoft Kinect. It is easily installable in any room or living area with the required connections. Microsoft designed the Kinect to track motion for Microsoft games; taking advantage of this technology, we can create motion capture easily.
The Kinect is built by Microsoft to track motion for games. Its sensor projects infrared dots to calculate the depth of the area, and the Kinect software is capable of automatically calibrating the sensor for sensor-based games.
Below is a breakdown of the Kinect and an explanation of how it works.
- The IR projector emits infrared points, which are used to calculate the depth of objects and to differentiate the background from the character; it is also used for background calibration.
- The colour camera detects the actual motion of the character and provides the colour output.
- The IR camera detects the infrared points emitted by the IR projector; together, the projector and IR camera act like a stereo camera.
- A microphone collects sound. Everything is processed on an onboard flash chip and exported as a camera depth output.
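The depth image described above can be turned into a 3D point cloud with standard pinhole-camera back-projection. A minimal sketch in Python, using illustrative camera intrinsics (the real values come from the sensor's calibration):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into an N x 3 point cloud.
    fx, fy: focal lengths in pixels; cx, cy: principal point."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Example: a 4x4 depth image, 2 m everywhere; intrinsics are made up
cloud = depth_to_point_cloud(np.full((4, 4), 2.0), fx=580.0, fy=580.0, cx=2.0, cy=2.0)
print(cloud.shape)  # one 3D point per valid depth pixel
```

This is the same geometry the sensor's stereo pair relies on: once depth is known per pixel, each pixel becomes a 3D point.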
iPi motion capture software is made for basic motion capture, producing animation that can be used in 3D space.
A minimum of one Kinect sensor or PS Eye camera is used to track the motion; more can be added.
The Kinect sensor calculates depth using the IR projector and returns it to iPi Soft in the form of a point cloud. This point cloud is used to drive the actor. The software has options to control the collision of the feet with the ground, the flexibility of the spine, and head tracking. After setting the actor's size and height, the actor has to be fitted into the centre of the point-cloud mesh.
iPi Soft has limitations when used with one Kinect:
1. Overlapping limbs
2. Fast motion
3. Insufficient distance between the character and the background
4. The sensor's limited capability to detect depth
5. No natural sunlight in the capture area (it interferes with the infrared dots)
6. The sensor cannot track the motion if the actor goes out of frame
The more Kinects you install, the more depth coverage you get for tracking.
Windows 7 is required to run the motion capture software. The following tools are used:
iPi Recorder: This software is used to record motion. It has options to record depth, colour, or both depth and colour, which helps in understanding the motion.
iPi MoCap Studio: This software is used to solve motion capture from the recordings made in iPi Recorder. With a few tweaks in the software you can capture the motion and export it to your desired 3D software.
Developer Toolkit v1.8.0: This software is used to run the Kinect on a computer. The developer kit includes the Microsoft .NET Framework 3.5 SP1, which allows the Kinect to access all of its functions.
Maya: Used for cleaning up the animation and rendering.
Calibration, in simple words, is setting up and representing the stage, cameras, and character in the 3D world. In this method, calibration is done using a flat, wide rectangular board (plywood, cardboard, or anything flat and wide). It should be clearly visible to both cameras. When both cameras see the same point on the board, the software measures the distance from subject to camera and from subject to ground, and thus recreates the two cameras in the 3D world along with the character placement. While calibrating, the software shows red, yellow, and green dots: the more green dots, the better the calibration; the more red dots, the worse.
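The red/yellow/green feedback above can be thought of as grading each calibration sample by how well it fits the estimated board plane. iPi Soft's internal criteria aren't documented here, so the sketch below uses hypothetical error thresholds purely to illustrate the idea:

```python
def grade_samples(errors_mm, good=10.0, fair=25.0):
    """Classify calibration samples by fit error (thresholds are illustrative)."""
    grades = []
    for e in errors_mm:
        if e < good:
            grades.append("green")    # sample fits the estimated plane well
        elif e < fair:
            grades.append("yellow")   # usable but noisy
        else:
            grades.append("red")      # bad sample: board occluded or blurred
    return grades

def calibration_ok(grades, min_green_ratio=0.7):
    # the more green dots, the better the calibration
    return grades.count("green") / len(grades) >= min_green_ratio

samples = [3.2, 7.9, 12.5, 4.1, 40.0, 6.6]   # made-up fit errors in mm
g = grade_samples(samples)
print(g, calibration_ok(g))
```

With only 4 of 6 samples green, this run would fail the (hypothetical) 70% threshold and call for a re-take, which matches the advice to redo a calibration dominated by red dots.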
Once calibration is done, save the file. This file is later used in all the recordings.
Make sure that once calibration is done you do not move the cameras, camera angles, or stage, as the rest of the animation is based on this calibration.
Recording the actor's performance: the actor should be in tight-fitting clothes so that the software does not get wrong information about the position of body parts.
The most important thing in recording is to start the action with a T-pose. If the T-pose is good, you can match it with your character when the animation is transferred into your desired 3D software.
Make sure you have a storyboard or choreography ready. Every take should follow the storyboard or choreography, and it is good to have multiple takes of the same action.
Once the action is recorded, bring it into MoCap Studio. Before loading the take, the software asks for the calibration file; the cameras and the stage are imported into the action file from it. The actor, background, and cameras are on different layers, and the depth of the actor, recorded from both cameras, is shown in the software.
Once the calibration scene is imported, adjust the actor's height and align the actor with the depth data, as the depth recorded in the video will drive the character: the more accurate the depth, the more accurate the animation.
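Fitting the actor into the centre of the point cloud, as described above, amounts to translating the skeleton so its root sits at the cloud's centroid. A minimal sketch, assuming a made-up skeleton representation (a J x 3 array of joint positions with the root in row 0):

```python
import numpy as np

def center_actor_on_cloud(joint_positions, point_cloud):
    """Translate all joints so the skeleton root sits at the cloud centroid.
    joint_positions: J x 3 array, row 0 = root; point_cloud: N x 3 array."""
    centroid = point_cloud.mean(axis=0)       # centre of the point-cloud mesh
    offset = centroid - joint_positions[0]    # how far the root must move
    return joint_positions + offset           # rigid translation of the whole rig

joints = np.array([[0.0, 1.0, 0.0],           # root (metres)
                   [0.0, 1.5, 0.0]])          # head
cloud = np.array([[2.0, 1.0, 3.0], [2.2, 1.2, 3.2], [1.8, 0.8, 2.8]])
moved = center_actor_on_cloud(joints, cloud)
print(moved[0])  # root now at the cloud centroid
```

In iPi MoCap Studio this placement is done interactively, but the geometry is the same: a rigid offset of the rig into the middle of the recorded depth data.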
Once the actor is aligned, start tracking. The software has three simple steps for tracking the action:
Forward tracking – tracks the action forward, predicting each frame from the previous and next frames.
Refine tracking – tracks the animation backward and refines the in-between keyframes.
Jitter removal – removes jitter from the action. It runs over the whole animation, calculating the previous and next frames along with the in-betweens, and tries to smooth the animation.
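The jitter-removal pass above can be approximated by a small moving-average filter over each joint channel, averaging the previous, current, and next frames. This is a sketch of the idea, not iPi's actual algorithm:

```python
def remove_jitter(channel, passes=1):
    """Smooth a list of per-frame values with a 3-frame moving average.
    Endpoints are kept fixed so the start and end poses do not drift."""
    vals = list(channel)
    for _ in range(passes):
        smoothed = vals[:]
        for i in range(1, len(vals) - 1):
            # average the previous, current, and next frame
            smoothed[i] = (vals[i - 1] + vals[i] + vals[i + 1]) / 3.0
        vals = smoothed
    return vals

noisy = [0.0, 1.2, 0.8, 1.1, 0.9, 2.0]   # one noisy rotation channel
print(remove_jitter(noisy))
```

Running more passes smooths harder, at the cost of softening genuinely fast motion, which is why heavy jitter removal can wash out sharp actions.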
Once tracking is done, export the animation. The software allows you to export animation to MotionBuilder, 3ds Max, Maya, Blender, iClone, etc. To export to Maya, use the general FBX export; all the joints with their animation will be brought into Maya. Further tweaking and cleaning of the data is done in Maya. The rig we get from iPi MoCap has a key on every frame; using a simple clean-up script, all the unwanted keys are deleted, leaving a few keys for tweaking. Make sure the animation curves are cleaned before rendering.
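The clean-up step can be sketched as dropping every key whose value is (within a tolerance) just the straight-line interpolation of its neighbours, so only keys that actually shape the curve survive. A pure-Python sketch of that idea; in Maya the same logic would run over the imported FBX curves via `maya.cmds`, which is not shown here:

```python
def reduce_keys(times, values, tol=1e-3):
    """Drop keys that lie on the straight line between neighbouring keys."""
    assert len(times) == len(values) >= 2
    kept = [0]                           # always keep the first key
    for i in range(1, len(times) - 1):
        t0, v0 = times[kept[-1]], values[kept[-1]]   # last kept key
        t1, v1 = times[i + 1], values[i + 1]         # next raw key
        # value the curve would have at times[i] if key i were removed
        interp = v0 + (v1 - v0) * (times[i] - t0) / (t1 - t0)
        if abs(values[i] - interp) > tol:
            kept.append(i)               # key carries real shape: keep it
    kept.append(len(times) - 1)          # always keep the last key
    return [times[i] for i in kept], [values[i] for i in kept]

# A curve keyed on every frame; only frames 0, 2 and 4 carry shape
t, v = reduce_keys([0, 1, 2, 3, 4], [0.0, 0.5, 1.0, 0.5, 0.0])
print(t, v)  # the redundant in-between keys at frames 1 and 3 are gone
```

A looser tolerance removes more keys and leaves a cleaner graph to tweak, at the risk of flattening subtle motion.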