Content Beyond Syllabus

Computer graphics

MOTION CAPTURE

Motion capture (mo-cap for short) is the process of recording the movement of objects or people. It is used in military, entertainment, sports and medical applications, and for the validation of computer vision and robotics.

In filmmaking and video game development, it refers to recording the actions of human actors and using that information to animate digital character models in 2D or 3D computer animation. When it includes face and fingers or captures subtle expressions, it is often referred to as performance capture. In many fields, motion capture is sometimes called motion tracking, but in filmmaking and games, motion tracking usually refers more to match moving.

In motion capture sessions, the movements of one or more actors are sampled many times per second. Whereas early techniques used images from multiple cameras to calculate 3D positions, often the purpose of motion capture is to record only the movements of the actor, not his or her visual appearance. This animation data is mapped to a 3D model so that the model performs the same actions as the actor. This process may be contrasted with the older technique of rotoscoping, as in Ralph Bakshi's animated films The Lord of the Rings (1978) and American Pop (1981), where the motion of an actor was filmed, then the film was used as a guide for the frame-by-frame motion of a hand-drawn animated character.

Camera movements can also be motion captured so that a virtual camera in the scene will pan, tilt, or dolly around the stage driven by a camera operator while the actor is performing, and the motion capture system can capture the camera and props as well as the actor's performance. This allows the computer-generated characters, images and sets to have the same perspective as the video images from the camera. A computer processes the data and displays the movements of the actor, providing the desired camera positions in terms of objects in the set. Retroactively obtaining camera movement data from the captured footage is known as match moving or camera tracking.

Methods and systems

Motion tracking, or motion capture, started as a photogrammetric analysis tool in biomechanics research in the 1970s and 1980s, and expanded into education, training, sports and, more recently, computer animation for television, cinema and video games as the technology matured. A performer wears markers near each joint to identify the motion by the positions or angles between the markers. Acoustic, inertial, LED, magnetic or reflective markers, or combinations of any of these, are tracked, optimally at least two times the frequency of the desired motion, to submillimeter positions. Both the spatial and the temporal resolution of the system are important, as motion blur causes almost the same problems as low resolution.
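
As a brief worked illustration of the "at least two times the frequency" guideline above (this is the standard Nyquist sampling criterion, restated here rather than taken from the source):

    f_capture >= 2 * f_motion        e.g. motion content up to 50 Hz  ->  capture at 100 samples per second or more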

Optical systems

Optical systems utilize data captured from image sensors to triangulate the 3D position of a subject between two or more cameras calibrated to provide overlapping projections. Data acquisition is traditionally implemented using special markers attached to an actor; however, more recent systems are able to generate accurate data by tracking surface features identified dynamically for each particular subject. Tracking a large number of performers or expanding the capture area is accomplished by adding more cameras. These systems produce data with three degrees of freedom for each marker, and rotational information must be inferred from the relative orientation of three or more markers; for instance, shoulder, elbow and wrist markers provide the angle of the elbow. Newer hybrid systems combine inertial sensors with optical sensors to reduce occlusion, increase the number of users and improve the ability to track without manual data clean-up.
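
As a minimal sketch of how rotational information can be inferred from purely positional markers (the marker coordinates below are invented for illustration; this is not any vendor's algorithm), the elbow angle can be computed from shoulder, elbow and wrist marker positions:

    import numpy as np

    def joint_angle(shoulder, elbow, wrist):
        """Angle at the elbow (degrees) from three 3D marker positions."""
        u = np.asarray(shoulder, float) - np.asarray(elbow, float)   # elbow -> shoulder
        v = np.asarray(wrist, float) - np.asarray(elbow, float)      # elbow -> wrist
        cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

    # Example with made-up coordinates (metres): a roughly right-angled elbow.
    print(joint_angle([0.0, 0.4, 0.0], [0.0, 0.1, 0.0], [0.25, 0.1, 0.0]))   # ~90 degrees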

Passive markers

A dancer wearing a suit used in an optical motion capture system

Several markers are placed at specific points on an actor's face during facial optical motion capture

Passive optical systems use markers coated with a retroreflective material to reflect light that is generated near the camera's lens. The camera's threshold can be adjusted so only the bright reflective markers will be sampled, ignoring skin and fabric.

The centroid of the marker is estimated as a position within the two-dimensional image that is captured. The grayscale value of each pixel can be used to provide sub-pixel accuracy by finding the centroid of the marker's approximately Gaussian intensity profile.
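
A minimal sketch of the sub-pixel centroid estimate described above, assuming a small grayscale patch around one marker and a hypothetical brightness threshold (illustration only):

    import numpy as np

    def marker_centroid(patch, threshold=200):
        """Intensity-weighted centroid of a bright blob in a grayscale patch."""
        p = np.asarray(patch, float)
        p = np.where(p >= threshold, p, 0.0)          # keep only bright reflective pixels
        total = p.sum()
        if total == 0:
            return None                               # no marker visible in this patch
        ys, xs = np.indices(p.shape)
        return (xs * p).sum() / total, (ys * p).sum() / total   # sub-pixel (x, y)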

An object with markers attached at known positions is used to calibrate the cameras and obtain their positions and the lens distortion of each camera is measured. If two calibrated cameras see a marker, a three-dimensional fix can be obtained. Typically a system will consist of around 2 to 48 cameras. Systems of over three hundred cameras exist to try to reduce marker swap. Extra cameras are required for full coverage around the capture subject and multiple subjects.
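
A minimal sketch of obtaining a three-dimensional fix from two calibrated cameras, assuming each camera's 3x4 projection matrix is already known from the calibration step above (standard linear triangulation, shown only for illustration):

    import numpy as np

    def triangulate(P1, P2, xy1, xy2):
        """Linear (DLT) triangulation of one marker seen by two calibrated cameras.

        P1, P2  : 3x4 projection matrices obtained from calibration (NumPy arrays).
        xy1, xy2: the marker's 2D image coordinates in each camera.
        Returns the marker's 3D position in world coordinates.
        """
        x1, y1 = xy1
        x2, y2 = xy2
        A = np.vstack([
            x1 * P1[2] - P1[0],
            y1 * P1[2] - P1[1],
            x2 * P2[2] - P2[0],
            y2 * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]          # dehomogenize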

Vendors have constraint software to reduce the problem of marker swapping, since all passive markers appear identical. Unlike active marker systems and magnetic systems, passive systems do not require the user to wear wires or electronic equipment. Instead, hundreds of rubber balls are attached with reflective tape, which needs to be replaced periodically. The markers are usually attached directly to the skin (as in biomechanics), or they are velcroed to a performer wearing a full-body spandex/lycra suit designed specifically for motion capture. This type of system can capture large numbers of markers at frame rates of usually around 120 to 160 fps, although, by lowering the resolution and tracking a smaller region of interest, they can track as high as 10,000 fps.

Active marker

Active optical systems triangulate positions by illuminating one LED at a time very quickly, or multiple LEDs with software to identify them by their relative positions, somewhat akin to celestial navigation. Rather than reflecting back light that is generated externally, the markers themselves are powered to emit their own light. Since the inverse square law provides one quarter of the power at two times the distance, this can increase the distances and volume of capture.
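
For clarity, the inverse square relationship can be written out (standard physics, restated here rather than taken from the source):

    I(r) = P / (4 * pi * r^2)            intensity at distance r from a source of power P
    I(2r) / I(r) = r^2 / (2r)^2 = 1/4    doubling the distance leaves one quarter of the power

One way to read the claim above is that an active marker's light only has to travel one way, from marker to camera, rather than out and back as with a retroreflective marker, so more power survives at a given distance.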

The TV series Stargate SG-1 produced episodes using an active optical system for the VFX, allowing the actor to walk around props that would make motion capture difficult for other, non-active optical systems.

ILM used active markers in Van Helsing to allow capture of Dracula's flying brides on very large sets, similar to Weta's use of active markers in Rise of the Planet of the Apes. The power to each marker can be provided sequentially in phase with the capture system, providing a unique identification of each marker for a given capture frame, at a cost to the resultant frame rate. The ability to identify each marker in this manner is useful in real-time applications. The alternative method of identifying markers is to do it algorithmically, which requires extra processing of the data.

Time modulated active marker

A high-resolution, uniquely identified active marker system with 3,600 × 3,600 resolution at 960 hertz provides real-time submillimeter positions.

Active marker systems can further be refined by strobing one marker on at a time, or by tracking multiple markers over time and modulating the amplitude or pulse width to provide marker IDs. 12-megapixel spatial resolution modulated systems show more subtle movements than 4-megapixel optical systems by having both higher spatial and temporal resolution. Directors can see the actor's performance in real time, and watch the results on the motion-capture-driven CG character. The unique marker IDs reduce turnaround by eliminating marker swapping and providing much cleaner data than other technologies. LEDs with onboard processing and radio synchronization allow motion capture outdoors in direct sunlight, while capturing at 120 to 960 frames per second due to a high-speed electronic shutter. Computer processing of modulated IDs allows less hand clean-up or filtered results, for lower operational costs. This higher accuracy and resolution requires more processing than passive technologies, but the additional processing is done at the camera to improve resolution via subpixel or centroid processing, providing both high resolution and high speed. These motion capture systems typically cost $20,000 for an eight-camera, 12-megapixel spatial resolution, 120 hertz system with one actor.
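
A minimal sketch of how a time-modulated marker ID could be recovered, assuming each marker blinks a known on/off code in phase with the camera shutter (the code words and observed pattern below are invented for illustration and are not any vendor's scheme):

    # Each marker is assigned a short on/off code; over a few consecutive frames the
    # blink pattern observed at a tracked 2D position identifies which marker it is.
    MARKER_CODES = {
        "head":   (1, 0, 1, 1),
        "l_hand": (1, 1, 0, 1),
        "r_hand": (0, 1, 1, 1),
    }

    def identify_marker(observed_pattern):
        """Match an observed on/off pattern (one bit per frame) to a marker ID."""
        for name, code in MARKER_CODES.items():
            if tuple(observed_pattern) == code:
                return name
        return None   # pattern corrupted, e.g. by occlusion during one of the frames

    print(identify_marker([1, 0, 1, 1]))   # -> "head"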

Skeletal animation

Skeletal animation is a technique in computer animation in which a character is represented in two parts: a surface representation (the mesh or skin) used to draw the character, and a hierarchical set of interconnected bones (the skeleton) used to animate (pose and keyframe) the mesh.

While this technique is often used to animate humans, or more generally for organic modeling, it only serves to make the animation process more intuitive, and the same technique can be used to control the deformation of any object: a door, a spoon, a building, or a galaxy.

This technique is used in virtually all animation systems, where simplified user interfaces allow animators to control often complex algorithms and a huge amount of geometry, most notably through inverse kinematics and other "goal-oriented" techniques. In principle, however, the intention of the technique is never to imitate real anatomy or physical processes, but only to control the deformation of the mesh data.

Technique

This technique works by constructing a series of 'bones'; setting up this skeleton is sometimes referred to as rigging.

Each bone has a three-dimensional transformation (which includes its position, scale and orientation), and an optional parent bone. The bones therefore form a hierarchy. The full transform of a child node is the product of its parent transform and its own transform, so moving a thigh bone will move the lower leg too. As the character is animated, the bones change their transformation over time, under the influence of some animation controller. A rig is generally composed of both forward kinematics and inverse kinematics parts that may interact with each other. Skeletal animation refers to the forward kinematics part of the rig, where a complete set of bone configurations identifies a unique pose.
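
A minimal sketch of the parent-child transform rule described above, using 4x4 homogeneous matrices and a hypothetical two-bone leg (thigh and shin); because the shin's full transform is built from the thigh's, rotating the thigh moves the shin as well:

    import numpy as np

    def translation(x, y, z):
        T = np.eye(4); T[:3, 3] = (x, y, z); return T

    def rotation_z(deg):
        a = np.radians(deg); c, s = np.cos(a), np.sin(a)
        R = np.eye(4); R[:2, :2] = [[c, -s], [s, c]]; return R

    # Local (bone-space) transforms: each bone rotates at its joint, then extends to the next joint.
    thigh_local = rotation_z(-30) @ translation(0.0, -0.45, 0.0)   # hip rotation, thigh length
    shin_local  = rotation_z(+20) @ translation(0.0, -0.40, 0.0)   # knee rotation, shin length

    # The full transform of a child is its parent's full transform times its own transform.
    thigh_world = thigh_local
    shin_world  = thigh_world @ shin_local

    print(shin_world[:3, 3])   # world position at the end of the shin in this toy setup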

Each bone in the skeleton is associated with some portion of the character's visual representation. Skinning is the process of creating this association. In the most common case of a polygonal mesh character, the bone is associated with a group of vertices; for example, in a model of a human being, the 'thigh' bone would be associated with the vertices making up the polygons in the model's thigh. Portions of the character's skin can normally be associated with multiple bones, each one having a scaling factor called a vertex weight, or blend weight. The movement of skin near the joints of two bones can therefore be influenced by both bones. In most state-of-the-art graphical engines, the skinning process is done on the GPU by a shader program.

For a polygonal mesh, each vertex can have a blend weight for each bone. To calculate the final position of the vertex, each bone transformation is applied to the vertex position, scaled by its corresponding weight. This algorithm is called matrix palette skinning, because the set of bone transformations (stored as transform matrices) forms a palette for the skin vertex to choose from.
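
A minimal sketch of matrix palette skinning as described above, assuming each vertex stores bone indices and blend weights; the two-bone palette and the sample vertex below are invented for illustration:

    import numpy as np

    def skin_vertex(rest_pos, bone_ids, weights, palette):
        """Blend a rest-pose vertex by its weighted bone matrices (linear blend skinning)."""
        v = np.append(rest_pos, 1.0)               # homogeneous coordinates
        blended = np.zeros(4)
        for bone, w in zip(bone_ids, weights):
            blended += w * (palette[bone] @ v)     # each bone transform, scaled by its weight
        return blended[:3]

    # Palette of per-bone skinning matrices (in practice: bone world transform x inverse bind pose).
    palette = [np.eye(4), np.eye(4)]
    palette[1][:3, 3] = (0.0, 0.1, 0.0)            # bone 1 moved up by 10 cm

    # A vertex near a joint, influenced 70/30 by bones 0 and 1.
    print(skin_vertex([0.2, 0.9, 0.0], bone_ids=[0, 1], weights=[0.7, 0.3], palette=palette))
    # -> [0.2, 0.93, 0.0]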

Benefits and drawbacks

Strengths

A bone represents a set of vertices (or some other object, for example a leg).

The animator controls fewer characteristics of the model.

The animator can focus on the large-scale motion.

Bones are independently movable.

An animation can be defined by simple movements of the bones, instead of vertex by vertex.

Weaknesses

A bone represents only a set of vertices (or some other object).

Does not provide realistic muscle movement and skin motion.

Possible solutions to this problem:

Special muscle controllers attached to the bones

Consultation with physiology experts

APPLICATIONS

Skeletal animation is the standard way to animate characters or mechanical objects for a prolonged period of time. It is commonly used by video game artists and in the movie industry, and can also be applied to mechanical objects and any other object made up of rigid elements and joints.

Performance capture can speed up development time of skeletal animation, as well as increase the level of realism.

For motion that is too dangerous for performance capture, there are computer simulations that automatically calculate physics of motion and resistance with skeletal frames.

Virtual anatomy properties such as weight of limbs, muscle reaction, bone strength and joint constraints may be added for realistic bouncing, buckling, fracture and tumbling effects known as virtual stunts. Virtual stunts are controversial due to their potential to replace stunt performers.

However, there are other applications of virtual anatomy simulations, such as military and emergency response. Virtual soldiers, rescue workers, patients, passengers and pedestrians can be used for training, virtual engineering and virtual testing of equipment. Virtual anatomy technology may be combined with artificial intelligence for further enhancement of animation and simulation technology.
