Advanced Animation
This section discusses two advanced animation techniques, both addressing issues in character animation. The first technique is motion capture: pre-recorded key frames derived from an actor's movement. The second is lip-sync: synchronising a character's mouth movement to an audio recording of spoken dialogue.
Motion Capture
Animating a character using traditional key frame techniques is a very complex and time-consuming job. Key frames need to be created for all elements of a character's movement, i.e. hands, arms, torso, legs and feet. Further, these elements have to be key framed simultaneously. For character animation to be believable it has to accurately reflect real movement. Motion capture provides a mechanism to capture an actor's physical motion. The movement is captured as data, which can be imported into a 3D application and mapped to character models. The result of this process is accurate character animation.
Systems
Motion capture systems can capture an actor's movements in several ways. The most common system is optical motion capture. These systems employ infra-red cameras and reflective markers. Infra-red cameras are located around a room; typically eight or more cameras are employed. Reflective markers are placed strategically over an actor. When the actor moves, the infra-red cameras record the position of the markers. When the recording has stopped the positional data is compiled, a step that requires considerable computer processing power. Visit http://www.vicon.com for further information regarding optical motion capture systems.
Other motion capture systems employ gyroscopes. These systems employ suits which the actors wear during motion capture. Gyroscopes within the suit are placed about the actor's key joints. When the actor moves, rotational data is captured. This data can be captured in real time; consequently the actor's movement can be monitored 'on screen', rigged to a character, during the capture itself. Visit http://www.animazoo.com for further information regarding gyroscopic motion capture.
Data
Motion capture systems generate a number of data formats. CSM data files contain positional marker data, typically yielded from optical motion capture systems. Marker naming conventions describe the placement of each marker and are defined by the 3D application using the data. 3D Max's biped employs the following marker convention (see image below). BVH data files contain rotational data and can be mapped automatically to 3D Max's biped. BVH and CSM files have been provided with this tutorial; these can also be found in the samples folder on the 3D Max installation disk.
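To make the BVH format concrete, the following Python sketch reads the MOTION section of a BVH file: the frame count, the frame time and the per-frame channel values (root position followed by joint rotations). It is a minimal sketch of the published BVH layout, not part of the tutorial files, and the file name in the usage comment is hypothetical.

    # Minimal BVH reader: extracts the MOTION section, i.e. the frame
    # count, the frame time and the per-frame channel values. A full
    # reader would also parse the HIERARCHY section to map each
    # channel to its joint.
    def read_bvh_motion(path):
        with open(path) as f:
            lines = [line.strip() for line in f if line.strip()]
        start = lines.index("MOTION")                    # follows the HIERARCHY section
        frame_count = int(lines[start + 1].split()[1])   # e.g. "Frames: 120"
        frame_time = float(lines[start + 2].split()[2])  # e.g. "Frame Time: 0.033333"
        # Each remaining line is one frame: root X, Y, Z position followed
        # by rotation channels for every joint, in hierarchy order.
        frames = [[float(v) for v in line.split()]
                  for line in lines[start + 3 : start + 3 + frame_count]]
        return frame_time, frames

    # Hypothetical usage:
    # frame_time, frames = read_bvh_motion("frantic.bvh")
    # print(len(frames), "frames at", round(1.0 / frame_time), "fps")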
Biped & Motion Capture Data
The biped is a 3D Max component that facilitates character animation using key frames (freeform), footsteps and motion capture. It represents a bipedal skeleton or armature that can be employed for animating human motion. It can also be employed to animate non-human physiology, for example four-legged creatures or animals that naturally lean forward, such as dinosaurs. The biped is a linked hierarchy of objects that, by default, resemble those of a human. The root object, labelled Bip01, is the parent object and represents the centre of mass (COM).
The biped is typically used with the Physique modifier. The Physique modifier attaches a character model to a biped and describes how the character model's mesh will deform when the biped is animated. Before you employ the biped you should already have a character model, typically modelled in a neutral or 'T' pose. A low-polygon character model has been provided to facilitate this tutorial.
Part A explains how to create a biped and implement motion capture data to animate the biped. This is supplemented with an overview of the biped's parameters and caveats regarding working with motion capture data. Parts B and C explain how to import a .3DS character model, position it with the biped and add the Physique modifier.
A. Create Biped and Implement Motion Capture Data
1. Open 3D Max and maximise the front viewport.
2. Create the biped by selecting Create > Systems > Biped.
Figure 1: Systems roll out for creating a biped.
3. Hold down the left mouse button and drag the cursor in the front viewport; the biped will appear.
Figure 2: Biped
4. Use the Select by Name button to select Bip01.
Figure 3: Select by Name button
5. Under the Motion panel select Motion Capture > Load Motion Capture File:
Figure 4: The Motion panel. These parameters display when the biped's root object, Bip01, is selected. When you select 'Load Motion Capture File' you will be presented with the Open window illustrated in Figure 5, below:
Figure 5: The Open window for choosing motion capture data.
6. Select File Type *.CSM
7. Browse to the csm folder provided with this tutorial, or the Samples folder on the 3D Max installation disk, and select frantic.csm.
Figure 6: Motion Capture Conversion Parameters window. This window allows
you to specify exactly how you wish to import the motion capture data. The
sample files we are employing require no further specification and you can simply
click OK. However, these parameters facilitate the process of ‘cleaning’ motion
capture data. See Motion Capture Caveats below.
8. Click OK.
Figure 7: Biped with motion capture data imported. You will notice in the
timeline that corresponding key frames have been created. These represent the
animation.
9. Press play; the biped will animate as defined by the key frames and the motion capture data therein.
Biped Creation Parameters
You will notice that there are a number of different parameters in the Create > Systems panel that describe the biped:
a. Creation Method
b. Structure Source
c. Root Name
d. Body Type
e. Twist links
Creation Method describes how the biped was created: either by dragging its height, as above, or by dragging its position. Structure Source determines the source of the structure: either the user interface (U/I), as above, or a .fig file (a .fig file can be saved during biped animation through the Motion panel). Root Name
describes the root node of the biped; by default Bip01. The root node is the parent of
all the biped’s limbs. To move the entire biped you select the root node, Bip01. Body
type describes the type of skeleton or armature represented by the biped and its links
therein. There are four types:
a. Skeleton (default)
b. Male
c. Female
d. Classic
Twist Links describes which links can be twisted beyond the constraints of the default biped. Body Type and Twist Links are useful if you are animating a non-human physiology that does not conform to human constraints.
Two zip files have been provided with this tutorial: csm.zip and bvh.zip. These files can also be found in the samples folder on the 3D Max installation disk. Unzip these files.
Motion Capture Caveats
Employing motion capture systems to facilitate character animation helps an animator
up to a point. Motion capture data has to be ‘cleaned’. This involves removing or
modifying positional or rotational data that has been captured incorrectly. Data can be
captured incorrectly for a number of reasons. Optical systems often confuse one marker with another. For example, if an actor crosses hands during capture the optical system has to guess which marker to associate the corresponding data with. This might result in motion data being attributed to the wrong hand. Rotational systems often yield motion data with "sliding feet". As an actor moves, a gyroscopic system has to extrapolate core positional information from rotational data. Over time the extrapolated position drifts: the position calculated late in the capture no longer agrees with the position calculated early on. The net result is that the actor's feet appear to "slide" out of position. Therefore, before motion capture data can be employed it will nearly always need to be cleaned.
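The drift behind "sliding feet" is easy to reproduce numerically. The following Python sketch illustrates the principle only, not any particular capture system: it integrates a root position from per-frame displacement estimates, and a tiny systematic error in each estimate accumulates over the capture.

    # Positional drift: integrating position from per-frame displacement
    # estimates accumulates any small systematic error (bias).
    true_step = 0.010    # metres the actor really moves per frame
    bias = 0.0002        # small error in each per-frame estimate

    true_pos = est_pos = 0.0
    for frame in range(1, 3601):        # one minute captured at 60 fps
        true_pos += true_step
        est_pos += true_step + bias     # each estimate is slightly off
        if frame % 1200 == 0:
            print(f"frame {frame}: drift = {(est_pos - true_pos) * 100:.0f} cm")
    # The drift grows with capture length, which is why the feet of a
    # gyroscopically captured actor appear to "slide" out of position.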
The Motion Capture Conversion Parameters, illustrated above, facilitate the cleaning
process. For example, ‘Footstep Extraction: On’ helps eliminate “sliding feet”. It
allows you to edit or change the toe structure of a biped after import. This will help
maintain correct foot-toe-ground relationships throughout the animation.
If a high frame rate has been employed during motion capture the Key Reduction Settings can be employed to intelligently filter out up to 80% of the keys in the motion capture data without visibly altering the biped animation. Consequently, this drastically reduces the amount of data to clean.
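The idea behind key reduction can be sketched in a few lines of Python. The following is a naive illustration, not Max's actual algorithm: a key is dropped if linear interpolation between its kept neighbour and the following key reproduces its value within a tolerance.

    # Naive key reduction for one animation channel: drop a key if
    # linear interpolation between its kept neighbour and the next key
    # reproduces its value within a tolerance.
    def reduce_keys(times, values, tolerance=0.01):
        kept = [0]                                   # always keep the first key
        for i in range(1, len(times) - 1):
            t0, v0 = times[kept[-1]], values[kept[-1]]
            t1, v1 = times[i + 1], values[i + 1]
            predicted = v0 + (v1 - v0) * (times[i] - t0) / (t1 - t0)
            if abs(values[i] - predicted) > tolerance:
                kept.append(i)                       # key carries real information
        kept.append(len(times) - 1)                  # always keep the last key
        return kept

    # A near-linear channel sampled at every frame reduces drastically:
    times = list(range(11))
    values = [0.1 * t for t in times]
    values[5] += 0.5                                 # one genuine pose change
    print(reduce_keys(times, values))                # -> [0, 4, 5, 6, 10]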
The amount of time this process takes typically depends on the number of anomalies
in the data file, the frame rate at which the motion was captured and the total time /
length of the motion capture. Gyroscopic systems can, with reasonable reliability,
capture motion up to 120 frames per second. Optical systems can capture motion at
over 250 frames per second. However, animations rarely employ a frame rate over
and above 30 frames per second. And the animation industry, aware of the drawbacks of motion capture, might capture data at 12 frames per second and complement this data with the skill of a professional key frame animator. Visit http://www.kongisking.net for video clips following the production of King Kong and its use of motion capture.
B. Import Character Model and Position with Biped.
Creating a biped and adding motion capture data is relatively straightforward. Next
we have to import our character model and position it with the biped. This is a little
more complicated as we have to work with the biped in Figure Mode.
Figure Mode ignores key frames in the time line allowing you to position the biped
correctly within the character. In the first instance it is important to move the
character model over the biped and then position the biped within the character
model.
When the character model is placed over the biped we then have to position the
biped’s arms and legs to fit neatly inside.
1. Select Bip01.
2. Select Motion > Figure.
Figure 1: The Motion panel with Figure Mode selected. You will notice the
biped has turned 90 degrees within the front view port. This is a function of
the motion capture data. Do not re-orientate the biped. Simply work in the Left viewport from here on in.
3. Select File > Import and select the character model.
4. Use the Select and Rotate button to orientate the character model so it is facing
the same direction as the biped.
5. Drag it over the biped.
6. Check both the front and top views to ensure the character model is located
correctly over the biped.
Figure 2: Character model correctly orientated over the biped.
7. Working in the Left viewport position the biped’s arms and legs so they are
directly inside those of the character model; this is achieved by dragging the
hands and feet, respectively.
Figure 3: Biped positioned correctly inside the character model.
Note: When positioning the biped's arms they will bend at the elbow. This is because of the viewport you are working in. Using the gizmo, ensure they are fully outstretched inside the character model (see Figure 3). The biped fits quite neatly inside the character model. However, you may often need to lengthen or shorten arms and legs to fit the character model. During this tutorial it is not necessary; however, the following steps show how to use the Symmetry function if and when you decide to modify a biped's limbs.
8. Select the biped’s left forearm and under the Motion panel select Symmetry:
Note: Symmetry selects the biped’s right forearm while keeping the left arm
selected. Further, any operation performed on the left forearm will be reflected
in the right.
9. Select Uniform Scale and scale the left forearm along the red X axis until the hand extends just beyond the character model's hand. You will notice that the right forearm scales simultaneously to the same proportions as the left forearm.
C. Add the Physique Modifier
Having positioned and scaled the biped neatly inside the character model it is time to add the Physique modifier. The Physique modifier attaches the character model to the biped and describes how the character mesh will deform. The Physique modifier controls mesh deformation using a number of parameters. We are primarily interested in the Envelope parameter. There is an Envelope parameter for each object of the biped. The Envelope parameter describes which vertices are associated with which bipedal object.
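Conceptually, an envelope is a volume around a bone: vertices inside the volume are associated with that bone. The following Python sketch illustrates the idea with a simple fixed-radius capsule; Physique's actual envelopes add inner and outer bounds with falloff weighting, so treat this as a conceptual model only.

    # Conceptual envelope: vertices within a given radius of a bone
    # (a line segment) are associated with that bone.
    import math

    def point_segment_distance(p, a, b):
        """Distance from vertex p to the bone segment a-b (3D tuples)."""
        ab = [b[i] - a[i] for i in range(3)]
        ap = [p[i] - a[i] for i in range(3)]
        denom = sum(c * c for c in ab)
        t = 0.0 if denom == 0 else max(0.0, min(1.0,
            sum(ap[i] * ab[i] for i in range(3)) / denom))
        closest = [a[i] + t * ab[i] for i in range(3)]
        return math.dist(p, closest)

    def vertices_in_envelope(vertices, bone_start, bone_end, radius):
        return [i for i, v in enumerate(vertices)
                if point_segment_distance(v, bone_start, bone_end) <= radius]

    # Forearm bone from elbow (0,0,0) to wrist (3,0,0), radius 0.5:
    verts = [(1.0, 0.2, 0.0), (2.0, 0.9, 0.0), (3.2, 0.1, 0.0)]
    print(vertices_in_envelope(verts, (0, 0, 0), (3, 0, 0), 0.5))  # -> [0, 2]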
1. Select the character model and add the Physique modifier from the modifier
panel.
2. Select Attach to Node:
Figure 1: Physique modifier with Attach to Node selected.
3. Select Bip01. The Physique Initialise window is displayed:
Figure 2: Physique Initialisation. The Physique Initialisation window provides
a number of parameters for describing how a mesh will deform with the biped.
For example, Joint Intersections describes how a mesh will overlap itself at the elbow or knee joints. These parameters are only useful if you can anticipate how geometry will deform prior to initialisation, which can only be attained through experience. Fortunately, they can also be modified after initialisation using the Modify panel.
4. Select Initialise.
Note: If you drag the biped’s hand the character’s arm will also move.
However, it is very likely that not all the arm’s vertices move with the hand.
This is because envelopes associated with the arm do not encapsulate all
adjacent vertices.
5. Select the character model and under the Modify tab select Physique >
Envelope.
6. Select the left forearm:
Figure 3: Envelope parameter. You will notice a red lattice appears about the left forearm. This visually defines the vertices associated with the left forearm by the envelope. The envelope needs to be scaled up to include all vertices. This process is trial and error: simply scale the envelope and then move the biped's hand to ensure all vertices move as expected. This process needs to be repeated for all bipedal objects that, when moved, do not move their associated vertices. This process is often referred to as 'skinning'. To achieve good results a reasonable amount of time has to be invested. Further, manipulation of other Physique parameters may be required, for example Bulge, which describes how the mesh bulges as a consequence of being deformed.
7. Select Scale Uniform and scale the envelope about all axes. This is achieved by dragging the inner yellow portion of the gizmo. Ensure you do not include vertices that should be associated with nearby bipedal objects. For example, when scaling an envelope of the left thigh ensure it does not include vertices on the right thigh. Inclusion is indicated by a vertex turning purple.
Figure 4: Scaling envelope to include all adjacent vertices. The application of
the Physique modifier can be considered complete when, at the very least, all
envelopes encapsulate all adjacent vertices. The process can be further refined
by manipulating other Physique parameters as described above. However, the
level of refinement achievable is a function of the complexity of the character
mesh. We will yield little extra visual fidelity by manipulating other Physique
parameters because our character model is made of few polygons;
deformations will always appear angular. If, however, you model a complex
character mesh then it may be to your advantage to manipulate both the Bulge
and Tendon parameters to enhance mesh deformation. This will increase the
visual fidelity of the animation.
8. Repeat this procedure for all bipedal objects. If a bipedal object is moved and
vertices you expect to move do not, then refine the associated envelope.
9. When all the envelopes have been applied correctly select Bip01 and de-select
Figure Mode.
10. The biped, with the character model, jumps back to its initial animated pose.
Figure 5: Biped with Figure Mode de-selected and character model with the Physique modifier applied.
11. Press play. You should see the character model animate successfully.
Figure 6: Frame 2 of the animated character model with the Physique modifier
applied successfully.
12. You may notice that not all vertices move as expected when animated. This is
because they have not been encapsulated by their associated envelope. If this
happens do the following:
a. Put the biped in Figure Mode by selecting Bip01 > Motion > Figure Mode.
b. Select the character model and under the Modify tab select Physique >
Envelope.
c. Adjust the envelopes where vertices have not been included.
d. De-select Figure Mode by selecting Bip01 > Motion > Figure Mode.
e. Press Play and test that the changes have corrected the problem.
f. Repeat this process until all vertices animate as expected.
g. See the max file ‘motioncapture’ associated with this tutorial for a complete
example.
Lip-sync
Lip-sync is a technique involving synchronising a character’s mouth movement to an
audio recording of a spoken dialogue. When a person speaks the shape of their mouth
is largely defined by the sound of the spoken word. The sounds that define a spoken
word are known as phonemes.
To synchronise a character’s mouth movement to an audio recording of a spoken
dialogue we model phonetic targets. A phonetic target is a character model including
the shape of its mouth for a given phoneme. The number of phonetic targets depends
on the number of phonemes. Autodesk's 3D Max 8 identifies 9 phoneme groups, including the mouth being closed or 'at rest' (a lookup-table sketch follows the list):
1. A, I
2. E
3. F, V
4. C, D, G, J, K, N, S, T, Y, Z
5. L, T
6. O
7. U
8. W, Q
9. M, B, P - This target can be the same shape as the "at rest" base object.
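The grouping above is effectively a lookup table from letter-sound to phonetic target. The Python sketch below makes that explicit; the target names follow the naming convention used later in this tutorial and are otherwise assumptions.

    # Each phonetic target covers a group of letter-sounds; finding the
    # target for a sound is then a dictionary lookup. (Target names
    # follow this tutorial's convention; adapt them to your own scene.)
    PHONETIC_TARGETS = {
        "AI": ["A", "I"],
        "E": ["E"],
        "F.V": ["F", "V"],
        "CDetc.": ["C", "D", "G", "J", "K", "N", "S", "T", "Y", "Z"],
        "LT": ["L", "T"],
        "O": ["O"],
        "U": ["U"],
        "WQ": ["W", "Q"],
        "REST (MBP)": ["M", "B", "P"],   # same shape as the 'at rest' base
    }

    # Invert the table. Note 'T' appears in two groups in the list
    # above, so the later entry ("LT") wins in this sketch.
    SOUND_TO_TARGET = {sound: target
                       for target, sounds in PHONETIC_TARGETS.items()
                       for sound in sounds}

    print(SOUND_TO_TARGET["B"])   # -> REST (MBP)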
During this exercise we will model 9 phonetic targets, identified above, and employ
the Morpher modifier to facilitate animating between phonetic targets. A model of a
character's head has been provided to facilitate this tutorial (lipsyncStart.max, created by Chris Padmore, an undergraduate at the University of Brighton). It is also
recommended that you situate a mirror close to your monitor. The mirror will help
you see the shape of your mouth when you sound a phoneme. This can be reflected in
your model.
Creating Phonetic Targets
During this step we will make 8 copies of the character head provided in the file
lipsyncStart.max. We will have a total of 9 heads. Each head will represent a phonetic
target as defined on the previous page. For each phonetic target we will shape the
polygons around the character’s mouth.
1. Open lipsyncStart.max; you will see the character head named 'REST (MBP)'. This represents phoneme 9.
2. Right click over the character head and select Clone from the quad menu.
3. From the Clone Options dialog select Copy and name it 'AI' (phoneme 1):
Figure 1: Clone Options. We create a copy because we want to shape each
clone’s mouth differently. If we create an instance the changes made in one
clone will be automatically reflected in the other.
4. Click OK.
5. Hide the character head 'REST (MBP)'.
Note: Work in the Front Viewport with the clone 'AI' selected and Polygon selected in the Modify panel:
6. Select the polygons that make up the top lip and cheek:
Figure 2: Front view of character head with the top lip and cheek polygons
selected.
7. Working in the Left Viewport drag the selected polygons up along the y axis
roughly 0.5 units.
Figure 3: Left view of the character head with the top lip and cheek polygons selected. These have been positioned up the y axis by roughly 0.5 units.
8. Working in the Front Viewport select the polygons that make up the bottom
lip and the chin.
Figure 4: Front view of character head with the bottom lip and chin polygons
selected.
9. Working in the Left Viewport drag the selected polygons down along the y
axis roughly 0.5 units.
Figure 5: Left view of character head with the bottom lip and chin polygons
selected. These have been positioned down the y axis roughly 0.5 units.
10. Unhide the character head named 'REST (MBP)'.
11. Drag the character head named ‘AI’ to the right of it.
Figure 6: The Perspective Viewport showing both character heads, REST (MBP) and AI. These two heads represent the shape of the mouth when sounding the phonemes M, B & P and A & I respectively.
Note: The goal of this activity is to best represent the shape of the mouth when
sounding a given phoneme. The shape of the mouth representing phonemes A
& I can be refined. You may decide that the selected polygons can be rotated
illustrating curvature of the mouth. This activity is not an exact science. It is a
craft and the results are as good as the time, effort and accuracy you afford.
12. Repeat steps 2 to 9 for the remaining phonetic targets. Ensure you name each
phonetic target with a corresponding phonetic name from the above phonetic
list e.g. ‘E’, ‘F.V’ etc.
Note: How you transform the mouth polygons for the remaining phonemes is
based on your observations of how you see the mouth is shaped for each
phonetic sound. Expect this process to take at least two hours.
Figure 7: 9 phonetic targets. These can be found in the file lipsyncEnd.max.
Audio Dialog
This step describes how to import an audio file. An audio file has been provided to
facilitate this tutorial (lipsync.wav). The audio file contains the spoken dialog that we will synchronise our phonetic targets to later in the exercise.
1. Select Graph Editors > Track View – Curve Editor:
Figure 1: Track View – Curve Editor. The Sound channel is highlighted in yellow.
2. Right click over the Sound channel and select Properties from the quad menu:
Figure 2: Sound Options window.
3. Select Choose Sound and browse for the file named ‘lipsync.wav’ provided
with this tutorial and click OK.
Figure 3: Track View – Curve Editor including the audio file ‘lipsync.wav’.
4. Close the Track View – Curve Editor. You will notice that the audio file now
appears underneath the time line at the bottom of the screen.
Figure 4: The Perspective View showing all the phonetic targets. You will also
notice the audio file underneath the time line. This is a very important view of the audio file as it facilitates synchronising phonetic targets with the spoken dialog contained within the audio file.
Morpher Modifier
During this step we will add the Morpher modifier. It allows us to change the shape of
a mesh over time. It works by identifying a base object and target objects. Our base
object will be the character head named ‘REST (MBP)’. Our target objects will be the
remaining character heads (phonetic targets).
Note: It is essential that the base object has exactly the same number of vertices or
polygons as the target objects. When we created the phonetic targets we made a clone.
Further, we only modified polygons; we did not create or delete any. If the base object has a different number of polygons from a target object then the Morpher modifier simply will not work. It will not throw an error identifying that there is a problem.
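Because Morpher fails silently, it is worth checking vertex counts before picking targets. The Python sketch below illustrates such a pre-flight check; the counts shown are hypothetical, so read the real figures from Max (e.g. the Summary Info dialog) and substitute them.

    # Pre-flight check for the Morpher modifier: every target must have
    # exactly the same vertex count as the base object. The counts below
    # are hypothetical; read the real figures from Max and fill them in.
    counts = {
        "REST (MBP)": 824,   # base object
        "AI": 824,
        "E": 824,
        "F.V": 823,          # deliberate mismatch, for illustration
    }

    base = counts["REST (MBP)"]
    for name, n in counts.items():
        if n != base:
            print(f"'{name}': {n} vertices, base has {base} - Morpher will fail silently")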
1. Within the Front Viewport select the character head named 'REST (MBP)'. This represents our base object and therefore the object to which we will add the Morpher modifier.
2. Within the Modify tab select Morpher from the modifier list.
Figure 1: Morpher modifier. While it boasts a notable number of parameters
we are primarily interested in the Channel List and the associated spinners.
3. Under Channel List ensure the first ‘empty’ channel is selected.
4. Select Pick Object from Scene from under Channel Parameters.
5. Pick the character head named 'CDetc.'.
6. Repeat steps 3 to 5 until all character heads have been given a channel with
the Morpher modifier.
Figure 2: The Morpher modifier associated with the character head named 'REST (MBP)'. This represents the base object for the Morpher modifier and the 'rest' phonetic target. You will notice that the remaining character heads have been given a channel. The order, or hierarchy, in which they appear in the channel list is not important. However, the name of each character head is (as stated in Creating Phonetic Targets, step 12): the name of a character head represents the phonetic target and thus the associated sound. This is essential for the remaining part of this tutorial, Synchronise Phonetic Targets.
Synchronise Phonetic Targets
During this final step of the tutorial we will synchronise our phonetic targets, using
the Morpher modifier, with the spoken dialog in the audio file. Simple key frame
animation is employed. Each channel within the Morpher modifier has a spinner. The
spinners determine the extent the base target will be ‘morphed’ into a phonetic target.
If a spinner has a value of '0' then no morphing will take place. If a spinner has a value of '100' then the base target is morphed completely into the phonetic target.
The spinner values therefore represent percentages. If you decide that the lip sync will
look better with the base target being partially morphed then a spinner can be set to a
percentage value between 0 and 100.
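Arithmetically, a spinner is a weight in a linear blend of vertex positions: result = base + sum of (weight/100) x (target - base) over the channels. The Python sketch below applies that formula to toy data; it is a conceptual model of morphing, not Max's internal code.

    # Morph-target blending: each spinner is a weight (0-100) applied to
    # the offset between a target's vertices and the base's vertices:
    #   result = base + sum(weight / 100 * (target - base))
    def blend(base, targets, spinners):
        """base: list of (x, y, z); targets and spinners: dicts keyed by channel."""
        result = [list(v) for v in base]
        for name, weight in spinners.items():
            for i, (bv, tv) in enumerate(zip(base, targets[name])):
                for axis in range(3):
                    result[i][axis] += weight / 100.0 * (tv[axis] - bv[axis])
        return result

    # Toy data: a one-vertex 'mouth'. AI at 25 opens it a quarter of the way:
    base = [(0.0, 0.0, 0.0)]
    targets = {"AI": [(0.0, 0.5, 0.0)]}       # mouth fully open for A/I
    print(blend(base, targets, {"AI": 25}))   # -> [[0.0, 0.125, 0.0]]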
Key frames are used to capture the value of a spinner at any given point in time. The
phonetic target and the value of its associated spinner are determined by the spoken
dialog in the audio file. Using the Time Ruler, you can ‘scrub’ through the timeline to
listen to the phonetic sounds of the spoken dialog. Scrub is a term used in the music
recording industry. It is the process of manually dragging the play head of an audio
device. This allows you to accurately pin-point the position, in time, of a specific
sound. Each time you identify a phonetic sound you set the spinner’s value of the
corresponding phonetic target. Each time you set the value of a spinner you create a
key frame. Key frames store the value of the spinners throughout the duration of the
audio file.
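Under the hood each channel's key frames are just (frame, value) pairs, and the value at any other frame is interpolated between them. The Python sketch below shows that evaluation using linear interpolation (Max's default tangents are smoother); the frames 27 and 29 match the steps that follow.

    # A channel's key frames are (frame, value) pairs; values between
    # keys are interpolated (linearly here; Max's default is smoother).
    def spinner_at(keys, frame):
        keys = sorted(keys)
        if frame <= keys[0][0]:
            return keys[0][1]
        if frame >= keys[-1][0]:
            return keys[-1][1]
        for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
            if f0 <= frame <= f1:
                return v0 + (v1 - v0) * (frame - f0) / (f1 - f0)

    # AI channel keyed at 0 on frame 27 and 25 on frame 29, as in the
    # steps below: the mouth opens over the two intervening frames.
    ai_keys = [(27, 0.0), (29, 25.0)]
    print(spinner_at(ai_keys, 28))   # -> 12.5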
1. Working in the Front Viewport, ensure the base target (REST (MBP)) and the
Modify panel are selected.
2. Select Auto Key; the Front Viewport will show a red border indicating that we
are in animation mode.
Note: Any changes made to Morpher parameters will be automatically
captured in a key frame.
3. Scrub the time ruler to frame 27; just before the audio file plays.
4. Select Set Keys; this captures a value of '0' for the Morpher channel spinners.
5. Scrub the time ruler to frame 29.
6. Set the AI spinner value to 25; you will notice that the base target (REST (MBP)) morphs into the phonetic target AI by a factor of 25%.
Figure 1: Front Viewport showing the target object, REST (MBP) in
animation mode. The time ruler is positioned over frame 27. The channel
spinners are all set to zero.
Figure 2: Front Viewport showing the target object, REST (MBP) in
animation mode. The time ruler is positioned over frame 29. The channel
spinner for phonetic target AI is set to 25. You will notice the base target
mouth has morphed to 25% of the phonetic target AI. If you scrub the play
head between frames 27 and 29 you will see the morph animated i.e. the
mouth opens.
7. Scrub through the time line stopping the time ruler at the beginning of each
phonetic sound.
8. Set the spinner of the corresponding phonetic target to an appropriate value; it is likely you will find that a value between 25% and 50% is reasonable.
Note: Each time you set a value for a spinner it is automatically captured in a
key frame. Consequently, you will need to set the value of the spinner in the preceding key frame to zero (see lipsyncEnd.max). The value you set a spinner to is a function of how you modelled your phonetic targets in the first
instance. You may decide to modify a phonetic target to better represent its
associated sound. Further, a combination of phonetic targets might also yield
better animation. This process, like modelling phonetic targets, is not an exact
science. It is a craft that may require a number of iterations to yield the best
results.
9. This tutorial is complete when you have set key frames, thus spinner values,
for all the phonetic sounds in the spoken dialog.
Note: Lip-sync can be greatly enhanced if the phonetic targets are extended
with facial expression targets. For example, working on the character head named 'REST (MBP)', you could modify the facial expression to indicate surprise by raising the eyebrows or anger by lowering them. New facial
expression targets can be assigned to the Morpher modifier and key framed in
the same way as the phonetic targets.
Figure 3: Front Viewport showing the target object, REST (MBP) in
animation mode. All the key frames in the time line represent spinner values
for phonetic targets. Please refer to 'lipsyncEnd.max' for the completed tutorial.