Clinical Biomechanics Award 1996
Three-Dimensional Determination of Femoral-Tibial Contact Positions Under “In Vivo” Conditions Using Fluoroscopy†
Authors:
William A. Hoff, Ph.D.1
Richard D. Komistek, Ph.D.2
Douglas A. Dennis, M.D.2
Stefan M. Gabriel, Ph.D.3
Scott A. Walker 1,2
1. Colorado School of Mines, Engineering Division, Golden, CO, USA
2. Rose Musculoskeletal Research Laboratory, 2425 S. Colorado Blvd., Suite 280, Denver, CO, USA
3. Johnson & Johnson Professional, Inc., 325 Paramount Dr., Raynham, MA, USA
Correspondence and reprint requests to: R.D. Komistek, Rose Biomedical Research, 2425 South Colorado
Boulevard, Suite 280, Denver, CO 80222, USA.
† This work was presented at the European Society of Biomechanics, Leuven, August 28-31, 1996, and received the Clinical Biomechanics Award.
Abstract
Objective. A method has been developed to accurately measure three-dimensional (3-D) femoral-tibial contact
positions of artificial knee implants “in vivo” from X-ray fluoroscopy images using interactive 3-D computer vision
algorithms.
Design. A computerized graphical (CAD) model of an implant component is displayed as an overlay on the
original X-ray image. An image matching algorithm matches the silhouette of the implant component against a
library of images, in order to estimate the position and orientation (pose) of the component. The operator further
adjusts the pose of the graphical model to improve the accuracy of the match.
Background. Previous methods for “in vivo” measurement of joint kinematics make only indirect measurements, require invasive procedures such as markers or pins, or make simplifying assumptions about imaging geometry which can reduce the accuracy of the resulting measurements.
Methods. Fluoroscopic videos are taken of implanted knees in subjects performing weight-bearing motion. Images
from the videos are digitized and stored on a computer workstation. Using computerized model matching, the
relative pose of the two knee implant components can be determined in each image. The resulting information can
be used to determine where the two components are contacting, the area of the contact region, liftoff angle, and
other kinematic data.
Results. Accuracy tests done on simulated imagery and “in vitro” real imagery show that the pose estimation
method is accurate to less than 0.5 mm of error (RMS) for translations parallel to the image plane. Orientation
error is less than or equal to 0.35° about any axis. Errors are larger for translations perpendicular to the image
plane (up to 2.25 mm). In a clinical study, the method was used to measure “in vivo” contact points, and
characterize the kinematic patterns of two different knee implant designs.
Conclusions. The ability to accurately measure knee kinematics “in vivo” is critical for the understanding of the
behavior of knee implant designs and the ultimate development of new, longer lasting implants.
Relevance
This work shows that it is possible to accurately measure the three-dimensional position and orientation (pose) of
artificial knee implants “in vivo” from X-ray fluoroscopy images using interactive 3-D computer graphics. The
method can be applied to any joint when accurate CAD models are available. The resulting data can be used to
characterize the kinematics of current knee implant designs.
Key words: fluoroscopy, pose estimation, kinematics, artificial knee implants, total knee replacements, model
based object recognition, interactive graphics
Introduction
Motivation
More than 100 types of arthritis now afflict millions of Americans, often resulting in progressive joint destruction
in which the articular cartilage (joint cushion) is worn away, causing friction between the eburnated (uncovered)
bone ends. This painful and crippling condition frequently requires total joint replacement using implants with
polyethylene inserts (e.g., see Figure 1).
Although artificial knee joints are expected to last over 15 years, research indicates that some implants have
prematurely failed within 10 years [1]. Failure in the early years of total knee replacements was most commonly
due to aseptic loosening, as a result of component misalignment, soft tissue imbalance, or use of prostheses with
constrained degrees of freedom [2-7]. With improved implant design and surgical technique, the occurrence of these modes of failure has been greatly reduced. More recently, failures subsequent to catastrophic polyethylene
wear have been observed, thought to have been caused by less conforming articular geometries [1, 8-12], thin or
poor quality polyethylene [1, 8, 9, 12], or disturbed knee kinematics [12]. It has been shown that the kinematics of artificial knees differ from those of normal knees, involving excessive sliding and rotational motions which may lead to high shear stresses at the joint interface. If the kinematics of artificial knee joints can be measured “in
vivo”, this information could be used to help design implants that better replicate normal knee kinematics. It is
important to perform these measurements under weightbearing conditions to accurately measure knee kinematics
“in vivo”.
Previous work
Most previous methods of measuring “in vivo” knee kinematics have used external linkages and skin markers that
are susceptible to error, due to undesired motion between markers and the underlying bone [13]. Some researchers
have achieved accurate measurements by attaching external markers directly to the bone using trans-cutaneous pins [14-17], but this procedure is extremely invasive and risks infection. Using the bone pin technique,
errors due to skin motion of up to 10 mm of translation and 8 degrees of rotation have been observed [16]. Other
methods have been recently proposed that use special marker-attachment systems to clamp non-invasively to the
underlying bone, and thus reduce relative skin-bone movement [18].
X-ray imaging avoids the problem of skin motion error, and is relatively safe, non-invasive, and accurate. Some
researchers have used roentgen stereophotogrammetric analysis (RSA) to determine knee kinematics [19]. In this
technique, small tantalum beads were implanted into the tibia and femur, and the bead positions were measured in
subsequent X-ray images. However, this work had the disadvantage that it required the implantation of beads.
Also, still X-ray images were used, which limited the analysis to quasi-static conditions.
To analyze dynamic motion “in vivo”, X-ray fluoroscopy can be used. In fluoroscopy, X-rays are emitted from a
tube, pass through the knee and strike a fluorescent screen where the images are intensified and recorded via
videotape [20]. The result is a perspective projection of the knee, recorded as a continuous series of images (Figure
2). Although a continuous X-ray “movie” is recorded, one can go back and analyze individual frames on the
videotape.
Several researchers have used X-ray fluoroscopy to measure knee kinematics. Stiehl et al. [21]
analyzed still photographs of the fluoroscopic video to measure rotations and translations of the knee joint
members. They measured translations and rotations within the plane of the image (two translational degrees of
freedom (DOF) and one rotational DOF for each joint member). However, the actual motion of the components
also includes rotations and translations out of the plane of the image.
Since knee prosthetic components have a known geometrical shape and the fluoroscopic image can be
approximated by a perspective projection, it is possible to recover all six DOFs — three translational and three rotational — of each component. In the field of computer vision, the problem of estimating the pose of a known
rigid object from 2-D images has been widely studied. In our application, the primary feature that can be extracted
from the image is the extremal contour, or silhouette, of the object. There have been a number of algorithms
described in the literature that make use of the silhouette.
Lowe [22] describes an iterative least-squares algorithm for aligning the projected extreme contours of the model
with edges found in the image. However, this technique assumes a polyhedral model and would be difficult to
apply to the knee implants, which have smooth, complex surfaces. Kriegman and Ponce [23] used rational surface
patches, implicit algebraic equations, and elimination theory to obtain analytic expressions for the projected
contours. However, this method is restricted to objects with only a few patches, and would be difficult to apply to
knee components, which have highly complex surfaces. Lavallee et al. [24] describe an algorithm which
minimizes the 3-D distances between the rays (corresponding to the points on the contour) and the closest point on
the surface of the object. A 3-D distance map is pre-computed that stores the distance from any point in the
neighborhood of the object to the closest point on the surface. Lavallee developed an octree-spline technique to
speed up the construction of the distance map, which otherwise would be prohibitively slow.
Another, perhaps simpler, approach for pose estimation is to use a template matching technique. If the complete
silhouette of the object is visible, an algorithm can match the entire silhouette of the object with a pre-computed
template. For two-dimensional applications such as character recognition, the object and the template differ only
by a translation and a rotation within the plane of the image. For three-dimensional applications such as automatic
target recognition, the silhouette of the object changes shape as it rotates out of the image plane. A solution to this
problem is to pre-calculate and store a complete set of templates of the object, representing the object over a range
of possible orientations.
Banks and Hodge [25] used this approach to measure the full six degree of freedom motion of knee prostheses by
matching the projected silhouette contour of the prosthesis against a library of shapes representing the contour of
the object over a range of possible orientations. CAD models of the components were developed and a library of silhouette, or template, images was created. These silhouette images were matched against the extracted silhouettes from the fluoroscopy images. The pose of each component, as well as their relative pose, was then derived. They
measured the accuracy of the technique by comparing it to known translations and rotations of prostheses “in
vitro”. They report approximately one degree of rotational accuracy and 0.5 mm of translational accuracy parallel
to the image plane. Advantages of their work over previously reported work are that (a) the full six-DOF poses of the knee components can be measured “in vivo”, and (b) no invasive procedures or special fixturing (such as implanted beads) are necessary.
In the above-referenced work, a simplified perspective model was used for the sensor imaging model to reduce the complexity of the matching process. Specifically, the object is assumed to be a certain fixed distance from the X-ray source for the purpose of predicting the shape of the projected silhouette of the object in the image. The overall silhouette is then scaled by a magnification factor to account for the change in size as the object moves toward or away from the X-ray source. The authors state that “this simplifying assumption is equivalent to the assertion that the shape of an object’s silhouette is constant for a small range of translations normal to the image plane” [25].
The nature of the simplified perspective projection model can be better explained by comparing it to the ideal
perspective projection model. The ideal perspective projection model treats the fluoroscope sensor as consisting of
an X-ray point source and a planar phosphor screen upon which the image is formed. There are additional
components which intensify the image and convert it to a video signal, but for the moment we will concentrate on
the geometrical relationship between the object points and image points. As shown in Figure 5, X-rays are emitted
by a source at point F, pass through the object, and strike the image plane. We take the coordinate system of the
fluoroscope sensor to be centered at F, with its Z axis perpendicular to the image plane. The point C where the Z
axis intersects the image plane is called the principal point [26]. If an object point G has coordinates (X,Y,Z)
relative to the coordinate system of the fluoroscope, then it projects to a point $Q(u_{ideal}, v_{ideal})$ on the image plane, according to the following equations:
$$u_{ideal} = D\,\frac{X}{Z}, \qquad v_{ideal} = D\,\frac{Y}{Z}$$
where D is the distance from the X-ray source to the image plane (in millimeters) and (u,v) are the coordinates of
the projected point on the image plane (in millimeters). Alternatively, we can express the image point coordinates in terms of the location of the origin of the object (e.g., its centroid) and the relative position of the object point from the object origin. Assume that the object origin is located at coordinates $(X_{obj}, Y_{obj}, Z_{obj})$ relative to the fluoroscope, and point G is located $(\Delta X, \Delta Y, \Delta Z)$ relative to the object origin, so that $X = X_{obj} + \Delta X$, $Y = Y_{obj} + \Delta Y$, $Z = Z_{obj} + \Delta Z$. Then the image coordinates of point G are:
$$u_{ideal} = D\,\frac{X_{obj} + \Delta X}{Z_{obj} + \Delta Z}, \qquad v_{ideal} = D\,\frac{Y_{obj} + \Delta Y}{Z_{obj} + \Delta Z}$$
assuming no rotation of the object. In the simplified perspective projection model, the object is first treated as if it were located at a nominal distance $Z_{nom}$, in order to compute the locations of the projected image points. Then the image coordinates are scaled by an overall magnification factor given by $(Z_{nom} / Z_{obj})$, where $(X_{obj}, Y_{obj}, Z_{obj})$ are the actual coordinates of the object center:
$$u_{simplified} = \left(\frac{Z_{nom}}{Z_{obj}}\right) D\,\frac{X_{obj} + \Delta X}{Z_{nom} + \Delta Z}, \qquad v_{simplified} = \left(\frac{Z_{nom}}{Z_{obj}}\right) D\,\frac{Y_{obj} + \Delta Y}{Z_{nom} + \Delta Z}$$
Thus, there is an error in the image coordinates predicted by the simplified perspective model if $\Delta Z \neq 0$ (which is the case for any object points that do not have the same Z as the object origin). This error is small if the object’s dimensions are small compared to its distance from the fluoroscope, so that $\Delta Z \ll Z_{obj}$. However, Banks and Hodge [25] go on to state that in some cases, the simplified perspective model is inadequate to account for the
actual changes in silhouette shape due to certain motions, and can cause the system to make large matching errors
or “blunders”. Our own work also uses the same simplified perspective imaging model in the initial step of the
method, but then goes on to use the ideal perspective imaging model in the final step of the method.
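To make the size of this error concrete, the following sketch (with illustrative numbers of our own choosing, not values from the paper) evaluates both projection models for a point offset in depth from the object origin; as expected, the error vanishes when $\Delta Z = 0$ and grows with the offset:

```python
# Sketch comparing the ideal and simplified perspective projections.
# All numeric values are illustrative assumptions, not from the paper.

def u_ideal(D, X_obj, Z_obj, dX, dZ):
    # Ideal perspective: u = D * (X_obj + dX) / (Z_obj + dZ)
    return D * (X_obj + dX) / (Z_obj + dZ)

def u_simplified(D, X_obj, Z_obj, dX, dZ, Z_nom):
    # Project as if at the nominal depth, then rescale by Z_nom / Z_obj
    return (Z_nom / Z_obj) * D * (X_obj + dX) / (Z_nom + dZ)

D = 1000.0                  # source-to-image-plane distance (mm)
Z_nom = 700.0               # nominal depth at which the library is rendered (mm)
X_obj, Z_obj = 50.0, 760.0  # actual object origin (mm)

for dZ in (0.0, 10.0, 25.0):  # depth offset of a point within the implant
    err = u_ideal(D, X_obj, Z_obj, 30.0, dZ) \
        - u_simplified(D, X_obj, Z_obj, 30.0, dZ, Z_nom)
    print(f"dZ = {dZ:4.1f} mm -> u error = {err:+.3f} mm")
```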
A separate problem, common to any method which relies on image silhouettes as the source of data, is that certain
orientations can result in relatively ambiguous silhouettes. This may also cause the system to choose an incorrect
pose. Our own work incorporates an interactive step to allow the human analyst to view the resulting graphical
models and correct any incorrect pose caused by an ambiguous silhouette.
Overview and Contribution of this work
This paper describes a new technique to determine the relative pose (position and orientation) of the two knee
implant components (tibial and femoral) with respect to each other, under “in vivo” conditions, from X-ray
fluoroscopy images. We use an interactive model-based computer vision technique. Figure 3 shows an image of
the CAD models overlaid on top of the original X-ray fluoroscopy image. Our technique has been used to analyze
the kinematics of several different knee types and to quantify the effect of several kinematic phenomena, including
sliding and edge lift-off [21, 27-29].
The method reported in this paper is similar to that of Banks and Hodge [25], but with several new contributions
that improve on its accuracy:
1. Full perspective imaging model. Our method uses a simplified perspective imaging model, but only in order to automatically calculate an initial estimate for the pose of the implant component. Final pose estimation is done using a full perspective model, to achieve improved accuracy.
2. Interactive system. Our method is fully interactive to allow the human analyst to supervise the process, eliminate any ambiguities, and make final adjustments on the pose parameters to improve the accuracy.
3. Multiple checks. Our method contains multiple checks to avoid any possibility of mismatches or gross error. We simultaneously view the component models from three different viewpoints (side, front, and top) to verify that the relative pose is consistent between the two models. We also examine the images before and after to verify that the pose is consistent with the motion of the implants over time.
Accuracy tests of our method have been performed using computer simulations and also “in vitro” imagery, and show rotational accuracy better than 0.35 degrees about any axis and translational accuracy better than 0.5 mm parallel to the image plane.
In addition, we report the results of a study in which our method was used to analyze and compare the kinematics
of 10 subjects with implanted knees. The results show a distinct difference in kinematics between two knee
designs.
Methods
The goal of our method is to determine the relative position and orientation (pose) between the femoral implant
component and the tibial implant component. In order to do this, we first determine the pose of the femoral
component with respect to the fluoroscope sensor, and the tibial component with respect to the fluoroscope sensor.
The method has the following steps: (1) image acquisition, (2) initial pose estimation, (3) final pose estimation,
and (4) determination of relative pose. These steps are discussed in the following sections.
Image Acquisition
A videofluoroscopic system was used to examine subjects with implanted knees (Figure 4). Subjects flexed in the
sagittal plane and were imaged from the lateral direction. The video output was stored on 8mm videotape.
Selected images from the videotape were subsequently digitized (640 x 480 x 8 bits) using a Silicon Graphics
Indigo2 workstation equipped with a Galileo frame grabber board.
As described earlier, the image formation process performed by a fluoroscope sensor can be represented by a
perspective projection model [25], as shown in Figure 5. If an object point G has coordinates (X,Y,Z) relative to
the coordinate system of the fluoroscope, then it projects to a point Q(u,v) on the image plane, according to the
following equations:
$$u = D\,\frac{X}{Z}, \qquad v = D\,\frac{Y}{Z}$$
where D is the distance from the X-ray source to the image plane (mm) and (u,v) are the coordinates of the
projected point on the image plane (mm). The image that is formed on the image plane is then digitized to an
array of discrete picture elements, or pixels. The mapping function from image plane coordinates (u,v) to pixel
coordinates (x,y) is:
$$x = \frac{u}{s_x} + p_x, \qquad y = \frac{v}{s_y} + p_y$$
where $(p_x, p_y)$ is the location of the principal point in the digitized image and $s_x, s_y$ are the pixel spacings horizontally and vertically, respectively. The entire set of parameters of the perspective imaging model is $\{D, p_x, p_y, s_x, s_y\}$. These parameters were calibrated by imaging a rectilinear grid of metal beads with known spacing (Figure 6). In our work, we assumed $(p_x, p_y)$ was located at the center of the image, which does not significantly
influence the accuracy of 3D measurements as long as the object is close to the plane for which the system was
originally calibrated [30]. We also assumed no lens distortion, which we have found to be negligible for the
fluoroscope used in our work.
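A minimal sketch of this calibrated imaging model, with placeholder parameter values standing in for the calibrated quantities, is:

```python
import numpy as np

# Sketch of the calibrated fluoroscope imaging model: perspective
# projection followed by the image-plane-to-pixel mapping. The parameter
# values below are illustrative assumptions, not calibration results.
D = 1000.0               # principal distance, X-ray source to image plane (mm)
px, py = 320.0, 240.0    # principal point, assumed at the image center (pixels)
sx, sy = 0.6, 0.6        # horizontal/vertical pixel spacing (mm per pixel)

def project(points):
    """Map 3-D points (N, 3) in the fluoroscope frame to pixel coords (N, 2)."""
    X, Y, Z = np.asarray(points, dtype=float).T
    u = D * X / Z                          # image-plane coordinates (mm)
    v = D * Y / Z
    x = u / sx + px                        # pixel coordinates
    y = v / sy + py
    return np.stack([x, y], axis=-1)

print(project([[10.0, -5.0, 700.0]]))      # e.g. [[343.81, 228.10]]
```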
Initial Pose Estimation
After an image has been digitized, the next step is to determine an initial pose estimate for the femoral and tibial
knee components. In general, six parameters are necessary to define the pose of one rigid body relative to another: three for translation (e.g., X, Y, Z) and three for orientation (e.g., angles $\theta_X$, $\theta_Y$, $\theta_Z$).
The central idea in the process is that the pose of the object (i.e., a knee implant component) with respect to the
sensor can be determined from a single perspective image by measuring the size and shape of its projection in the
image. The approach is to match the silhouette of the object against a library of synthetic images of the object,
each rendered at a known position and orientation. These synthetic images can be created from exact geometric
computer models of the implant components. To reduce the size of the required library, a simplified perspective
model is used [25]. This model assumes that the shape of an object’s silhouette remains unchanged as the object is
moved towards or away from the imaging sensor. This is not strictly true because the fluoroscope is a true
perspective projection. However, it is a reasonable approximation which can give a good initial estimate of the
pose.
With the simplified perspective assumption, the shape of the object is independent of its distance from the imaging
sensor (although its size is dependent on the distance). Therefore, a library of images of the object can be created
to account for all possible observable shapes. The library of images consists of views of the object rendered at
different out-of-plane rotations, all rendered at a constant (nominal) distance from the sensor. Thus, the library is
only two dimensional rather than 6 dimensional. Essentially, the library images are templates that can be matched
against the input image of the object, in order to determine its out-of-plane rotation angles. Details about the
creation of the library are given in the next section.
Library Creation
The software modeling program AutoCAD™ was used to create and render (in perspective projection) 3-D models
of the implant components. The parameters for the perspective projection were set to be identical to those of the
fluoroscope imaging model. To generate the library images, the object was rotated at 1° increments about the x axis and at 1° increments about the y axis. The range of rotation was ±15 degrees about the x axis and ±20 degrees about the y axis, for a total of 31 × 41 = 1271 images in each library.
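The grid of library poses is straightforward to enumerate; the counts below follow the paper, while the ordering is our own assumption:

```python
# Sketch: the grid of out-of-plane rotations rendered into one library.
# The pose ordering is an assumption; the counts follow the paper.
poses = [(tx, ty) for tx in range(-15, 16) for ty in range(-20, 21)]
assert len(poses) == 31 * 41 == 1271
```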
At each orientation, the object was rendered as a binary (black on white) image, so that only the silhouette, and no internal detail, was visible. The resulting image was then normalized to put it into a standard, or “canonical”, form, in preparation for use in template matching. This was achieved by first uniformly scaling the image (using bilinear interpolation [31]) until the area of the silhouette reached a constant value (15000 pixels). The silhouette was then translated so that its centroid was located at the center of the image. The principal axes of the silhouette area were
then computed as the eigenvectors of the matrix [32]:
$$M = \begin{bmatrix} m_{20} & m_{11} \\ m_{11} & m_{02} \end{bmatrix}$$
where $m_{ij}$ is the $(i,j)$th central moment of the image of the silhouette region. The image was then rotated about its
center until the major axis of the silhouette (the one with the largest eigenvalue) was aligned with the horizontal
axis of the image. The amount of scaling and rotation was recorded, for later use by the pose estimation algorithm.
Finally, the library images were converted to binary form (one bit per pixel) and stacked to form a single multi-frame image. The size of a library for a single component was about 7 Mbytes of data. Figure 7 shows a portion of the library of images for a femoral component.
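A rough sketch of this canonization step is shown below, assuming the silhouette is available as a binary array `mask` and using `scipy.ndimage` for the resampling (the paper's own implementation ran under Khoros, described in the next section):

```python
import numpy as np
from scipy import ndimage

# Sketch of the canonization step (a scipy-based approximation, not the
# paper's implementation). `mask` is a binary silhouette image.
def canonize(mask, target_area=15000.0):
    mask = (np.asarray(mask) > 0).astype(float)

    # 1) Uniform scaling (bilinear) until the silhouette area is constant.
    scale = np.sqrt(target_area / mask.sum())
    mask = ndimage.zoom(mask, scale, order=1) > 0.5

    # 2) Translate the centroid to the image center.
    cy, cx = ndimage.center_of_mass(mask)
    h, w = mask.shape
    mask = ndimage.shift(mask.astype(float), (h / 2 - cy, w / 2 - cx), order=1) > 0.5

    # 3) Rotate so the major principal axis is horizontal. The axis comes
    #    from the eigenvectors of the central-moment matrix
    #    [[m20, m11], [m11, m02]]; its orientation is 0.5*atan2(2*m11, m20 - m02).
    ys, xs = np.nonzero(mask)
    xd, yd = xs - xs.mean(), ys - ys.mean()
    m20, m02, m11 = (xd ** 2).sum(), (yd ** 2).sum(), (xd * yd).sum()
    angle = 0.5 * np.arctan2(2.0 * m11, m20 - m02)
    mask = ndimage.rotate(mask.astype(float), np.degrees(angle),
                          reshape=False, order=1) > 0.5

    # The applied scale and rotation are recorded for later pose recovery.
    return mask, scale, angle
```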
The “canonization” process described above is functionally equivalent to the normalization process performed by
Banks and Hodge [25] to create the template library. Contours in their method are represented by Fourier
descriptors [32] instead of using explicit template images as in our method. This results in a more compact
representation of shape information, which in turn permits faster shape comparisons, at the expense of more
complex algorithms. However, the process performs the same steps of normalizing the contour position, size, and
in-plane rotation.
Template matching
Fluoroscopy images were analyzed using a public domain image processing software package called Khoros, from
Khoral Research Inc [33]. Khoros is extensible and many new programs were developed and integrated during the
course of this work. Figure 8 shows the visual programming workspace that performs the silhouette extraction and
pose estimation. Each box within the workspace performs an operation on an image (or images), and the lines
between the boxes show the transfer of data.
The process begins by inputting a fluoroscopy image to be analyzed. Figure 9 shows an input image that will be
used as an example to describe the method. The user manually extracts a rough region of interest, containing the component of interest, from the input fluoroscopy image. This is done to reduce the size of the image to be processed and to speed up the computation. The reduced region-of-interest image is passed through a median filter to reduce the effect of noise.
Next, the contour (or silhouette) of the implant component is extracted (Figure 10). Currently, this is done
manually, by having the user designate points around the boundary of the implant with the mouse. We have found
it difficult to reliably extract the contour automatically, due to the presence of nearby objects such as bone cement
that are nearly the same intensity as the implant. The resulting binary image is passed to a process called
“Canonize”, which automatically converts the silhouette image to a canonical form. As described earlier, the
canonization process centers the silhouette, scales it to a constant area, and rotates it so that its principal axis is
aligned with the horizontal axis. The resulting canonical image can be directly matched with the library.
A library is then chosen, based on clinical knowledge of the type of implant used in the patient, and which knee is
being analyzed (right or left). There are different libraries for each type of implant. Also, since our convention is
to always image knees from the lateral side, the orientation of the implant with respect to the fluoroscope is
different depending on whether it is a left knee or a right knee. Therefore, there are different libraries for an
implant depending on whether it is for a left or right knee.
The next step finds the best match of the input canonical image with a library image. This is done by systematically subtracting the input canonical image from each of the library images and generating a “score” for each, which is the number of unmatched pixels. Figure 11 shows the matching results for a particular image. The
black areas indicate the unmatched pixels. Ideally, since the input image and the library are both in canonical
form, there should be an exact match between them with no unmatched pixels. In practice, however, there are
unmatched pixels due to errors in contour extraction and the fact that the orientation of the implant component will
not exactly correspond with one of the orientations in the library (due to the 1° resolution of the library). The best
match is the library image with the smallest number of unmatched pixels.
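Since both the input and the templates are binary images, the scoring reduces to counting differing pixels; a minimal sketch, assuming NumPy arrays, is:

```python
import numpy as np

# Sketch: score every library template against the canonical input
# silhouette by counting unmatched pixels; the lowest score wins.
def best_match(canonical, library):
    """canonical: (H, W) binary image; library: (N, H, W) binary stack."""
    unmatched = np.logical_xor(library, canonical[None, :, :])
    scores = unmatched.sum(axis=(1, 2))      # one score per template
    best = int(np.argmin(scores))
    return best, int(scores[best])           # index maps to (theta_x, theta_y)
```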
Calculation of pose parameters
The library image with the best match directly determines the two out-of-plane rotation angles of the object ($\theta_X$, $\theta_Y$). The remaining degrees of freedom of the object are then found. The in-plane rotation angle $\theta_Z$ is determined by taking the difference between the input image’s in-plane rotation angle and the library image’s in-plane rotation angle:

$$\theta_Z = \theta_Z^{input} - \theta_Z^{library}$$
The Z position of the object is determined by dividing the scale of the fluoroscopy image by the scale of the library image and multiplying the result by the initial Z distance at which the library image was rendered:

$$Z = Z_{library} \cdot \left( \frac{s_{input}}{s_{library}} \right)$$
To determine the X,Y position of the object, we first compute the 2D image vector from the projected origin of the
object to its image centroid. In the library image, this vector is given by:
$$\left( c_x^{library},\; c_y^{library} \right) = \left( r_x^{library} - p_x^{library},\; r_y^{library} - p_y^{library} \right)$$

where $(r_x^{library}, r_y^{library})$ is the image location of the object centroid in the library image and $(p_x^{library}, p_y^{library})$ is the principal point in the library image. Next, the vector must be transformed to its representation in the input image,
by rotating it in the plane of this image by the angle $\theta_Z$, and then scaling:

$$c_x^{input} = \left[ c_x^{library} \cos(\theta_Z) - c_y^{library} \sin(\theta_Z) \right] \cdot \left( \frac{s_{library}}{s_{input}} \right)$$

$$c_y^{input} = \left[ c_x^{library} \sin(\theta_Z) + c_y^{library} \cos(\theta_Z) \right] \cdot \left( \frac{s_{library}}{s_{input}} \right)$$
The origin of the object in the input image is thus located at:
$$\left( q_x^{input},\; q_y^{input} \right) = \left( r_x^{input} - c_x^{input},\; r_y^{input} - c_y^{input} \right)$$

where $(r_x^{input}, r_y^{input})$ is the image location of the object centroid in the input image.
To calculate the (X,Y) location of
the object relative to the sensor’s coordinate frame, we use similar triangles to scale the 2D image vector by the
known distance to the object:
$$X = Z \cdot \frac{q_x^{input} - p_x^{input}}{D}, \qquad Y = Z \cdot \frac{q_y^{input} - p_y^{input}}{D}$$

where:

Z = object position (already calculated)
$(q_x^{input}, q_y^{input})$ = object origin location in the image
$(p_x^{input}, p_y^{input})$ = input image principal point
D = calibrated principal distance
Finally, we correct the $\theta_X$ and $\theta_Y$ rotation angles to take into account the effect of X,Y translation on the apparent rotation. Figure 12 shows how a translation parallel to the image plane can affect the apparent orientation of the object. The following equations are used to correct these angles:

$$\begin{bmatrix} r_x^{input} \\ r_y^{input} \end{bmatrix} = \begin{bmatrix} \cos(\theta_Z) & -\sin(\theta_Z) \\ \sin(\theta_Z) & \cos(\theta_Z) \end{bmatrix} \begin{bmatrix} q_x^{input} \\ q_y^{input} \end{bmatrix}$$

$$\theta_X' = \theta_X - \tan^{-1}\!\left( \frac{r_x^{input}}{D} \right), \qquad \theta_Y' = \theta_Y - \tan^{-1}\!\left( \frac{r_y^{input}}{D} \right)$$
The result is the full 6-DOF pose of the object $(X, Y, Z, \theta_X', \theta_Y', \theta_Z)$ relative to the sensor frame.
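Collecting the steps above, a compact sketch of the pose recovery from a library match might look as follows; the variable names mirror the equations, and the way the normalization records are bundled into arguments is our own assumption:

```python
import numpy as np

# Sketch of the pose-parameter calculation described above. The grouping of
# arguments is an assumption; the formulas follow the equations in the text.
def pose_from_match(theta_x_lib, theta_y_lib,        # out-of-plane angles of match
                    theta_z_in, theta_z_lib,         # in-plane canonization angles
                    s_in, s_lib, Z_lib,              # canonization scales, render depth
                    r_in, r_lib, p_in, p_lib, D):    # centroids, principal points
    theta_z = theta_z_in - theta_z_lib               # in-plane rotation
    Z = Z_lib * (s_in / s_lib)                       # depth from relative scale

    c_lib = np.asarray(r_lib) - np.asarray(p_lib)    # origin-to-centroid vector
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    R = np.array([[cz, -sz], [sz, cz]])
    c_in = (R @ c_lib) * (s_lib / s_in)              # rotate, then rescale
    q = np.asarray(r_in) - c_in                      # projected object origin

    X = Z * (q[0] - p_in[0]) / D                     # similar triangles
    Y = Z * (q[1] - p_in[1]) / D

    r = R @ q                                        # translation-induced tilt term
    theta_x = theta_x_lib - np.arctan2(r[0], D)      # corrected out-of-plane angles
    theta_y = theta_y_lib - np.arctan2(r[1], D)
    return X, Y, Z, theta_x, theta_y, theta_z
```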
Final Pose Estimation
As previously mentioned, inaccuracies in the pose can result from the use of the simplified perspective model, as
well as from the incorrect choice of a match in ambiguous situations. Therefore, a final pose estimation step is
performed to improve the accuracy. The approach for this step is to display the graphical model of the object as an
overlay onto the original image. If the pose is correct, the outline of the graphical model should match the image
contour exactly. If it is not correct, the user can adjust the pose of the object until its rendered projection matches
the image (Figure 13).
The program AutoCAD™ was used to perform the rendering of the objects. A full perspective model was used, with its parameters (focal length, principal point, and field of view) set to be identical to those of the fluoroscope.
Figure 14 shows the original image (upper left) and the rendered models overlaid on the original image (upper
right). The user can now make small adjustments to the pose parameters (typically between 0 and 1 mm, and between 0 and 1 degree) to make the rendered model fit the image more closely. The models can be simultaneously viewed from any simulated viewpoint, such as front (lower left) and side (lower right), which is useful for visualizing the phenomenon of condylar edge liftoff.
In certain images, it is possible to choose an incorrect match for a component. A match may be ambiguous because
components may have symmetrical shapes. For example, the silhouette for a negative rotation about the Y axis can
be very similar to the silhouette for a positive rotation of the same magnitude. These matching errors, or “blunders”, can result in a large error in pose.
Matching errors are easily detected by viewing the rendered models from other viewpoints. Figure 15 shows the
same example image where an incorrect pose has been chosen for the femoral component. The overlay is still a
good fit to the image (upper right), with only minor deviation between the contour of the overlay and the contour in
the image. However, when the models are viewed from the front (lower left) and side (lower right), it is obvious that there is an error: the two components are constrained by the ligaments of the knee and could not possibly
achieve the relative pose that is shown. Figure 16 shows the same example image where an incorrect pose has
been chosen for the tibial component. Viewing the models from the front (lower left) shows interference between
the two models – a physical impossibility.
Determination of Relative Pose
Once the poses of the femoral component and the tibial component with respect to the fluoroscope sensor have
been determined, the last step is to determine the relative pose between the components. The 6 DOF pose between
two rigid bodies A and B can be represented by a 4x4 homogeneous transformation matrix [34]:
$${}^{B}_{A}H = \begin{bmatrix} r_{11} & r_{12} & r_{13} & X \\ r_{21} & r_{22} & r_{23} & Y \\ r_{31} & r_{32} & r_{33} & Z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
where $(X, Y, Z)^T$ is the origin of body A in the coordinate system of B, and the $\{r_{ij}\}$ are the elements of a 3×3 rotation matrix given by:

$${}^{B}_{A}R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} = \begin{bmatrix} \cos(\theta_Z) & -\sin(\theta_Z) & 0 \\ \sin(\theta_Z) & \cos(\theta_Z) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\theta_X) & -\sin(\theta_X) \\ 0 & \sin(\theta_X) & \cos(\theta_X) \end{bmatrix} \begin{bmatrix} \cos(\theta_Y) & 0 & \sin(\theta_Y) \\ 0 & 1 & 0 \\ -\sin(\theta_Y) & 0 & \cos(\theta_Y) \end{bmatrix}$$
where $\theta_X$, $\theta_Y$, $\theta_Z$ are the angles of rotation about the X, Y, and Z axes, respectively. The relative pose between the femur and the tibia is given by the matrix multiplication:

$${}^{tibia}_{femur}H = {}^{tibia}_{sensor}H \cdot {}^{sensor}_{femur}H$$
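In code, this composition is a pair of 4×4 matrix operations. The sketch below builds each transform with the $R_Z R_X R_Y$ convention above and obtains ${}^{tibia}_{sensor}H$ by inverting ${}^{sensor}_{tibia}H$; the numeric pose values are placeholders, not measured data:

```python
import numpy as np

# Sketch: build a 4x4 homogeneous transform from (X, Y, Z, theta_x,
# theta_y, theta_z) using the Rz * Rx * Ry convention above, then compose
# the relative femur-to-tibia pose from the two sensor-frame poses.
def homogeneous(X, Y, Z, tx, ty, tz):
    cx, sx = np.cos(tx), np.sin(tx)
    cy, sy = np.cos(ty), np.sin(ty)
    cz, sz = np.cos(tz), np.sin(tz)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    H = np.eye(4)
    H[:3, :3] = Rz @ Rx @ Ry
    H[:3, 3] = [X, Y, Z]
    return H

# Sensor-frame poses of each component (illustrative numbers only).
H_sensor_femur = homogeneous(10.0, 5.0, 760.0, 0.05, 0.02, 1.57)
H_sensor_tibia = homogeneous(12.0, -40.0, 755.0, 0.00, 0.01, 1.55)

# tibia_H_femur = tibia_H_sensor * sensor_H_femur, where tibia_H_sensor is
# the inverse of sensor_H_tibia.
H_tibia_femur = np.linalg.inv(H_sensor_tibia) @ H_sensor_femur
```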
Results
Accuracy tests
To check the accuracy of the pose estimation process, two experiments were performed: one with synthetic
(computer generated) data, and another with real (“in vitro”) data. For the first experiment, we created a set of
synthetic images of the implant components, rendered at pre-determined poses. Three different component models
were used: one tibial implant and two femoral implants. Each component was rendered in 8 different poses (Table
1) for a total of 24 test images. The initial pose estimate for each component was found and compared to the
ground truth data (Table 2). The RMS error for translation parallel to the image plane was a maximum of 0.46
mm, which corresponds approximately to one pixel at the distance that the object was situated. The RMS error for
translation perpendicular to the image plane was much larger (a maximum of 2.24 mm).
In the next experiment, we mounted the femoral and tibial components on rotational platforms and took
fluoroscopy images of them at known orientations. Figure 17 shows the experimental setup. Although the
rotational platforms can be rotated very precisely, we did not know the precise absolute pose of the components
relative to the sensor, due to uncertainties in mounting and initial alignment. We therefore measured only the
relative pose between the two components.
Our apparatus allowed us to rotate the femoral component relative to the tibial component about the horizontal
axis. We call this angle of rotation the “lift-off” angle. Our apparatus also allowed us to rotate the tibial
component about the horizontal (X) and vertical (Y) axes. The femoral component was fixed with respect to the
tibia (except for the single degree of freedom corresponding to the lift-off angle) and thus was rotated as well. The
complete set of test poses are shown in Table 3 (all angles are expressed in degrees).
We processed the fluoroscopy images and computed the initial pose of the femoral component relative to the tibial
component for each of the 11 cases. The angle of rotation about the horizontal (X) axis was computed and
compared to the known ground truth lift-off angle for each case, shown in Table 3. In addition, we also measured
the angle of rotation about the other two axes (Y and Z). These angles should have been identically zero (for the Y
axis) and identically 90° (for the Z axis), since the femoral component was rigidly mounted with respect to the
tibial component, except for the single degree of freedom rotation about the X axis. The results are shown in Table
4.
In Table 4, note the unusually large errors for cases 2, 6, and 9. The reason for this is that the pose estimation algorithm chose the incorrect library match for several of the tibial images. This error is easy to make because the tibial component is nearly symmetrical about the Y axis; the silhouette for a negative rotation about the Y axis is very similar to the silhouette for a positive rotation of the same magnitude.
We applied the final (interactive) pose estimation step to correct any errors and refine the pose estimate. The
revised pose error results are shown in Table 5. As can be seen, the errors are much smaller after this step. The
maximum RMS error about any axis was less than or equal to 0.35°. In our experiments, the person performing
these operations did not have knowledge of the “correct” poses and thus was not biased in choosing a different
match or adjusting the angles.
Clinical results
Our method was used to analyze and compare the kinematics of 10 subjects with implanted knees. Five subjects
had a posterior cruciate retaining knee implant and 5 subjects had a posterior cruciate substituting knee implant.
Each subject was asked to perform 3 successive deep knee bends to maximum flexion under fluoroscopic
surveillance in the sagittal plane. Although the working volume of the fluoroscope restricts the motions that can
be performed, deep knee bends are clinically significant because the knee goes through a full range of flexion
angles while under full load bearing conditions.
Images were analyzed at 0, 30, 60, and 90 degrees of flexion. After determining the relative pose between the
components, the two graphical models were rotated as a unit to place the tibial component into a standard
reference frame. The point of contact of the femur (the closest point of the femur to the tibia) was then determined
along the longitudinal axis.
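A minimal sketch of this contact-point search, assuming the femoral CAD model is available as a vertex array and that Y is the longitudinal axis of the tibial reference frame, is:

```python
import numpy as np

# Sketch: find the femoral contact point as the model vertex closest to the
# tibia after transforming into the tibial reference frame. Treating Y as
# the tibial longitudinal axis is an assumption for illustration.
def contact_point(femur_vertices, H_tibia_femur):
    v = np.c_[femur_vertices, np.ones(len(femur_vertices))]  # homogeneous coords
    v_tibia = (H_tibia_femur @ v.T).T[:, :3]   # femoral points in tibial frame
    lowest = np.argmin(v_tibia[:, 1])          # minimum along longitudinal axis
    return v_tibia[lowest]
```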
The results for the posterior cruciate substituting knees are shown in Figure 18. Each of the knees followed a very
similar pattern. At full extension, the femur contacted the tibia anterior to the midline in the sagittal plane. As the knee flexed, the contact point moved posteriorly in a smooth motion. This is similar to the pattern exhibited by
normal knees [35]. In contrast, the results for the posterior cruciate retaining knees show a high degree of
variability (Figure 19).
Discussion
Accuracy tests showed that the pose estimation method is accurate to less than 0.5 mm of error (RMS) for
translations parallel to the image plane, which corresponds to about one pixel in the image. Orientation accuracy
is less than or equal to 0.35° (RMS error) about any axis. Errors are larger for translations perpendicular to the
image plane (i.e., along the Z axis), as was observed by other researchers [25]. However, this error is easily seen
during the final pose determination step, when the models can be viewed from another viewpoint. Since the knee
components are physically constrained to align very closely in this dimension, we ignore the measured translation
in the Z axis, and force the two models to align along the Z dimension.
As the clinical results demonstrate, the pose estimation method can be used to accurately measure “in vivo” contact points and characterize the kinematic patterns of different knee implant designs. Further information that could be derived includes the area of the contact region, liftoff angle, and other kinematic data.
We have found that it is best to fit the femoral component first, then the tibial component. The reason is that there
is less likelihood of choosing an incorrect match for the femoral component, due to its more detailed contour in the
image. Once the pose of the femoral component is established, the pose of the tibial component is then
constrained.
The “in vitro” tests were performed on a model without soft tissue, in which the image contrast was quite good.
Images that are taken “in vivo” have reduced contrast because of the surrounding soft tissue. This could have the
effect of degrading the accuracy of the extracted silhouette, and thus the accuracy of the estimated pose. However,
due to the high density of the metal knee components, normally there is sufficient contrast to locate the contour to
within the resolution of the image (i.e., one pixel). One exception to this occurs in places where there is bone
cement adjacent to the knee component. In these places, it is often difficult to locate the contour precisely, and the
operator must rely on information from adjacent locations, as well as a priori knowledge of the shape of the
models.
The range of views that the system can handle is currently limited by the range of orientations that are represented
in the library. The rotation angles of a component with respect to the fluoroscope sensor must be within ±15
degrees about the x (horizontal) axis of the fluoroscope, and ±20 degrees about the y (vertical) axis. There is no
limitation on the angle of rotation about the z axis. This range of rotation angles is sufficient to encompass most
(but not all) of the poses that are typically observed when imaging from the lateral direction. If the components are
oriented outside the allowable range, then their pose cannot be determined by this method, and the image cannot be
used for analysis. Currently in such cases, the system will choose the nearest (incorrect) pose for the components,
and the operator must catch this error during the final rendering step.
In certain cases there is a small amount of image overlap between the silhouettes of the femoral and tibial implant
components. This can occur when the components are tipped at large rotation angles (close to ±15 degrees) about
the horizontal axis. For angles up to 15 degrees, the amount of overlap is small and the operator can usually
interpolate the contours of the components through the overlap region.
The method described in this paper is currently time-consuming, due to the need to manually extract the contours
of the components and supervise the process. However, it should be possible to partially or completely automate
the contour extraction process. Another possibility for automatically eliminating matching errors is to ensure that
the derived pose is consistent with the motion of the implants over time, which should be smooth.
Conclusion
A method has been developed to accurately measure the three-dimensional position and orientation (pose) of
artificial knee implants “in vivo” from X-ray fluoroscopy images using interactive 3-D computer graphics. “In
vitro” accuracy tests show that the method is accurate to within 0.5 mm of error for translations parallel to the image plane, and within 0.35° of orientation about any axis. The method does not measure the translation
perpendicular to the image plane, and assumes that the two knee components are aligned in this direction. The
method uses a full perspective imaging model, and incorporates multiple checks in order to detect any
mismatching errors. The method can in principle be applied to any joint where accurate CAD models are
available.
Acknowledgments
The authors are grateful to the Rose Community Foundation and the Columbia Rose Medical Center for their
support of this work. The comments and advice of the anonymous reviewers were very helpful. The authors are
also grateful to VFWorks Inc., of Palm Harbor, Florida, for their donation of fluoroscopy equipment and imagery.
References
[1] M. M. Landy and P. S. Walker, “Wear of ultra-high molecular weight polyethylene components of 90 retrieved knee prostheses,” Journal of Arthroplasty, Vol. 3, pp. S73-S85, 1988.
[2] A. Deburge, “GUEPAR hinge prosthesis: Complications and results with two years’ follow-up,” Clin Orthop, Vol. 120, pp. 47-53, 1976.
[3] P. A. Freeman, “Walldius arthroplasty: A review of 80 cases,” Clin Orthop, Vol. 94, pp. 85-91, 1973.
[4] M. A. R. Freeman, R. C. Todd, P. Barnert, and W. H. Day, “ICLH arthroplasty of the knee, 1968-1977,” J Bone Joint Surg, Vol. 60B, pp. 339-344, 1978.
[5] V. M. Goldberg and B. T. Henderson, “The Freeman-Swanson ICLH total knee arthroplasty,” J Bone Joint Surg, Vol. 62A, pp. 1338-1344, 1979.
[6] D. G. Lewallen, R. S. Bryan, and L. F. Peterson, “Polycentric total knee arthroplasty: A ten-year follow-up study,” J Bone Joint Surg, Vol. 66, pp. 1211-1218, 1984.
[7] M. D. Skolnick, M. B. Coventry, and D. M. Ilstrup, “Geometric total knee arthroplasty: A two-year follow-up study,” J Bone Joint Surg, Vol. 58A, pp. 749-753, 1976.
[8] D. L. Bartel, V. L. Bicknell, and T. M. Wright, “The effect of conformity, thickness, and material on stresses in UHMWPE components for total joint replacement,” J Bone Joint Surg, Vol. 68A, pp. 1041-1051, 1989.
[9] J. P. Collier, M. B. Mayor, J. L. McNamara, V. A. Surprenant, and R. E. Jensen, “Analysis of the failure of 122 polyethylene inserts from uncemented tibial knee components,” Clin Orthop, Vol. 273, pp. 232-242, 1991.
[10] E. L. Feng, S. D. Stulberg, and R. L. Wixson, “Progressive subluxation and polyethylene wear in total knee replacements with flat articular surfaces,” Clin Orthop, Vol. 299, pp. 60-71, 1994.
[11] P. Lewis, C. H. Rorabeck, R. B. Bourne, and P. Devane, “Posteromedial tibial polyethylene failure in total knee replacements,” Clin Orthop, Vol. 299, pp. 11-17, 1994.
[12] T. M. Wright, C. M. Rimnac, and S. D. Stulberg, “Wear of polyethylene in total joint replacement,” Clin Orthop, Vol. 276, pp. 126-134, 1992.
[13] E. K. Antonsson and R. W. Mann, “Automatic 6-DOF kinematic trajectory acquisition and analysis,” ASME J. Dyn. Sys., Meas. and Control, Vol. 111, pp. 31-39, 1989.
[14] M. A. Murphy, “Geometry and the kinematics of the normal human knee,” Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, Massachusetts, 1990.
[15] M. A. Lafortune, P. R. Cavanagh, H. J. Sommer, and A. Kalenak, “Three-dimensional kinematics of the human knee during walking,” Journal of Biomechanics, Vol. 25, No. 4, pp. 347-357, 1992.
[16] J. P. Holden, J. A. Orsini, and S. J. Stanhope, “Estimates of skeletal motion: Movement of surface-mounted targets relative to bone during gait,” in Biomedical Engineering Recent Developments, J. Vossoughi, Ed., Washington, D.C., University of the District of Columbia, pp. 1035-1038, 1994.
[17] M. Sati, J. A. de Guise, S. Larouche, and G. Drouin, “Quantitative assessment of skin marker movement at the knee,” The Knee, Vol. 3, No. 3, pp. 121-138, 1996.
[18] M. Sati, J. A. de Guise, and G. Drouin, “Improving in vivo knee kinematic measurements: Application to prosthetic ligament analysis,” The Knee, Vol. 3, No. 4, pp. 179-190, 1996.
[19] K. G. Nilsson, J. Karrholm, and L. Ekelund, “Knee motion in total knee arthroplasty,” Clinical Orthopaedics and Related Research, Vol. 256, pp. 141-161, 1990.
[20] R. L. Perry, “Principles of conventional radiography and fluoroscopy,” Veterinary Clinics of North America: Small Animal Practice, Vol. 23, No. 2, pp. 235-252, 1983.
[21] J. B. Stiehl, R. D. Komistek, D. A. Dennis, R. D. Paxson, and W. A. Hoff, “Fluoroscopic analysis of kinematics after posterior-cruciate retaining knee arthroplasty,” Journal of Bone and Joint Surgery (British), Vol. 77-B, No. 6, pp. 884-889, 1996.
[22] D. G. Lowe, “Fitting Parameterized Three-Dimensional Models to Images,” IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 13, No. 5, pp. 441-450, 1991.
[23] D. J. Kriegman and J. Ponce, “On Recognizing and Positioning Curved 3D Objects From Image Contours,” IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. PAMI-12, No. 12, pp. 1127-1137, 1990.
[24] S. Lavallee, R. Szeliski, and L. Brunie, “Anatomy-Based Registration of Three-Dimensional Medical Images, Range Images, X-Ray Projections, and Three-Dimensional Models Using Octree-Splines,” in Computer Integrated Surgery: Technology and Clinical Applications, R. Taylor, Ed., Cambridge, Massachusetts, MIT Press, pp. 115-143, 1996.
[25] S. A. Banks and W. A. Hodge, “Accurate measurement of three-dimensional knee replacement kinematics using single-plane fluoroscopy,” IEEE Trans Biomed Eng, Vol. 43, No. 6, pp. 638-649, 1996.
[26] R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision, New York, McGraw-Hill, 1995.
[27] R. D. Komistek, D. A. Dennis, and W. A. Hoff, “A Kinematic Comparison of Prosthetically Implanted and Nonimplanted Knees Using Dynamic Fluoroscopy,” Proc. of 19th Annual Meeting of the American Society of Biomechanics, Stanford, California, August 1995.
[28] D. A. Dennis, R. D. Komistek, J. B. Stiehl, W. A. Hoff, and E. Cheal, “An In Vivo Determination of Condylar Lift-Off Using an Inverse Perspective Technique that Utilizes Fluoroscopy,” Orthopaedic Transactions, 1996.
[29] W. Hoff, R. Komistek, D. Dennis, S. Walker, E. Northcut, and K. Spargo, “Pose Estimation of Artificial Knee Implants in Fluoroscopy Images Using a Template Matching Technique,” Proc. of 3rd Workshop on Applications of Computer Vision, IEEE, Sarasota, FL, Dec. 2-4, pp. 181-186, 1996.
[30] R. K. Lenz and R. Y. Tsai, “Techniques for calibration of the scale factor and the image center for high accuracy 3D machine vision metrology,” Proc. of IEEE Int. Conf. Robotics and Automation, Raleigh, NC, March 31 - April 3, pp. 68-75, 1987.
[31] G. Wolberg, Digital Image Warping, Los Alamitos, California, IEEE Computer Society Press, 1990.
[32] W. K. Pratt, Digital Image Processing, 2nd ed., New York, Wiley & Sons, 1991.
[33] J. Rasure and S. Kubica, “The Khoros application development environment,” in Experimental Environments for Computer Vision and Image Processing, H. I. Christensen and J. L. Crowley, Eds., World Scientific, 1994.
[34] J. Craig, Introduction to Robotics: Mechanics and Control, 2nd ed., Addison-Wesley, 1990.
[35] D. A. Dennis, R. D. Komistek, W. A. Hoff, and S. M. Gabriel, “In-Vivo Knee Kinematics Derived Using an Inverse Perspective Technique,” Clinical Orthopaedics and Related Research, Vol. 331, pp. 107-117, 1996.
Figure captions
Figure 1 Tibial (left) and femoral (right) components of a posterior cruciate substituting (PCS) knee implant, with
polyethylene insert.
Figure 2 Fluoroscopic image sequence of PCS knee, showing selected images at flexion angles of 0, 30, 60, and
120 degrees.
Figure 3 Three-dimensional (3-D) CAD models of the implant components are rendered as a graphical overlay on
the fluoroscopy image at 120 degrees of knee flexion.
Figure 4 Subjects were fluoroscoped in the sagittal plane using a videofluoroscopy X-ray image intensifier system.
Figure 5 The imaging model for the fluoroscope is perspective projection. X-rays are emitted from a point source
(F), pass through the object (such as point G), and strike the image plane (point Q).
Figure 6 Image of grid plate used to calibrate fluoroscope (spacing between beads is 2.54 cm).
Figure 7 A portion of the image library for a femoral implant component. The full library contains 41 x 31 =
1271 template images.
Figure 8 Data flow in visual programming (Khoros) interface for the image processing algorithms.
Figure 9 An example input fluoroscopy image of a posterior cruciate retaining (PCR) knee implant.
Figure 10 The extracted contour around the silhouette of the femoral component.
Figure 11 Image of the results of matching an input femoral contour against each of the images in the library. The
black areas indicate unmatched pixels.
Figure 12 A translation parallel to the image plane induces an apparent rotation of the object about an axis
parallel to the image plane.
Figure 13 Using an iterative procedure, the operator adjusts the pose of the model to align the rendered graphical
model with the input image.
Figure 14 The poses of the implant components in the input image (upper left) were determined, and their
graphical models were overlaid on original image (upper right), using the derived poses. The results can also be
used to view the models from alternate viewpoints, such as front (lower left), and top (lower right).
Figure 15 In this example, the pose of the femoral component was chosen incorrectly. Due to the symmetries in
the component, the overlay is a good, although not perfect, fit to the image (see the minor image overlay errors in
the upper right). However, when the two models are viewed from the front (lower left) and top (lower right), the
error can easily be detected as a severe (physically impossible) relative pose.
Figure 16 In this example, the pose of the tibial component was chosen incorrectly. Again, the overlay is a good,
although not perfect, fit to the image (upper right). However, when the two models are viewed from the front
(lower left) and top (lower right), the error can easily be detected as a severe (physically impossible) relative pose;
i.e., the two components intersect.
Figure 17 Precision optical rotation platforms were used to set the components in known positions for “in vitro”
pose tests.
Figure 18 Sagittal femorotibial contact positions of 5 knees with posterior cruciate substituting total knee
replacements.
Figure 19 Sagittal femorotibial contact positions of 5 knees with posterior cruciate retaining total knee
replacements.
Table captions
Table 1 Poses used for synthetic images, generated by the computer.
Table 2 Root-mean-squared (RMS) error after initial pose estimate for a set of synthetic images. This was the
result of the automatic template matching process, using the simplified perspective imaging model.
Table 3 Test poses for real (“in vitro”) images. These angles represent the orientation of the femoral component
relative to the tibial component.
Table 4 Errors after initial pose estimate, for the real (“in vitro”) images. This was the result of the automatic
template matching process, using the simplified perspective imaging model.
Table 5 Revised errors after final pose estimate, for the real (“in vitro”) images. This was after correction of
matching errors and refinement of the poses, using the full perspective imaging model.