IEEE TRANSACTIONS ON MEDICAL IMAGING, VOL. 19, NO. 7, JULY 2000
Interpolation Revisited
Philippe Thévenaz*, Member, IEEE, Thierry Blu, Member, IEEE, and Michael Unser, Fellow, IEEE
Abstract—Based on the theory of approximation, this paper
presents a unified analysis of interpolation and resampling
techniques. An important issue is the choice of adequate basis
functions. We show that, contrary to common belief, those that
perform best are not interpolating. In contrast to traditional
interpolation, we call their use generalized interpolation; correctly
applied, it involves a prefiltering step. We explain
why the approximation order inherent in any basis function is
important to limit interpolation artifacts. The decomposition theorem states that any basis function endowed with approximation
order can be expressed as the convolution of a B-spline of the
same order with another function that has none. This motivates
the use of splines and spline-based functions as a tunable way to
keep artifacts in check without any significant cost penalty. We
discuss implementation and performance issues, and we provide
experimental evidence to support our claims.
Index Terms—Approximation constant, approximation order,
B-splines, Fourier error kernel, maximal order and minimal
support (Moms), piecewise-polynomials.
I. INTRODUCTION
THE ISSUE of quality is particularly relevant to the medical community; for ethical reasons, it is a prime concern
when manipulating data. Any manipulation should result in the
least amount of distortion or artifacts, so as not to influence the
clinician’s judgment. For practical reasons, efficiency is another
prime concern. Any processing should result in the least computational effort, particularly when dealing with the large amount
of data involved in volumetric medical imaging. In this paper,
we analyze the tradeoff between the quality and the cost of several interpolation methods, and we introduce generalized interpolation as a means to overcome the limitations of traditional
interpolation.
Interpolation is at the heart of various medical imaging applications [1]–[3]. In volumetric imaging, it is often used to
compensate for nonhomogeneous data sampling. This rescaling
operation is desirable to build isometric volumes [4]–[6]. Another application of this transform arises in the three-dimensional (3-D) reconstruction of icosahedral viruses [7]. In volume
rendering, it is common to apply by interpolation a texture to the
facets that compose the rendered object [8]. In addition, volume
rendering may also require the computation of gradients, which
is best done by taking the interpolation model into account [9].
In functional magnetic resonance imaging (fMRI), the relative
amplitude of the difference between two conditions (e.g., active
versus inactive) is very small. Registration is used to align fMRI
images before they are subtracted to reveal where and how they
differ [10]. However, even with perfect knowledge of the ideal
geometric transformation, a deficient interpolation can wash out
these tiny differences.

Manuscript received July 22, 1999; revised May 12, 2000. The Associate Editor responsible for coordinating the review of this paper and recommending its publication was Y. Censor. Asterisk indicates corresponding author.
*P. Thévenaz, T. Blu, and M. Unser are with the Swiss Federal Institute of Technology, Lausanne CH-1015, Switzerland.
Publisher Item Identifier S 0278-0062(00)07330-4.
The essence of interpolation is to represent an arbitrary
continuously defined function as a discrete sum of weighted
and shifted basis functions. An important issue is the adequate
choice of those basis functions. The traditional view asks that
they satisfy the interpolation property, and many researchers
have put significant effort into optimizing them under this
specific constraint [11]–[17]. Over the years, these efforts have
shown increasingly diminishing returns.
Here, instead, we introduce and advocate the use of generalized interpolation, which does away with the constraint of
interpolation at the cost of an additional prefiltering step. The
overall benefit is to allow for the use of a much broader class
of potential basis functions, some of which enjoy, at the same
time, excellent approximation properties and short support. We
present a performance analysis that rests on the firm ground of
approximation theory. We introduce analytical tools that allow
the practitioner to determine the theoretical performance of any
basis function, including noninterpolating ones, and provide an
in-depth analysis of many piecewise-polynomial cases. This
class is important because it contains some families of basis
functions that can be shown to be the best achievable with respect to several criteria, such as maximal regularity (B-splines),
or best least-squares approximation properties (o-Moms). We
show both theoretically and experimentally that generalized
interpolation performs better than traditional interpolation in
the context of image transformations. The resulting quality can
be arbitrarily high; for a given quality, generalized interpolation
comes at a lower computational cost than that incurred by the
traditional methods which satisfy the interpolating constraint.
The organization of this paper is as follows. Section II introduces our notation and compares traditional interpolation with our proposal for generalized interpolation. In Section III, we discuss some desirable properties of basis functions in the imaging context. Section IV presents the main contribution of this paper, where we apply to interpolation a methodology that originates in approximation theory; there, we present the decomposition theorem and develop tools that aid its application. In Section V, we analyze several piecewise-polynomial basis functions, while we analyze sinc-based ones in Section VI. In Section VII, we conduct a theoretical study of the tradeoff between speed and quality; the validity of this study is confirmed by the experiments that we present in Section VIII. Finally, we conclude in Section IX.
II. INTERPOLATION
A. Scope
We restrict this paper to the case where the discrete data are
regularly sampled on a Cartesian grid. We also restrict the discussion to exact interpolation, where the continuous model is
required to take the same values as the sampled data at the grid
locations. Finally, we restrict ourselves to linear methods, such
that the sum of two interpolated functions is equal to the interpolation of the sum of the two functions.
B. Traditional Interpolation

Let us express an interpolated value f(x) at some (perhaps noninteger) coordinate x in a space of dimension q as a linear combination of samples f_k evaluated at integer coordinates k ∈ Z^q:

    f(x) = Σ_{k∈Z^q} f_k φ_int(x − k),  x ∈ R^q.    (1)

The sample weights are given by the values of the function φ_int(x − k). To satisfy the requirement of exact interpolation, we ask that the function φ_int vanish for all integer arguments except at the origin, where it must take a unit value. A classical example of the basis function φ_int is the sinc function, in which case all synthesized functions are bandlimited.

C. Generalized Interpolation

As an alternative approach, let us consider the form

    f(x) = Σ_{k∈Z^q} c_k φ(x − k),  x ∈ R^q.    (2)

The crucial difference between the classical formulation (1) and the generalized formulation (2) is the introduction of coefficients c_k in place of the sample values f_k. This offers new possibilities, in the sense that interpolation can now be carried out in two separate steps: first, the determination of the coefficients c_k from the samples f_k; second, the determination of the desired values f(x) from the coefficients c_k. The benefit of this separation is to allow for an extended choice of basis functions, some with better properties than those available in the restricted classical case where φ = φ_int. The apparent drawback is the need for an additional step. We will see later that this drawback is largely compensated by the gain in quality resulting from the larger selection of basis functions to choose from.

Whether the interpolation is traditional or generalized, we carry the summations all the way to and from infinity. Thus, it is essential to assign a specific value to those samples f_k that are unknown because they are out of the range of our data. In the context of generalized interpolation, we will soon see that any given coefficient c_k depends on all sample values; for this reason, it is preferable to limit the degree of arbitrariness when extrapolating data. To reduce boundary effects, we prefer to avoid the traditional extension that imposes f_k = 0 outside the known support of the data; instead, we advocate implicit data mirroring. This latter solution is preferable because no special value (e.g., 0) is introduced; only already-existing values are used. When compared to the other traditional extension known as periodization, it offers the additional advantage that no abrupt transition results at the data boundaries. Furthermore, the structure of mirror-extended data is invariant under filtering provided the filter is symmetric, which yields consistency of design. In the one-dimensional (1-D) case, letting the known range be k ∈ [0, N − 1], the mirror-extended signal satisfies f_{−k} = f_k and f_{(N−1)+k} = f_{(N−1)−k}, which makes it periodic with period 2N − 2. Given f_k for all k ∈ [0, N − 1], the (possibly repeated) application of this folding operation is sufficient to define f_k for every integer k. We explain in Appendix I-A how this translates to a practical algorithm for discrete arguments, and we extend this 1-D case to higher dimensions in Section III-B.

D. Determination of the Coefficients

To enforce exact interpolation for integer arguments, we write that

    f_k = Σ_{k'∈Z^q} c_{k'} φ(k − k'),  ∀k ∈ Z^q.    (3)

Given some function φ that is known a priori, this expression is nothing but a linear system of equations in terms of the unknown coefficients c_k. We are now faced with a problem of the form f = P c, and a large part of the literature (e.g., [18]) is devoted to the development of efficient techniques for inverting the matrix P in the context of specific basis functions φ. The problem is trivial when φ is interpolating, for then P is the identity.

Another strategy proceeds by recognizing that (3) is equivalent to the discrete convolution [1]

    f_k = (c ∗ p)_k,  with p_k = φ(k).    (4)

It directly follows that the infinite sequence of coefficients {c_k} can be obtained by convolving the infinite sequence {f_k} with the convolution inverse (p)^{−1}, which is uniquely defined and which does generally exist in the cases of interest. Convolving both sides of (4) by (p)^{−1}, we get that

    c_k = ((p)^{−1} ∗ f)_k.    (5)

Since convolution is nothing else but filtering, (5) suggests that discrete filtering can be an alternative to matrix inversion for the determination of the sequence of coefficients {c_k} needed to enforce the desirable constraint (3). To derive the proper algorithm, we start by noting that the basis function φ is always symmetric in an imaging context. Thus, we can write the z-transform of p_k = φ(k) as

    P(z) = Σ_k φ(k) z^{−k}.
The poles of the convolution inverse 1/P(z) are the roots of P(z); for a symmetric, finite-support basis function, these are necessarily real and come in reciprocal pairs (z_i, 1/z_i). Thus, the convolution inverse (p)^{−1} can be decomposed into a cascade of filter pairs, where each pair consists of a causal (with pole z_i) and of an anticausal (with pole 1/z_i) IIR filter. With suitable initial conditions, we can then apply a very efficient algorithm which leads to a recursive in-place implementation [19], [20]. Its computational load for the popular cubic B-spline is two additions and three multiplications per produced coefficient.
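To make the filtering route concrete, the following 1-D sketch implements the causal/anticausal recursion for the cubic B-spline, whose single pole pair is given by z1 = sqrt(3) − 2. It follows the in-place scheme of [19], [20]; the truncated-sum initialization of the causal pass is a simplification rather than the exact closed form, and the function name is ours.

import numpy as np

def cubic_bspline_coefficients(f):
    """Causal/anticausal IIR prefilter turning samples f[k] into cubic
    B-spline coefficients c[k] (1-D, mirror boundary conditions)."""
    c = np.asarray(f, dtype=np.float64).copy()
    n = len(c)
    z1 = np.sqrt(3.0) - 2.0                  # pole of the cubic B-spline
    c *= (1.0 - z1) * (1.0 - 1.0 / z1)       # overall gain of the cascade
    # causal initialization: truncated geometric sum of the leading samples
    horizon = min(n, int(np.ceil(np.log(1e-12) / np.log(abs(z1)))))
    c[0] = c[0] + sum(c[k] * z1**k for k in range(1, horizon))
    for k in range(1, n):                    # causal recursion
        c[k] += z1 * c[k - 1]
    # anticausal initialization (exact for mirror boundaries), then recursion
    c[n - 1] = (z1 / (z1 * z1 - 1.0)) * (c[n - 1] + z1 * c[n - 2])
    for k in range(n - 2, -1, -1):
        c[k] = z1 * (c[k + 1] - c[k])
    return c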
E. Reconciliation

Comparing (1) with (2), it appears that classical interpolation is a special case of generalized interpolation, with c_k = f_k and φ = φ_int. The converse is also true. To see why, we determine the interpolating φ_int from its noninterpolating counterpart φ by writing

    f(x) = Σ_k c_k φ(x − k) = Σ_k f_k φ_int(x − k).

Finally, the interpolating φ_int that is hidden behind a noninterpolating φ is

    φ_int(x) = Σ_{k∈Z^q} (p)^{−1}_k φ(x − k).    (6)

It is crucial to understand that this equivalence allows for the exact and efficient handling of an infinite-support, interpolating basis function φ_int by performing all operations with a finite-support, noninterpolating basis function φ.

III. DESIRABLE PROPERTIES

A. Generalities

The price to pay for high-quality interpolation is computation time. For this reason, it is important to select a basis function that offers the best tradeoff. There are several aspects to consider. The most important deals with the support W of φ or φ_int, which is a measure of the smallest interval outside of which the function vanishes. The larger the support, the longer the computation time. Another important aspect is the quality of approximation inherent in the basis function. Other aspects involve the ease of analytical manipulation (when useful), the ease of computation, and the efficiency of the determination of the coefficients c_k when φ is not interpolating.

B. Separability

Consider (1) or (2) in q dimensions, and let us assume that the support of the interpolating φ_int or the noninterpolating φ is of size W^q. This large computational burden can only be reduced by imposing restrictions on φ. An easy and convenient way is to ask that the basis function be separable, as in

    φ(x) = Π_{i=1}^{q} φ(x_i),  x = (x_1, …, x_q).

The very beneficial consequence of this restriction is that the data can be processed in a separable fashion, line-by-line, column-by-column, and so forth. In particular, the determination of the interpolation coefficients needed for generalized interpolation is separable, too, because the form (2) is linear. For the rest of this paper, we concentrate on separable basis functions; we describe and analyze them in one dimension, and we use the expression above to implement interpolation efficiently in a multidimensional context (a sketch of this separable evaluation is given at the end of this section).

C. Symmetry

Preserving spatial relations is a crucial issue for any imaging system. Since interpolation can be interpreted as a convolution (or equivalently, filtering) operation, it is important that the response of the involved filter does not result in any phase degradation. This consideration translates into the well-known and desirable property of symmetry, φ(x) = φ(−x) or φ_int(x) = φ_int(−x). Symmetry is satisfied by all basis functions considered here, at the possible minor and very localized exception of nearest-neighbor interpolation.

D. Regularity

Some authors insist that the regularity of the basis function is an important issue [16]. This may be true when differentiation of the model is needed, but differentiating data more than, say, once or twice is uncommon in everyday imaging applications. Often, at most the gradient is needed; thus, it is not really necessary to limit the choice of basis functions to those that have a high degree of regularity.
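The following sketch illustrates the separable evaluation of (2) at a single 2-D point. It is an illustration under our own naming, not the authors' code; the 1-D kernel and its support W are passed in, and out-of-range indices are folded back with the mirror rule of Section II.

import math

def interpolate_2d(c, x, y, kernel, support):
    """Evaluate f(x, y) = sum_{k,l} c[k, l] * phi(x - k) * phi(y - l) at one
    point, exploiting separability: only 2 * support kernel evaluations
    instead of support^2 for a nonseparable 2-D kernel. `c` holds the
    (prefiltered) coefficients; `kernel` is the 1-D basis function."""
    def mirror(k, n):                        # fold index into [0, n - 1]
        period = 2 * n - 2
        k = abs(k) % period
        return period - k if k >= n else k

    n_rows, n_cols = c.shape
    k0 = math.floor(x) - support // 2 + 1    # leftmost row of the window
    l0 = math.floor(y) - support // 2 + 1    # leftmost column of the window
    value = 0.0
    for l in range(l0, l0 + support):        # outer loop over W columns
        wy = kernel(y - l)
        inner = 0.0
        for k in range(k0, k0 + support):    # inner loop over W rows
            inner += c[mirror(k, n_rows), mirror(l, n_cols)] * kernel(x - k)
        value += inner * wy
    return value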
IV. APPROXIMATION THEORY
A. Error Kernel
Since most clinical data are available once only, at a given resolution (or sampling step), there exists no absolute truth regarding the value of the underlying function f between its samples; it is thus necessary to resort to mathematical analysis for the assessment of the quality of interpolation. The general principle is to define an interpolated function f_T as given by a set of samples that are T units apart and that satisfy f_T(kT) = f(kT) for all k ∈ Z, which is the interpolation constraint. The difference between f and f_T will then describe how fast the interpolated function f_T converges to the true function f when the samples that define f_T become more and more dense, or, in other words, when the sampling step T becomes smaller and smaller.

Let us perform the following experiment:

1) take some arbitrary square-integrable function f(x) and select a sampling step T > 0;
2) create a set of samples f(kT);
3) from this sequence, using either (1) or (2), build an interpolated function f_T(x);
4) compare f with f_T using some norm, for example the mean-square (or L2) norm

    ε²(T) = ||f − f_T||²_{L2} = ∫ |f(x) − f_T(x)|² dx.
When the sampling step T gets smaller, more details of f can be captured; it is then reasonable to ask that the approximation error ε(T) get smaller, too. The following formula predicts the approximation error in the Fourier domain [21]–[23]:

    ε²(T) = (1/2π) ∫ |f̂(ω)|² E(ωT) dω    (7)

where f̂ is the Fourier transform of the arbitrary function f, and where E is an interpolation error kernel that depends on the basis function only, and that is given by

    E(ω) = |1 − φ̂_int(ω)|² + Σ_{k≠0} |φ̂_int(ω + 2πk)|²    (8)

in which φ̂_int(ω) = φ̂(ω) / Σ_{k∈Z} φ̂(ω + 2πk) is the Fourier transform of the equivalent interpolating basis function (6).

For bandlimited functions, (7) holds exactly. For those functions f that do not belong to that class, the estimated error ε must be understood as the average error over all possible sets of samples f(kT + Δ), where Δ is a phase term with Δ ∈ [0, T].

This analysis applies to any number of dimensions q and to interpolating as well as noninterpolating basis functions [21]–[23]. Under the restrictive conditions where q = 1, where the function is bandlimited, and where the basis function φ is interpolating, this error kernel reduces to the kernel proposed in [24].
B. Order of Approximation

A decrease in the sampling step T results in a decrease of the argument of E in (7); thus, the error kernel must vanish at the origin to ensure that the approximation error disappears altogether. The vanishing rate is controlled by the approximation order L and the constant C_φ, such that E(ω) ≈ C_φ² ω^{2L} as ω → 0. Inserted into (7), this behavior yields

    ε(T) ≈ C_φ T^L ||f^{(L)}||_{L2}  as T → 0    (9)

where the norm ||f^{(L)}||_{L2} is recognized as being the norm of the Lth derivative of the smooth function f we started from.

This result expresses the fact that we can associate to any φ a number L and a constant C_φ such that the error of approximation predicted by (7) decreases like T^L when T is sufficiently small. The number L is called the approximation order of the basis function φ; it gives a global estimate of how fast the approximation error decays when the sampling step gets finer. The constant C_φ allows one to further rank the quality of those basis functions that have the same order L, where a smaller C_φ corresponds to a better φ. Nevertheless, it is clear from (9) that the decay of ε is dominated by T^L rather than by C_φ: the ratio of the errors of two basis functions of orders L1 and L2 behaves like T^{L1−L2} and diverges (or vanishes) as soon as T → 0 with L1 ≠ L2, which frequently happens in the cases of interest. Thus, it is important to use C_φ for ranking basis functions of identical order only; it would be inappropriate to consider C_{φ1} when comparing φ1 with φ2 if L1 ≠ L2.

Equations (7) and (8) describe the evolution of the error for every possible sampling step T; thus, the error kernel E is a key element when it comes to the comparison of basis functions, not only near the origin, but over the whole Fourier axis. The error kernel E(ω) can be understood as a way to predict the approximation error when φ is used to interpolate a sampled version of the infinite-energy function sin(ωx). Being a single number, but also being loaded with relevant meaning, the approximation order is a convenient summary of this whole curve.

C. Strang–Fix Equivalence

Suppose we are interested in just the approximation order L of a basis function φ, without caring much about the details of E(ω). In this case, the explicit computation of (8) is not necessary. Instead, Strang and Fix [25] have proposed a series of conditions that are equivalent to (9). The interest of these equivalent conditions is that they can be readily tested. They are valid for all basis functions with sufficient decay; the sinc function is one of the very rare bases where these conditions are not satisfied. We mention three equivalent 1-D Strang–Fix conditions:

1) Lth-order zeros in the Fourier domain:
    φ̂(0) = 1 and φ̂^{(n)}(2πk) = 0 for all k ∈ Z \ {0} and n = 0, …, L − 1;
2) reproduction of all monomials of degree n ≤ L − 1:
    for each n there exist coefficients c_k^{(n)} such that Σ_k c_k^{(n)} φ(x − k) = x^n;
3) discrete moments:
    Σ_k (x − k)^n φ(x − k) depends on n only, not on x, for n = 0, …, L − 1.

Under mild hypotheses, any of these conditions is equivalent to ε(T) ≤ Const · T^L ||f^{(L)}||_{L2}. When L = 1, these conditions are called the partition of unity, or, equivalently, the reproduction of the constant. More generally, the second Strang–Fix condition implies by linearity that, apart from technical details, a basis function of order L can reproduce any polynomial of degree L − 1 or lower exactly. Thus, the approximation of data that are smooth at scale T will be close to the original for two equivalent reasons, since one can analyze the quality of the approximation either in terms of frequency contents or in terms of polynomials. On one hand, there will be few high frequencies, and the fact that the cardinal basis function φ_int may depart from sinc is mostly irrelevant. On the other hand, the Taylor expansion of a smoothly varying function is dominated by low-degree terms, and a polynomial of corresponding degree will capture the local behavior of the function in sufficient detail.
TABLE I
PROPERTIES THAT APPLY TO φ(x) = (β^{L−1} ∗ u)(x), I.E., φ̂(ω) = (sinc(ω/2π))^L û(ω)
D. Decomposition Theorem
1) Theorem: Any uniform piecewise-polynomial basis function φ that possesses an approximation order L can be expressed as the convolution of a B-spline β^{L−1} of the same order with a uniform piecewise-polynomial distribution u that has no approximation order, and that has the properties shown in Table I.

The factorization of φ̂ as the product of two terms, one of them being the Fourier transform of a B-spline of order L, was first presented in [25] without proof and without specifying the properties of the distribution u. A rigorous proof of an extended version of this theorem was later given in [26], in a more mathematically abstract and general context than required for this paper. Our actual formulation is aimed at fitting the present task and audience; the corresponding proof will appear in a forthcoming paper [27].

A direct corollary of the decomposition theorem is that the support W of any function φ of order L must satisfy W ≥ L, because no distribution can have a negative support. However, it may happen that the support of u be null. In this case, u is necessarily a combination of Dirac pulses and their derivatives, which calls for an extension to negative values of the concepts of regularity and of degree: a distribution is said to have regularity C^{−m} whenever its m-times integration yields a function that has regularity C⁰. For example, a discontinuous function, say a rectangular pulse, has regularity C^{−1}; a Dirac pulse has regularity C^{−2}; the derivative of a Dirac pulse has regularity C^{−3}, and so on. The concept of degree is extended similarly.

This theorem is very relevant to the problem of determining the order of a piecewise-polynomial basis function: its Fourier transform can necessarily be factored as the product of a (sinc(ω/2π))^L term with a remaining term û(ω), where L is the order of the original piecewise-polynomial basis function.

As a simple example, let us consider the B-spline of degree n, which we call β^n. Then, we conclude from β̂^n(ω) = (sinc(ω/2π))^{n+1} that u = δ. In turn, the regularity of δ is C^{−2}, its support is 0, and its degree is −1. Solving for these last three quantities, we rederive the well-known fact that both the order L and the support W of a B-spline are one more than its degree n, while it has C^{n−1} regularity. Other examples are given in Appendix I-B.
E. Piecewise-Polynomial Analysis
Uniform piecewise-polynomials form an important class of all possible basis functions. In particular, we shall see later that the basis functions that are optimal in a precise mathematical sense are all members of this class. For this reason, we feel it important to develop analytical tools that ease their Fourier analysis. Let

    p_i(x) = Σ_{j=0}^{n} a_{i,j} x^j    (10)

be a polynomial piece, where i is the index of the piece, n is the overall highest polynomial degree, and the a_{i,j} are the polynomial coefficients. By convention, the piece described by p_i is valid on a unit-length interval indexed by i; the special behavior on the boundaries is such as to allow for basis functions that are pointwise symmetric, including ones with discontinuities (the boundaries are shared by two adjacent pieces).

We now decompose this piecewise-polynomial into elementary functions for which an expression of the Fourier transform is known. Let us first introduce the one-sided power function (x)₊ⁿ, given by

    (x)₊ⁿ = xⁿ for x > 0,  (x)₊ⁿ = 0 for x < 0.    (11)

Although (x)₊ⁿ is not a finite-energy function, we can nevertheless express its Fourier transform as

    ∫ (x)₊ⁿ e^{−jωx} dx = n!/(jω)^{n+1} + π jⁿ δ^{(n)}(ω)

where δ^{(n)} is the nth derivative of the Dirac distribution δ. Using this basic building block, we now express the truncated power function u_n as the restriction of (x)₊ⁿ to the unit interval. This function satisfies u_n(x) = xⁿ for 0 < x < 1 and u_n(x) = 0 for x outside of [0, 1]; the boundaries are handled according to the shared-boundary convention given above, including at the particular argument x = 0. The Fourier transform of this finite-energy function is

    û_n(ω) = γ(n + 1, jω) / (jω)^{n+1}

where γ is the incomplete Γ-function, given by the finite sum

    γ(n + 1, z) = n! (1 − e^{−z} Σ_{k=0}^{n} z^k / k!).
Finally, we rewrite each polynomial piece in terms of truncated power functions. For this, we build from the a_{i,j} a new set of polynomial coefficients b_{i,j}, obtained by re-expanding each piece around the origin of its own unit interval, which results in the representation

    φ(x) = Σ_i q_i(x − i)    (12)

where the sum runs over the polynomial pieces, and where q_i is a truncated polynomial given by

    q_i(x) = Σ_{j=0}^{n} b_{i,j} u_j(x).

By contrast with (10), the important property of (12) is that both support membership and the special treatment of the piece boundaries are now implicit. Letting B_j(e^{jω}) = Σ_i b_{i,j} e^{−jωi} be the 2π-periodic continuous Fourier transform of the discrete sequence of coefficients {b_{i,j}}_i, we can finally express the Fourier transform of the piecewise-polynomial as

    φ̂(ω) = Σ_{j=0}^{n} B_j(e^{jω}) û_j(ω).
F. Example

The analysis above facilitates the calculation of the error kernel (8) for many basis functions, because the latter are often piecewise-polynomials themselves. Moreover, it allows for an easy determination of the order L. For illustration purposes, we propose to analyze the quadratic Schaum basis function (results for many more cases are available in Appendix I-B). Let this function be given by

    φ(x) = 1 − x²  for |x| < 1/2
    φ(x) = (|x| − 1)(|x| − 2)/2  for 1/2 < |x| < 3/2
    φ(x) = 0  otherwise.

From its explicit expression, we see that the quadratic Schaum basis function has the support W = 3 and the degree n = 2. Referring to the decomposition theorem, we determine from the expression of its Fourier transform that its order is L = 3; thus, the distribution u resulting from factoring out a B-spline β² must have the support W = 0. It is made of the sum of a Dirac pulse with unit weight (which corresponds to 1 in the Fourier domain) and of a second derivative of a Dirac pulse with weight −1/8 (which corresponds to ω²/8 in the Fourier domain). Thus, in short notation we can write that u = δ − (1/8) δ''. Also, the degree of u is −3 and it has C^{−4} regularity, which implies that φ has C^{−1} regularity. This can be checked by inspection of the explicit expression of the quadratic Schaum basis function.

After some tedious algebra (which can be handled by most current symbolic manipulation software), the introduction of the expression above into (8) yields the error kernel of the quadratic Schaum basis function. Taking the order L = 3 into consideration, the corresponding approximation constant C_φ follows from the behavior of this kernel near the origin.
V. SPECIFIC PIECEWISE-POLYNOMIAL EXAMPLES

A. Nearest-Neighbor

The basis function associated to nearest-neighbor interpolation is the simplest of all, since it is made of a square pulse. It satisfies the partition of unity, provided a slight asymmetry is introduced at the edges of the square pulse. Its expression is given by

    φ(x) = 1  for −1/2 ≤ x < 1/2,  φ(x) = 0  otherwise.

From that, we build the interpolated function by retaining, for each coordinate x, the single sample nearest to it.
The main interest of this basis function is its simplicity, which results in the most efficient of all implementations. In fact, for any coordinate x where it is desired to compute the value of the interpolated function f, there is only one sample f_k that contributes, no matter how many dimensions are involved. The price to pay is a severe loss of quality.
B. B-Splines
There is a whole family of basis functions made of B-splines β^n. These functions are given by [28]

    β^n(x) = (1/n!) Σ_{k=0}^{n+1} (−1)^k binom(n+1, k) (x + (n+1)/2 − k)₊ⁿ

where (x)₊ⁿ is the one-sided power function defined in (11). From Table I, it is easy to conclude that these B-spline functions enjoy the maximal order of approximation for a given support; conversely, they enjoy the minimal support for a given order of approximation. In addition, they are maximally continuous. They have many other interesting properties as well [28], [29]; most fall outside the scope of this paper, except perhaps the fact that a B-spline derivative can be computed recursively by

    (d/dx) β^n(x) = β^{n−1}(x + 1/2) − β^{n−1}(x − 1/2).

Then, computing the exact gradient of a signal given by a discrete sequence of interpolation coefficients {c_k} can be done as follows:

    (d/dx) f(x) = Σ_k c_k (β^{n−1}(x − k + 1/2) − β^{n−1}(x − k − 1/2))

where the (n − 1)-times continuous differentiability of B-splines ensures that the resulting derivative is smooth when n ≥ 3, continuous when n = 2, and bounded when n = 1.
• Degree n = 0: The B-spline of smallest degree β⁰ is almost identical to the nearest-neighbor basis function. They differ from one another only at the transition values, where we ask that β⁰ be symmetric with respect to the origin (the nearest-neighbor function is not), and at the same time that it satisfy the partition of unity. Thus, contrary to the nearest-neighbor case, it happens in some exceptional cases (evaluation at half-integers) that interpolation with β⁰ requires the computation of the average between two samples.

• Degree n = 1: The B-spline function β¹(x) = max(1 − |x|, 0) is also called linear interpolation. It enjoys a large popularity because the complexity of its implementation is very low, just above that of the nearest-neighbor; moreover, some consider that it satisfies Occam's razor principle by being the simplest interpolating basis function one can think of that builds a continuous function out of a sequence of discrete samples.

• Degrees n ≥ 2: No B-spline of degree n ≥ 2 benefits from the property of being interpolating; thus, no such high-degree B-spline should ever be used in the context of (1); (2) must be used instead. Unfortunately, some authors have failed to observe this rule, which led them to claim that B-splines typically blur data. Such claims are misleading, even though they can be repeatedly found in the published literature. A cubic B-spline is often used in practice. Its expression is given by

    β³(x) = 2/3 − |x|² + |x|³/2  for 0 ≤ |x| < 1
    β³(x) = (2 − |x|)³/6  for 1 ≤ |x| < 2
    β³(x) = 0  for 2 ≤ |x|.

Fig. 1. B-spline of third degree. (a) Function shape. (b) Equivalent interpolant.
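For reference, here is a direct transcription of this piecewise expression (the function name is ours):

import numpy as np

def cubic_bspline(x):
    """Evaluate the cubic B-spline beta^3 at a scalar or array argument."""
    ax = np.abs(np.asarray(x, dtype=np.float64))
    return np.where(
        ax < 1.0, 2.0 / 3.0 - ax**2 + ax**3 / 2.0,
        np.where(ax < 2.0, (2.0 - ax) ** 3 / 6.0, 0.0),
    )

Used as the kernel of the separable evaluation sketch of Section III-B, with support W = 4 and coefficients produced by the prefilter of Section II-D, this reproduces generalized cubic-spline interpolation.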
This basis function is not interpolating. As explained with (6), it is nonetheless possible to exhibit an infinite-support, interpolating basis function that allows one to build exactly the same interpolated function f. To give a concrete illustration of this fact, we show in Fig. 1 the noninterpolating cubic B-spline β³ along with its interpolating equivalent basis function; the latter is named a cubic cardinal spline. Graphically, the B-spline looks similar to a Gaussian; this is not by chance, since a B-spline can be shown to converge to a Gaussian when its degree increases. Already for a degree as small as n = 3, the match is amazingly close, since the maximal relative error between a cubic B-spline and a Gaussian with identical variance is only about 3.5%.

In the bottom part of Fig. 1, the cardinal spline displays decaying oscillations that are reminiscent of a sinc function. This is not by chance either.
Fig. 2. B-spline basis functions: approximation kernel for several degrees.
Indeed, a cardinal spline can be shown to converge to a sinc function when its degree increases [30], [31]. Throughout the paper, we give numerous reasons why it is more profitable to approximate a true sinc by the use of noninterpolating basis functions rather than by apodization. Even for moderate degrees, the spline-based approximations offer a sensible quality improvement over apodization for a given computational budget. In Fig. 2, we give the approximation kernel defined in (8) for B-splines of several degrees.
C. o-Moms
The family of functions that enjoy maximal order and minimal support is called Moms (or splines of minimal support, in the terminology of [26]). It can be shown that any of these functions can be expressed as the weighted sum of a B-spline and of its derivatives [26], [32], such that the distribution u of the decomposition theorem has a vanishing support:

    Moms^n(x) = β^n(x) + Σ_m λ_m (d^m/dx^m) β^n(x).

B-splines are those Moms functions that are maximally differentiable. There exist several other interesting classes in this family; in particular, the o-Moms functions [32] are such that their least-squares approximation constant C_min is minimal. This constant is closely related to C_φ: the least-squares error behaves like ε_min(T) ≈ C_min T^L ||f^{(L)}||_{L2}, while the interpolation error behaves like ε(T) ≈ C_φ T^L ||f^{(L)}||_{L2} with C_φ ≥ C_min. The constant C_min corresponds to the least-squares error kernel given by

    E_min(ω) = 1 − |φ̂(ω)|² / Σ_{k∈Z} |φ̂(ω + 2πk)|².

After substitution of E by E_min in (7), one is left with an approximation error that corresponds to the least-squares projection of a signal onto the set of shifted basis functions φ(x − k). This error is the smallest achievable in a quadratic sense, and requires that the set of discrete constraints expressed in (3) be replaced by a continuous constraint [33]. While it is also possible to find Moms functions such that their asymptotic approximation constant C_φ reaches its absolute minimum, experiments have shown that these particular functions are in fact less favorable for interpolation than are o-Moms.

The o-Moms functions are indexed by their polynomial degree n and they are symmetric. Their knots are identical to those of the B-spline they descend from. Moreover, they have the same support as β^n, that is, W = n + 1; this support is the smallest achievable for an approximation order L = n + 1. Although their order is identical to that of a B-spline of same degree, their approximation error constant C_φ is much smaller. By construction, their least-squares approximation constant C_min is the smallest possible.

These functions are not interpolating; thus, they need a way to compute the sequence of coefficients {c_k} required for the implementation of (2). Fortunately, the same algorithm as for the B-splines can be used. The o-Moms functions of degree zero and one are identical to β⁰ and β¹, respectively. The o-Moms functions of higher degree can be determined recursively in the Fourier domain [32]; we give here the expression for n = 3:

    o-Moms³(x) = β³(x) + (1/42) (d²/dx²) β³(x)
               = |x|³/2 − |x|² + |x|/14 + 13/21  for 0 ≤ |x| < 1
               = −|x|³/6 + |x|² − 85|x|/42 + 29/21  for 1 ≤ |x| < 2
               = 0  for 2 ≤ |x|.
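A direct transcription of this cubic o-Moms expression follows (the function name is ours). Being noninterpolating, it must be used in (2), with coefficients obtained by the same kind of recursive filter as for B-splines; its own pole values are not reproduced here.

import numpy as np

def omoms3(x):
    """Cubic o-Moms kernel: beta^3(x) + (1/42) * (d^2/dx^2) beta^3(x)."""
    ax = np.abs(np.asarray(x, dtype=np.float64))
    return np.where(
        ax < 1.0, ax**3 / 2.0 - ax**2 + ax / 14.0 + 13.0 / 21.0,
        np.where(ax < 2.0,
                 -ax**3 / 6.0 + ax**2 - 85.0 * ax / 42.0 + 29.0 / 21.0, 0.0),
    )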
Fig. 3. o-Moms of third degree. (a) Function shape. (b) Equivalent interpolant.

As a curiosity, we point out in Figs. 3 and 4 that this basis function has a slope discontinuity at the origin; thus, its regularity is C⁰ (in addition, it has other slope discontinuities at |x| = 1 and |x| = 2). It is nevertheless optimal in the sense described.
D. Schaum

Like the o-Moms, the pseudo-Lagrangian basis functions proposed by Schaum [17] can also be represented as a weighted sum of B-splines and of their even-order derivatives. They have the same order and the same support as B-splines and o-Moms. Their main interest is that they are interpolating. Their disadvantage with respect to both o-Moms and B-splines is a worse approximation constant C_φ: for example, considering an approximation order L = 4, the value reached by the cubic o-Moms is the smallest of the three; the constant for the cubic spline is more than twice as large, while the cubic Schaum loses an additional order of magnitude. Schaum functions are discontinuous for even degrees and are C⁰ for odd degrees.
E. Dodgson

Dodgson [12] has proposed a basis function built out of quadratic polynomials. Unfortunately, it fails to be a member of the family of Moms. While it has a support W = 3, its approximation order is only L = 2, which is no higher than in the linear interpolation case, which has the smaller support W = 2. In return, its constant of approximation C_φ is about five times smaller than in the linear case.

F. Keys

The principal reason for the popularity enjoyed by the family of Keys functions is the fact that they perform better than linear interpolation, while being interpolating [15]. Thus, they do not require the determination of interpolation coefficients, and the classical equation (1) can be used. These functions are made of piecewise cubic polynomials and depend on a parameter a. Their general expression is

    u_a(x) = (a + 2)|x|³ − (a + 3)|x|² + 1  for 0 ≤ |x| < 1
    u_a(x) = a|x|³ − 5a|x|² + 8a|x| − 4a  for 1 ≤ |x| < 2
    u_a(x) = 0  for 2 ≤ |x|.

Comparing this expression with that of the cubic spline, it is apparent that both require the computation of piecewise-polynomials of the same support. However, their approximation orders differ: the best order that can be reached by a Keys function is L = 3, for the special value a = −1/2, while the cubic spline has order L = 4. This extra order for β³ comes at the cost of computing a sequence of coefficients {c_k} for use in (2). However, by using the recursive filtering approach, this cost can be made negligible. Altogether, the gain in speed offered by the Keys function is not enough to counterbalance the loss in quality when comparing it with β³. Moreover, the regularity of Keys is C¹, which is one less than that of the cubic spline.
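For completeness, here is a transcription of the Keys kernel (the names and the integer-node check are ours):

import numpy as np

def keys(x, a=-0.5):
    """Piecewise-cubic Keys kernel; a = -1/2 yields its best order, L = 3."""
    ax = np.abs(np.asarray(x, dtype=np.float64))
    return np.where(
        ax < 1.0, (a + 2.0) * ax**3 - (a + 3.0) * ax**2 + 1.0,
        np.where(ax < 2.0, a * (ax**3 - 5.0 * ax**2 + 8.0 * ax - 4.0), 0.0),
    )

# Keys is interpolating: unity at the origin, zero at nonzero integers,
# so it can be used directly in (1), without any prefiltering step.
print(keys([-2.0, -1.0, 0.0, 1.0, 2.0]))   # [0, 0, 1, 0, 0]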
G. Meijering

Meijering et al. [16] have designed piecewise-polynomial basis functions that all have the same approximation order L = 3 and that are interpolating. What distinguishes them from one another are their support W and their polynomial degree n. The benefit of higher values for these characteristics is to allow for a stronger regularity; also, the constant of approximation C_φ gets smaller. With regard to approximation properties, nevertheless, even a high-degree Meijering basis function with a large support is no better than an o-Moms function of much smaller support and much smaller degree.

We give in Fig. 5 the error kernel of several basis functions that all have the identical order of approximation L = 3. The corresponding basis functions have a wide range of polynomial degrees and a wide range of supports, which translates into some very large differences with respect to computational requirements. Nevertheless, the approximation quality is essentially the same for all these kernels (compare to the broader selection of performances to choose from in Fig. 2). One possible selection criterion is the constant of approximation C_φ; we give it in Table II.
H. German

German [13] has proposed an interpolating basis function with a fifth-order approximation. Its support is larger than necessary, which leaves some freedom to the designer; in this case, this freedom has been used to increase the regularity. Note that the quartic B-spline enjoys the same order for a shorter support, while it has the maximal regularity C³; moreover, its approximation constant is much better than that of the fifth-order German basis function.

Fig. 4. o-Moms of third degree (central part).

Fig. 5. Various basis functions of same approximation order L = 3. A sextic B-spline of degree n = 6 and order L = 7 has been added for comparison, because it offers much better performance than any of the third-order basis functions, while it has essentially the same experimental computational cost as one of them (septimic Meijering).

TABLE II
CONSTANT OF APPROXIMATION FOR VARIOUS BASIS FUNCTIONS WITH L = 3

VI. SINC-BASED BASIS FUNCTIONS

A. Bandlimitedness
For a long time, sinc interpolation, which corresponds to ideal filtering, has been the holy grail of geometric operations. Nowadays, researchers acknowledge that, while sinc interpolation can be realized exactly under special circumstances (e.g., translation of a periodic signal through discrete Fourier transform operations), in general it can only be approximated. Another drawback of the sinc function is that it decays only slowly, which tends to spread out ringing artifacts.

The sinc function provides error-free interpolation of bandlimited functions. There are two difficulties associated with this statement. The first one is that the class of bandlimited functions represents but a tiny fraction of all possible functions; moreover, they often give a distorted view of the physical reality in an imaging context. Think of the air/matter transition in a CT scan: as far as classical physics is concerned, this transition is abrupt and cannot be expressed as a bandlimited function. Further, there exists obviously no way at all to apply an antialiasing filter to physical matter before sampling; most patients would certainly object to any attempt of the sort.
The second difficulty is that the support of the sinc function is infinite. An infinite support is not too bothersome, provided an efficient algorithm can be found to implement interpolation with another equivalent basis function that has a finite support. This is exactly the trick we used with B-splines and o-Moms. Unfortunately, no function can be at the same time bandlimited and finite-support, which precludes any hope of finding an equivalent finite-support basis function for use in (6). Thus, the classical solution is simply to truncate sinc itself by multiplying it with a finite-support window; this process is named apodization. A large catalog of apodizing windows is available in [14], along with their detailed analysis.
B. Dirichlet Apodization

Dirichlet apodization is perhaps the laziest approach, since the window of total width W is simply an enlarged version of the square pulse β⁰, which requires no more computational effort than a test for support membership. The apodized basis function is given by

    sinc^D_W(x) = sinc(x)  for |x| < W/2,  sinc^D_W(x) = 0  otherwise

where W is an even integer. The price to pay for laziness is bad quality. First, the regularity of this function is low, since it is not differentiable. More important, its approximation order is as bad as L = 0. This means that a reduction of the sampling step does not necessarily result in a reduction of the interpolation error.
C. Hanning Apodization

Apodization, being defined as the multiplication of a sinc function by some window, corresponds in the Fourier domain to a convolution-based construction. The Hanning window is one out of several attempts to design a window that has favorable properties in the Fourier domain. The result is

    sinc^H_W(x) = sinc^D_W(x) · (1/2)(1 + cos(2πx/W)).

With this window, the order of approximation of Hanning interpolation is no better than that of Dirichlet interpolation; the constant C_φ is significantly improved, though, being substantially smaller for sinc^H_4 than for sinc^D_4. Being continuously differentiable, Hanning is also more regular than Dirichlet.
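Both apodized kernels are straightforward to transcribe; the sketch below assumes numpy's convention sinc(x) = sin(πx)/(πx), and the function names are ours.

import numpy as np

def sinc_dirichlet(x, W=4):
    """Truncated sinc: sinc(x) inside |x| < W/2, zero outside."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(np.abs(x) < W / 2.0, np.sinc(x), 0.0)

def sinc_hanning(x, W=4):
    """Hanning-apodized sinc: the same truncation, weighted by the
    raised-cosine window 0.5 * (1 + cos(2*pi*x/W))."""
    x = np.asarray(x, dtype=np.float64)
    window = 0.5 * (1.0 + np.cos(2.0 * np.pi * x / W))
    return sinc_dirichlet(x, W) * window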
VII. COST-PERFORMANCE ANALYSIS

A. Generalities

The single most influential parameter that dictates the computational cost is the size W of the support of the basis function φ. Second to it, we find the cost of evaluating φ(x) for a series of arguments x. Last, there is a small additional cost associated with the computation of the interpolation coefficients {c_k} in the context of (2). We want to mention here that this overhead is negligible, especially in the practical case where the coefficients need to be computed once only, before several interpolation operations are performed. This situation arises often in the context of iterative algorithms and in the context of interactive imaging; moreover, it disappears altogether when the images are stored directly as a set of coefficients {c_k} rather than as a set of samples {f_k}. Thus, we will ignore this overhead in the theoretical performance analysis that follows.
B. Cost

Let us assume that we want to compute the interpolated value f(x) of a 2-D image at argument x, using a separable basis function of finite support W. For each output value, we first need to perform W multiplications and W additions to compute one inner sum of the separable form; this computation is embedded in a similar outer loop that is also executed W times. Altogether, we need on the order of W² multiplications and additions in 2-D; more generally, we need on the order of W^q operations in q dimensions, where we consider that a multiplication is worth an addition.

To this cost, one must add q W times the cost of the evaluation of the separable basis function itself. When the latter is piecewise-polynomial, we need on average a few tests to determine which of the polynomial pieces to use; once a polynomial is selected, evaluation by Horner's scheme further requires n multiplications and n additions. Putting these results together, the global cost of all operations for a piecewise-polynomial basis function grows like W^q, with a modest per-evaluation constant.

In the case of the sinc family, each evaluation requires the computation of a transcendental function and the multiplication by the apodization window. This per-evaluation cost does not depend on the support W; hence, the global cost for an apodized sinc basis function also grows like W^q, but with a larger per-evaluation constant. If we consider that the transcendental functions sine or cosine are worth two multiplications each, a Hanning window costs more per evaluation than a Bartlett window, which itself costs more than the Dirichlet case.

It follows from these theoretical considerations that there exists a support beyond which a sinc-based basis function comes at a lesser computational cost than a polynomial-based one in two dimensions. For images or volumes, it is important to realize that this result does not imply that the use of sinc functions is more efficient than that of polynomials, because sinc typically requires a much larger support than polynomials to reach the same quality.
C. Performance

In Fig. 6, we present a comparison of the error kernel for several basis functions of the same support W = 4. It includes the cubic B-spline, the cubic o-Moms, and the cubic Schaum as examples of polynomial functions, and Dirichlet and Hanning as examples of apodized sinc. We observe that the sinc-based basis functions do not reproduce the constant. Since most of the energy of virtually any image is concentrated at low frequencies, it is easy to predict that these functions will perform poorly when compared to polynomial-based basis functions. We will see in the experimental section that this prediction is fulfilled; for now, we limit our analysis to that of the more promising polynomial cases.

On the grounds of (7), we can select a specific function f to sample-and-interpolate, and we can predict the amount of resulting squared interpolation error. As a convenient simplification, we now assume that this function has a constant-value power spectrum; in this case, it is trivial to obtain the interpolation error by integrating the curves in Fig. 6. Table III gives the resulting values as a signal-to-noise ratio (SNR) expressed in dB, where the integration has been performed numerically over the domain ω ∈ [−π, π], and where we have set the natural sampling interval T = 1. These results have been obtained by giving the same weight to all frequencies up to the Nyquist rate; if low frequencies are considered more important than high frequencies, then the order of approximation L and its associated constant C_φ are the most representative quality indexes.

Fig. 6. Comparison of basis functions of same support W = 4.

TABLE III
PERFORMANCE INDEX FOR WHITE NOISE

D. Tradeoff

Fig. 7 presents the combination of the theoretical results of computation time and of those of interpolation quality. In order to show the versatility of the approximation kernel, we have changed the function f from bandlimited white noise (Table III) to a function that corresponds to a Markov model capturing the correlation between neighboring data samples. For a suitable value of the correlation coefficient, the resulting power spectrum is representative of a large variety of real images [34]. We have performed its integration against the approximation error kernel over the domain ω ∈ [−π, π] only, so as to conform to bandlimitedness.

VIII. EXPERIMENTS

A. Protocol

To magnify the degradation that results from interpolation, we adopt the following strategy, whose goal is the highlighting of (as opposed to the avoidance of) the deleterious effects of interpolation: we apply a succession of rotations to some image, such that the output of any given step is used as input for the next step and such that the rotations sum up to a full turn. We then compare the initial image to the final output. To limit potential boundary effects, we perform the final comparison on some central portion of the image only. Also, we avoid any interaction between interpolation and quantization by performing all computations in a floating-point format. Fig. 8 shows the central portion of the image we want to rotate.
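A minimal transcription of this protocol using scipy's spline-based rotation; the number of steps, the crop size, and the function name are our own choices rather than the paper's exact settings.

import numpy as np
from scipy import ndimage

def rotation_chain_snr(image, steps=8, order=3):
    """Apply `steps` successive rotations totalling 360 degrees, then report
    the SNR (in dB) between original and result over a central crop.
    order=3 selects cubic-spline generalized interpolation; prefilter=True
    performs the coefficient determination of Section II-D."""
    rotated = image.astype(np.float64)
    for _ in range(steps):
        rotated = ndimage.rotate(rotated, 360.0 / steps, reshape=False,
                                 order=order, mode='mirror', prefilter=True)
    m = image.shape[0] // 4                    # keep the central portion only
    crop = (slice(m, -m), slice(m, -m))
    err = image[crop] - rotated[crop]
    return 10.0 * np.log10(np.sum(image[crop] ** 2) / np.sum(err ** 2))

With order=0 or order=1, the same routine reproduces the nearest-neighbor and linear baselines of Section VIII-B.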
B. Nearest-Neighbor and Linear Interpolation
Since the circular pattern of Fig. 8 is characterized by a radial
sinusoidal chirp with higher frequencies in the center than in the
periphery, the effect of the interpolation-associated filtering can
be read visually as increases or—more often—decreases in the
Fig. 7. Theoretical performance index for a Markov-like power spectrum. Triangles correspond to interpolating basis functions, circles to noninterpolating ones.

Fig. 8. Central portion of test images. (a) CT scan. (b) Lena. (c) Synthetic circular pattern.

Fig. 9. Rotation of a circular pattern. (a) Nearest-neighbor. (b) Linear interpolation.
degree of modulation of those spatial frequencies. Fig. 9 shows
the effect of the rotation experiment when using the two most
commonly found interpolants. Nearest-neighbor interpolation
results in a lot of data shuffling, and the image is highly cluttered. Linear interpolation results in the loss of high frequencies,
which corresponds to strong blurring. These losses cannot be
compensated; they correspond to the prediction made in Fig. 2,
Fig. 10. Difference between the original and the several-times rotated image of a CT scan. Positive values are light and negative values are dark. A zero difference is represented by middle gray. (a) Nearest-neighbor interpolation. (b) Linear interpolation.
according to which linear interpolation (equivalently, the B-spline β¹) performs poorly when compared to other cases. Fig. 10 shows the differential effect of the rotation experiment on the CT scan. In this case, we represent error images, the result of subtracting the rotated image from the original one. We observe that the data-shuffling aspect of nearest-neighbor interpolation results in widespread noise, while the low-pass aspect of linear interpolation results in artifacts mostly near edges.
Fig. 11. Rotation of a circular pattern. (a) Keys. (b) Cubic spline. (c) Cubic o-Moms.

Fig. 12. Difference image for the CT scan. (a) Keys. (b) Cubic spline. (c) Cubic o-Moms.
Fig. 13. Rotation of a circular pattern. (a) Dirichlet (W = 4). (b) Hanning (W = 4). (c) Sextic spline (W = 7, less computation time than Hanning with W = 4).
C. Cubic Interpolation

Figs. 11 and 12 propose three basis functions of identical support which have essentially the same computational cost. On the left, despite the use of the optimal parameter a = −1/2, Keys offers the poorest visual performance, since the central part of the figure is blurred. In addition, close inspection (particularly on a monitor screen) discloses blocking artifacts that betray themselves as moiré patterns. Those are absent with cubic spline and cubic o-Moms interpolation, although patterns unrelated to interpolation may be present on paper, because of the dithering process inherent in printing these figures. More important, cubic spline interpolation results in less blurring, and cubic o-Moms in even less. Fig. 11 shows the resulting image for the circular pattern, while Fig. 12 shows the difference image for the CT scan.
D. Sinc-Based Interpolation
Fig. 13(a) and (b) shows the result of using two different truncated and apodized approximations of sinc, where the support is the same as in the functions of Figs. 11 and 12. The test images of Fig. 8 have a nonnull average; since an apodized version of sinc does not reproduce this constant value faithfully, each incremental rotation results in a drift of the average value of the image. This drift manifests itself as images that appear too dark (negative drift) or too light (positive drift). We conclude that sinc performs extremely poorly when compared to other basis functions of the same support: not only does the mean value drift, but both blocking and excessive blurring artifacts are present as well.

TABLE IV
RESULTS IN NUMERICAL FORM
Even though the support of the sinc-based basis functions presented in Fig. 13 is short, the associated computation time is already substantial. Fig. 13(c) shows that much higher quality can
be achieved in less time with a more efficient, noninterpolating
basis function (here, sextic B-spline). Fig. 13(c) is visually indistinguishable from Fig. 8(c).
E. Discussion
Table IV presents in succinct form the numeric results of these experiments, along with some additional ones. In particular, we also provide the results for the standard Lena test image and for the synthetic circular test pattern of Fig. 8. The execution time is given in seconds and corresponds to the duration of a single rotation of a square image 512 pixels on a side. The computer is a Power Macintosh 9600/350, and the measure of the SNR is defined as

    SNR = 10 log₁₀ ( Σ_k f_k² / Σ_k (f_k − f̃_k)² )

where f is the original (nonnull average) data, and where f̃ is given by the chaining of all rotations.

Fig. 14. Summary of the main experimental results for the CT scan. Triangles correspond to interpolating basis functions, circles to noninterpolating ones. The hollow circles give the computational time for an accelerated implementation.
These results point out some of the difficulties associated with the analysis of the performance of a basis function φ. For example, the computation time should ideally depend on the number of mathematical operations only. In reality, the optimization effort put into implementing each variation with one basis function or another also has some influence. For instance, we have a fast implementation of the cubic spline and of the cubic o-Moms (in italic in Table IV and shown as hollow circles in Fig. 14) that runs in shorter time than the normal implementation (in bold in Table IV and shown as filled circles in Fig. 14). We have nevertheless shown the results of all implementations, because this corresponds to a somewhat unified level of optimization. The time spent in the determination of the interpolation coefficients, with a filtering algorithm implemented according to Section II-D, is included in the reported results.
We have provided results for two classes of sinc functions. The class labeled (−) represents traditional apodization, while the class labeled (+) represents an apodized-normalized version. In the latter case, we have modified the basis function by renormalizing the apodized sinc by the sum of its integer shifts,

    φ(x) = sinc(x) w(x) / Σ_{k∈Z} sinc(x − k) w(x − k)

where w is the original apodization window. The purpose of this operation is to restore some better approximation order to the sinc-based cases, at the cost of additional processing. As can be seen in Table IV, this is successful only for large supports and incurs an unbearably long computation time. Meanwhile, the resulting quality is obviously suboptimal when compared to the much better piecewise-polynomial cases.
Fig. 14 proposes a graphic summary of the most relevant results (CT scan, quality better than 20 dB, execution time shorter
than 2 s). It is interesting to compare this figure with Fig. 7; the
similarity between them confirms that our theoretical ranking of
basis functions was justified. The difference between the interpolation methods is more pronounced in the experimental case
because it has been magnified by the number of rotations performed.
IX. CONCLUSION
We have analyzed two approaches to the exact interpolation
of data given by regular samples. In classical interpolation, the
basis functions must be interpolating, while noninterpolating
basis functions are allowed in generalized interpolation. We
have tried to dispel the too-commonly held belief according to
which noninterpolating functions (typically, cubic B-splines)
should be avoided. This misconception, present in many books
or reports on interpolation, arose because practitioners have
sometimes attempted to use noninterpolating basis functions
without the prefiltering step that is required to achieve a consistent implementation of the generalized interpolation. We have
provided a unified framework for the theoretical analysis of the
performance of both methods. We have applied this analysis
to specific cases that involve piecewise-polynomial functions
as well as sinc-based interpolants. We have performed 2-D
experiments that support the 1-D theory.
We conclude from both theoretical and practical concerns
that the most important index of quality is the approximation
order of the basis function, its support being the most important
parameter with respect to efficiency. Thus, the class of Moms
functions stands apart as the best achievable compromise
between quality and speed. We have observed that many
formerly proposed basis functions, such as Dodgson, Keys,
Meijering, German, and any of the apodized versions of a sinc,
do not belong to this class. Experiments have confirmed that
these basis functions do indeed perform poorly. In particular,
no sinc-based interpolation results in an acceptable quality
with regard to its computational demand. In addition, they are
difficult to handle analytically, which leads to unnecessary
complications for simple operations such as differentiation or
integration.
The more favorable class of Moms functions can be further
divided into subclasses, the most relevant being B-splines,
Schaum, and o-Moms. Of those three, the Schaum functions
are the only representatives that are interpolating. Nevertheless,
experiments have shown that this strong constraint is detrimental to the performance; we observe that the time spent in
computing the interpolation coefficients required by B-splines
and o-Moms is a small, almost negligible investment that offers
a high payoff in terms of quality. For this reason, we discourage
the use of Schaum and advocate generalized interpolation
instead, with noninterpolating basis functions such as B-splines
and o-Moms.
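To give a concrete sense of how small this investment is, here is a minimal sketch of the standard recursive prefilter that converts samples into cubic B-spline interpolation coefficients; the truncation horizon and the simplified mirror-boundary initialization are our own choices.

```python
import numpy as np

def cubic_bspline_coefficients(samples):
    """Convert uniform samples into cubic B-spline interpolation
    coefficients with one causal and one anticausal recursive pass."""
    c = np.asarray(samples, dtype=float).copy()
    n = len(c)
    z = np.sqrt(3.0) - 2.0  # the single pole of the cubic B-spline filter
    c *= 6.0                # overall gain (1 - z)(1 - 1/z) = 6
    # causal initialization: truncated sum, consistent with mirror boundaries
    horizon = min(n, int(np.ceil(np.log(1e-12) / np.log(abs(z)))))
    c[0] = sum(c[k] * z**k for k in range(horizon))
    for k in range(1, n):   # causal filtering
        c[k] += z * c[k - 1]
    # anticausal initialization and filtering
    c[n - 1] = (z / (z * z - 1.0)) * (c[n - 1] + z * c[n - 2])
    for k in range(n - 2, -1, -1):
        c[k] = z * (c[k + 1] - c[k])
    return c
```

The cost is one causal and one anticausal pass, i.e., roughly two multiply-adds per sample for the cubic case, which is negligible next to the cost of resampling itself.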
Finally, comparing B-splines with o-Moms, we conclude that
the lack of regularity of the latter makes them less suitable than
B-splines for imaging problems that require the computation
of derivatives, for example to perform operations such as edge
detection or image-based optimization (e.g., snake contouring,
registration). These operations are very common in medical
imaging. Thus, despite a poorer objective result than o-Moms,
B-splines are very good candidates for the construction of an
image model. Moreover, they enjoy additional properties such
as easy analytical manipulation, several recursion relations, the
two-scale relation (of great importance for wavelets, a domain
that has strong links with interpolation [29], [35]), minimal
curvature for cubic B-splines, easy extension to inexact interpolation (smoothing splines, least-squares [33]), simplicity
of their parametrization (a single number—their degree—is
enough to describe them), and possible generalization to
irregular sampling.
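Since derivative computation is singled out above, here is a minimal sketch of how the derivative of a cubic B-spline model can be evaluated exactly, using the standard identity that reduces the derivative of a B-spline to a difference of two shifted B-splines of the next-lower degree; the function names are ours.

```python
import numpy as np

def beta2(x):
    """Quadratic B-spline (regularity C^1, support 3)."""
    ax = abs(x)
    if ax < 0.5:
        return 0.75 - ax**2
    if ax < 1.5:
        return 0.5 * (ax - 1.5)**2
    return 0.0

def beta3_prime(x):
    """Derivative of the cubic B-spline, via the identity
    (beta3)'(x) = beta2(x + 1/2) - beta2(x - 1/2)."""
    return beta2(x + 0.5) - beta2(x - 0.5)

def model_derivative(c, x):
    """f'(x) for the cubic B-spline model f(x) = sum_k c[k] * beta3(x - k)."""
    k0 = int(np.floor(x))
    return sum(c[k] * beta3_prime(x - k)
               for k in range(max(0, k0 - 1), min(len(c), k0 + 3)))
```

No finite differences are involved: the derivative is that of the continuous model itself, which is what gradient-based registration or edge detection requires.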
APPENDIX I

A. Mirror Extension

We want to extend a finite, discrete signal $\{f_k\}_{k=0,\ldots,N-1}$ of length $N$ into an infinite-length, discrete signal by using a mirror extension. To obtain an explicit sample value for any argument $k \in \mathbb{Z}$, we write $f_k = f_{k_0}$, where $k_0 = \min(p,\, 2N - 2 - p)$ and where $p = k \bmod (2N - 2)$.
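A minimal sketch of this folding rule (the helper name is ours), assuming whole-sample symmetry with period 2N - 2 as above:

```python
def mirror_index(k, N):
    """Fold an arbitrary integer index k into {0, ..., N-1} using
    whole-sample mirror symmetry; the extended signal has period 2N - 2."""
    if N == 1:
        return 0
    p = k % (2 * N - 2)  # Python's % is nonnegative for a positive modulus
    return p if p < N else 2 * N - 2 - p

# Example: a signal of length 4 extends as ... 2 1 [0 1 2 3] 2 1 0 ...
assert [mirror_index(k, 4) for k in range(-2, 6)] == [2, 1, 0, 1, 2, 3, 2, 1]
```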
B. Piecewise-Polynomial Basis

We give here a technical summary of the principal characteristics of some uniform piecewise-polynomial basis functions. For each function, we give its piecewise-polynomial representation when it takes a compact form, along with its degree, regularity, support, approximation order $L$, and whether or not it is interpolating. We draw the attention of the reader to the following facts:
• the higher the approximation order, the better the quality;
• for basis functions of identical approximation order, the smaller the approximation constant, the better the quality. It is, however, meaningless to compare approximation constants associated to basis functions that differ in approximation order;
• the smaller the support, the less the computational burden;
• never use a noninterpolating basis function in the context of (1). Use (2) instead, and determine the interpolation coefficients as explained in Section II-D.

1) First-Order B-spline (Symmetric Nearest-Neighbor): Piecewise-polynomial representation
$$\beta^{0}(x) = \begin{cases} 1, & |x| < \tfrac{1}{2} \\ \tfrac{1}{2}, & |x| = \tfrac{1}{2} \\ 0, & \text{otherwise.} \end{cases}$$
Degree 0, discontinuous, support 1, approximation order $L = 1$, interpolating.

2) Second-Order B-spline (Linear): Piecewise-polynomial representation
$$\beta^{1}(x) = \begin{cases} 1 - |x|, & |x| < 1 \\ 0, & \text{otherwise.} \end{cases}$$
Degree 1, regularity $C^{0}$, support 2, approximation order $L = 2$, interpolating.

3) Second-Order Dodgson (Quadratic): Piecewise-polynomial representation
$$\varphi(x) = \begin{cases} 1 - 2|x|^{2}, & 0 \le |x| < \tfrac{1}{2} \\ |x|^{2} - \tfrac{5}{2}|x| + \tfrac{3}{2}, & \tfrac{1}{2} \le |x| < \tfrac{3}{2} \\ 0, & \text{otherwise.} \end{cases}$$
Degree 2, regularity $C^{0}$, support 3, approximation order $L = 2$, interpolating.

4) Third-Order Schaum (Quadratic): Piecewise-polynomial representation
$$\varphi(x) = \begin{cases} 1 - |x|^{2}, & 0 \le |x| < \tfrac{1}{2} \\ \tfrac{1}{2}(|x| - 1)(|x| - 2), & \tfrac{1}{2} < |x| < \tfrac{3}{2} \\ 0, & \text{otherwise.} \end{cases}$$
Degree 2, discontinuous, support 3, approximation order $L = 3$, interpolating.

5) Third-Order Keys (Cubic): Piecewise-polynomial representation
$$\varphi(x) = \begin{cases} \tfrac{3}{2}|x|^{3} - \tfrac{5}{2}|x|^{2} + 1, & 0 \le |x| < 1 \\ -\tfrac{1}{2}|x|^{3} + \tfrac{5}{2}|x|^{2} - 4|x| + 2, & 1 \le |x| < 2 \\ 0, & \text{otherwise.} \end{cases}$$
Degree 3, regularity $C^{1}$, support 4, approximation order $L = 3$, interpolating.

6) Third-Order Meijering (Quintic): Degree 5, approximation order $L = 3$, interpolating.

7) Third-Order B-spline (Quadratic): Piecewise-polynomial representation
$$\beta^{2}(x) = \begin{cases} \tfrac{3}{4} - |x|^{2}, & 0 \le |x| < \tfrac{1}{2} \\ \tfrac{1}{2}\bigl(|x| - \tfrac{3}{2}\bigr)^{2}, & \tfrac{1}{2} \le |x| < \tfrac{3}{2} \\ 0, & \text{otherwise.} \end{cases}$$
Degree 2, regularity $C^{1}$, support 3, approximation order $L = 3$, noninterpolating.

8) Third-Order Meijering (Septimic): Degree 7, approximation order $L = 3$, interpolating.

9) Third-Order o-Moms (Quadratic): Degree 2, support 3, approximation order $L = 3$, noninterpolating.

10) Fourth-Order Schaum (Cubic): Piecewise-polynomial representation
$$\varphi(x) = \begin{cases} \tfrac{1}{2}(1 - |x|^{2})(2 - |x|), & 0 \le |x| < 1 \\ \tfrac{1}{6}(|x| - 1)(|x| - 2)(3 - |x|), & 1 \le |x| < 2 \\ 0, & \text{otherwise.} \end{cases}$$
Degree 3, regularity $C^{0}$, support 4, approximation order $L = 4$, interpolating.

11) Fourth-Order B-spline (Cubic): Piecewise-polynomial representation
$$\beta^{3}(x) = \begin{cases} \tfrac{2}{3} - |x|^{2} + \tfrac{1}{2}|x|^{3}, & 0 \le |x| < 1 \\ \tfrac{1}{6}(2 - |x|)^{3}, & 1 \le |x| < 2 \\ 0, & \text{otherwise.} \end{cases}$$
Degree 3, regularity $C^{2}$, support 4, approximation order $L = 4$, noninterpolating.

12) Fourth-Order o-Moms (Cubic): Piecewise-polynomial representation
$$\varphi(x) = \beta^{3}(x) + \frac{1}{42}\,\frac{d^{2}}{dx^{2}}\beta^{3}(x) = \begin{cases} \tfrac{1}{2}|x|^{3} - |x|^{2} + \tfrac{1}{14}|x| + \tfrac{13}{21}, & 0 \le |x| < 1 \\ -\tfrac{1}{6}|x|^{3} + |x|^{2} - \tfrac{85}{42}|x| + \tfrac{29}{21}, & 1 \le |x| < 2 \\ 0, & \text{otherwise.} \end{cases}$$
Degree 3, regularity $C^{0}$, support 4, approximation order $L = 4$, noninterpolating.

13) Fifth-Order German (Quartic): Degree 4, approximation order $L = 5$, interpolating.
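As a companion to the summary above, the following sketch evaluates three of the listed kernels and numerically checks the partition of unity, a necessary condition for an approximation order of at least one; the formulas are those given above, and the function names are ours.

```python
def keys(x):
    """Keys cubic (a = -1/2): interpolating, approximation order 3."""
    ax = abs(x)
    if ax < 1:
        return 1.5 * ax**3 - 2.5 * ax**2 + 1.0
    if ax < 2:
        return -0.5 * ax**3 + 2.5 * ax**2 - 4.0 * ax + 2.0
    return 0.0

def beta3(x):
    """Cubic B-spline: noninterpolating, approximation order 4."""
    ax = abs(x)
    if ax < 1:
        return 2/3 - ax**2 + ax**3 / 2
    if ax < 2:
        return (2 - ax)**3 / 6
    return 0.0

def omoms3(x):
    """Cubic o-Moms: noninterpolating, order 4, minimal approximation constant."""
    ax = abs(x)
    if ax < 1:
        return 0.5 * ax**3 - ax**2 + ax / 14 + 13/21
    if ax < 2:
        return -(ax**3) / 6 + ax**2 - 85/42 * ax + 29/21
    return 0.0

# Partition of unity: the integer shifts of each kernel sum to one.
for phi in (keys, beta3, omoms3):
    print(phi.__name__, sum(phi(0.3 - k) for k in range(-3, 4)))  # each ~1.0
```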
ACKNOWLEDGMENT
The authors thank the anonymous reviewers for having
brought to their attention various sources for the demonstration
of the decomposition theorem of Section IV-D.
REFERENCES
[1] E. Maeland, “On the comparison of interpolation methods,” IEEE Trans.
Med. Imag., vol. 7, no. 3, pp. 213–217, Sept. 1988.
[2] J. L. Ostuni, A. K. S. Santha, V. S. Mattay, D. R. Weinberger, R. L.
Levin, and J. A. Frank, “Analysis of interpolation effects in the reslicing
of functional MR images,” J. Comput. Assist. Tomogr., vol. 21, no. 5, pp.
803–810, 1997.
[3] R. W. Parrot, M. R. Stytz, P. Amburn, and D. Robinson, “Toward statistically optimal interpolation for 3-D medical imaging,” IEEE Eng. Med.
Biol., vol. 12, no. 5, pp. 49–59, Sept.–Oct. 1993.
[4] M. Haddad and G. Porenta, “Impact of reorientation algorithms on quantitative myocardial SPECT perfusion imaging,” J. Nucl. Med., vol. 39,
no. 11, pp. 1864–1869, Nov. 1998.
[5] B. Migeon and P. Marche, “In vitro 3D reconstruction of long bones
using B-scan image processing,” Med. Biol. Eng. Comput., vol. 35, no.
4, pp. 369–372, July 1997.
[6] M. R. Smith and S. T. Nichols, “Efficient algorithms for generating interpolated (zoomed) MR images,” Magn. Reson. Med., vol. 7, no. 2, pp.
156–171, June 1988.
[7] S. D. Fuller, S. J. Butcher, R. H. Cheng, and T. S. Baker, “Three-dimensional reconstruction of icosahedral particles—The uncommon line,” J.
Struct. Biol., vol. 116, no. 1, pp. 48–55, Jan.–Feb. 1996.
[8] F. M. Weinhaus and V. Devarajan, “Texture mapping 3D models of real-world scenes,” ACM Comput. Surv., vol. 29, no. 4, pp. 325–365, Dec. 1997.
[9] T. Möller, R. Machiraju, K. Mueller, and R. Yagel, “Evaluation and
design of filters using a Taylor series expansion,” IEEE Trans. Visual.
Comput. Graph., vol. 3, no. 2, pp. 184–199, Apr.–June 1997.
[10] P. Thévenaz, U. E. Ruttimann, and M. Unser, “A pyramid approach
to sub-pixel registration based on intensity,” IEEE Trans. Image Processing, vol. 7, no. 1, pp. 27–41, Jan. 1998.
[11] C. R. Appledorn, “A new approach to the interpolation of sampled data,”
IEEE Trans. Med. Imag., vol. 15, no. 3, pp. 369–376, June 1996.
[12] N. A. Dodgson, “Quadratic interpolation for image resampling,” IEEE
Trans. Image Processing, vol. 6, no. 9, pp. 1322–1326, Sept. 1997.
[13] I. German, “Short kernel fifth-order interpolation,” IEEE Trans. Signal
Processing, vol. 45, no. 5, pp. 1355–1359, May 1997.
[14] F. J. Harris, “On the use of windows for harmonic analysis with the
discrete Fourier transform,” Proc. IEEE, vol. 66, no. 1, pp. 51–83, Jan.
1978.
[15] R. G. Keys, “Cubic convolution interpolation for digital image
processing,” IEEE Trans. Acoust., Speech, Signal Processing, vol.
ASSP-29, no. 6, pp. 1153–1160, Dec. 1981.
[16] E. H. W. Meijering, K. J. Zuiderveld, and M. A. Viergever, “Image reconstruction with symmetrical piecewise nth-order polynomial kernels,” IEEE Trans. Image Processing, vol. 8, no. 2, pp. 192–201, Feb. 1999.
[17] A. Schaum, “Theory and design of local interpolators,” CVGIP: Graph.
Models Image Processing, vol. 55, no. 6, pp. 464–481, Nov. 1993.
[18] M.-L. Liou, “Spline fit made easy,” IEEE Trans. Comput., vol. C-25, no.
5, pp. 522–527, May 1976.
[19] M. Unser, A. Aldroubi, and M. Eden, “B-spline signal processing: Part
I—Theory,” IEEE Trans. Signal Processing, vol. 41, no. 2, pp. 821–832,
Feb. 1993.
[20] ——, “B-spline signal processing: Part II—Efficient design and applications,” IEEE Trans. Signal Processing, vol. 41, no. 2, pp. 834–848, Feb. 1993.
[21] T. Blu and M. Unser, “Approximation error for quasi-interpolators and
(multi)-wavelet expansions,” Appl. Comput. Harmonic Anal., vol. 6, no.
2, pp. 219–251, Mar. 1999.
[22] ——, “Quantitative Fourier analysis of approximation techniques: Part I—Interpolators and projectors,” IEEE Trans. Signal Processing, vol. 47, no. 10, pp. 2783–2795, Oct. 1999.
[23] ——, “Quantitative Fourier analysis of approximation techniques: Part II—Wavelets,” IEEE Trans. Signal Processing, vol. 47, no. 10, pp. 2796–2806, Oct. 1999.
[24] S. K. Park and R. A. Schowengerdt, “Image sampling, reconstruction,
and the effect of sample-scene phasing,” Appl. Opt., vol. 21, no. 17, pp.
3142–3151, Sept. 1982.
[25] G. Strang and G. Fix, “A Fourier analysis of the finite element variational method,” in Constructive Aspects of Functional Analysis. Erice,
Italy: Edizioni Cremonese, Centro Internazionale Matematico Estivo,
June 27–July 7 1971, pp. 796–830.
[26] A. Ron, “Factorization theorems for univariate splines on regular grids,”
Israel J. Math., vol. 70, no. 1, pp. 48–68, 1990.
[27] T. Blu, “Complete parametrization of piecewise polynomial interpolators,” submitted for publication.
[28] I. J. Schoenberg, “Contribution to the problem of approximation of
equidistant data by analytic functions,” Quart. Appl. Math., vol. 4, no.
2, pp. 45–99 and 112–141, 1946.
[29] M. Unser, “Ten good reasons for using spline wavelets,” in Proc. SPIE,
Wavelet Applications in Signal and Image Processing V, vol. 3169, San
Diego, CA, July 30–Aug. 1 1997, pp. 422–431.
[30] A. Aldroubi and M. Unser, “Sampling procedures in function spaces
and asymptotic equivalence with Shannon’s sampling theorem,” Numer. Funct. Anal. Optim., vol. 15, nos. 1–2, pp. 1–21, 1994.
[31] A. Aldroubi, M. Unser, and M. Eden, “Cardinal spline filters: Stability
and convergence to the ideal sinc interpolator,” Signal Processing, vol.
28, no. 2, pp. 127–138, Aug. 1992.
[32] T. Blu, P. Thévenaz, and M. Unser, “Minimum support interpolators with
optimum approximation properties,” in Proc. IEEE Int. Conf. Image
Processing, vol. 3, Chicago, IL, Oct. 4–7, 1998, pp. 242–245.
[33] M. Unser and I. Daubechies, “On the approximation power of convolution-based least squares versus interpolation,” IEEE Trans. Signal Processing, vol. 45, no. 7, pp. 1697–1711, July 1997.
[34] A. K. Jain, Fundamentals of Digital Image Processing. Englewood
Cliffs, NJ: Prentice-Hall, 1989.
[35] M. Unser and A. Aldroubi, “Polynomial splines and wavelets—A signal
processing perspective,” in Wavelets—A Tutorial in Theory and Applications, C. K. Chui, Ed. San Diego, CA: Academic, 1992, pp. 91–122.