EUROGRAPHICS 2012 / P. Cignoni, T. Ertl
(Guest Editors)
Volume 31 (2012), Number 2
Perceptually Linear Parameter Variations
Norbert Lindow, Daniel Baum, and Hans-Christian Hege
Zuse Institute Berlin (ZIB), Germany
Figure 1: Surface cut of the head of David, based on the local mean curvature. The first row shows the results of using uniform samples of the curvature range. The second row shows the results of using a linearized parameter (based on the RMSM image metric).
Abstract
Most visual analysis tasks require interactive adjustment of parameter values. In general, a linear variation of a
parameter, using for instance a GUI slider, changes the visual result in a perceptually non-linear way. This hampers interactive adjustment of parameters, especially in regions where rapid perceptual changes occur. Selecting
a good parameter value therefore remains a time-consuming and often difficult task.
We propose a novel technique to build a non-linear function that maps a new parameter to the original parameter.
By prefixing this function to the original parameter and using the new parameter as input, a linear relationship
between input and visual feedback is obtained. To construct the non-linear function, we measure the variation
of the visual result using image metrics. Given a suitable perceptual image metric, perceptually linear image
variations are achieved. We demonstrate the practical utility of our approach by implementing two common image
metrics, a perceptual and a non-perceptual one, and by applying the method to a few visual analysis tasks.
Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computing Methodologies]: Computer
Graphics—Picture/Image Generation, I.4 [Computing Methodologies]: Image Processing and Computer Vision—
Parameter Adjustment
1. Introduction
Nearly every graphical tool comes with parameters that affect the visual appearance of its output and that need to be
manually adjusted. In image editing programs, for example,
there are many parameters for changing colors or applying
image filters and other effects. The same is true for 3D visualization methods like iso-surfacing or volume rendering.
Often, the parameter values are specified by adjusting sliders, whose position is linearly mapped to the parameter. Depending on the input data, the visual computation, and the parameter to be set, it might be rather difficult to find a suitable parameter value if the visual result depends perceptually non-linearly on the parameter.
To circumvent this problem, several variations of sliders have been developed, for example sliders that map the
slider position to the parameter in a logarithmic or exponential manner. Prefixing such a standard function rectifies
only the few cases where the input-output relation follows a
fixed specific mathematical law. If we consider for instance
an iso-surface tool for analyzing data sets, then in the domain of possible iso-values often large ranges exist in which
only small changes in the iso-surface occur, and a few small
ranges in which many visual changes take place. In such a
case, perceptually linear visual changes can be achieved only
with a specifically constructed non-linear function.
In this paper, we describe a technique for constructing
such a function. The idea is to measure the visual changes
with respect to parameter changes (using some metric on the
output data, for example, a perceptual image metric) and to
construct an invertible function, which describes the amount
of visual changes for a continuous parameter interval. Prefixing the inverse of this function to the original parameter,
a new parameter is introduced whose modifications result in
visual output that changes linearly. This approach works not only for visual computations, but also for other kinds of output, such as auditory or tactile signals.
Our approach is based on the assumption that a
perception-based metric can be found for a specific application. If we have such a metric, perceptually linear parameter
variations can be achieved. We are aware of the fact, however, that a single image metric can only measure a small part
of the visual perception. Thus, a perceptually linear parameter variation can only be achieved to the extent to which the
metric is able to measure perceptual changes. In this paper,
we do not address the problem of finding a suitable metric,
but only use two common metrics, a perceptual and a nonperceptual one, to demonstrate the usefulness of our technique.
In all cases that we consider in this paper, the result of a
method w.r.t. a certain parameter or certain parameters is an
image. Hence, we will use the term ‘image variation’ when
referring to the changes of the result due to changes of the
parameter. Furthermore, we will use the term ‘image progression’ when considering the image variation within a parameter range.
2. Related Work
In this section, we first describe works on user interaction
and data exploration techniques. Note that we do not make
use of any of the described techniques. Hence, this section
is merely to put our work into context. Following this, we
present selected image metrics that we later use.
2.1. User Interaction and Data Exploration Techniques
Since many parameters in user interfaces are manipulated by sliders, a lot of research deals with the improvement of sliders. One of the first extensions of the default slider, the AlphaSlider, was developed by Osada et al. [OLS93] and Ahlberg et al. [AS94]. These sliders allow the user to scroll through sorted textual data at different velocities. A generalization of sliders was proposed by Eick [Eic94], and an elastic slider was described in 1995
by Masui et al. [MKBG95]. Two years later Koike et
al. [KSK97] presented the TimeSlider for scrolling through
time-dependent data. They used linear and non-linear mapping functions, for fine and coarse searching. In 2005,
Ramos et al. [RB05] developed a new tool, called Zlider,
which allows the user to zoom and slide through a parameter
space. They used a pressure input for zooming the parameter
space based on fluid simulation, while the change of the parameter value was done with typical sliding. The OrthoZoom
scroller was presented in 2006 by Appert et al. [AF06] and
allows one to scroll through one-dimensional data by sliding
in one direction and zooming of the data by sliding in the
orthogonal direction.
Apart from slider tools, many other user interface elements are employed for data exploration based on human expectation. In 2000, Igarashi et al. [IH00] presented speed-dependent automatic zooming. Further descriptions
and evaluations of this technique are given in [Sav02,CS04].
A review of such and other data exploration interfaces was
done by Cockburn et al. [CKB08].
In contrast to our method, most data exploration and visualization techniques analyze the input data to enhance the visual output. Among these, histogram equalization [PAA∗ 87]
is closely related to our idea of creating perceptually linear parameters. Bertini et al. [BGS07] also use data distributions for the visualization of density maps, and Color
Lens [EDF10] allows image exploration by a modified color
scale for the contents of a lens. While Wolter [Wol10] presented improvements for navigation in time-varying scientific data, other works deal with exploration and navigation
in videos. For this purpose, Peker et al. [PD04] and Höferlin et al. [HHWH11] analyze the visual complexity based
on the human visual system (HVS). Also of interest is the
review on research in visualization dealing with human factors and perception done by Tory et al. [TM04]. And finally,
Bhagavatula et al. [BRd05] presented an approach to replace
many low-level parameters by a few high-level parameters.
Their technique is based on user image evaluation.
2.2. Image Metrics
Our method to construct a function that describes the image
progression for a linear parameter scale is based on the evaluation of the image variation. We measure this image variation by computing the differences between images, created
by discrete samples of the parameter value. For computing
the differences between images, we use image metrics. Over
the past few decades, several image metrics have been developed with different intentions. In this section, we present
perceptual as well as non-perceptual metrics. Some of these
metrics are based on a fixed color space, but most of them
are not.
Dealing with color images, first a color space has to be
chosen. Ideally suited from the perceptual point of view
are color spaces that take into account that the human visual system (HVS) is more sensitive to luminance variations than to color variations. The perceptually linear color
space most often used today is CIE Lab [Hun48b, Hun48a].
To use this space, first one needs to apply a linear transformation from the RGB color space into the CIE XYZ color
space [Int32, SG32]. This transformation is followed by a
non-linear transformation from the XYZ into the Lab color
space.
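To make this chain concrete, here is a minimal NumPy sketch of the conversion (our illustration, not part of the original paper); it assumes linear RGB values with sRGB primaries and a D65 white point, which is one common convention:

    import numpy as np

    # Linear RGB -> CIE XYZ (assumption: sRGB primaries, D65 white point).
    RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                           [0.2126, 0.7152, 0.0722],
                           [0.0193, 0.1192, 0.9505]])

    WHITE = np.array([0.95047, 1.0, 1.08883])  # D65 reference white

    def rgb_to_lab(rgb):
        """Linear RGB in [0, 1] -> CIE Lab for a single color value."""
        xyz = RGB_TO_XYZ @ np.asarray(rgb, dtype=np.float64)
        t = xyz / WHITE
        # Non-linear component of the XYZ -> Lab transformation.
        f = np.where(t > (6 / 29) ** 3,
                     np.cbrt(t),
                     t / (3 * (6 / 29) ** 2) + 4 / 29)
        return np.array([116 * f[1] - 16,        # L (luminance)
                         500 * (f[0] - f[1]),    # a
                         200 * (f[1] - f[2])])   # b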
2.2.1. Non-perceptual Metrics
Non-perceptual metrics are classical metrics in the mathematical sense applied to images. These metrics often measure the distances between corresponding values and accumulate them in a specific way. Consider two images X and Y containing n values $x_i$ and $y_i$, $i \in \{1, \ldots, n\}$. The most widely used metrics are the mean squared metric

$d(X,Y) = \sum_{i=1}^{n} (x_i - y_i)^2$

or the root-mean-square metric (RMSM), which can be generalized, using the p-norm (Minkowski distance), to

$d(X,Y) = \left( \sum_{i=1}^{n} (x_i - y_i)^k \right)^{1/k} .$
The difference between these metrics is the weighting of the distances between corresponding values. Note that there are a lot of other similar metrics, which are, however, outside the scope of this paper.
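As an illustration (ours, not from the paper), both metrics can be written in a few lines of NumPy; the exponent k and whether one averages over the n values are conventions that have to be fixed by the implementation:

    import numpy as np

    def minkowski_distance(img_x, img_y, k=2):
        """p-norm (Minkowski) distance between two images of equal shape."""
        diff = np.abs(img_x.astype(np.float64) - img_y.astype(np.float64))
        return float((diff ** k).sum() ** (1.0 / k))

    def rmsm(img_x, img_y):
        """Root-mean-square metric: averaging per value makes the
        distance independent of the image resolution."""
        diff = img_x.astype(np.float64) - img_y.astype(np.float64)
        return float(np.sqrt(np.mean(diff ** 2)))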
2.2.2. Perceptual Metrics
Perceptual image metrics compare images based on properties of the HVS. Many of them are designed for the evaluation of image compression algorithms. Note that these metrics are often not metrics in the mathematical sense. Instead
of measuring the distance between images, they measure
their similarity. For a recent extensive overview, we refer the
reader to Lin et al. [LK11], who presented the most common
and important perceptual image metrics.
In this paper, we use an image metric based on the structural similarity (SSIM) described by Wang et al. [WBSS04].
Consider two images X and Y . Then the structural similarity
index is defined as
$\mathrm{SSIM}(X,Y) := l(X,Y)^{\alpha} \cdot c(X,Y)^{\beta} \cdot s(X,Y)^{\gamma} ,$
where l, c and s are the luminance, contrast and structure comparison functions, respectively. The parameters α,
β and γ are used to adjust the relative importance of these
functions. In the default case, they are set to 1. The SSIM
achieves symmetry, SSIM(X,Y) = SSIM(Y,X), boundedness, SSIM(X,Y) ≤ 1, and the unique maximum condition, which means that SSIM(X,Y) = 1 if and only if X = Y. Using the SSIM, Wang et al. [WBSS04] define the mean structural similarity (MSSIM) by
$\mathrm{MSSIM}(X,Y) := \frac{1}{M} \sum_{i=1}^{M} \mathrm{SSIM}(X_i, Y_i) ,$
where Xi and Yi are blocks, that is sub-images, in X and
Y , and M is the number of blocks. Usually, blocks around
each pixel are used, and, hence, Xi and Yi are the blocks
around the pixel i in X and Y . If b is the dimension of
the blocks, then the complexity of computing MSSIM is
$O(b^2 \cdot n)$, where n is the number of pixels. Wang et al. suggest using blocks of size 11 × 11 with a Gaussian weighting function in each block. A further extension to MSSIM is
the multi-scale structural similarity (MS-SSIM) [WSB03],
which uses different image resolutions for a better perceptual comparison.
The similarity measures based on SSIM can be easily
turned into distance measures as done, for example, by Loza
et al. [LMCB06]. The resultant distance function might not
be a metric in the strict mathematical sense, because the triangle inequality may not be satisfied. However, as Loza et al.
point out, for their purpose “the descriptiveness and discriminating ability of the measure are sufficient.” This is true also
for our purpose. We chose the MSSIM as our perceptual metric because it is common and easy to implement.
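The following sketch shows a simplified MSSIM-based distance (our illustration). It deviates from Wang et al. in two assumptions made for brevity: it uses non-overlapping uniform b × b blocks instead of Gaussian-weighted windows around every pixel, and it expects grayscale images; the constants c1 and c2 are the stabilizers suggested by Wang et al. for 8-bit data:

    import numpy as np

    def ssim_block(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
        """SSIM of two blocks for alpha = beta = gamma = 1, where the
        product l * c * s collapses to the usual two-factor form."""
        mx, my = x.mean(), y.mean()
        cov = ((x - mx) * (y - my)).mean()
        return (((2 * mx * my + c1) * (2 * cov + c2)) /
                ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)))

    def mssim_distance(img_x, img_y, b=11):
        """Distance 1 - MSSIM over non-overlapping b x b blocks."""
        h, w = img_x.shape
        scores = [ssim_block(img_x[i:i + b, j:j + b].astype(np.float64),
                             img_y[i:i + b, j:j + b].astype(np.float64))
                  for i in range(0, h - b + 1, b)
                  for j in range(0, w - b + 1, b)]
        return 1.0 - float(np.mean(scores))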
3. Reparametrization
In this section, we describe how we construct a function
such that the parameter variation of this function results in
a linear image progression w.r.t. the used image metric. We
start with the one-dimensional case, where we only have a
single parameter. We then extend the method to the multidimensional case with several parameters.
3.1. Single Parameter Variation
Consider a parameter $p \in [p_s, p_e] \subset \mathbb{R}$ affecting the visual
result of some graphical method or tool. We assume that the
evaluation of this visual result by the HVS can be grossly
described by some ‘image function’
$f(p) : [p_s, p_e] \to \mathbb{R} .$
A very simple example of such a function could be the total brightness of the image as a function of some parameter p.
In the following we will consider changes of f and we will
measure these by image metrics. For this, we adopt the basic
assumption behind perceptual image metrics that the differences that are perceived between two images can be roughly
modeled by a metric.
Figure 3: Two image progression curves of the parameter x
for the fixed values y1 and y2 . The x-axis shows the original
parameter, while the y-axis shows the new parameter x̄. So if
we change y from y1 to y2 , we have to change x̄, too.
Figure 2: Sketch of three functions f , e and h, where f is
the image function (black), e is the image variation function
(red) and h shows the complete image progression (blue). In
the second row one can see the image progression (blue) depending on the original parameter p and the new parameter
p̄. The change of p is visualized by the dotted curve.
For now, let us assume that f is differentiable. Furthermore, we assume that f has no plateaus, i.e. there exists no interval $[p_x, p_y] \subseteq [p_s, p_e]$ with $f(p_u) = \text{const.}$, $\forall p_u \in [p_x, p_y]$. In practice, of course, the image might change discontinuously and there might be parameter regions in which
the image does not change at all. We will describe later how
to handle these ‘irregular’ cases. A possible image function
is shown in Fig. 2 (black curve). The absolute value of the derivative of f, denoted by

$e(p) : [p_s, p_e] \to \mathbb{R}, \quad p \mapsto \left| \frac{\partial f}{\partial p} \right| ,$
shows the effect of image variation, so we call e ‘image variation function’. We use the absolute value since we need
only the intensity of the variation and since this allows us
to construct a strictly monotonically increasing function
$h(p) : [p_s, p_e] \to \mathbb{R}, \quad p \mapsto \int_{p_s}^{p} e(p) \, dp = \int_{p_s}^{p} \left| \frac{\partial f}{\partial p} \right| dp ,$
which describes the image progression w.r.t. parameter p.
The strict monotonicity of h is guaranteed by the absence of
plateaus in f . In Fig. 2, e and h are shown together with
the image function f . The function h shows the non-linear
image progression w.r.t. parameter p. Because h is strictly
monotonically increasing by construction, one can compute
the unique inverse function $h^{-1}$. This allows us to replace the parameter p by a new parameter $\bar{p} \in [0, h(p_e)] \subset \mathbb{R}$. The variation of $\bar{p}$ is mapped to p by $p = h^{-1}(\bar{p})$, which results
in a linear image progression. This means that parameter regions with large image progression will be stretched in $\bar{p}$ and regions with small changes will be shrunk.
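As a small illustrative example (ours, not from the paper): for $f(p) = p^2$ on $[0, 1]$ we get $e(p) = |2p|$, $h(p) = p^2$ and thus $p = h^{-1}(\bar{p}) = \sqrt{\bar{p}}$. Uniform steps in $\bar{p}$ therefore yield steps in $p$ that are dense near $p = 1$, where f changes fastest, and sparse near $p = 0$.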
Consider now the irregular case with a single plateau in the interval $[p_x, p_y]$, i.e. with $h(p_u) = h_c$, $\forall p_u \in [p_x, p_y]$. Of course, we can invert the strictly monotonic parts $[p_s, p_x)$ and $(p_y, p_e]$ of h. For $[p_x, p_y]$, we can choose an arbitrary value of this range for $h^{-1}(h_c)$, for example $p_x$. As a result, $h^{-1}$ is not continuous and does not map to the complete parameter space. However, this is not necessary, because the
image does not change in the range of the plateau. Thus, we
can shrink the range [px , py ] to a single value. In the case of
several plateaus in the entire parameter range this recipe can
be applied subsequently to each of them.
In a real implementation, we never deal with f as a function on a continuous domain. Instead, we approximate $\partial f / \partial p$ by finite differences of a grid function $f_i = f(p_i)$, sampled on points $p_i$, $i = 1, \ldots, n$. The requirement of differentiability of f is therefore only a formal one. In practice it suffices
to require that changes in f can be sufficiently well approximated by finite differences of f . Furthermore, for a discontinuity at pd in f , which leads to an infinite image variation,
we set e(pd ) = 0. This avoids incorrect parameter mappings.
We will describe the detection of discontinuities in Sect. 4.3.
3.2. Multiple Parameter Variation

In this paragraph, we deal with the n-dimensional case, given by n one-dimensional parameters. The ranges are given by the vectors $p_s$ and $p_e$ with

$[p_s, p_e] = \left[ \, [p_{s_1}, \ldots, p_{s_n}]^T , \; [p_{e_1}, \ldots, p_{e_n}]^T \right] \subset \mathbb{R}^n .$
Our function f now evaluates the image depending on an input parameter vector p. The intensities of the image variations described by e are given by the absolute values of the directional derivatives along the parameter axes:

$e(p) : [p_s, p_e] \to \mathbb{R}^n , \quad p \mapsto \left[ \left| \frac{\partial f}{\partial p_1} \right| , \ldots , \left| \frac{\partial f}{\partial p_n} \right| \right]^T .$
Figure 4: The diagram shows the discretization of our technique. The piecewise linear function in red approximates e
and the blue function approximates h. One can see that regions with high perceptual changes will be stretched and regions with low change will be shrunk.
In analogy to the one-dimensional case, we construct h by integrating e in the directions of the parameters:

$h(p) : [p_s, p_e] \to \mathbb{R}^n , \quad p \mapsto \left[ \int_{p_{s_1}}^{p_1} e(p) \, dp_1 , \ldots , \int_{p_{s_n}}^{p_n} e(p) \, dp_n \right]^T .$
Note that in general this function is not invertible, so we cannot create a new parameter space mapped by the inverse of h to the original space. But if we keep n−1 parameters fixed, we get an invertible one-dimensional restriction of h. In practice,
this is most often the case, because many applications use
one-dimensional user interfaces for each parameter. Hence
the user can only change one parameter at a time. Consider
a new parameter vector p̄ with a perceptually linear image
progression. The change of one parameter, for example p̄i ,
changes the ranges and values of all other parameters in p̄,
too. This can be shown by the following simple example.
Consider a two-dimensional parameter space p = [x, y]T . For
two fixed values y1 and y2 , we get two different image progression curves for x. If a currently fixed x1 has the value
h([x1 , y1 ]T ) w.r.t. p̄, the change from y1 to y2 results in another value h([x1 , y2 ]T ), see Fig. 3. So the new parameter
space is dynamic and changes with each change of p̄i .
4. Discretization and Implementation

In this section, we describe details about our implementation and the necessary discretization to approximate the image progression function.

4.1. Approximation of the Image Progression

In order to compute the inverse of the image progression function, we first need to compute the image progression function itself. Since the image progression function cannot be computed analytically, we need to approximate it by a piecewise linear function created by discrete samples of the parameter space. To do so, for each sample, we make a screenshot of the visual result. One can use several strategies for the sampling, and we will describe a more advanced version later, which also allows us to detect discontinuities (Sect. 4.3). For now, consider an initial uniform sampling with n samples $p_1, \ldots, p_n$, with $p_1 = p_s$, $p_n = p_e$ and $p_i < p_j$ for $i < j$.

From the screenshots and the given image function f, we could also compute a piecewise linear approximation of the image variation function e. However, in practice it is very difficult to construct a satisfying image function representing the HVS. An easier way is to compare images by an image metric (see Sect. 2.2). Image metrics allow us to compute the $e_i$ directly. Consider a metric $d(p_i, p_j)$, which returns the distance between the images rendered with the parameters $p_i$ and $p_j$. Then we can define $e_i$ by

$e_i = \frac{d(p_i, p_{i-1})}{p_i - p_{i-1}} \quad \text{for } i > 1 , \qquad \text{and} \quad e_1 = 0 .$

Finally, we create the discretization $h_i$ of h by the following recursive method:

$h_1 = 0 , \qquad h_i = h_{i-1} + e_i \cdot (p_i - p_{i-1}) .$

For an infinite number of samples, this converges to the Riemann integral using a mixture of lower and upper sums. Because $e_i$ is never negative, we get a monotonically increasing function. An example is shown in Fig. 4. If we substitute the $e_i$, we get the simplified formula

$h_i = h_{i-1} + d(p_i, p_{i-1}) .$

4.2. The Inverse of the Image Progression

We use the monotonicity of h to compute the inverse of the approximated image progression function. Consider a value of the new parameter $\bar{p} \in [0, h_n]$ of the piecewise linear function h of the parameter p. Because h is monotonically increasing, we can use a binary search to quickly find the interval i with $h_i \le \bar{p} < h_{i+1}$. The value of the inverse of h is then given by

$h^{-1}(\bar{p}) = p_i + \frac{\bar{p} - h_i}{h_{i+1} - h_i} \cdot (p_{i+1} - p_i) .$

For the case $\bar{p} = h_n$, which lies outside these intervals, the inverse value is set to $p_e$.
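The following Python sketch (our illustration, not the paper's actual implementation) summarizes both steps; render_image and metric stand for the application's screenshot routine and the chosen image metric, e.g. the RMSM from Sect. 2.2.1:

    import bisect
    import numpy as np

    def build_progression(params, render_image, metric):
        """Approximate h by h_i = h_{i-1} + d(p_i, p_{i-1}), h_1 = 0.

        params: increasing array of parameter samples p_1, ..., p_n.
        render_image: maps a parameter value to a screenshot (array).
        metric: maps two images to a non-negative distance."""
        images = [render_image(p) for p in params]
        h = np.zeros(len(params))
        for i in range(1, len(params)):
            h[i] = h[i - 1] + metric(images[i - 1], images[i])
        return h

    def inverse_progression(p_bar, params, h):
        """Map the new parameter p_bar in [0, h_n] back to the original
        parameter by piecewise linear interpolation of h^{-1}."""
        if p_bar >= h[-1]:
            return params[-1]  # p_bar = h_n is mapped to p_e
        # Binary search for the interval i with h_i <= p_bar < h_{i+1}.
        i = bisect.bisect_right(h, p_bar) - 1
        t = (p_bar - h[i]) / (h[i + 1] - h[i])
        return params[i] + t * (params[i + 1] - params[i])

With this, a GUI slider can drive p̄ linearly while the application receives p = inverse_progression(p̄, params, h) as its original parameter value.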
4.3. Adaptive Sampling
Figure 5: The diagrams depict the image curves for several applications. The image progression curves are shown by the dotted lines, while the other curves show the image variation. The colors represent the image metrics: blue for MSSIM and red for RMSM. For better comparison, the curves are scaled. From left to right: surface-cut viewer of David's head, surface-cut viewer of aneurysm, iso-surface of acceleration of a flow, and iso-surface of λ2 of a flow around a cuboid.

For a good approximation of the image progression, one might need very many samples when using a uniform sampling. This is because the image variation function in general does not change continuously. Particularly in the most interesting regions of the parameter space, the image variation function will have large values.

In order to reduce the number of samples, we used an adaptive sampling method. In the first step, we sample the
image variation function at several uniform positions. Note
that the number of these samples must be large enough to
detect the most significant image changes. Afterwards, we
evaluate the samples and identify the region in the parameter space with the highest absolute image variation $d(p_i, p_{i+1})$. We refine this region by computing a new sample at $q = (p_i + p_{i+1})/2$ and repeat the procedure until a maximal number of samples is reached or all image distances are below a certain threshold. With this approach, we can also detect discontinuities: if a repeatedly subdivided region converges more and more towards a single point, we can stop at a given threshold, identify this as a discontinuity, and handle the remaining range like a plateau with no changes.
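A sketch of this refinement loop under the same assumptions as in Sect. 4.2 (render_image and metric are application-supplied placeholders; the thresholds are hypothetical defaults):

    import numpy as np

    def adaptive_sampling(p_s, p_e, render_image, metric,
                          n_init=25, n_max=50, dist_eps=1e-3, width_eps=1e-6):
        """Refine a uniform sampling where the image variation is largest."""
        params = list(np.linspace(p_s, p_e, n_init))
        images = [render_image(p) for p in params]
        dists = [metric(images[i], images[i + 1])
                 for i in range(len(params) - 1)]
        while len(params) < n_max and max(dists) > dist_eps:
            i = int(np.argmax(dists))  # interval with the largest image change
            if params[i + 1] - params[i] < width_eps:
                # Interval converged to a point: treat it as a discontinuity
                # and handle the remaining range like a plateau.
                dists[i] = 0.0
                continue
            q = 0.5 * (params[i] + params[i + 1])  # bisect the interval
            img_q = render_image(q)
            params.insert(i + 1, q)
            images.insert(i + 1, img_q)
            dists[i:i + 1] = [metric(images[i], img_q),
                              metric(img_q, images[i + 2])]
        return np.asarray(params)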
5. Results
We applied the proposed technique to several visualization methods. In our applications, we used two different
image metrics, the non-perceptual root-mean-square metric
(RMSM) and a perceptual one based on the mean structural
similarity (MSSIM) (see Sect. 2.2). More precisely, we used
(1−MSSIM), which might not be a metric in the strict mathematical sense, but seems to be sufficient for our purposes.
For the approximation of the image variation function, screenshots need to be taken; the cost of evaluating them depends on their size. In all our tests we used an image resolution
of 512 × 512. Apart from the image size, the MSSIM further
depends on the size of the blocks used for comparing pixels. We used the suggested block size of 11 × 11. The time
needed for the creation of the screenshots depends highly on
the frame rate of the visualization technique. Since our visualization techniques are interactive, the most critical part
in terms of computation time is the evaluation of the image
metrics. The computation of the RMSM took approximately
half a minute for 200 images. In contrast, we measured approximately 1.5 hours for the MSSIM-based metric. Thus,
the RMSM is about 200 times faster. Note, however, that
our implementation has not been optimized yet and runs currently only on the CPU.
In the following, we present some results of our proposed
technique applied to several visualization methods, namely
a surface-cut viewer, iso-surface rendering and volume rendering. For some of the examples we tested four different
views on the data. The first three views used the coordinate axes as view directions, where the complete object filled nearly the entire screen. For the fourth view, we used a random direction and the complete object filled only 10% of
the screen. While we noticed differences in the image variation functions, the image progression functions were quite
similar (see supplementary material). Thus, for the following examples, we used one fixed camera position, showing
the complete visualized object.
5.1. Surface-Cut Viewer
The surface-cut viewer is a tool for rendering triangulated
surfaces w.r.t. a given scalar value at each triangle or vertex. The tool can be used to cut those parts of the surface
that have a smaller scalar value than a user-defined threshold. The cut is done per fragment, where the scalar value
for each fragment is determined by linear interpolation. We
modified the viewer such that the original cut parameter is
replaced by a new parameter that results in a perceptually
linear image variation.
5.1.1. Mean Curvature
For the head of Michelangelo’s David sculpture [LRC∗ 00],
we used the mean curvature of the surface as scalar field. In
Fig. 1 one can see a comparison between the original (upper
row) and the new parameter (lower row), where the new parameter was determined using RMSM. For both parameters,
the same 8 uniformly distributed positions on the slider were
used. Both parameters cover the whole range of the original
parameter space. It can clearly be seen that for the original parameter, the major image variation takes place from
the fifth to the seventh image, in which the whole surface
is already almost cut. For the new parameter, a perceptually
linear image variation can be observed.
We approximated the image progression function using
both RMSM and MSSIM. The visual results using MSSIM
are very similar to the results of RMSM (see video in the
supplementary material). For both metrics, we used 25 uniform and 25 adaptive samples, that is, 50 samples altogether.
Figure 6: Wall shear stress of an aneurysm. The left image shows the stress by color coding. In the other images one can see the
surface cuts created by MSSIM (blue), RMSM (red) and the default linear stress (yellow). The surfaces are rendered one upon
the other in this order.
The approximated image progression functions can be seen
in Fig. 5, leftmost diagram. The approximated functions are
very similar, but their differences can be visualized by the
curves showing the image distances. Nevertheless, for both
metrics we can clearly observe the same steep slope of the
image progression function. This is where the greatest image changes occur.
We also tested other surfaces using the mean curvature
and obtained similar results. In all cases, our proposed technique had no problems handling the areas with nearly no
visual changes, but in some cases we needed up to 200 samples because of outliers in the parameter space.
5.1.2. Wall Shear Stress
We also applied our method to an aneurysm data set, for
which the scalar field was given by the wall shear stress due
to the blood flow in the aneurysm. The data set is a single time step of a simulation of the wall shear stress of an
aneurysm. By setting the cut threshold, we keep only the surface, where the magnitude of the wall shear stress is greater
than the selected value. The wall shear stress is much lower
in the aneurysm than in the normal blood vessel. Thus it will
be cut first. Using the original parameter, it is very difficult
to modify the parameter so that one can see any details. With
our technique, the cut is much more perceptually linear and
the aneurysm is cut much more slowly, so that details can be seen.
In this example, the image variation curves differ much
more between RMSM and MSSIM than when using the
mean curvature (see second diagram in Fig. 5). A visual
comparison is given in Fig. 6 and the video. The blue surface
shows the remaining surface by using the MSSIM, the red
surface is created by the RMSM, and the yellow one shows
the default parameter change. The surfaces are rendered in
this order, one upon the other. Again, the same positions of
the sliders were used, uniformly distributed. Note that the red surface always includes the yellow surface and the blue includes the red. The figure shows the visual differences between the three parameters and that the MSSIM metric works
better than the RMSM.
5.2. Iso-Surface Rendering
We also applied our approach to iso-surface rendering. An
iso-surface of a three-dimensional scalar field is the set of
all points that have the same fixed value, the iso-value, in the
scalar field. With modern GPU technology it is possible to
render iso-surfaces very fast, so that interactive changes of
the iso-value are possible. This allows one to analyze scalar
fields of different kinds in an easy and fast way. To further
facilitate the exploration of the data, using our approach we
created a new parameter for the selection of an iso-value.
The first data set that we explored is the result of a flow
simulation. It is a two-dimensional data set, where the third
dimension of the scalar field describes the time. The field
shows the magnitude of the acceleration of the flow, which
is often used to analyze turbulent structures, for example,
to find vortices. As can be seen in Fig. 7 and in the video,
it is nearly impossible to analyze the vortex structures using the original parameter, because the interesting parameter
range is very small. With our approach one can easily detect these structures, like the large vortex moving from left
to right. The third diagram in Fig. 5 shows the curves of the
image progressions and variations. Both metrics stretch the
first part of the parameter space, but the MSSIM does this to
a greater extent, which seems favorable.
Another data set that we tested is a scalar field showing
the λ2 values of a flow around a cuboid. This data set is a
time step of a three-dimensional flow simulation that was
generated using NaSt3DGP [NaS]. Similar to the acceleration, λ2 is often used for identifying turbulent structures like
vortices. All values greater than 0 characterize these structures. The computation is very unstable for results around 0,
and hence, for the analysis of these regions it is very difficult to find a good iso-value with the original slider. Since
our approach stretches the region around 0, we are able to
identify and evaluate the unstable results (see Fig. 8, Fig. 5,
right most diagram, and the video in the supplementary material). The curves show that the MSSIM evaluates the image
variation around 0 with larger relative differences than the
RMSM. So nearly the complete range of the new parameter
computed with the MSSIM shows the unstable region.
We tested our approach on other iso-surface examples, all
of which yielded similar results. Both metrics stretch areas
with large visual changes and shrink the others. In all cases,
MSSIM was more sensitive to image variations.
5.3. Volume Rendering
Finally, we tested our approach on volume rendering, which
has a multi-dimensional parameter space. Consider again a
three-dimensional scalar field. In volume rendering, one creates for each pixel of the viewing window a ray (similar to
raytracing) and accumulates the values of the scalar field
on the ray. The values are weighted by the transfer function [PM04], which works on a selected range of the scalar
field values. This range is given by the minimum and the
maximum values. In our tests, we created two new parameters to control the minimum and the maximum value.
We used the same flow data set as in Sect. 5.2 showing the
acceleration of a flow field. The parameters are controlled by
two sliders. The video in the supplemental material shows that both sliders change during interaction. Note that each
slider still changes the image in a linear way. For this test,
we used the RMSM with a 25 × 25 uniform sampling of the
parameter space. When the minimum value became greater
than the maximum value, we set the distance equal to 0.
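For completeness, a tiny sketch (ours) of the distance used for this two-parameter case; render_image here takes the (min, max) vector of the transfer-function range, and metric is e.g. the RMSM:

    def range_distance(p_a, p_b, render_image, metric):
        """Image distance between two (min, max) transfer-function ranges.
        If min exceeds max, the range is invalid and the image does not
        change, so the distance is set to 0 as described above."""
        if p_a[0] > p_a[1] or p_b[0] > p_b[1]:
            return 0.0
        return metric(render_image(p_a), render_image(p_b))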
6. Discussion and Conclusion
In this paper, we presented a new technique that allows one
to replace parameters that lead to highly non-linear image
variations by parameters leading to perceptually linear image changes. The approach works very well for all examples
we tested. We showed that the proposed parameter replacement is useful for many applications dealing with interactive data exploration. Note that the new method does not interfere with or even change the data exploration method. It
just facilitates adjustment of parameters. Our adaptive sampling avoids expensive uniform sampling, but it is still necessary to start with an initial sampling that is fine enough
to detect the major variations. Furthermore, we showed that
our technique can in principle be applied to multi-parameter
cases. However, this requires a costly pre-computation even
for two parameters. Here, further research is necessary, also regarding the user interface for controlling several parameters simultaneously.
Of course, it is not guaranteed that the most interesting parameter regions are always stretched such that accurate
parameter adjustment is facilitated in any case. One example is the iso-surface of the λ2 values of the flow around the
cuboid. The MSSIM detects the large visual changes around
0 and stretches the region nearly over the complete new parameter space. But the results of this region are unstable and
it could be more interesting for the user to analyze the data
at the upper boundary of this unstable region. This problem
can be circumvented easily by restricting the consideration
to the parameter region of interest.
Figure 7: Iso-surfaces of the acceleration field of a flow
around a cavity. The images show five uniform samples
of the original iso-value parameter (yellow), the parameter
created by the RMSM (red), and the parameter created by
the MSSIM (blue).
To measure the image variation w.r.t. the parameter variation, we implemented and tested two image metrics. Both
image metrics create pleasing results, but the perceptual
MSSIM-based image metric seems to work slightly better
than the non-perceptual RMSM. However, the computation
of the MSSIM is much more expensive and does not seem to
be suitable for practical use, at least not using a CPU implementation. Clasen et al. [CP10] showed that 1000 RMSM
distances of images with a resolution of 512 × 512 can be
computed in 1 second on a GPU. With such an implementation the RMSM can be used in practice.
Image metrics reduce the complexity of the HVS by describing image differences with a single value. Thus, the success of our presented methods depends mainly on the used
image metric, and of course it cannot be expected that a single image metric works well for all kinds of applications. Many
metrics for instance weight large image changes in one small
region similarly to many small changes distributed over the
entire image. There are applications, for example in information visualization, where both metrics used here might not be able
to produce satisfying results. In conclusion, the suitability of
image metrics is application-dependent, and for a specific application a better-suited metric might be necessary.
However, it was outside the scope of this paper to present
metrics for a variety of applications.
The parameter computed in the proposed approach is
view-dependent. This means that specification of a parameter is
facilitated for a specific view. This was not a problem for
the examples in our tests. However, one can imagine applications and data sets where it is necessary to solve the viewdependency problem. Therefore, we suggest the following
possibilities. The first is to use screenshots from different directions. This could be helpful for methods where mostly the complete data set is shown. Another possibility is to update the parameter on user demand. This is quite simple and helps if the user wants to analyze a part of the data in detail. An extension of this approach is to automatically detect when it becomes necessary to update the parameter, maybe also by measuring the change of the image.
In summary, we developed a new technique that allows
the user to better analyze and explore parameter spaces of
image generating processes. By using GPU implementations
of image metrics, most parameter-dependent interactive visualization techniques should profit from this approach.
7. Future Work
The major limitation of our current implementation is the
time needed for evaluating the image metrics. Hence, we
plan to implement GPU-based versions of both RMSM and
the MSSIM-based metric. Another problem could be the discussed view-dependency. We want to analyze this in more
detail and implement our suggestions.
Apart from making improvements to our method, it would
also be interesting to utilize the method in other types of
applications. One example is the definition of camera paths.
Creating camera paths is often quite tedious, particularly the
adjustment of the speed is difficult. With our method, the
camera path time parameter could be adjusted such that a
perceptually linear image variation is achieved.
Another application is the analysis of time-dependent
data. Here one could adjust the time parameter such that time
intervals with large changes are stretched and those with few
variations are shrunk.
Figure 8: Iso-surfaces of λ2 of a flow around a cuboid.
The images show five uniform samples of the original iso-value parameter (yellow), the parameter created
by the RMSM (red), and the parameter created by the
MSSIM (blue).
Acknowledgments
We thank Mo Samimy from the Ohio State University for the
two-dimensional time-dependent flow data set and Alexander Wiebel from Zuse Institute Berlin (ZIB) for the cuboid
flow data set. He generated the data with NaSt3DGP, which
was developed by the research group in the Division of Scientific Computing and Numerical Simulation at the University of Bonn. Furthermore, we want to thank Leonid Goubergrits from the Charité - Universitätsmedizin Berlin for
providing the aneurysm data and the whole team of the Digital Michelangelo Project for the David surface. Finally we
want to thank Kai Poethkow from ZIB for many helpful discussions.
References

[AF06] Appert C., Fekete J.-D.: OrthoZoom scroller: 1D multi-scale navigation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2006), ACM, pp. 21–30.

[AS94] Ahlberg C., Shneiderman B.: The AlphaSlider: A compact and rapid selector. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (1994), ACM, pp. 365–371.

[BGS07] Bertini E., Di Girolamo A., Santucci G.: See what you know: Analyzing data distribution to improve density map visualization. In EuroVis'07 (2007), pp. 163–170.

[BRd05] Bhagavatula S., Rheingans P., desJardins M.: Discovering high-level parameters for visualization design. In EuroVis'05 (2005), pp. 255–262.

[CKB08] Cockburn A., Karlson A., Bederson B.: A review of overview+detail, zooming, and focus+context interfaces. ACM Computing Surveys (CSUR) 41, 1 (2008), 1–31.

[CP10] Clasen M., Prohaska S.: Image-error-based level of detail for landscape visualization. In VMV'10 (2010), pp. 267–274.

[CS04] Cockburn A., Savage J.: Comparing speed-dependent automatic zooming with traditional scroll, pan and zoom methods. People and Computers (2004), 87–102.

[EDF10] Elmqvist N., Dragicevic P., Fekete J.-D.: Color lens: Adaptive color scale optimization for visual exploration. IEEE Transactions on Visualization and Computer Graphics 17, 6 (2010), 795–807.

[Eic94] Eick S.: Data visualization sliders. In Proceedings of the 7th Annual ACM Symposium on User Interface Software and Technology (1994), ACM, pp. 119–120.

[HHWH11] Höferlin B., Höferlin M., Weiskopf D., Heidemann G.: Information-based adaptive fast-forward for visual surveillance. Multimedia Tools and Applications 55 (October 2011), 127–150.

[Hun48a] Hunter R. S.: Accuracy, precision, and stability of new photo-electric color-difference meter. JOSA 38 (1948), 1094.

[Hun48b] Hunter R. S.: Photoelectric color-difference meter. JOSA 38 (1948), 661.

[IH00] Igarashi T., Hinckley K.: Speed-dependent automatic zooming for browsing large documents. In Proceedings of the 13th Annual ACM Symposium on User Interface Software and Technology (2000), ACM, pp. 139–148.

[Int32] International Commission on Illumination (CIE): CIE 1931 XYZ color space. In Commission Internationale de l'Eclairage Proceedings (Cambridge, 1932), Cambridge University Press.

[KSK97] Koike Y., Sugiura A., Koseki Y.: TimeSlider: An interface to specify time point. In Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology (1997), ACM, pp. 43–44.

[LK11] Lin W., Kuo C.-C. J.: Perceptual visual quality metrics: A survey. Journal of Visual Communication and Image Representation (2011), 297–312.

[LMCB06] Loza A., Mihaylova L., Canagarajah N., Bull D.: Structural similarity-based object tracking in video sequences. In Proceedings of the 9th International Conference on Information Fusion (2006), pp. 10–13.

[LRC∗00] Levoy M., Rusinkiewicz S., Curless B., Ginzton M., Ginsberg J., Pulli K., Koller D., Anderson S., Shade J., Pereira L., Davis J., Fulk D.: The digital Michelangelo project: 3D scanning of large statues. In Proceedings of SIGGRAPH 2000 (2000), pp. 131–144.

[MKBG95] Masui T., Kashiwagi K., Borden I., George R.: Elastic graphical interfaces to precise data manipulation. In Conference Companion on Human Factors in Computing Systems (1995), ACM, pp. 143–144.

[NaS] NaSt3DGP – A Parallel 3D Flow Solver. http://wissrech.ins.uni-bonn.de/research/projects/NaSt3DGP/index.htm.

[OLS93] Osada M., Liao H., Shneiderman B.: AlphaSlider: Searching textual lists with sliders. In Proceedings of the Ninth Annual Japanese Conference on Human Interface (1993).

[PAA∗87] Pizer S. M., Amburn E. P., Austin J. D., Cromartie R., Geselowitz A., Greer T., Romeny B. T. H., Zimmerman J. B.: Adaptive histogram equalization and its variations. Computer Vision, Graphics, and Image Processing 39 (September 1987), 355–368.

[PD04] Peker K. A., Divakaran A.: Adaptive fast playback-based video skimming using a compressed-domain visual complexity measure. In IEEE International Conference on Multimedia and Expo (2004), pp. 2055–2058.

[PM04] Potts S., Möller T.: Transfer functions on a logarithmic scale for volume rendering. In Proceedings of Graphics Interface 2004 (Waterloo, Ontario, Canada, 2004), GI '04, Canadian Human-Computer Communications Society, pp. 57–63.

[RB05] Ramos G., Balakrishnan R.: Zliding: Fluid zooming and sliding for high precision parameter manipulation. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology (2005), ACM, pp. 143–152.

[Sav02] Savage J.: Speed-dependent automatic zooming. University of Canterbury (2002).

[SG32] Smith T., Guild J.: The C.I.E. colorimetric standards and their use. Transactions of the Optical Society 33, 3 (1931–32).

[TM04] Tory M., Möller T.: Human factors in visualization research. IEEE Transactions on Visualization and Computer Graphics 10, 1 (2004), 72–84.

[WBSS04] Wang Z., Bovik A. C., Sheikh H. R., Simoncelli E. P.: Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing 13, 4 (2004), 600–612.

[Wol10] Wolter M.: Navigation in Time-Varying Scientific Data. PhD thesis, RWTH Aachen, 2010.

[WSB03] Wang Z., Simoncelli E. P., Bovik A. C.: Multi-scale structural similarity for image quality assessment. In Proceedings of the IEEE Asilomar Conference on Signals, Systems, and Computers (2003), pp. 1398–1402.