MMDAC Project
Low-Level Display Feature Measures
Prepared by Theo Veil & David Kaber
Luminance
Luminance is a measure of brightness in nits (candelas per square meter). A
photometer is a common tool for measuring luminance, and photometers may already
be available in the ergonomics lab. Quantum Instruments makes one for about $525;
the scientific versions include sensors for luminance. The field of view of the
photometer is limited to 25 degrees.
This field of view is advantageous since it would allow for the sensor to capture
the display’s light output specifically rather than that of the entire room.
Contrast
Contrast is the difference between the darkest and lightest points in an image. In
the case of the HUD we are interested in the contrast between iconic and non-iconic
imagery. A couple of metrics are available for defining contrast.
Michelson (1927) Contrast Ratio:

V = \frac{I_1 - I_2}{I_1 + I_2}

where I_1 is the intensity of the brighter area, I_2 is the intensity of the less
bright area, and V is defined as “the visibility of the interference pattern.”

Perhaps more simply put:

V = \frac{L_{max} - L_{min}}{L_{max} + L_{min}}

where L is luminance.
DOYLE: Aviram and Rotman (2000)

DOYLE = \sqrt{(\bar{x}_t - \bar{x}_b)^2 + k(s_t - s_b)^2}

\bar{x}_t, \bar{x}_b = mean target and background brightness, respectively
s_t, s_b = target and background variances, respectively
k = weighting coefficient

DOYLE considers variance in brightness across the two areas of interest
according to the adjustable weighting coefficient k.
Tullis (1997) recommends a simple sampling technique. Images can be converted
to grayscale in a photo editor for pixel brightness to be examined. These measures of
brightness would be relative since the display properties (contrast ratio, etc.) of the
simulator may not be perfectly replicated in screenshots. Adobe Photoshop CS3,
available at N.C. State, is capable of measuring brightness and gives output in
percentages. Specific pixels can be targeted in Photoshop for examination and compared
to others. The contrast between the darkest iconic pixel and the brightest non-iconic
pixel is of high interest for determining how discernible a virtual or enhanced image of
terrain is from overlaid symbols.
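Both contrast metrics are straightforward to compute once pixel brightnesses have been sampled. The following is a minimal Python sketch, assuming screenshots have already been reduced to grayscale values (0-255) per Tullis's sampling technique; the function names and example values are our own, and the square root in DOYLE is an assumption about the metric's intended form.

```python
# Sketch of the two contrast metrics above, assuming screenshot regions have
# been converted to flat lists of grayscale brightness values (0-255).
# Function names and sample values are illustrative, not from the cited sources.
from statistics import mean, pvariance

def michelson_contrast(luminances):
    """Michelson contrast: V = (Lmax - Lmin) / (Lmax + Lmin)."""
    lmax, lmin = max(luminances), min(luminances)
    return (lmax - lmin) / (lmax + lmin)

def doyle(target_pixels, background_pixels, k=1.0):
    """DOYLE metric (Aviram & Rotman, 2000): combines the difference in mean
    brightness with the weighted difference in variance between the target
    and background regions. A square root is assumed here."""
    dx = mean(target_pixels) - mean(background_pixels)
    ds = pvariance(target_pixels) - pvariance(background_pixels)
    return (dx ** 2 + k * ds ** 2) ** 0.5

# Example: bright iconic (HUD symbol) pixels vs. darker terrain pixels.
iconic = [200, 210, 220, 215]
terrain = [40, 60, 50, 55]
print(michelson_contrast(iconic + terrain))  # (220-40)/(220+40) ≈ 0.692
print(doyle(iconic, terrain))
```

In practice the two region lists would be filled by sampling pixels from the grayscale screenshots described above.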
Density
For graphical user interfaces (GUIs), Tullis (1997) defines screen density as the
number of black pixels divided by the total number of available pixels on the screen,
assuming any text appears black. In the case of the HUD the pixels of interest would be
green. Screenshots from the simulator could be analyzed to find the percentage of pixels
the HUD occludes. Tullis divides his observations into three categories for GUIs: Sparse
(having less than 3% black pixels), Average (about 6%), and Dense (more than 10%).
The density across the entire HUD would be considered a global density measure.
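Global density reduces to a single ratio. A minimal sketch, assuming a screenshot has already been thresholded into a grid of 0/1 flags marking active (green) pixels; the grid shown is invented for illustration:

```python
# Minimal sketch of Tullis's global density measure, assuming the screenshot
# is represented as a 2-D grid of 0/1 flags for active (green) pixels.
def global_density(active):
    """Percentage of active pixels over all available pixels."""
    total = sum(len(row) for row in active)
    lit = sum(sum(row) for row in active)
    return 100.0 * lit / total

# A toy 4x5 "screenshot" with 3 active pixels: 3/20 of the pixels are lit.
shot = [[0, 1, 0, 0, 0],
        [0, 1, 0, 0, 0],
        [0, 1, 0, 0, 0],
        [0, 0, 0, 0, 0]]
print(global_density(shot))  # 15.0
```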
Local density is a little harder to define. Tullis (1984) states that in one eye
fixation humans can see information within a 5-degree visual angle (VA). Within this
area of the VA weights are assigned to all pixels according to the distance from the center
point. The weights assigned are inversely proportional to the distance from the center of
the fixation point.
Figure from Tullis (1984): distance-based weights assigned to pixels within the 5-degree visual angle.
Local density for the target point is the summation of the weights where pixels are
active (part of the HUD), divided by the summation of weights for the entire
area:

LD = \frac{W_{BP}}{W_{AP}}

where W_{BP} and W_{AP} equal the summed weights of black (green) pixels and of all available pixels within the region, respectively.
It is in this way thatthe measurement accounts for the difference in sensitivities
of the eye. Note that even parts of the VA that lie outside of the display must be
included. The area outside the display is still counted in the overall weight where the 5degree VA protrudes outside the display. Since the area outside the display perimeter
does not have active pixels (in fact no pixels) the local density tends to be lower at the
edges of the display. The total local density of a display is a mean value of the weighted
densities of these 5-degree “chunks” within the display. The 5-degree area of the
simulator display will need to be calculated according to the distance subjects are
positioned from the display. This area will then need to be converted to a pixel count for
evaluations performed on a lab computer.
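The local-density computation can be sketched as follows, assuming the 5-degree VA has already been converted to a pixel radius for our screenshots. The exact weighting function Tullis used is not reproduced here, so the inverse-distance weight below is an assumption for illustration:

```python
# Sketch of local density around a fixation point, assuming the 5-degree
# visual angle has been converted to a pixel radius. The inverse-distance
# weighting function is an assumption, not Tullis's exact weights.
import math

def local_density(active, cx, cy, radius):
    """Weighted density of active pixels around fixation point (cx, cy).

    Pixels outside the image still contribute to the denominator, as
    required when the 5-degree region protrudes past the display edge.
    """
    w_bp = 0.0  # summed weights of active (green) pixels
    w_ap = 0.0  # summed weights of all pixels in the region
    rows, cols = len(active), len(active[0])
    for y in range(cy - radius, cy + radius + 1):
        for x in range(cx - radius, cx + radius + 1):
            d = math.hypot(x - cx, y - cy)
            if d > radius:
                continue  # outside the circular 5-degree region
            weight = 1.0 / (1.0 + d)  # assumed inverse-distance weighting
            w_ap += weight
            if 0 <= y < rows and 0 <= x < cols and active[y][x]:
                w_bp += weight
    return w_bp / w_ap
```

Because off-screen positions add to the denominator but can never be active, fixations near the display edge naturally yield the lower local densities described above.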
Grouping
Tullis (1984) defines groups in displays with an algorithm. Alphanumeric
characters separated by no more than one space horizontally and no spaces vertically are
classified as groups. Tullis (1997) states that for GUIs, chains of active pixels within 20
pixels of each other could be grouped (assuming a 14 inch screen with a resolution of
640x480 pixels). The 20-pixel threshold needs to be adjusted to account for the HUD size
and the resolution of screenshots taken during our experiment.
Again, the 5-degree VA is key. Tullis shows that with groups smaller than 5
degrees, search times are more attributable to the number of groups; however, when
larger than 5 degrees, the size of groups plays a larger role.
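The proximity-based grouping rule can be implemented as a simple chaining (connected-components) search. A sketch, where the breadth-first implementation and the sample points are our own; the 20-pixel threshold would be rescaled as discussed above:

```python
# Sketch of proximity grouping per Tullis (1997): chain together active
# pixels that lie within a threshold distance of one another. The BFS
# implementation and example coordinates are our own.
import math
from collections import deque

def group_pixels(points, threshold=20.0):
    """Partition (x, y) points into groups by chaining near neighbors."""
    unvisited = set(points)
    groups = []
    while unvisited:
        seed = unvisited.pop()
        group, queue = [seed], deque([seed])
        while queue:
            px, py = queue.popleft()
            near = [q for q in unvisited
                    if math.hypot(q[0] - px, q[1] - py) <= threshold]
            for q in near:
                unvisited.remove(q)
                group.append(q)
                queue.append(q)
        groups.append(group)
    return groups

# Two clusters of active pixels far apart form two groups.
pts = [(0, 0), (5, 5), (10, 10), (200, 200), (210, 205)]
print(len(group_pixels(pts)))  # 2
```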
Saturation
Hue saturation is a value associated with color. One common scale is
H.S.V. (Hue Saturation Value). Hue is the actual color or mix of colors, value is the
brightness (light-to-dark level) of the color, and saturation is the amount of hue in the color. The hue is
measured in degrees since it is based on a color wheel while saturation and value are
measured in percentages. Other scales are H.S.L. (Hue Saturation Lightness) and R.G.B.
(Red Green Blue) values. R.G.B. values are three independent values typically between 0
and 255. Any can be used for analysis and there are equations to convert from one value
to another. Apple’s ColorSync Utility provides a calculator that can perform these
conversions.
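The same conversions can also be done in the Python standard library, which may be convenient for batch-processing screenshot samples. Note that colorsys works on 0-1 floats, so the usual 0-255 R.G.B. values must be scaled; the wrapper function here is our own:

```python
# RGB-to-HSV conversion via the standard library, as an alternative to
# Apple's ColorSync calculator. colorsys expects components in [0, 1].
import colorsys

def rgb255_to_hsv(r, g, b):
    """Return hue in degrees, saturation and value as percentages."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s * 100.0, v * 100.0

# Pure green, the typical HUD symbology color:
print(rgb255_to_hsv(0, 255, 0))  # hue ~120 degrees, full saturation and value
```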
Photoshop CS3 is capable of outputting both R.G.B. and H.S.B. (where B stands
for brightness). There are options for sampling one pixel at a time or averaging values
over a square area of up to 101 pixels per side. Samples are taken with the “color sampler
tool.” These measurements would be useful in determining if color is uniform
throughout the HUD.
Occlusion
Occlusion is a measure of overlap in displays. Ellis and Dix (2006) use occlusion
measures in analyzing data plots. They take circular samples, which are termed “lenses,”
and examine every pixel where overlap occurs.
Variables:
S = Number of Pixels in Sample, based on area (πR²) for a lens with radius R
L = Number of Lines Crossing a Sample
ppl = Average Pixels Per Line
M = ppl * L (It was noted that M counts plotted pixels which will, in general, be
greater than active pixels)
M1 = Number of Active Unshared Pixels
Mn = Number of Points in Image Sharing a Pixel (Intersections)
S0 = Number of Inactive Pixels
S1 = Number of Active Unshared Pixels (Same as M1)
Sn = Number of Pixels Sharing Points (multiple plotted points may fall on the
same pixel)
A Simple Example:
An example of a 3x3 pixel section of the
screen with a horizontal and vertical line
crossing at the center pixel.
M = 6, S = 9
M1 = 4, Mn = 2
S0 = 4, S1 = 4 and Sn = 1
Figure 3 from Ellis and Dix
Three measures can be derived:

Overplotted % = 100 \times \frac{S_n}{S_1 + S_n}
The percentage of pixels with more than 1 plotted point. The range is 1 (all
plotted points on their own) to 100 (no single plotted points).

Overcrowded % = 100 \times \frac{M_n}{M}
The percentage of plotted points that are in pixels with more than 1 plotted
point. The range is 1 (all plotted points on their own) to 100 (no single
plotted points).

Hidden % = 100 \times \frac{M_n - S_n}{M}
The percentage of plotted points that are hidden from view due to being
overplotted. Note that pixels with more than 1 plotted point will be showing
the top plotted point. The range is 0 to just less than 100, depending on the
number of pixels.

Table 1 from Ellis and Dix
Brath (1997) defines occlusion as simply the number of occluded data points over
the number of data points.
With so many pixel counts involved in evaluating these measures, the
experimental analyses may seem very time consuming. Hopefully we will be able to
write or find programs that can quickly examine screenshots pixel-by-pixel and
tabulate results.
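As a start in that direction, the Ellis and Dix measures are easy to tabulate once each plotted point has been mapped to a pixel. A sketch using the 3x3 crossing-lines example above; the helper name is our own:

```python
# Sketch of the Ellis and Dix occlusion measures, computed from a list of
# plotted points given as (x, y) pixel coordinates. Helper name is ours.
from collections import Counter

def occlusion_measures(points):
    per_pixel = Counter(points)  # how many plotted points land on each pixel
    m = len(points)                                      # M: plotted points
    s1 = sum(1 for c in per_pixel.values() if c == 1)    # S1: unshared pixels
    sn = sum(1 for c in per_pixel.values() if c > 1)     # Sn: shared pixels
    mn = sum(c for c in per_pixel.values() if c > 1)     # Mn: points sharing
    return {
        "overplotted %": 100.0 * sn / (s1 + sn),
        "overcrowded %": 100.0 * mn / m,
        "hidden %": 100.0 * (mn - sn) / m,
    }

# The 3x3 example: horizontal and vertical lines crossing at pixel (1, 1),
# giving M = 6, M1 = 4, Mn = 2, S1 = 4, Sn = 1 as in the text.
horiz = [(0, 1), (1, 1), (2, 1)]
vert = [(1, 0), (1, 1), (1, 2)]
print(occlusion_measures(horiz + vert))
```

Running this on the example reproduces the counts worked out above, which is a useful sanity check before applying the same routine to full HUD screenshots.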
Sources
Aviram, G. & Rotman, S. R. (2000). Evaluation of human detection performance of
targets embedded in natural and enhanced infrared images using image metrics.
Optical Engineering, 39(4), 885-896.

Brath, R. (1997). Concept demonstration metrics for effective information visualization.
In J. Dill & N. Gershon (Eds.), IEEE Symposium on Information Visualization 1997
(pp. 108-111). Phoenix: IEEE Computer Society Press.

Ellis, G. & Dix, A. (2006). The plot, the clutter, the sampling and its lens: Occlusion
measures for automatic clutter reduction. Proceedings of AVI 2006, 266-269.

Michelson, A. A. (1927). Studies in optics. Chicago: University of Chicago.

Tullis, T. S. (1984). Predicting the usability of alphanumeric displays. Doctoral
dissertation, Rice University, Houston, TX.

Tullis, T. S. (1997). Screen design. In M. G. Helander, T. K. Landauer, & P. V. Prabhu
(Eds.), Handbook of Human-Computer Interaction (pp. 503-531). Amsterdam: Elsevier
Science.