
Chapter 15

Three-Dimensional Imaging

Limitations of Stereology

Stereological analysis provides quantitative measures of several kinds of structural information. Whether determined manually or by computer measurement, most of these require only simple counting or measurement operations and give unbiased parameters (provided that IUR conditions on the point, line and plane probes are met). Measures of phase volume and surface area are reasonably straightforward. Distributions of feature size and shape, and alignments and neighbor distances, can be determined in many cases. However, the data that are provided by stereological measurement are sometimes difficult to interpret in terms of the appearance of a microstructure as it is understood by a microscopist.
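The volume-fraction measurement mentioned here rests on the classic point-count relation: the fraction of grid points falling in a phase (P_P) is an unbiased estimator of the phase's volume fraction (V_V). A minimal sketch on a synthetic section, with an arbitrary grid spacing and disk size chosen for illustration:

```python
import numpy as np

# Synthetic 2D binary section: a single disk phase with a known area fraction.
n = 512
yy, xx = np.mgrid[0:n, 0:n]
phase = (xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 4) ** 2

# Volume fraction by point counting: V_V = P_P, the fraction of a coarse
# regular point grid that lands inside the phase.
grid = phase[::16, ::16]
vv_point_count = grid.mean()

# Reference value: the true area fraction of the disk, pi*(n/4)^2 / n^2.
vv_true = np.pi * (n / 4) ** 2 / n ** 2
print(vv_point_count, vv_true)
```

The estimate approaches the true fraction as the grid becomes finer; in practice the grid density is chosen to balance counting effort against precision.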

While the global metric parameters of a microstructure can be measured with the proper stereological probes and techniques, and the feature-specific ones can, with more difficulty, be estimated, the topological ones are much harder to access.

These include several fundamental properties that are often of great interest to microscopists, such as the number of objects per unit volume. Counting on a single plane of intersection can estimate this for the case of convex features of a known size, or with some difficulty for the case of a distribution of such features of a known shape. Counting with two or more planes (the disector; Sterio, 1984) can determine the number per unit volume of convex features of arbitrary size and shape.
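The disector estimate itself is simple arithmetic: count the objects seen in the reference plane but not in the look-up plane (Q⁻), and divide by the sampled volume (frame area times plane separation). The labels and dimensions below are hypothetical, purely for illustration:

```python
# Objects identified (by label) in the reference and look-up sections of a
# disector pair. All values are made-up illustrations, not data from the text.
reference = {1, 2, 3, 5, 8, 9}      # object labels hit by the reference plane
lookup = {2, 3, 7, 9}               # object labels hit by the look-up plane

q_minus = len(reference - lookup)   # objects present only in the reference
area_um2 = 10000.0                  # frame area of the disector, in um^2
h_um = 3.0                          # separation between the two planes, in um

n_v = q_minus / (area_um2 * h_um)   # estimated number of objects per um^3
print(q_minus, n_v)
```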

But the general case of objects of arbitrary shape, and networks of complex connectivity, cannot be fully described by data obtained by stereological measurement. This is essentially a topological problem, and is related to the desire to “see” the actual structure rather than some representative numbers. Humans are much more aware of topological differences in structure than in metric ones.

Serial Methods for Acquiring 3D Image Data

The method longest used for acquiring 3D image data is serial sectioning for either the light or electron microscope. This is a difficult technique, because the sectioning process often introduces distortion (e.g., compression in one direction).

Also, the individual slices are imaged separately, and the images must be aligned to build a 3D representation of the structure (Johnson & Capowski, 1985). This is an extremely critical and difficult step. The slices are usually much farther apart in the Z direction than the image resolution in the X, Y plane of each image (and indeed may be nonuniformly spaced and may not even be parallel, complications that few reconstruction programs can accommodate).



This alignment problem is best solved by introducing some fiducial marks into the sample before it is sectioned, for instance by embedding fibers or drilling holes. These provide an unequivocal guide to the alignment, which attempting to match features from one section to another does not (and can lead to substantial errors). For sequential polishing methods used for opaque specimens such as metallographic samples, hardness indentations provide a similar facility. They also allow a direct measure of the spacing between images, by measuring the change in size of the indentation before and after polishing.

For objects whose matrix is transparent to light, optical sectioning allows imaging planes within the specimen nondestructively. Of course, this solves the section alignment problem since the images are automatically in registration. It also allows collecting essentially continuous information so that there is no gap between successive image planes. The conventional light microscope suffers significant loss of contrast and resolution due to the light from a plane deep in the sample passing through the portions of the sample above it. This applies to the case of reflected light viewing or fluorescence imaging. For transmission imaging the entire thickness of the sample contributes to the blurring.

Deconvolution of the blurring is possible using an iterative computational procedure that calculates the point spread function introduced by each layer of the specimen on the affected layer. As shown in Figure 15.1 this allows sharpening image detail and recovering image contrast. However, it does little to improve the depth of field of the optics and the resolution in the Z direction remains relatively poor.
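One widely used iterative scheme of the kind described is Richardson-Lucy deconvolution. The sketch below is a generic version (not the commercial implementation shown in Figure 15.1), assuming a known, shift-invariant point spread function and noise-free data:

```python
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(observed, psf, iterations=30):
    # Multiplicative Richardson-Lucy update: the estimate is repeatedly
    # corrected by the back-projected ratio of observed to re-blurred data.
    est = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = convolve(est, psf, mode="constant")
        ratio = observed / (blurred + 1e-12)
        est = est * convolve(ratio, psf_mirror, mode="constant")
    return est

# Demo: a point source blurred by a small Gaussian-like PSF.
psf = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float)
psf /= psf.sum()
truth = np.zeros((31, 31))
truth[15, 15] = 1.0
blurred = convolve(truth, psf, mode="constant")
restored = richardson_lucy(blurred, psf)
print(blurred.max(), restored.max())
```

The update preserves non-negativity and, for clean data, progressively concentrates the blurred flux back toward the point source; with real noisy images the iteration count must be limited to avoid noise amplification.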

The confocal microscope solves both problems at once. By rejecting light scattered from points other than the focal position, contrast, resolution and depth of field are all optimized. Scanning the point across the specimen optically in the X, Y plane and shifting the specimen vertically in the Z direction can build up a 3D data set in which the depth resolution is 2–3 times poorer than the lateral resolution, but still good enough to delineate many structures of interest. There are still some difficulties with computer processing and measurement when the voxels (the volume elements which are the 3D analogs to pixels in a plane image) are not cubic, and these are discussed below. Confocal microscopy has primarily been used with reflected light and fluorescence microscopy, but in principle can also be extended to transmission imaging.

Figure 15.1. Immunofluorescence cell image (a) showing green microtubules, red mitochondrial protein, and blue nucleus. Deconvolution of the point spread due to specimen thickness (b) performed using software by Vaytek (Fairfield, Iowa), who provided the image. (For color representation see the attached CD-ROM.)

Inversion to Obtain 3D Data

In confocal microscopy, or even in conventional optical sectioning, light is focused onto one plane within the sample. The light ideally enters through a rather large cone (high numerical aperture objective lens), so that the light that records each point of the image has passed along many different paths through the portion of the specimen matrix that is not at the focal point. This should cause the variations in the remainder of the sample to average out, and only the information from the point of focus to remain.

This idea of looking at a particular point from many directions lies at the heart of inverse methods used to reconstruct images of microstructure (Figure 15.2).

They are often called tomographic or computed tomographic (CT) methods, and the most familiar of them is the X-ray CAT scan (computed axial tomography, so named because the most common geometry is a series of views taken from different radial positions around a central axis), used in medical diagnosis (Kak & Slaney, 1988).

Figure 15.2. The principle of inverse reconstruction. Information from many different views through a complex structure is used to determine the location of objects within the volume. (For color representation see the attached CD-ROM.)

This method actually reconstructs an image of one plane through the subject, and many successive but independent planes are imaged to produce an actual 3D volume of data. Other projection geometries are more commonly used in industrial tomography and in various kinds of microscopy, some of which directly reconstruct the 3D array of voxels (Barnes et al., 1990).

There are two principal methods for performing the reconstruction: filtered back projection (Herman, 1980), in which the information from each viewing direction is projected back through the voxel array that represents the specimen volume, with the summation of multiple views producing the final image; and algebraic methods (Gordon, 1974), which solve a family of simultaneous linear equations that sum the contribution of each voxel to each of the projections. The first method is fast and particularly suitable for medical imaging, where speed is important, the variation in density and geometry of the sample is quite limited, and image contrast matters most for revealing local variations to visual examination, while measurement of dimension and density is not generally required. The second method is more flexible in dealing with the unusual and asymmetrical geometries often encountered in microscopy applications, and with the desire for more quantitatively accurate images from a relatively small number of projections.
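The algebraic approach can be illustrated on a toy grid: each projection ray contributes one linear equation whose unknowns are the voxel values it crosses. This is a deliberately tiny, made-up example solved directly by least squares, rather than by the iterative schemes used on realistically sized arrays:

```python
import numpy as np

# Toy 2x2 voxel grid, flattened as [v00, v01, v10, v11].
true_voxels = np.array([1.0, 3.0, 2.0, 5.0])

# Each row is one ray; a 1 means the ray passes through that voxel.
# Two row sums, two column sums, and one diagonal give a full-rank system.
A = np.array([
    [1, 1, 0, 0],   # row 0
    [0, 0, 1, 1],   # row 1
    [1, 0, 1, 0],   # column 0
    [0, 1, 0, 1],   # column 1
    [1, 0, 0, 1],   # main diagonal
], float)
projections = A @ true_voxels   # the measured ray sums

# Least-squares solution of the overdetermined system A x = p.
recon, *_ = np.linalg.lstsq(A, projections, rcond=None)
print(recon)   # recovers the true voxel values
```

Without the diagonal ray the row and column sums alone are rank-deficient (any amount can be shuffled between diagonally opposite voxels), which is why views from more than two directions are needed even for this trivial geometry.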

Tomographic reconstruction requires some form of penetrating radiation that can travel through the specimen volume, being absorbed or scattered along the way as it interacts with the structure. There are many situations in which this is possible. X-rays are used not only in the medical CAT scan, but also for microscopy. A point source of X-rays passing through a small specimen produces a magnified conical projection of the microstructure. Rotating the specimen (preferably about several axes) to produce a small number of views (e.g., 12–20) allows reconstruction of the microstructure with detail based on variation of the absorption cross section for the X-rays, which is usually a measure of local specimen density. However, using a tunable X-ray source such as a synchrotron, it is possible to measure the distribution of a specific element inside solid samples (Ham, 1993) with a resolution on the order of 1 μm.

Electron tomography (Frank, 1992) in the transmission electron microscope typically collects images as the specimen is tilted and rotated. Constraints upon the possible range of tilts (due to the stage mechanism and thickness of the sample) cause the reconstruction to have poorer resolution in the Z direction than in X and Y, and it is important to avoid orientations that cause electron diffraction to occur, as this is not dealt with in the reconstruction.

Light is used tomographically in some cases, such as underwater imaging methods developed for antisubmarine detection. It is also used to measure the variation in index of refraction with radius in fibers. Sound waves are used for medical imaging as well as seismic imaging and some acoustic microscopy. In the latter case, frequencies in the hundreds or thousands of megahertz provide resolution of a few μm. Seismic tomography is complicated greatly by the fact that the paths of the sound waves through the specimen are not straight lines, but in fact depend upon the structure, so that both the attenuation and the path must be part of the reconstruction calculation. However, using pressure and shear waves generated by many earthquakes and detected by a worldwide network of seismographs, detailed maps of rock density and fault lines within the earth have been computed.

Magnetic resonance imaging (MRI) used in medical diagnosis also has curved paths for the radio waves that interact with the hydrogen atom spins in a magnetic field. The use of very high field gradients has allowed experimental spatial resolutions of a few μm for this technique. Many other signals have been used, including neutrons (to study the structure of composite materials) and gamma rays (photons with the same energies as X-rays but from isotope sources, which are suitable for use in some industrial applications and for down-borehole imaging in geological exploration).

In all these cases, the reconstruction methods are broadly similar, but the details of the microstructure that interact with the signal, and thus are imaged, vary greatly. Some techniques respond to changes in density or composition, others to structural defects, others to physical properties such as modulus of elasticity or index of refraction. In the ideal case, the image that is reconstructed is a cubic array of voxels. Depending on the application, the size of the voxels can range from nanometers (electron microscopy) to kilometers (seismic imaging). A significant amount of computing is generally required, and the number of voxels that can be reconstructed is quite small, giving resolution that is poor compared to the number of pixels in a two-dimensional image. Processing is generally needed to minimize noise and artefacts. This means that tomography is not a “live” viewing technique, and that the resolution of the images is limited.

Stereoscopy as a 3D Technique

Tomography usually employs a fairly large number of projections taken from as widely distributed a set of viewpoints as possible. Medical imaging of a single slice through the body may use more than 100 views along radial directions spaced a few degrees apart, while some conical projection methods for 3D reconstruction use as few as a dozen projections in directions spread over the full solid angle of a sphere. By contrast, stereoscopy uses only two views that are only slightly different, corresponding to the two points of view of human vision. The advantage is that humans possess the 3D reconstruction software to merge these two views into a 3D structural representation (Marr & Poggio, 1976).

It is important not to confuse stereoscopy with stereology. In both, the root stereo is from the Greek and refers to three-dimensional structure. Stereology is defined as the study of that structure based on geometric principles and using two-dimensional images. Stereoscopy is the recording or viewing of the structure, and is generally taken to mean the two-eye viewing of structures in a way that reveals them to the brain of the observer. This does not exclude the possibility of making measurements based on those two images (Boyde, 1973).

As shown in Figure 15.3, the parallax between two points viewed in left- and right-eye images gives a direct measure of their vertical separation, provided that the scale of the pictures and the angle between the two views is known. Because the angle is typically small (5–15 degrees) so that the views can be successfully fused by the human visual system, small uncertainties in the angle strongly affect the precision of the Z calculation. Finite precision in the lateral measurements that give the parallax, which ultimately depends on the lateral resolution or pixel size in the images, produces uncertainties in the Z measurement about an order of magnitude worse than the X, Y measurements.

Figure 15.3. The principle of stereoscopic measurement. Measurement of the parallax or apparent displacement of features in two views separated by a known angle (Θ) allows calculation of the vertical displacement of the features.
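The parallax-to-height conversion, and its sensitivity to the tilt angle, can be sketched using the commonly quoted symmetric-tilt relation h = p / (2 sin(θ/2)). The parallax and angle values below are illustrative only:

```python
import math

def height_from_parallax(parallax_um, tilt_deg):
    # Height difference from measured parallax for a symmetric tilt pair,
    # using the small-tilt relation h = p / (2 sin(theta/2)).
    return parallax_um / (2.0 * math.sin(math.radians(tilt_deg) / 2.0))

p = 0.5        # measured parallax, in um (illustrative)
theta = 8.0    # tilt angle between the two views, in degrees (illustrative)

h = height_from_parallax(p, theta)
# Sensitivity: a 1-degree uncertainty in the tilt angle shifts the answer
# by roughly 10% here, showing why small angles amplify angle errors.
dh = abs(height_from_parallax(p, theta + 1.0) - h)
print(h, dh / h)
```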

Stereoscopy (and for that matter stereology) does not offer any way to determine the number of features in a clump such as that shown in Figure 15.4. The clump in this SEM image of yeast is not so densely packed that the number could be estimated from the outer dimensions, nor are all of the particles the same size, nor are those visible on the outside necessarily representative of the interior ones. Stereoscopy can provide the height of the clump from two images, but the invisible interior is not accessible.

Figure 15.4. SEM image of a three-dimensional clump of yeast cells. Stereoscopy can measure the outer dimensions of the clump but cannot reveal internal details such as the individual cells.

Viewing stereo images does not require making measurements, of course. Humans judge the relative position of objects based on whether the parallax increases or decreases, and this is determined by the vergence (motion of the eyes in their sockets) needed to bring points of interest to the fovea for detailed viewing. This means that relative position is judged one feature at a time, rather than a distance map being computed for the entire image at once.

When stereo pair images are used for applications such as aerial mapping of ground topography or SEM mapping of relief on integrated circuits, the computer works entirely differently from human vision (Medioni & Nevatia, 1985). It attempts to match points between the two images, typically using a cross-correlation technique that looks for similarities in the local variation of grey (or color) values from pixel to pixel. Some systems rely on human matching of points in the two images; this can be quite time consuming when thousands of points need to be identified and marked. Matched points then have their parallax or horizontal offset values converted to elevation, and a range image is produced in which the elevation of each point on the surface is represented by a grey scale value.
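The cross-correlation matching can be sketched in one dimension: slide a small template from the left scanline along the right scanline and keep the offset with the highest normalized correlation. This toy example with a known, synthetic disparity stands in for the patch matching a real photogrammetry package performs:

```python
import numpy as np

rng = np.random.default_rng(1)

# A "left" scanline, and a "right" scanline containing the same signal
# offset by a known disparity (synthetic data for illustration).
signal = rng.normal(size=200)
true_disparity = 7
left = signal[20:120]
right = signal[20 - true_disparity:120 - true_disparity]

# Slide a template from the left line along the right line, scoring each
# position with the normalized cross-correlation of the two windows.
template = left[40:60]
t = template - template.mean()
best_score, best_pos = -np.inf, -1
for pos in range(len(right) - len(template) + 1):
    w = right[pos:pos + len(template)]
    w = w - w.mean()
    score = np.dot(w, t) / (np.linalg.norm(w) * np.linalg.norm(t) + 1e-12)
    if score > best_score:
        best_score, best_pos = score, pos

disparity = best_pos - 40   # offset of the matched position vs. the template
print(disparity, best_score)
```

In a full system this search is repeated for many patches across the image, and each recovered disparity is converted to elevation to build the range image.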

These techniques are suitable for surface modeling and in fact are widely used for generating topographic maps of the earth’s surface, but they are much more difficult to apply to volumetric measurement. Matching of points between the two views is complicated when the left-to-right order in which they appear can change, and the contrast may vary or points may even disappear because of the presence of other objects in the line of sight. Stereoscopic images of volumes can be used for discrete measurements of z spacings between objects, but producing a complete voxel array with the objects properly arranged is not possible with only two views— it requires the full tomographic approach.

However, stereoscopic images are very useful and widely employed to communicate voxel-based data back to the human user. From a volumetric data set, two stereoscopic projections can be generated quickly and displayed for the user to view.

This is sometimes done by putting the images side by side (Figure 15.5) and allowing the user to relax his or her eyes to fuse them, or by putting the images into the red and green color planes of a display and using colored glasses to deliver each image to one eye.

The example display in Figure 15.5 is a processed image of the human brain. The original data were obtained by magnetic resonance imaging, which shows the concentration of protons (and hence of water molecules) in tissue. The images were assembled into a voxel array, and processed using a gradient operator as discussed below. Two projections 9 degrees apart were then generated for viewing. Other schemes are also used, in which LCD lenses on glasses rapidly turn opaque and transparent while the computer display alternately flashes up the left and right images, or in which lens and mirror systems oscillate to assist the viewer.


Figure 15.5. Stereoscopic display of a processed voxel array. The left and right images can be viewed to see the internal structure of a human brain as imaged with MRI, and processed with a 3D gradient operator to reveal regions where there is a large change in concentration of water.


The majority of people can see and correctly interpret stereo images using one or more of these methods, but a significant minority cannot, for a variety of reasons. Many of the people who cannot see stereo images get the same depth information by sequential viewing of images from a moving point of view, for example by moving their head sideways and judging the relative displacement of objects in the field of view to estimate their relative distance. The relationship is the same as for the parallax. Displays that rotate the voxel array as a function of time and show a projection draw on this ability of the human visual system to judge distance. Almost everyone correctly interprets such rotating displays.

Visualization

When a three-dimensional data set has been obtained, whether by serial sectioning, tomographic reconstruction or some other method, the first desire is usually to view the voxel array so as to visualize the volume and the arrangements of objects within it. The example in Figure 15.5 shows one mode that is often used, namely stereo presentation of two projections. This relies on the ability of the human to fuse those images and understand the spatial relationships which they encode. As mentioned above, the second common technique is to use motion to convey the same kind of parallax or offset information. This may either be done by providing a continuous display of rotation of the volume to be studied, or an interactive capability to “turn it over” on the screen and view it from any angle (usually with a mouse or trackball, or whatever user interface the computer provides).


Producing the projected display through the voxel array can be done in several different ways, to show all of the voxels present, or just the surfaces (defined as the location where voxel value—density for example—changes abruptly), or just a selected surface cut through the volume. We will examine these in detail below.

But in all cases, the amount of calculation is significant and takes some time. So does addressing the voxels that contribute to each point on the displayed image. If the array of data is large, it is likely to be stored on disk rather than in memory, which adds substantially to the time to create a display. For these reasons, many programs are used to create a sequence of images that are then displayed one after the other to create the illusion of motion. This can be very effective, but since the sequence must be calculated beforehand it is not so useful for exploring an unknown sample; it is primarily used to communicate to others results that the scientist has already discovered.

Computer graphics allow three principal modes of display (Kriete, 1992): transparent or volumetric, surface renderings, and sections. The volumetric display is the only one that actually shows all of the information present. For any particular viewing orientation of the voxel array, a series of lines through the array and perpendicular to the projected plane image are used to ray-trace the image. This is most often done by placing an extended light source behind the sample and summing the absorption of light by the contents of each voxel. This makes the tacit assumption that the voxel values represent density, which for many imaging modalities is not actually the case, but at least it provides a way to visualize the structure.
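For a projection along one axis, the absorption ray-tracing described here reduces to an exponential of the ray sums, treating each voxel value as an absorption coefficient. A minimal sketch with an arbitrary synthetic volume:

```python
import numpy as np

# A small voxel array whose values are treated as absorption coefficients.
volume = np.zeros((32, 32, 32))
volume[10:20, 12:18, 8:24] = 0.1   # an absorbing block inside the volume

# Project along the z axis: light entering each (x, y) column is attenuated
# by exp(-sum of absorptions along the ray) -- the tacit density assumption
# described in the text, for a parallel light source behind the sample.
transmitted = np.exp(-volume.sum(axis=2))
print(transmitted.min(), transmitted.max())
```

Rays through the absorbing block emerge darkened (exp(-1.6) of the incident intensity here), while rays through empty voxels pass unattenuated; oblique viewing directions require interpolating voxel values along each tilted ray rather than a simple axis sum.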

It is also possible to do the ray tracing with other light sources, or to use colored light and selective color absorption for different structures.

The chief difficulty with volumetric displays is that they are basically foreign to human observers. There are few natural situations in which this kind of image arises—looking at bits of fruit (which must themselves be transparent) in Jell-O can be imagined as an example. In most cases, we cannot see through objects; instead we see their surfaces. Even if the matrix is transparent or partially visible, the objects we are interested in are usually opaque (consider fish swimming in a pond, for example). For this reason, most volumetric display programs also include the capability to add incident light to reflect from surfaces, and to vary the color or density assigned to voxels in a nonlinear way so that surfaces become visible in the generated image.

Figure 15.6 shows an example. The data are magnetic resonance images of a hog heart, and the color and density of the heart muscle and the vasculature (which can be distinguished by their different water content) have been adjusted to make the muscle partially transparent and the blood vessels opaque. Arbitrary colors have been assigned to the different voxel values to correspond to the different structures. The sequence of images from which the selected frames have been taken shows the heart rotating so that all sides are visible. Note, however, that no single frame shows all information about the exterior or interior of the object. The observer must build up a mental image of the overall structure from the sequence of viewed images, which happens to be something that people are good at. Movies in which the point of view changes, or the opacity of the various voxel values is altered, or the light source is moved, or a section plane through the data is shifted, all offer effective tools for communicating results. They do not happen to fit into print media, but video and computer projectors are becoming common at scientific meetings, and network distribution and publishing will play an important role in their dissemination.

Figure 15.6. Several views of magnetic resonance images of a hog heart from a movie sequence of its rotation. The data set was provided by and the images were generated using software from Vital Images (Fairfield, Iowa). (For color representation see the attached CD-ROM.)

Changing the nonlinear relationship between the values of voxels in the array and their color and opacity used in generating the image allows the transparency of the different structures to be varied so that different portions of the structure can be examined. Figure 15.7 shows the same data set as Figure 15.6 but with the opacity of the heart muscle tissue increased so that the interior details are hidden but the external form can be displayed. Again, because of the number of variables at hand and the time needed to produce the images, this is used primarily to produce final images for publishing or communicating results rather than exploring unknown data sets.

The surface rendering produced by changing voxel contributions in volumetric display programs is not as realistic as can be produced by programs that are dedicated to surface displays. By constructing an array of facets (usually triangles) between points on the surface defined by the outlines of features in successive planes in the 3D array, and calculating the scattering of light from an assumed source location from each facet as a function of its orientation, visually compelling displays of surfaces can be constructed.

Figure 15.7. The same data set as Figure 15.6 with the opacity of the voxels with values corresponding to muscle tissue increased. This shows the exterior form of the heart. (For color representation see the attached CD-ROM.)
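The per-facet shading calculation can be sketched with a simple diffuse (Lambertian) rule: brightness proportional to the cosine of the angle between the facet normal and the light direction. This is a generic shading model, not the exact algorithm of any package named here:

```python
import numpy as np

def facet_brightness(v0, v1, v2, light_dir):
    # Facet normal from the cross product of two edge vectors, then the
    # diffuse brightness max(0, n . l) for a normalized light direction.
    n = np.cross(np.subtract(v1, v0), np.subtract(v2, v0))
    n = n / np.linalg.norm(n)
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    return max(0.0, float(np.dot(n, l)))

# A facet lying in the x-y plane is fully bright when lit from straight
# above, and receives no diffuse light when lit edge-on from the side.
tri = ([0, 0, 0], [1, 0, 0], [0, 1, 0])
b_top = facet_brightness(*tri, light_dir=[0, 0, 1])
b_side = facet_brightness(*tri, light_dir=[1, 0, 0])
print(b_top, b_side)
```

A renderer applies this rule (plus hidden-surface removal) to every triangle in the mesh; moving the assumed light source simply changes `light_dir` and re-shades the facets.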

Figure 15.8 shows data from more than 100 serial section slices through the cochlea from the inner ear of a bat. The program used to generate the surface display simply shades each individual surface voxel according to the offset from the voxels above, so that the surface detail is minimally indicated but not truly rendered. This program also allows rotating the object, but only in 90-degree steps, and does not allow varying the location of the light source. The same data set is shown in Figure 15.9 using a more flexible reconstruction program that draws outlines for each plane, fills in triangles between the points in adjacent planes, calculates the light scattering from each triangle for any arbitrary placement of the light source, and allows free rotation of the object for viewing (it also generates movie sequences of the rotation for later viewing).

Surface rendering provides the best communication of spatial relationships and object shape information to the human viewer, because we are adapted to properly interpret such displays by our real-world experiences. The displays hide a great deal of potentially useful information—only one side of each object can be seen at a time, some objects may be obscured by others in front of them, and of course no internal structure can be seen. But this reduction of information makes available for study that which remains, whereas the volumetric displays contain so much data that we cannot usually comprehend it all.

There is a third display mode that is easily interpretable by humans, but (or perhaps because) it is rather limited in the amount of information provided. The three-dimensional array of voxels is either obtained sequentially, one plane at a time, or produced by an inverse computation that simultaneously calculates them all.

Showing the appearance of any arbitrary plane through this array is a direct analog to the sectioning process that is often used in sample preparation and imaging. The amount of computation is minimal, requiring only addressing the appropriate voxels and perhaps interpolating between neighbors.

Figure 15.8. Views of serial section slices through the cochlea from the inner ear of a bat. The surface voxels are shaded to indicate their orientation based on the offset from the voxels above. The data set was provided by (Duke Univ.) and the reconstruction performed using Slicer (Fortner Research, Sterling, VA). (For color representation see the attached CD-ROM.)

Figure 15.9. The same data set as Figure 15.8 reconstructed by converting each image plane to a contour outline and connecting points in each outline to ones in the adjacent outlines, forming a series of facets which are then shaded according to their orientation. The entire array can be freely rotated. Reconstruction performed using MacStereology (Ranfurly MicroSystems, Oxford UK). (For color representation see the attached CD-ROM.)
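Extracting an arbitrary section plane really does reduce to addressing and interpolation, as a sketch using SciPy's coordinate-mapping interpolation shows. The plane origin and direction vectors below are arbitrary choices for illustration:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# A voxel array whose value at each voxel equals its z index, so the
# extracted section can be checked analytically.
volume = np.fromfunction(lambda z, y, x: z + 0.0, (40, 40, 40))

origin = np.array([20.0, 5.0, 5.0])   # a point on the plane, in (z, y, x)
u = np.array([0.0, 1.0, 0.0])         # first in-plane direction (along y)
v = np.array([0.5, 0.0, 1.0])         # second direction, tilted out of plane
v = v / np.linalg.norm(v)

# Sample a 30x30 grid of points on the plane, with linear interpolation.
i, j = np.mgrid[0:30, 0:30]
coords = origin[:, None, None] + i * u[:, None, None] + j * v[:, None, None]
section = map_coordinates(volume, coords, order=1)
print(section.shape)
```

Because the plane is tilted in z, each step along `j` climbs 0.5/sqrt(1.25) of a voxel in z, and the extracted values rise accordingly; for a real data set the same call yields the microstructure on that oblique cut.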

As shown in Figure 15.10a, this allows viewing microstructure revealed on planes in any orientation, not just those parallel to the X, Y, Z axes. Sometimes, for structures that are highly oriented and nonrandom, it is possible to find a section plane that reveals important structural information. But such unique planes violate most of the assumptions of stereology (IUR sections) and do not pretend to sample the entire structure.

Arbitrary section displays do not reveal very much about most specimens, except for allowing stereological measurements to test the isotropy of the structure.

In the example of Figure 15.10, which is an ion microscope image of a two-phase metal structure, the volume fraction of each phase, the mean intercept lengths, and the surface area per unit volume of the interface can easily be determined from any plane. Three-dimensional imaging is not required for these stereological calculations. The key topological piece of information about the structure, however, is not evident in section images. Reconstructing the 3D surface structure (the figure shows two different methods for doing so) reveals that both phases are connected structures of great complexity. This is the kind of topological information that requires 3D imaging and analysis to access. But the added information, while key to understanding the structure, requires only visualization, not measurement. Measurement is often performed more straightforwardly, with less effort and more accuracy, on two-dimensional images than on three-dimensional ones, as discussed below.

In summary, visualization of three-dimensional data sets typically involves too much data to put all on the screen at once. Selection by discarding parts that are not of interest (e.g., the matrix) or selecting just those parts that are most easily interpreted (e.g., the surfaces) helps produce interpretable displays. These are often used to create movies of sequences (e.g., a moving section plane, changing transparency, or rotating point of view) to convey to others what we have first learned by difficult study. Such pictures are in much demand, but it isn’t clear how well they convey information to the inexperienced, or how they will be efficiently disseminated in a world still dominated by print media.

Processing

Image processing in three dimensions is based on the same algorithms as used in two dimensions, and most of these generalize directly from 2D to 3D. Fourier transforms are separable into operations on rows and columns in a matrix, the Euclidean distance map has the same meaning and procedure, and so forth. The important change is that neighborhood operations involve many more voxels. In a 2D array, each pixel has 8 neighbors touching its edges or corners; in 3D this becomes 26 voxels, touching faces, edges and corners. For most processing, a spherical neighborhood is an optimal shape. Constructed on an array of cubic voxels, such a 5-voxel-wide neighborhood involves 57 voxels (Figure 15.11). Addressing the array to obtain the values for these neighbors adds significant overhead to the calculation process.
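The neighborhood sizes quoted can be checked by enumeration. The distance threshold below (squared distance at most 5) is one choice that reproduces the 57-voxel count of Figure 15.11:

```python
# The 26-voxel neighborhood: all offsets within one step of the center,
# touching faces, edges, or corners, excluding the center itself.
neighbors_26 = [
    (dz, dy, dx)
    for dz in (-1, 0, 1)
    for dy in (-1, 0, 1)
    for dx in (-1, 0, 1)
    if (dz, dy, dx) != (0, 0, 0)
]

# An approximately spherical neighborhood 5 voxels across: all offsets in
# the 5x5x5 cube whose squared distance from the center is at most 5
# (radius about 2.24 voxels).
offsets = [
    (dz, dy, dx)
    for dz in range(-2, 3)
    for dy in range(-2, 3)
    for dx in range(-2, 3)
    if dz * dz + dy * dy + dx * dx <= 5
]
print(len(neighbors_26), len(offsets))
```

A 3D filter loops over these offset lists for every voxel, which is where the addressing overhead mentioned above comes from: 57 fetches per output voxel instead of the handful a 2D kernel needs.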


Figure 15.10. Ion microscopic images of a two-phase Fe-Cr alloy (data provided by H. Miller, Oak Ridge National Labs): a) arbitrary sections through the voxel array do not show that the regions for each phase are connected; b) display of the voxels on the surface of one phase; c) outlines of the phase regions; d) rendered surface image produced by placing facets between the outlines of (c).

A cubic array is not the ideal one for mathematical purposes. Just as in a 2D image with square pixels the problem of distinguishing the corner and edge neighbors can be avoided by using a grid of hexagonal pixels for greater isotropy, so in three dimensions a "face-centered cubic" lattice of voxels is maximally isotropic and symmetrical. But this is not used in practice, because the acquisition processes do not lend themselves to it, and computer addressing and display would be further complicated. Cubic voxels give the best compromise for practical use. But many acquisition methods instead produce voxels that are not cubic, and as noted before have much different resolution between planes than within the plane. This


Figure 15.10. Continued.


Figure 15.11. The 57 voxels that form an approximation to a spherical neighborhood with a diameter of 5 voxels in a cubic array. (For color representation see the attached CD-ROM.)

causes difficulties for processing, since the voxels should be weighted, or included in the neighborhood, in proportion to their distance from the center. The usual approaches to deal with this are to construct tables for the particular voxel spacing present in a given data set, or to re-sample the data (usually discarding some resolution in one or two directions and interpolating in the others) to construct a cubic array.
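Re-sampling to cubic voxels can be as simple as linear interpolation along the coarse axis. The following is a minimal sketch, assuming only the section spacing (z) differs from the in-plane pixel size; the function name and interface are illustrative, not a standard API:

```python
import numpy as np

def resample_to_cubic(vol, spacing):
    """Linearly interpolate a 3D array so all axes share the finest
    voxel spacing.  `spacing` is the (z, y, x) voxel size; only the
    z axis is assumed coarse, the common serial-section case."""
    dz, dy, dx = spacing
    target = min(spacing)
    n_new = int(round(vol.shape[0] * dz / target))
    # fractional source position of each new slice along z
    z_new = np.linspace(0, vol.shape[0] - 1, n_new)
    lo = np.floor(z_new).astype(int)
    hi = np.minimum(lo + 1, vol.shape[0] - 1)
    frac = (z_new - lo)[:, None, None]
    return (1 - frac) * vol[lo] + frac * vol[hi]

vol = np.random.rand(10, 64, 64)            # 10 sections, 64 x 64 pixels each
cubic = resample_to_cubic(vol, (4.0, 1.0, 1.0))
print(cubic.shape)                          # (40, 64, 64)
```

In practice a library routine such as scipy.ndimage.zoom performs the same job with higher-order interpolation; the point of the sketch is that interpolation can only redistribute, never recover, the resolution lost between sections.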

Because of the size of the arrays and the size of the neighborhoods, processing of three-dimensional images is considerably slower, and needs more computer horsepower and memory, than processing of two-dimensional images. The purposes are similar, however. As discussed in earlier chapters, these include correction of defects, visual enhancement, and assistance for segmentation. Figure 15.5 shows an example, in which the gradient of voxel value is calculated by a three-dimensional Sobel operator (combining the magnitudes of first derivatives in three orthogonal directions to obtain the square root of the sum of squares at each voxel) to generate a display of brightness values that highlights the location of structures in the brain.
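The square-root-of-sum-of-squares combination can be sketched as follows. Here np.gradient's central differences stand in for the full Sobel kernels (which add [1, 2, 1] smoothing in the transverse directions), so this is an approximation of the operator described, not an exact implementation:

```python
import numpy as np

def gradient_magnitude_3d(vol):
    """Brightness-gradient magnitude of a 3D voxel array: the square
    root of the sum of squares of first derivatives along the three
    orthogonal axes.  Central differences approximate the derivative
    kernels of the Sobel operator described in the text."""
    gz, gy, gx = np.gradient(vol.astype(float))
    return np.sqrt(gz**2 + gy**2 + gx**2)

vol = np.zeros((8, 8, 8))
vol[:, :, 4:] = 1.0                 # a flat interface normal to the x axis
mag = gradient_magnitude_3d(vol)
print(mag[4, 4, 3], mag[4, 4, 4])   # 0.5 on both sides of the step
```

The result is an array of edge strengths that can be displayed directly, as in Figure 15.5, or thresholded to locate boundary voxels.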

Measurement

Global parameters such as volume fraction and surface area can of course be measured directly from 3D arrays by counting voxels. In the case of surfaces, the area associated with each one depends on the neighbor configuration. It is not necessarily the case that this type of direct measurement is either more precise or more accurate than estimating the results stereologically from 2D images. For one thing, the resolution of the voxel arrays is generally much poorer. A 256 × 256 × 256 cube requires 16 megabytes of RAM (even if only one byte is required for each voxel).

A cube 1000 voxels in each direction would require a gigabyte. However the data are obtained, by a serial sectioning technique or volumetric reconstruction, such a large array is unworkable with the current generation of instrumentation and computers. The size of the voxels limits the resolution of the image, so details are lost in the definition of structures and surfaces, which limits their precise measurement. And accuracy in representing the overall structure to be characterized generally depends upon adequate sampling; the difficulty of performing 3D imaging tends to limit the number of samples that are taken. Measurement of global parameters is best performed by taking many 2D sections that are uniformly spread throughout the structure of interest, carrying out only rapid and simple measurements on each.
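The voxel counting itself is trivial once the phase is segmented. A minimal sketch with a synthetic thresholded array (the threshold and the random data are purely illustrative):

```python
import numpy as np

# Volume fraction by direct voxel counting, the 3D analogue of the
# point count on plane sections.  The phase is labelled here by a
# simple threshold on synthetic data; real segmentation is rarely
# this clean.
vol = np.random.default_rng(0).random((64, 64, 64))
phase = vol > 0.7            # hypothetical binary phase mask
vv = phase.mean()            # volume fraction V_V of the phase
print(round(vv, 3))          # close to 0.3 for this synthetic array
```

The counting step is fast; the costs discussed above lie in acquiring, storing and segmenting the voxel array in the first place, which is why plane sections usually win for global parameters.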

The measurement of feature-specific values suffers from the same limitation. With resolution limited, the range of sizes of features that can be measured is quite restricted. The size of the voxels increases the uncertainty in the size of each feature, and the result is often no better than can be determined much more efficiently from 2D images. Figure 15.12 shows an example. The structure is a loosely sintered ceramic with roughly spherical particles, imaged by X-ray microtomography. The figure shows various sections through the structure, and the surfaces of the particles. Figure 15.13 shows the measurement results, based on counting the voxels in the 3D array and also by measuring the circles in 2D sections and using the


Figure 15.12. X-ray microtomographic reconstruction of sintered particles in a ceramic (the data set is artificially stretched by a factor of 2 in the vertical direction): a) sections along arbitrary planes; b) sections along a set of parallel planes; c) display of the surface voxels of the spherical particles.



Figure 15.13. Comparison of 2D and 3D measurement of the size of spherical particles in the structure shown in Figure 15.12: a) size distribution of circles in 2D plane sections; b) estimated size distribution of spheres by unfolding the circle data in (a)—note the negative values; c) directly measured size distribution of spheres from the 3D voxel array.


Figure 15.14. Volumetric reconstruction of Golgi-stained neurons (data provided by Vital Images, Fairfield, Iowa). The two different orientations show the topological appearance of the network, but the limited resolution of this 256 × 256 × 256 array does not actually define a continuous network nor reveal many of the smaller structures. (For color representation see the attached CD-ROM.)

unfolding method to determine the sizes of the spheres that must have produced them. Even with the same resolution in the 2D images as in the 3D array, the results are similar. It would be relatively easy to increase the resolution of the 2D section images to obtain a more precise measurement of sizes.

Counting voxels to determine volume is analogous to counting pixels in 2D to determine area. Other parameters are not quite so easily constructed. The convex hull in 3D is a polyhedron, ideally with a large number of faces; in practice, cubes and octahedra are used. The length (the maximum distance between any two points in a feature) is determined by finding the maximum projection as axes are rotated, and since this need be done only in fairly crude steps (as discussed in the chapter on measurement of 2D images) it generalizes well to 3D. But the minimum projected dimension requires much more work.
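The rotating-projection search for the maximum dimension can be sketched by projecting the feature's voxel coordinates onto a modest set of directions and keeping the largest extent. The random directions and the point set below are purely illustrative, not a prescribed sampling scheme:

```python
import numpy as np

def max_caliper(points, n_dirs=50):
    """Approximate maximum caliper length of a point set by taking the
    largest projected extent over a crude set of directions -- the
    rotation-in-steps approach described in the text."""
    rng = np.random.default_rng(1)
    dirs = rng.normal(size=(n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = points @ dirs.T                    # projection onto each direction
    return (proj.max(axis=0) - proj.min(axis=0)).max()

pts = np.array([[0, 0, 0], [10, 0, 0], [5, 3, 1]], float)
print(round(max_caliper(pts), 2))             # close to 10, the true length
```

Because only crude angular steps are needed, the cost grows modestly from 2D to 3D; the minimum projected dimension, by contrast, cannot be found from a coarse direction set without much finer search.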

The midline length of an irregular object can be based on the skeleton. Actually, there are two different skeletons defined in 3D (Halford & Preston, 1984; Lobregt et al., 1980), one consisting of surfaces and a different one of lines; both require quite a bit of work to extract. Line lengths can be summed as a series of chain links of length 1, √2 and √3 (depending on how the voxels lie adjacent to each other), but the poor resolution tends to bias the resulting length value (too long for straight lines, too short for irregular ones). Shape parameters in 3D are typically very ad hoc. In principle spherical harmonics, similar to the harmonic shape analysis in 2D, can be used to define the shape. However, the resolution of most images is marginal for this purpose, and re-entrant shapes, which occur with real features, can frustrate this approach.
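The chain-link summation itself is simple once the skeleton voxels are in order: each step contributes 1, √2 or √3 depending on whether consecutive voxels share a face, an edge or only a corner. A toy sketch (the voxel path is hypothetical; extracting a real 3D skeleton is far more work, as noted above):

```python
import numpy as np

# Sum the length of an ordered voxel chain as links of length 1,
# sqrt(2) or sqrt(3), according to whether successive voxels share a
# face, an edge or only a corner.
chain = [(0, 0, 0), (1, 0, 0), (2, 1, 0), (3, 2, 1)]  # hypothetical path

length = 0.0
for a, b in zip(chain, chain[1:]):
    d = sum(abs(u - v) for u, v in zip(a, b))  # 1, 2 or 3 for lattice steps
    length += {1: 1.0, 2: np.sqrt(2), 3: np.sqrt(3)}[d]
print(round(length, 3))  # 1 + sqrt(2) + sqrt(3) = 4.146
```

The bias mentioned in the text arises because a smooth curve is forced onto these three discrete step lengths, so the digitized path systematically over- or under-counts depending on the curve's direction relative to the lattice.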

Figure 15.14 shows an example of the power and the limitations of 3D imaging. The Golgi-stained neurons can be displayed volumetrically and the structure rotated to allow viewing of the branching pattern. However, a quantitative measure of the branching is probably not needed, and would in any case be very difficult to achieve. In fact, the comparatively poor resolution (as compared to 2D images) means that the linear structures are not necessarily fully connected in the voxel image, and that some of the finer structures are missing altogether. This does not impede the viewer's ability to judge the relative complexity of different structures in a qualitative sense, but it does restrict the quantitative uses of the images.

The simple conclusion must be that few metric properties of structures or features are efficiently determined using 3D imaging. The resolution of the arrays is not high enough to define the structures well enough, and the amount of computation required for anything beyond voxel counting is too high. Metric properties are best determined using plane section images and stereological relationships.

On the other hand, topological properties do require 3D imaging. However, these are not usually measured but simply viewed. Most of the use of 3D imaging is for visual confirmation of structural understanding that has been obtained and measured in other ways using classical stereology.
