Review. Forma, 21, 67–80, 2006
Visualization of Forms in the Inside of the Human Body
Junichiro TORIWAKI
Department of Life System Science and Technology, Chukyo University, Toyota 470-0393, Japan
E-mail address: jtoriwak@life.chukyo-u.ac.jp
(Received February 26, 2005; Accepted July 4, 2005)
Keywords: Organs, Lung, Colon, Human Body, 3D CT Images, Virtual Endoscopy, Micro CT Images
Abstract. This article presents a brief review of methods to visualize the forms of parts of the human body at different spatial resolutions by applying navigation observation and pattern recognition to three-dimensional (3D) X-ray CT images. Several examples of 3D views of parts of the human body are shown. Their common characteristic is that the viewpoint can be selected arbitrarily inside the body and moved around almost continuously.
1. Introduction
Human organs have very complicated forms, sometimes far beyond our imagination.
We need to know the forms of various organs and tissues as exactly as possible to
diagnose and treat diseases, because certain changes necessarily accompany disease.
Apart from such medical requirements, the forms of the human body or its parts can
also support creative activity such as painting and sculpture, animation production,
and industrial design.
Recent developments in technology have made it possible to see the forms of various
parts of the human body without injuring it. In particular, imaging technologies such as
CT and MRI in medicine contribute much (BANKMAN, 2000). For instance, with 3D X-ray
CT we can reconstruct parts of the human body in computer memory at a spatial resolution
of 0.5 mm³. Molecular imaging provides a method to record functions and shapes at the
molecular level. With micro CT we can observe microstructure at a spatial resolution on
the order of micrometers (MATSUBARA et al., 2004; SATO et al., 2004).
In this article, the author presents a brief review of methods to visually observe or
visualize the forms of parts of the human body at different spatial resolutions by applying
navigation observation and pattern recognition to three-dimensional (3D) X-ray CT
images. We also show several examples of 3D views of parts of the human body. Their
common characteristic is that the viewpoint can be selected arbitrarily inside the body
and moved around almost continuously.
Through imaging technology, we obtain a three-dimensional array recording parts of
an individual human body. This array can be regarded as a virtualized version of that
individual body; we call it the virtualized human body (VHB) (TORIWAKI, 1997). The VHB
is a replica of the individual human body. However, we (the human visual system) cannot
see such a set of 3D numerical values directly. To visualize it we employ a CG technique
known as volume rendering (LEVOY, 1988; TORIWAKI, 2002c). Visualization methods are
further enriched by combining them with navigation and structurization.
2. Visualization of 3D Images
In the observation of the human body, the original data is a set of three-dimensional
(3D) density values obtained by scanning the body with imaging equipment such as X-ray
CT and MRI. We call this 3D array a 3D digital picture (image) (TORIWAKI, 2002c). The
array stores physical measurements of 3D volume elements of the human body, such as the
X-ray attenuation factor measured by scanners. The spatial resolution ranges from about
1 cm down to 10 µm. However, we cannot directly see the internal structure of such a 3D
array of numerical data without visualization techniques.
The visualization technique employed most widely at present is volume rendering
(VolR). We do not explain the details of VolR here; if necessary, they can be found in
(TORIWAKI, 2002c; MORI et al., 2003). Instead we give below brief comments worth noting
when viewing images rendered with VolR.
(1) VolR can be considered a kind of orthogonal or perspective mapping of a 3D
array of numerical data (a solid) onto a 2D picture plane. Therefore a gray-tone value
(density value) in the resulting 2D picture is an accumulated sum or product of the density
values of the original 3D picture along a line of projection (called "the ray" in the field
of computer graphics) (MORI et al., 2003) (Fig. 1).
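As an illustration of this accumulation (a minimal sketch, not the rendering implementation used by the author's group), the contribution of one ray can be written in a few lines. The density values, the opacity table, and the function name here are hypothetical.

```python
import numpy as np

def render_ray(densities, opacity_table, step=1.0):
    """Accumulate density samples along one ray (front-to-back compositing).

    densities: 1D array of density values sampled along the ray.
    opacity_table: function mapping a density value to an opacity in [0, 1].
    Returns the accumulated intensity seen on the 2D picture plane.
    """
    intensity = 0.0
    transparency = 1.0  # fraction of light not yet absorbed
    for d in densities:
        alpha = opacity_table(d) * step
        intensity += transparency * alpha * d  # use the density itself as "color"
        transparency *= (1.0 - alpha)
        if transparency < 1e-4:  # early ray termination
            break
    return intensity

# Toy example: a ray passing through air (0), soft tissue (40), bone (400).
ray = np.array([0.0, 0.0, 40.0, 40.0, 400.0, 40.0, 0.0])
# Opacity table: air transparent, bone nearly opaque.
table = lambda d: min(1.0, d / 500.0)
print(round(render_ray(ray, table), 2))
```

Repeating this for every ray through the picture plane yields the 2D VolR image; changing `table` changes the accumulated values, which is exactly the parameter dependence discussed in points (2) and (5).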
(2) Before generating a VolR picture, we replace the density values with more suitable
values, usually to help human vision grasp the spatial distribution of the original density
values. The replacement values are called "opacity". The correspondence between density
values and opacity values is defined by the opacity table, which is the most important
parameter of VolR that the user can select.

Fig. 1. Illustration of the volume rendering.
(3) Voxels in a 3D picture may, where possible, be divided into subgroups so that a
different color can be assigned to each subgroup. By using colors we can draw observers'
attention to particular objects in a displayed picture. The assignment of colors is defined
by a color table in the VolR system, which is the other important parameter given by the
user.
(4) If a solid object in 3D space is defined explicitly, we can draw its surface on a 2D
display. This is done by another well-known computer graphics method, "surface
rendering (SurR)". We omit an explanation of SurR here, because it is easily found in any
computer graphics textbook (WATT and POLICARPO, 1998).
(5) From the viewpoint of shape understanding, VolR can be considered a method
directed toward discovering forms existing in natural phenomena or natural things
without much a priori knowledge. As introduced above, it involves two sets of parameters,
the "opacity table" and the "color table". The observed forms, or the resulting "impression"
of the generated images, depend greatly on these two parameter sets.
(6) On the other hand, SurR is a method directed more strongly toward the design or
creation of a new form. To apply SurR to natural objects such as the human body, we need
to extract the border surfaces of objects beforehand. Performing this automatically
requires pattern recognition of solid objects or border surfaces in 3D space (TORIWAKI,
2002c). Details of pattern recognition are omitted here because its important parts have
already been described in (TORIWAKI, 2002c).
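As a sketch of the preprocessing that SurR needs, the following toy example extracts the border voxels of an object defined by a simple threshold. A fixed threshold stands in here for the full pattern recognition of (TORIWAKI, 2002c); the array and threshold values are illustrative.

```python
import numpy as np

def border_voxels(volume, threshold):
    """Extract the border voxels of the object defined by thresholding.

    A border voxel is an object voxel with at least one 6-neighbor outside
    the object; these voxels carry the surface that SurR draws.
    """
    obj = volume >= threshold
    padded = np.pad(obj, 1, constant_values=False)
    # A voxel is interior if all six 6-neighbors are also object voxels.
    interior = np.ones_like(obj)
    for axis in range(3):
        for shift in (-1, 1):
            interior &= np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
    return obj & ~interior

# Toy example: a 5x5x5 solid cube of density 100 inside air.
vol = np.zeros((7, 7, 7))
vol[1:6, 1:6, 1:6] = 100.0
surface = border_voxels(vol, 50.0)
print(int(surface.sum()))  # 5^3 - 3^3 = 98 border voxels
```

The resulting border voxels would then be passed to a surface-fitting step (e.g., polygonization) before SurR proper.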
3. Navigation
In ordinary volume rendering, we assume a fixed viewpoint and a fixed view
direction, and we also fix necessary parameters such as the opacity table. Although all of
them may be chosen arbitrarily, they stay fixed once given. In several applications,
however, some of the viewing parameters (viewpoint positions and view directions) may
be changed interactively by the user. In medical applications this technique is now well
known as virtual endoscopy (VINING et al., 1993; MORI et al., 1994; ROGALLA et al., 2000;
VINING, 2003). It is sometimes named after the target organ to be diagnosed, as in virtual
colonoscopy and virtual bronchoscopy (ROGALLA et al., 2000; DACHMAN, 2003).
Technically this is not difficult if recent computers with graphics engines are available.
Let us represent a two-dimensional (2D) picture sequence by F = {F(t)} = {{fij(t)}}, where
fij(t) is the gray-tone value at the i-th row and the j-th column at time t. If we generate
a 2D image F(t) at a suitable rate, faster than 10 frames/sec for example, while gradually
changing the location of the viewpoint, we can produce a moving picture sequence that
gives the impression of flying through the inside of tubular organs such as the colon. We
call this mode of presentation navigation (TORIWAKI, 1997). The picture sequence F is a
time sequence of 2D pictures F(t), each showing the scene seen as our viewpoint moves
along the time axis. This is not the only possible way of moving the viewpoint;
conceptually, various types of navigation along other axes can be considered. We call this
generalized navigation, or navigation observation (TORIWAKI, 1997, 2004a, 2004b). One
interesting example is a change in physical scale, or magnification; such an example is
found in (MORRISON, 1982).
In general, by watching the generated picture sequences we get the feeling of
traveling inside the object, or inside an abstract space, along a suitably defined axis. In this
sense, virtual endoscopy shows moving images like those that might be seen while driving
a car or flying a small airplane inside the human body, or flying through pipe-shaped
organs along their inner walls. These feelings arise from moving the viewpoint along a
real-world coordinate axis. Conceptually, movement of the viewpoint along various other
axes can also be considered, such as physical scale, opacity, and time (TORIWAKI, 2004a).
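The geometric core of generating the frames F(t) can be sketched as follows: sample viewpoints at even arc-length intervals along a centerline and look ahead along the tangent. This is a minimal sketch under that assumption, not the actual navigation system; the function name and the straight-line test path are hypothetical.

```python
import numpy as np

def flythrough_cameras(centerline, n_frames):
    """Sample viewpoints and tangential view directions along a centerline.

    centerline: (N, 3) array of points along the organ's center path.
    Returns (viewpoint, view_direction) pairs at even arc-length intervals,
    one pair per frame F(t) of the navigation sequence.
    """
    seg = np.diff(centerline, axis=0)
    seg_len = np.linalg.norm(seg, axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg_len)])
    targets = np.linspace(0.0, arc[-1], n_frames)
    cameras = []
    for s in targets:
        i = min(np.searchsorted(arc, s, side="right") - 1, len(seg) - 1)
        t = (s - arc[i]) / seg_len[i]
        viewpoint = centerline[i] + t * seg[i]
        direction = seg[i] / seg_len[i]  # look ahead along the tangent
        cameras.append((viewpoint, direction))
    return cameras

# Toy example: a straight 60 mm path, sampled for 5 frames.
path = np.array([[0.0, 0.0, z] for z in np.linspace(0.0, 60.0, 13)])
cams = flythrough_cameras(path, 5)
print([tuple(np.round(v, 1)) for v, _ in cams])
```

Rendering one VolR image per (viewpoint, direction) pair at 10 frames/sec or faster then yields the fly-through impression described above; generalized navigation replaces the arc-length parameter with another axis such as scale or opacity.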
4. Collective Examples of Inside Views of the Human Body
Let us show examples of pictures generated by applying VolR and SurR to 3D CT
images of real human bodies. They were produced and collected in the course of research
in the author's group on computer-aided diagnosis of cancer, pattern recognition of 3D
images, computer graphics, and visualization of 3D gray-tone images. We do not intend
to attach precise medical meaning to these images here. Instead we invite readers to enjoy
the views inside the human body, and to notice how complicated the forms inside our
normally invisible body are.
4.1. Example 1: The inside view of colon
The first image is a view of the colon (HAYASHI et al., 2003a, b; ODA et al., 2004;
KITASAKA et al., 2004). Figure 2 shows the outside view of the colon. Blue curves show
approximated center lines determined automatically by a 3D thinning algorithm (SAITO
and TORIWAKI, 1994, 1995; TORIWAKI and MORI, 2001). If the observer's viewpoint
proceeds into the colon along these centerlines, the observer sees scenes of the space
inside the colon.

Fig. 2. The outside view of colon.

Figure 3 shows an example of a picture sequence consisting of successive scenes
obtained by virtual colonoscopy. The colon wall has many successive convex and concave
parts, called "haustra" in medicine. By presenting each scene at a sufficiently high rate,
we feel as if we were flying through a cylindrical closed space. In clinical applications
doctors are expected to find symptoms of diseases such as polyps, tumors and inflammation.

Figure 4 illustrates the effect of changes in the opacity table using one such scene of
the colon wall. A typical polyp is present, but its apparent shape varies greatly with the
values of the opacity table employed in the VolR algorithm. Thus we should be careful
about what we are actually seeing in VolR images.
4.2. Example 2: The inside view of the lung
Let us proceed to the scenery of the lung. Figure 5 presents two scenes observed from
a viewpoint located inside the lung. Parts of blood vessels, bronchial branches, ribs and
the chest wall are seen in the image. The small massive objects in the right image, which
seem to be floating in the air, are nodules suspected of being lung cancer. In this case more
than 100 such small nodules were detected by a medical doctor. We can move around them
if needed for diagnosis. They are marked in green at the corresponding locations in the
left image. In a single slice of a CT image, such nodules appear only as small vague
massive shadows scattered in the lung field. In these 3D images we can fly freely among
the many 3D nodules, examining the details of their shapes and calculating shape features.
Even in this case, however, the borders of the nodules are not fixed decisively; they are
only perceived visually in VolR images. The apparent shapes of the nodules are very
sensitive to the values of the opacity table employed (Fig. 6) (KUSANAGI et al., 2003;
MEKADA et al., 2003).

Fig. 3. Picture sequence showing inside views of the colon (the views seen when navigating inside the
colon). The viewpoint was moved along the centerline of case 2 in Fig. 2 from the bottom (the upper left
of the figure) to the top (the lower right of the figure). Each picture shows the view looking ahead in the
tangential direction of the centerline at sample points selected at even intervals (approximately 6 cm on
the real body) along the path.

Fig. 4. Effect of change in the opacity table (1). The viewpoint and the view direction are the same for the
two pictures.

Fig. 5. Scenery of the lung.

Fig. 6. Effect of change in the opacity table (2). H.U. means the Hounsfield Unit used for the density values
in CT images. The numerals under each image show the opacity values used in rendering by VolR.

Fig. 7. Piece of the inflated fixed lung specimen.

Fig. 8. Example of a slice of a µ-CT image of the lung specimen.
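The sensitivity of a nodule's apparent extent to rendering parameters can be illustrated with a toy computation: the same smooth density bump yields very different apparent volumes under different thresholds, much as different opacity tables yield different apparent shapes. The synthetic CT values, voxel size, and function name below are hypothetical.

```python
import numpy as np

def nodule_volume(ct, threshold, voxel_mm3=0.5**3):
    """Apparent nodule volume (mm^3) after thresholding a CT sub-image."""
    return float((ct >= threshold).sum() * voxel_mm3)

# Synthetic nodule: a blurred spherical density bump with no sharp border,
# in arbitrary H.U.-like units.
z, y, x = np.mgrid[-10:11, -10:11, -10:11]
r = np.sqrt(x**2 + y**2 + z**2)
ct = 100.0 * np.exp(-(r / 5.0) ** 2)

for th in (20.0, 50.0, 80.0):
    print(th, nodule_volume(ct, th))
```

Because the density falls off smoothly, every choice of threshold (or opacity table) defines a different "border", which is why shape features measured on such nodules must always be reported together with the parameter values used.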
Fig. 9. Image of the whole sample specimen produced with volume rendering.
4.3. Example 3: Microstructure of lung tissue
The third example is the visualization of the microstructure of lung tissue. The
source data was obtained by scanning a piece of an inflated fixed lung with a micro CT
scanner. Its spatial resolution is approximately 0.2 µm. We omit a detailed explanation of
the medical or anatomical content of the structure here. Note that the sample was prepared
by drying a piece of an anatomical specimen; the structure seen here is therefore a kind
of skeleton of the architecture of the living organ. However, the approximate shape was
preserved, because the sample was filled with air after it was extracted and then dried
(GROSKIN, 1993). The whole piece of the sample is shown in Fig. 7. Figure 8 shows an
example of a slice of the CT image, and Fig. 9 is a VolR image of the whole sample. The
real size is about 5 mm³.
Figure 10 shows a VolR image with the viewpoint inside the piece of the sample. We
can see the sophisticated architecture of the parenchyma of the human lung: the peripheral
structure of thin bronchial branches, alveolar ducts, alveoli, etc. The basic units forming
the lung architecture are the alveolus and the alveolar duct. The number of alveoli is a key
determinant of the lung architecture and has been counted in various ways (OCHS et al.,
2004). In this sample, however, individual alveoli are expected to be observable directly,
because the mean size of a single alveolus is about 4.2 × 10^6 µm³ (OCHS et al., 2004). The
total number of alveoli was estimated as 480 million in (OCHS et al., 2004); that value was
derived by applying classical stereology to 2D sections observed by light microscopy.
Apart from its medical or anatomical meaning, we can see a complicated 3D network
architecture, and by shifting the viewpoint a little we see different views of it. In the case
of Fig. 10, the viewpoint is considered to be located inside a peripheral bronchial branch
(left) and in a peripheral vein (right).

Fig. 10. Images produced with the volume rendering. The viewpoint seems to be located in a peripheral
bronchus branch (left), and in the peripheral vein (right).

Shape features characterizing this architecture have been neither proposed nor
measured. In (OCHS et al., 2004), the Euler number of this architecture was estimated only
by a stereological method. In (MATSUBARA et al., 2003, 2004), the method proposed in
(TORIWAKI and YONEKURA, 2002a, b) was applied to calculate the 3D digital Euler number
and the connectivity index from a 3D binary picture obtained by thresholding the 3D gray-
tone image. An apparently similar structure is found in bone, although its size is larger
(PARFITT et al., 1983; KINNEY and LADD, 1995; MAJUMDAR et al., 1995; SAHA and
CHAUDHURI, 1996; KINNEY et al., 1998; WEHRLI et al., 2000, 2001, 2003).

We should again be careful in interpreting this image, because the same problem as
described before occurs here too. For example, Fig. 11 shows two VolR images of this
sample from the same viewpoint and in the same direction; only the opacity tables differ.
"Which is true?" is a reasonable question. Perhaps each provides information about the
shape of a 3D equi-density surface at a different density level. Still, we have been able to
recognize some of the individual alveoli by computer (SATO et al., 2004).

Fig. 11. Effect of change in the opacity table for the same material as Figs. 9 and 10.

Fig. 12. Tree structure in the human body (back view of the body).
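For readers unfamiliar with the 3D Euler number mentioned above, it can be computed from a binary picture by counting the vertices, edges, faces and cubes of the voxel complex (chi = V - E + F - C). This is one standard way to compute it, sketched here for illustration; it is not necessarily the local-pattern method of (TORIWAKI and YONEKURA, 2002a), and the test shapes are toy examples.

```python
import numpy as np
from itertools import product

def euler_number_3d(binary):
    """Euler number chi = V - E + F - C of the voxel complex of a 3D binary picture."""
    b = np.asarray(binary, dtype=bool)
    p = np.pad(b, 1, constant_values=False)

    def n_cells(offsets):
        # A vertex/edge/face exists if any voxel incident to it is an object voxel.
        grid = np.zeros_like(p)
        for off in offsets:
            grid |= np.roll(p, off, axis=(0, 1, 2))
        return int(grid.sum())

    cubes = int(b.sum())
    vertices = n_cells(list(product((0, 1), repeat=3)))
    # Edges along axis a are shared by the 4 voxels differing on the other axes.
    edges = sum(n_cells([o for o in product((0, 1), repeat=3) if o[a] == 0])
                for a in range(3))
    # Faces perpendicular to axis a are shared by the 2 voxels differing on a.
    faces = sum(n_cells([o for o in product((0, 1), repeat=3)
                         if all(o[k] == 0 for k in range(3) if k != a)])
                for a in range(3))
    return vertices - edges + faces - cubes

solid = np.ones((1, 1, 1), dtype=bool)  # single voxel: chi = 1
ring = np.ones((3, 3, 1), dtype=bool)
ring[1, 1, 0] = False                   # a voxel ring (solid torus): chi = 0
print(euler_number_3d(solid), euler_number_3d(ring))
```

For a network-like structure such as the lung parenchyma or trabecular bone, a strongly negative Euler number indicates many independent loops, which is why it serves as a connectivity index.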
4.4. Example 4: Tree structure in the human body
Let us present several views of tree structures in the human body. Figure 12 shows
the bronchial tree and the vessels extending from the hilum toward the periphery of the
lung. The gray column at the center of the image is the spine. The next image (Fig. 13)
shows vessels in the lung extracted automatically by an image analysis procedure and
then classified into arteries and veins by a medical specialist (TANAKA et al., 2004).
Strictly speaking, veins and arteries in the lung are connected by capillaries, but
present CT systems cannot record the capillaries. Thus vessels and bronchial branches are
observed as tree structures in CT images. Anatomical names are given to individual
branches by regarding them as parts of a tree structure.
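The tree view of the airways can be made concrete with a small data structure: branches as nodes, with a traversal assigning each branch its generation number (trachea = 0), which is one basis for anatomical naming. This is a hypothetical sketch; the branch names and the class layout are illustrative, not the representation used in the cited systems.

```python
from dataclasses import dataclass, field

@dataclass
class Branch:
    name: str
    children: list = field(default_factory=list)

def generations(root):
    """Map each branch name to its generation number (trachea = 0) by DFS."""
    result = {}
    stack = [(root, 0)]
    while stack:
        node, depth = stack.pop()
        result[node.name] = depth
        for child in node.children:
            stack.append((child, depth + 1))
    return result

# Illustrative fragment of the bronchial tree.
trachea = Branch("trachea", [
    Branch("right main bronchus",
           [Branch("RUL bronchus"), Branch("RML bronchus")]),
    Branch("left main bronchus",
           [Branch("LUL bronchus")]),
])
print(generations(trachea))
```

Arterial and venous trees extracted from CT can be stored in the same way, with per-branch attributes (radius, length, spatial direction) attached to each node for classification, as in (TANAKA et al., 2004).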
4.5. Example 5: Abdominal organs
Finally, let us show the outside views of the major abdominal organs in Fig. 14. Since
all the organs in this figure were again extracted by computer pattern recognition, their
forms are not always exact, but no important errors have been found here (KITASAKA et al.,
2005). In any case, we can never see such forms in the living human body without X-ray
CT images. However, we have not yet succeeded in deriving reasonable mathematical
expressions for these forms.

Fig. 13. Trachea, bronchus, and vessels in the lung.

Fig. 14. Abdominal organs. Brown red: liver. Yellow: kidney. Green: spleen. Pink: lung and stomach. Light
gray: bone.
5. Conclusion
In this article, the author has briefly reviewed methods to visualize the forms of
human organs and tissues as examples of forms found in natural things. Thanks to the great
progress of recent imaging technologies, we can obtain 3D digital images of parts of the
human body at spatial resolutions from 10 mm down to 10 µm. Once we have stored the
digitized data from scanning devices, we can reconstruct the individual human body (the
virtualized human body, VHB) in a computer. By visualizing the VHB we can see various
forms existing in the human body, and we can move around or fly through its inside
interactively.
The article presented several examples of images showing scenes inside the human
body. All of them are characterized by a viewpoint set inside the body. However,
theoretical (or mathematical) analysis of these forms remains an unsolved problem for the
future. For instance, no simple mathematical expression for the form of the colon is
known, and shape features have not been reported for the complicated spatial architecture
of the lung tissue. We expect that the science on form will contribute much to solving these
problems in the near future.
The author deeply thanks all the people listed below for their assistance. For providing CT images
and for valuable discussion from medical viewpoints: Makoto Hashizume, MD, Kyushu University,
Hiroshi Natori, MD, Sapporo Medical College, Masaki Mori, MD, Sapporo Kousei Hospital,
Hirotsugu Takabatake, MD, Minami-Sanjo Hospital.
For producing VolR images included in the article and for significant collaboration: Kensaku
Mori, Ph.D., Nagoya University, Yoshito Mekada, Ph.D., Chukyo University, Yuichiro Hayashi,
Ph.D., Nagoya University, Takayuki Kitasaka, Ph.D., Nagoya University.
This research was partially supported by the Grant-in-Aid for Scientific Research from the
Ministry of Education, Science and Culture, the Grant-in-Aid for Cancer Research from the
Ministry of Health, Labour and Welfare, and the Grant-in-Aid for the Private University High-Tech
Research Center from the Ministry of Education, Science and Culture.
REFERENCES

BANKMAN, I. N. (ed.) (2000) Handbook of Medical Imaging, Academic Press.
DACHMAN, A. H. (2003) Atlas of Virtual Colonoscopy, Springer-Verlag.
GROSKIN, S. A. (1993) Heitzman's The Lung: Radiologic-Pathologic Correlations, Mosby, St. Louis, U.S.A.
HAYASHI, Y., MORI, K., HASEGAWA, J., SUENAGA, Y. and TORIWAKI, J. (2003a) A method for detecting
undisplayed regions in virtual colonoscopy and its application to quantitative evaluation of fly-through
methods, Academic Radiology, 10, 1380–1391.
HAYASHI, Y., MORI, K., HASEGAWA, J., SUENAGA, Y. and TORIWAKI, J. (2003b) Quantitative evaluation of
observation methods in virtual endoscopy based on the rate of undisplayed region, in Proc. of SPIE, Medical
Imaging 2003, Physiology and Function: Methods, Systems, and Applications, pp. 69–79.
HERMAN, G. T. (1998) Geometry of Digital Spaces, Birkhauser, Boston.
KINNEY, J. H. and LADD, A. J. (1998) The relationship between three-dimensional connectivity and the elastic
properties of trabecular bone, J. Bone Mineral Res., 13, 839–845.
KINNEY, J. H., LANE, N. E. and HAUPT, D. L. (1995) In vivo, three-dimensional microscopy of trabecular bone,
J. Bone Mineral Res., 10, 264–270.
KITAOKA, H., TAMURA, S. and TAKAKI, R. (2000) A three-dimensional model of the human pulmonary acinus,
J. Appl. Physiol., 88, 2260–2268.
KITASAKA, T., MORI, K., HAYASHI, Y., SUENAGA, Y., HASHIZUME, M. and TORIWAKI, J. (2004) Virtual
pneumoperitoneum for generating virtual laparoscopic views based on volumetric deformation, in 7th
International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI
2004), Saint-Malo, France, September 26–29, 2004, Proceedings, Part II, LNCS 3217 (eds. C. Barillot,
D. R. Haynor and P. Hellier), pp. 559–567.
KITASAKA, T., OGAWA, H., YOKOYAMA, K., MORI, K., MEKADA, Y., HASEGAWA, J., SUENAGA, Y. and
TORIWAKI, J. (2005) Automated extraction of abdominal organs from uncontrasted 3D abdominal X-ray CT
images based on anatomical knowledge, Journal of the Computer Aided Diagnosis of Medical Images, 9,
1–14.
KUSANAGI, T., MEKADA, Y., TORIWAKI, J., HASEGAWA, J., MORI, M. and NATORI, H. (2003) Correspondence
of lung nodules in sequential chest CT images for quantification of the curative effect, in Proc. of CARS2003,
pp. 983–989.
LEVOY, M. (1988) Volume rendering: display of surfaces from volume data, IEEE Computer Graphics and
Applications, 8, 29–37.
MAJUMDAR, S., NEWITT, D., JERGAS, M., GIES, A., CHIU, E., OSMAN, D., KELTNER, J., KEYAK, J. and GENANT,
H. (1995) Evaluation of technical factors affecting the quantification of trabecular bone structure using
magnetic resonance imaging, Bone, 17, 417–430.
MATSUBARA, A., KITASAKA, T., MORI, K., SUENAGA, Y., TORIWAKI, J. and TAKABATAKE, H. (2003) A preliminary
study on micro structure analysis of lung tissue (2), in Paper of Professional Group, Institute of Electronics,
Information and Communications Engineers, Japan, PRMU2003-303, Vol. 103, No. 738, pp. 113–118.
MATSUBARA, A., KITASAKA, T., MORI, K., SUENAGA, Y. and TORIWAKI, J. (2004) A study on topological feature
values of 3-D digital images: toward micro structure analysis of lung tissue, in Paper of Professional
Group, Institute of Electronics, Information and Communications Engineers, Japan, PRMU2003-303, Vol.
103, No. 738, pp. 113–118.
MEKADA, Y., KUSANAGI, T., HAYASE, Y., MORI, K., TORIWAKI, J., HASEGAWA, J., MORI, K. and NATORI, H.
(2003) Detection of small nodules from 3D chest X-ray CT images based on shape features, in Proc. of
CARS2003, pp. 971–976.
MORI, K., HASEGAWA, J. and TORIWAKI, J. (1994) A method to extract pipe structured components in three
dimensional medical images and simulation of bronchus endoscope images, in Proc. of Conference on Three
Dimensional Digital Images '94, pp. 269–274.
MORI, K., SUENAGA, Y. and TORIWAKI, J. (2003) Fast software-based volume rendering using multimedia
instructions on PC platforms and its application to virtual endoscopy, in Proc. of SPIE, Medical Imaging
2003, Physiology and Function: Methods, Systems, and Applications, pp. 111–122.
MORRISON, P. (1982) P. Morrison and the Office of Charles and Ray Eames: Powers of Ten, about the Relative
Size of Things in the Universe, W. H. Freeman and Company, San Francisco.
OCHS, M., NYENGAARD, J. R., JUNG, A., KNUDSEN, L., VOIGT, M., WAHLERS, T., RICHTER, J. and GUNDERSEN,
H. J. G. (2004) The number of alveoli in the human lung, Am. J. Respir. Crit. Care Med., 169, 120–124.
ODA, M., KITASAKA, T., MORI, K. and SUENAGA, Y. (2004) Development of computer aided diagnosis system
for colorectal cancer based on navigation diagnosis, in Paper of Professional Group, Institute of Electronics,
Information and Communications Engineers, Japan, PRMU2004-18, MI2004-18, Vol. 104, No. 91, pp.
35–40.
PARFITT, A. M., MATHEWS, C. H. E., VILLANUEVA, A. R., KLEEREKOPER, M., FRAME, B. and RAO, D. S. (1983)
Relationships between surface, volume, and thickness of iliac trabecular bone in aging and in osteoporosis:
implications for microanatomic and cellular mechanisms of bone loss, J. Clinical Invest., 72, 1396–1409.
ROGALLA, P., TERWISSCHA VAN SCHELTINGA, J. and HAMM, B. (eds.) (2000) Virtual Endoscopy and Related
3D Techniques, Springer, Heidelberg, Germany.
SAHA, P. K. and CHAUDHURI, B. B. (1996) 3D digital topology under binary transformation with applications,
Comput. Vision Image Understand., 63, 418–429.
SAITO, T. and TORIWAKI, J. (1994) New algorithms for n-dimensional Euclidean distance transformation,
Pattern Recognition, 27, 1551–1565.
SAITO, T. and TORIWAKI, J. (1995) A sequential thinning algorithm for three dimensional digital pictures using
the Euclidean distance transformation, in Proc. 9th SCIA (Scandinavian Conf. on Image Analysis), pp. 507–
516.
SATO, Y., NAGAO, J., KITASAKA, T., MORI, K., SUENAGA, Y., TORIWAKI, J. and TAKABATAKE, H. (2004)
Extraction of pulmonary alveoli from micro CT images of lung tissue, in Paper of Professional Group,
Institute of Electronics, Information and Communications Engineers, Japan, PRMU2004-9, MI2004-9,
WIT2004-9, Vol. 104, No. 90, pp. 49–54.
TANAKA, T., MEKADA, Y., MURASE, H., HASEGAWA, J., TORIWAKI, J. and OTSUJI, H. (2004) Automated
classification of pulmonary artery and vein from chest X-ray CT images based on their spatial arrangement
features, in Proc. of the Symp. on Media and Image Recognition and Understanding 2004 (MIRU2004), Vol.
I, pp. 15–20.
TORIWAKI, J. (1997) Virtualized human body and navigation diagnosis, BME (Journal of Japanese Society for
Medical and Biological Engineering), 11, 8, 24–35 (in Japanese).
TORIWAKI, J. (2002c) Three-dimensional Image Processing, Shokodo (in Japanese).
TORIWAKI, J. (2004a) Navigation observation with interactive operation of the viewpoint inside an object,
Technical Report No. 2003-2-01, School of Computer and Cognitive Sciences, Chukyo University, Toyota,
Japan.
TORIWAKI, J. (2004b) The front of radiation imaging technique (8): Navigation observation, observing objects
with the free inside viewpoint, and medical applications, ISOTOPES, 53, 331–342.
TORIWAKI, J. and MORI, K. (2001) Distance transformation and skeletonization of 3D pictures and their
applications to medical images, in Digital and Image Geometry, Advanced Lectures, LNCS (Lecture Notes
in Computer Science) 2243 (eds. G. Bertrand, A. Imiya and R. Klette), pp. 412–428, Springer-Verlag.
TORIWAKI, J. and YONEKURA, T. (2002a) Euler number and connectivity indexes of a three-dimensional digital
picture, FORMA, 17, 173–209.
TORIWAKI, J. and YONEKURA, T. (2002b) Local patterns and connectivity indexes in a three dimensional digital
picture, FORMA, 17, 275–291.
VINING, D. J. (2003) Virtual colonoscopy: the inside story, in Atlas of Virtual Colonoscopy (ed. A. H.
Dachman), pp. 3–4, Springer-Verlag, N.Y.
VINING, D. J., PADHANI, A. R., WOOD, S., ZERHOUNI, E. A., FISHMAN, E. K. and KUHLMAN, J. E. (1993)
Virtual bronchoscopy: a new perspective for viewing the tracheobronchial tree, Radiology, 189(P), Nov.
WATT, A. and POLICARPO, F. (1998) The Computer Image, Addison-Wesley.
WEHRLI, F. W., SAHA, P. K., GOMBERG, B. R. and SONG, H. K. (2000) Noninvasive assessment of bone
architecture by magnetic resonance micro-imaging-based virtual bone biopsy, Proc. of the IEEE, 91, 1520–
1542 (2003); J. Appl. Physiol., 88, 2260–2268 (2000).
WEHRLI, F. W., GOMBERG, B. R., SAHA, P. K., SONG, H. K., HWANG, S. N. and SNYDER, P. J. (2001) Digital
topological analysis of in vivo magnetic resonance microimages of trabecular bone reveals structural
implications of osteoporosis, J. Bone Mineral Res., 16, 1520–1531.
WEHRLI, F. W., SAHA, P. K., GOMBERG, B. R. and SONG, H. K. (2003) Noninvasive assessment of bone
architecture by magnetic resonance micro-imaging-based virtual bone biopsy, Proc. of the IEEE, 91, 1520–
1542.