Imaging Platforms for Detecting and Analyzing Skin
Features and Its Stability - with Applications in Skin
Health and in Using the Skin as a Body-Relative
Position-Encoding System
by
Ina Annesha Kundu
B.S. in Mechanical Engineering, University of Arizona (2013)
B.S. in Mathematics, University of Arizona (2013)
Submitted to the Department of Mechanical Engineering
in partial fulfillment of the requirements for the degree of
Master of Science in Mechanical Engineering
at the
MASSACHUSETTS INSTITUTE OF TECHNOLOGY
June 2015
© Massachusetts Institute of Technology 2015. All rights reserved.
Author: Signature redacted
Department of Mechanical Engineering
May 8, 2015

Certified by: Signature redacted
Dr. Brian Anthony
Principal Research Scientist, Department of Mechanical Engineering
Thesis Supervisor

Accepted by: Signature redacted
David E. Hardt
Professor, Department of Mechanical Engineering
Graduate Officer
Imaging Platforms for Detecting and Analyzing Skin Features
and Its Stability - with Applications in Skin Health and in
Using the Skin as a Body-Relative Position-Encoding System
by
Ina Annesha Kundu
Submitted to the Department of Mechanical Engineering
on May 8, 2015, in partial fulfillment of the
requirements for the degree of
Master of Science in Mechanical Engineering
Abstract

Skin imaging is a powerful, noninvasive method with the potential to aid in the diagnosis of various dermatological diseases and to assess overall skin health. This thesis discusses imaging platforms that were developed to aid in studying skin features and characteristics at different time and length scales in order to characterize and monitor skin. Two applications are considered: (1) using natural skin features as a position-encoding system and an aid for volume reconstruction in ultrasound imaging, and (2) studying natural skin feature evolution, or stability, over time to aid in assessing skin health.

A 5-axis, rigid translational scanning system was developed to capture images at specific locations and to validate skin-based body registration algorithms. We show that natural skin features can be used to perform ultrasound-based reconstruction accurate to 0.06 mm. A portable, handheld scanning device was designed to study skin characteristics at different time and length scales. With this imaging platform, we analyze skin features at different length scales: µm (for microreliefs), mm (for moles and pores), and cm (for distances between microreliefs and other features). Preliminary algorithms are used to automatically identify microreliefs. Further work in image processing is required to assess skin variation using these images.
Thesis Supervisor: Dr. Brian Anthony
Title: Principal Research Scientist, Department of Mechanical Engineering
Acknowledgments
If I were to acknowledge every person and every encounter that has somehow made
an impact in my life during the past two years, this section would be longer than the
thesis itself. However, there are just some people that cannot go unnamed, and I take
the time now to express my deepest gratitude for their help, love, and support in the
past two years.
First and foremost, I want to thank my family for providing a nurturing and
supportive environment. I am always grateful for their never-ending love and encouragement. Throughout my life, my parents have been my role models and I can only
aspire to be as kind, considerate, and selfless as they are. Despite the miles that
separated us, my mom always made sure my mind and body were well nourished.
She taught me that being a good person is much more respectable and valuable than
being a good student; a lesson that is easily lost in the focused, studious life at MIT.
A man of few words with a very busy professional schedule, my father left the
responsibilities of disciplining and guiding the kids to my mom. But he was also a
silent supporter of my sister, Auni, and me. In the few stressful times we experienced
in grad school, he always reminded us that we went for graduate studies for fun and
echoed my mom's sentiments that "school is not everything." His sarcastic comments
and far-fetched theories always brought a smile to our faces.
My sister, Auni, was my anchor. Growing up as twins and constantly together,
starting a new chapter separated by 2683 miles was the toughest part about grad
school. But through the advances of modern technology, we managed to talk, text,
and video chat enough to stay updated on each other's lives.
I would definitely not be where I am without my advisor, Dr. Brian Anthony.
Through his guidance, I grew as an individual and researcher. He encouraged me to
"fail; fail fast, and fail often" and to make sure to "never let perfection get in the way
of progress." Throughout lab lunches and meetings, he also taught me valuable life
skills (like sarcasm). It will probably take the remaining years of my PhD before I
can fully understand his humor, but I am always grateful to him for making the lab
such an enjoyable place to work.
Besides my advisor, my lab mates made the basement lab of Building 35 my
home away from home. Our lab lunches were always filled with spirited debates and
lighthearted conversations, even at the most stressful times.
Shawn Zhang, Nigel
Kojimoto, and Tylor Hess were forever trying to break my notions of "good" vs.
"bad" and became some of my closest confidantes. As Nigel leaves for California
next month, the lab dynamic will no longer be the same, but I know the lab group
will continually grow and learn from one another as we have done with former PhD
students, Matthew Gilbertson and Shih-Yu Sun. I know we will continue to have
stimulating conversations in the hallways or in lab, like I did over the past couple of
years with my lab mates and associates: Sisir Koppaka, Kristi Oki, Aaron Zakrzewski,
John Lee, Bryan Ranger, Megan Roberts, Dr. Xian Du, and Ian Lee.
While the lab contributed to a significant part of the journey, the people I met
outside of lab really enhanced my overall experience at Boston and MIT. From partying together to taking classes together, these people were an integral part of my
assimilation in Boston. Claudio Hail and Joao Ramos of the "Three Musketeers"
were the first people I met on campus - from day one of orientation to the last day
of our master's thesis, we always managed to find time to explore Boston together.
Times with the one and only "Hot Chocolate Society" (Affi Maragh, Claudio Hail,
Joao Luiz Alemeida de Souza Ramos, Andrew Houck, Connor Mulcahy, Mustafa Mohamad, Robert Katzschmann) will be forever cherished. A special thank you to Joao
and Robert for their help with controls.
Last, but certainly not least, I thank Steve Racca for his friendship and all the
engaging conversations over the past year. Always willing to be my test subject, he
gave me the opportunity to test out theories and advance my research. He taught me
so much, in the lab and beyond, for which I will forever be thankful.
I am so fortunate to have experienced being a grad student with all my Mech
E companions.
It has been a pleasure taking classes, hanging out, and traveling
together. Robbie Bruss: thank you for always having a positive attitude. Hearing
you sing a cappella and taking ceramics class together were a welcome artistic break
to an otherwise technical life. To my MEng peers (Grace Fong, Ali Shabbir, Derek
Straub, Shaozheng Zhang, Paramveer Toor, Siddharth Udayshankar, Aditya Prasad,
Daniel Dillund, Happy Zhu, Rahul Chawla, Saksham Saxena, and Steve Racca):
thank you for being willing to be my subjects during the skin scanning experiments. To all
my friends I met through my time at the Graduate Student Council (GSC), Graduate
Association of Mechanical Engineers (GAME), and Ashdown, there are too many of
you to name individually, but know that spending time with you has enriched my
MIT experience. To my Boston and MIT comrades, I extend a heartfelt thank you.
I could not be where I am today without you.
Contents

1 Introduction
   1.1 Skin Research
   1.2 Skin Features
       1.2.1 Melanin Variations
       1.2.2 Hair Follicles
       1.2.3 Microrelief Structures
       1.2.4 Superficial Veins
   1.3 Existing Imaging Technologies
       1.3.1 Imaging Hardware
       1.3.2 Imaging Methodologies
   1.4 Thesis Outline

2 Ground Based Mechanical Scanning System for Evaluating Skin Based Body Registration Algorithms
   2.1 Mechanical System Hardware
       2.1.1 3-Axis CNC Mill
       2.1.2 Servo Motors
       2.1.3 Webcam
       2.1.4 Integration of Hardware
       2.1.5 Ergonomic Considerations
   2.2 Mechanical System Control Using LabView
       2.2.1 Stepper Motor Control
       2.2.2 Servo Motor Control
   2.3 Characterizing the Mechanical System
       2.3.1 Resolution of the 3-Axis Linear Stage
       2.3.2 Quantifying Error of the System
   2.4 Experiments
       2.4.1 Lighting and Artificial Skin Features
       2.4.2 Linear Motion Experiments
       2.4.3 Defocus Blur
       2.4.4 Motion Blur
       2.4.5 Underwater Experiments
   2.5 Validation Results
       2.5.1 Linear Motion
       2.5.2 Defocus Blur
       2.5.3 Motion Blur
   2.6 Summary

3 Handheld Skin Scanning Device
   3.1 Camera
       3.1.1 Variable Optical Parameters
       3.1.2 Camera Control with LabView
   3.2 Handheld Scanning Device Design
       3.2.1 Frame
       3.2.2 Set Working Distance
       3.2.3 Ring Stand
       3.2.4 Iteration 1 of Handheld Scanning Device
       3.2.5 Iteration 2 of Handheld Scanning Device
       3.2.6 Iteration 3 of Handheld Scanning Device
       3.2.7 Final Design of Handheld Scanning Device
   3.3 Lighting
       3.3.1 Uniform Lighting
       3.3.2 Directional Lighting
       3.3.3 Calibrating the Light
   3.4 Skin Scanning Experiments
   3.5 Preliminary Image Analysis
   3.6 Skin Studies: Closing Comments and Ongoing Work

4 Conclusion
   4.1 Future Work

A Figures

B Matlab Codes
List of Figures

1-1 Skin Furrows and Ridges
1-2 Melanin and Melanocytes
1-3 Hair Follicle
1-4 Structural Aging of Skin
1-5 Imaging System for Finger Knuckle Prints
1-6 Imaging System for Microrelief Structure
1-7 Handheld Imaging System for Skin Color
2-1 Longitudinal vs Transverse Axes
2-2 CNC Axes
2-3 USB Webcam Used for Experiments
2-4 USB Camera Calibration
2-5 Camera Scanning Arm While on Platform
2-6 Servo Motor Connection to 3-Axis Linear Stage
2-7 3D Printed Bracket to Connect Rotation Servo to 3-Axis Translational Stage
2-8 CAD Model of Ground Truth Mechanical System
2-9 Full System Integration: 3-Axis Translational Stage with Servo Motors and Camera
2-10 Front Panel Inputs
2-11 Stepper Automation Code Flow Chart
2-12 Flat Sequence Loop to Turn 1 Revolution
2-13 LabView Code for Linear Axis: Full Cycles
2-14 LabView Code for Linear Axis: Fractional Cycles
2-15 LabView Code for Rotational Axes
2-16 Rigid Support Behind Tattoo
2-17 Precision Grid
2-18 Hardware Set Up: Quantifying Two Direction Errors
2-19 Tattoo
2-20 Experimental Images from Initial Experiments
2-21 Graphic of Set Up for Defocus Blur Experiments
2-22 Waterproof Camera Used for Underwater Experiments
2-23 Underwater Experimental Set Up
2-24 Underwater Image 1
2-25 Water-proofing the Webcam
2-26 Reconstruction Algorithm Flow
2-27 Contrast Results
2-28 Defocus Blur Results
2-29 Results of Motion Blur: Algorithm vs. Experimental Results
3-1 Initial Image Acquired with Basler Camera
3-2 Optical Parameters
3-3 Set Up Parameters
3-4 DOF Geometry
3-5 Skin LabView Code
3-6 Handheld Scanning Device: Frame, LED Light Ring, Optomechanical Rods, and Camera
3-7 Optomechanical Stiff Rods
3-8 Handheld Scanning Device V1
3-9 Ring Stand V2
3-10 Handheld Device Frame with Extrusions V1
3-11 Handheld Device Frame V1
3-12 Handheld Scanning Device Version 2
3-13 Handheld Scanning Device V3
3-14 Handheld Scanning Device: Final Version
3-15 Desktop Lamp
3-16 Circular Shadow
3-17 Reflective Lining
3-18 Reflective Lining Attached to Mount
3-19 Light Cover Options
3-20 Light Cover Final Design
3-21 Directional Light Schematic
3-22 Directional Light Applied
3-23 Light Calibration Set Up
3-24 Skin Scanning Experimental Setups
3-25 Sample Image Acquired from Skin Scanning Experiment
3-26 Sample Images Acquired from Skin Scanning Experiments: Skin Features
3-27 Sample Images for Hair Removing Algorithm
3-28 Skin Image Preprocessing
3-29 Skin Analysis Work Flow
A-1 CAD Model of Servo Connection to CNC
A-2 CAD Model of Webcam Mount to CNC
B-1 Quantifying Melanin Code
B-2 Skin Image Comparisons Using SSIM
B-3 Grayscale Skin Images
B-4 SSIM for Skin Images
B-5 Quantifying Lighting Code
List of Tables

1.1 Imaging Platforms
2.1 Ergonomic Platform Choices
2.2 Characterizing Repeatability with Position Grid
2.3 Error as Measured by High Resolution Camera
3.1 Reflective Lining Choices
3.2 Light Cover Choices
3.3 Comparison of Light Sources
3.4 SSIM Tests for Stability
Chapter 1
Introduction
Skin is the body's largest organ and serves as its first line of defense against harmful pathogens and microbes. Its various properties (structure, elasticity, and color) may be used as powerful tools to aid in the diagnosis and treatment of various diseases. Observing and quantifying skin conditions over time may be useful to characterize the rate and severity of disease progression. Prior research has examined skin cancer and its evolution [16, 37, 41] and the effects of aging on skin (wrinkles) [1, 24, 44, 30, 42]. Little research has been conducted on the stability and evolution of features in healthy skin. Characterizing skin feature stability is the motivation for the hardware developed and described in this thesis.
1.1 Skin Research
Many skin conditions can be characterized or identified by quantifying changes in features over time. Some examples are tuberous sclerosis complex (TSC) and eczema. TSC is characterized by hypomelanotic macules (changing melanin levels) and angiofibromas (peppercorn-sized bumps) [34]. Eczema is diagnosed when dermatologists notice inflamed or irritated skin patches [19]. Like eczema, other dermatological disorders are localized to different sites on the body. Grice et al. used gene phylotyping over a period of 6 months to investigate the influence of microorganisms on overall skin health [19]. They found that microbes are specific to skin sites and thrive under varying site-dependent conditions (i.e., sweat glands). These microbes can cause dermatological disorders, such as psoriasis and eczema [19].
Skin features are also indicative of photoaging, or skin damage due to prolonged exposure to sunlight [20]. Excessive wrinkles (more than expected for one's age) and irregular pigmentation are signs of UV radiation exposure [20]. With prolonged exposure to UV radiation, skin loses its elasticity and firmness [13]. There are also perceived changes in the microrelief line structure (discussed in Section 1.2): a decrease in the density of the lines (from 400/cm² to 250/cm²) and a change in their orientation [23, 29, 32].
In modern society, the pressure to remain beautiful is synonymous with looking young and vibrant [29]. Consequently, the cosmetics industry is continuously researching and developing new anti-aging formulas for topical creams and hormonal therapies. In one example, the effects of hormone therapy on aging skin were studied over a 5 year period [30]. The viscosity and elastic properties of the cheeks were determined; aged skin was less elastic. A subgroup of women undergoing hormone therapy exhibited more elastic skin, supporting the beneficial effects of hormone therapy. Another study investigated the effects of topical vitamin C treatments on premature aging of the skin by observing the microrelief density and the depth of "furrows" for six months [20]. Younger skin was found to be characterized by higher microrelief density [44] and shallow "furrows" (Figure 1-1) [44].
Skin age is affected by both physiological (natural aging) and environmental (UV exposure, hydration) effects. Literature suggests that the perceived age of skin is closely correlated with skin hydration [1]. Measuring skin hydration can be used to characterize overall skin health [13, 45]. Skin hydration can be measured with the Corneometer CM 825 (discussed in Section 1.3) [13]. When the skin is less hydrated, the microrelief structure is affected: the furrows get deeper and the ridges expand, giving an aged appearance [1]. Figure 1-1 provides an illustration of the skin "furrows" and "ridges."
Besides aging, various skin features have been used in identification, motion tracking, and the study of skin growth.

Figure 1-1: Skin Furrows and Ridges. Note: Image is adapted and modified from Shiseido [1].

Fingerprints are used as a form of identification in many fields, from forensics to unlocking smartphones (touch ID sensors) [40]. Biometrics have been extensively studied in the past thirty years as computer techniques advance [43]. Fingerprints, face, iris, retina, palmprint, hand geometry, hand vein, finger surface, and finger knuckle print are just a few examples of the unique physiological characteristics that distinguish individuals [43]. The prior technology used for imaging the finger knuckle is described in Section 1.3.
Some motion tracking technology used in virtual cinematography images the small (mm) skin features on the face and hands as markers for tracking motion [22]. High resolution cameras with a high frame rate were used to image the small (mm) skin features, such as wrinkles. When high resolution cameras are used to image small scale features, multiple pixels span each feature. This means the estimate of position should be based on the area of the pixels (instead of a single feature point) in estimation algorithms.
The last interesting and important application highlighted here is skin growth. By studying the wound healing process at the cellular, tissue, and organ levels, Buganza noted that scars resulting from wounds maintained the same skin integrity ("same microstructure, collagen content, and mechanical properties") as the native tissue [14]. By investigating the anisotropic prestrain and deformation of newly grown skin used for skin expansion, Buganza illustrated that new skin had the same mechanical properties as its original native state [39]. He drew a printed checker grid on the surface of pig skin (to simulate artificial skin structure) and inflated the tissue, which caused the skin to expand. The printed grid was deformed, but the mechanical properties were maintained. Since the mechanical properties of the original tissue were maintained, he showed that newly grown skin is mechanically analogous to original skin, so it can be used for defect correction [39].
1.2 Skin Features
General skin features have been found to vary by location; they depend on the underlying bone and muscle structure, mobility, and tension [33]. Since features vary so greatly with location, different parts of the body are imaged: the forearm, the upper part of the inner arm, and the thigh have been imaged in different studies [36, 42, 44]. Furthermore, as alluded to in Section 1.1, features are also age and gender dependent [15].

Features of various sizes could be imaged and later studied with the systems developed in this work. These are: (1) melanin variations, (2) hair follicles, (3) microrelief structures, and (4) superficial veins. In the following sections, each is described in the context of existing literature where possible.
1.2.1 Melanin Variations

Melanocytes, which are cells that contain melanin pigment, are 7 µm in cross section [25]. Observing skin features at such a small length scale can be used for diagnostic purposes. J. Sun et al. observe the pigment variation of skin lesions at the macro level, which can be used to diagnose skin tumors [37]. Pigment variations, at dimensional scales of 10 mm x 7 mm, are caused by melanin, oxygenated hemoglobin, and deoxygenated hemoglobin [37].

The melanin proto-molecules themselves are 6 nm - 10 nm in size, though they vary based on pH levels [25]. By studying the optical reflective and scattering properties of melanin, various skin diseases may be diagnosed [25]. Although melanin is the primary contributor to overall skin color, skin color is also affected by blood flow [24].
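As a toy illustration of how pigment variation might be quantified from an image, the sketch below estimates a per-pixel melanin index from the red channel of an 8-bit RGB photograph. The formula (a common log-reflectance approximation) and the 8-bit input assumption are mine for illustration; this is not the method of the cited studies or of the Matlab code in Appendix B.

```python
import numpy as np

def melanin_index(rgb):
    """Per-pixel melanin index from an 8-bit RGB image.

    Uses the common approximation MI = 100 * log10(1 / R), where R is the
    red-channel reflectance scaled to (0, 1]. Darker (more pigmented)
    skin reflects less red light, so it scores a higher index.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    red = np.clip(rgb[..., 0] / 255.0, 1e-6, 1.0)  # avoid log10(0)
    return 100.0 * np.log10(1.0 / red)

# Two-pixel toy patch: a lightly and a heavily pigmented pixel.
patch = np.array([[[200, 150, 140],
                   [80, 60, 55]]])
mi = melanin_index(patch)  # the darker pixel yields the larger index
```

Averaging this index over a lesion and its surroundings would give a crude measure of the pigment contrast discussed above.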
Figure 1-2: Melanin and Melanocytes. Note: Image is reproduced from Medline [9].

1.2.2 Hair Follicles
Hair follicles are on the order of 50 µm - 160 µm and are found in the dermis [28]. Since they expose the dermis layer to the environment, it is important to study these features. One study investigated the penetration of topically applied drugs and cosmetics through hair follicles using noninvasive cyanoacrylate skin biopsies and light microscopy [28]. It was found that hair follicle size and density are location dependent. Developed in the early fetal period, hair follicles are dense and spread apart over time as the body grows [28]. Of the seven regions sampled, the forehead had the highest hair follicle density. The calf had the largest follicles, at 160 µm. Forearms had one of the lowest hair follicle densities and the smallest follicles, at 78 µm.
Figure 1-3: Hair Follicle Found in the Dermis. Note: Image is reproduced from Medline [8].

1.2.3 Microrelief Structures

Skin at the 20 µm to 8 mm scale reveals prominent microrelief structure. The mechanical forces imposed on the tissue create a net-like structure comprised of triangles and quadrangles [29, 44]. Absorbing and excreting excess water depends on the microrelief structure, which affects skin hydration [36]. Microrelief structure and density vary with body location [15]; they are particularly prominent on the wrists and forearms [29].
Microrelief structure also changes with age [29, 44]. As elasticity decreases, the skin folding capacity increases, forming wrinkles that are commonly seen in the elderly. Zou et al. studied the aging effects on forearm microrelief structure [44]. The sampled population ranged from 20-79 years of age. The authors identified two types of skin lines which form the skin structure: primary lines that are 20-100 µm deep [33] ("wide and uniformly directed lines" [44]) and secondary lines that are 5-40 µm deep [33] ("all other lines with different directions not belonging to the primary sector" [44]). Primary lines get deeper with age, whereas the secondary lines start to disappear [44, 33]. These can be seen in Figure 1-4.
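A minimal sketch of how microrelief line density might be estimated from a grayscale skin patch is shown below. This dark-pixel-counting proxy is an assumption for illustration; published pipelines such as Zou et al.'s use oriented filtering and primary/secondary line classification, which this does not reproduce.

```python
import numpy as np

def furrow_density(gray, px_per_cm, dark_thresh=0.2):
    """Crude proxy for microrelief density: count of 'furrow' (dark)
    pixels per square centimetre of imaged skin. Real analyses classify
    primary vs. secondary lines; here every dark pixel counts equally.
    """
    g = np.asarray(gray, dtype=np.float64)
    g = (g - g.min()) / (np.ptp(g) + 1e-9)   # normalize to [0, 1]
    furrow = g < dark_thresh                 # dark pixels taken as furrows
    area_cm2 = g.size / float(px_per_cm) ** 2
    return furrow.sum() / area_cm2           # furrow pixels per cm^2

# Synthetic 1 cm x 1 cm patch (100 px/cm): dark grid lines every 10 px,
# mimicking the net-like furrow structure described above.
img = np.ones((100, 100))
img[::10, :] = 0.0
img[:, ::10] = 0.0
density = furrow_density(img, px_per_cm=100)
```

Tracking such a density measure over repeated scans of the same site is one way the 400/cm² to 250/cm² decline noted earlier could, in principle, be monitored.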
1.2.4 Superficial Veins

The visibility and size of superficial veins are also location dependent [35]. Vein diameters are on the order of 2 mm to 0.25 cm [26, 35]. Knowing the location and size of superficial veins is important in many applications. One study used the vein size to find the appropriate insertion point for dialysis treatment [35]. In this study, ultrasound was used to image superficial veins over a period of 7 years. The authors found that over time, vein size decreased by 1 mm in diameter every 3 years.
Figure 1-4: Structure of Aging Skin: original skin images and detected microrelief structures across an aged population (female, age 21; female, age 48; male, age 60; male, age 70). Note: Image is reproduced from Zou et al. [44].
1.3 Existing Imaging Technologies

There is a public database of skin images, which is used for disease diagnosis [16]. Accurately imaging the skin is difficult since image quality varies with imaging parameters: viewing angles and illumination angles [15, 37]. The complex optical properties of skin and the geometry of pores and wrinkles cause reflections, making it difficult to consistently image the skin. Hardware and imaging methodologies have been developed to address these challenges.
1.3.1 Imaging Hardware

Desktop and handheld devices have been developed to image the skin. These platforms are used to capture images of microrelief structure, pigment variation, and skin hydration levels.

Finger knuckle prints are unique and can be used to recognize a person's identity [43]. Like fingerprints, finger knuckle textures are distinctive and can be imaged and processed in real time with a finger-knuckle-print (FKP) imaging system (shown in Figure 1-5) as designed by Zhang et al. [43].

The finger bracket provides the subject with a consistent resting platform, which allows for repeatable finger knuckle images [43]. It is also ergonomic (it curves around the finger) for a better user experience [43]. Repeatable images are desired because they simplify data processing. The LED light source provides consistent lighting. Acquired FKP images are 768 pixels x 576 pixels with a resolution of 400 dots per inch (DPI). The repeatability of the experimental set up and the longitudinal stability of the finger knuckle print were studied by taking images at an interval of 56 days. The system is repeatable and compact, but can only be used to image a specific biometric (the finger knuckle).
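The quoted capture settings pin down the physical field of view of the FKP system. The quick arithmetic below (a sanity check of mine, not a figure reported in [43]) converts pixels and DPI to millimetres:

```python
# Field of view implied by the reported FKP image size and resolution.
DPI = 400                      # dots (pixels) per inch
PX_W, PX_H = 768, 576          # image dimensions in pixels
MM_PER_INCH = 25.4

fov_w_mm = PX_W / DPI * MM_PER_INCH   # ~48.8 mm wide
fov_h_mm = PX_H / DPI * MM_PER_INCH   # ~36.6 mm tall
```

So the sensor images roughly a 49 mm x 37 mm patch of knuckle surface, consistent with the compact device dimensions given in the Figure 1-5 caption.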
Zou et al. studied the changing forearm microrelief structure using a USB skin detector produced by Boseview Technology Company (illustrated in Figure 1-6) [44]. This system imaged the skin at 50x zoom and analyzed the "oil [content], moisture, pigment, pore, elasticity, and collagen fibers of the skin" [7]. However, Zou et al. noted the drawbacks of the imaging device: there were no polarizing filters to minimize light diffusion, so there were specular reflections (i.e., hot spots) in the images for subjects with oily, smooth, or well-moisturized skin. Furthermore, directional lighting induced shadows on uneven surfaces. The effects of shadows and hot spots were eliminated by the algorithm they developed, not by the device itself. This drawback is addressed in the device design presented in this thesis. Our device images the microrelief structure with minimal specular reflections and shadows.

Figure 1-5: Imaging System for Finger Knuckle Prints: The data acquisition module is shown here, with a finger bracket, triangular block, LED light ring, lens, and CCD sensor. The full device measures 160mm x 125mm x 100mm. Note: Image is reproduced from Zhang et al. [43].
The Skin Visiometer [13] uses the amount of light absorbed through a silicone replica of the skin (9 mm x 6.7 mm area) to image the skin microreliefs. The silicone replica is placed between an LED light source and a black-and-white, high resolution (5 MP) CMOS camera. A software program converts the amount of light absorbed through the silicone replica to reproduce the skin valleys and give a topographical view of the skin surface. The device weighs 2.7 kg and measures 26 cm x 24 cm x 7 cm [5].
Other noninvasive methods to characterize the microstructure for aging effects
are video microscopy and confocal laser scanning microscopy (CLSM) [36]. In [36], a
calibrated video microscope with a magnification of 100x was used to capture "in vivo
light reflectance images of the skin surface." The skin area imaged was minimal at
2.7mm x 2.1mm. CLSM was also used to image infant skin (at a much smaller area of
0.5mm x 0.5mm) and has been used for diagnosing dermatological conditions. Since
it is a noninvasive method that can image the skin microstructure in vivo, CLSM is
preferred to painful biopsies. It focuses incident light on the skin at various depths
and scans sections parallel to the skin surface.

Figure 1-6: Imaging System for Microrelief Structure: The principle of the image
acquisition device is shown here. It consists of white LED lamps to illuminate the
skin surface, an optical transmission magnifying glass, and a 6mm x 8mm CCD sensor
to image. Note: Image is reproduced from Zou et al. [44].
Very different from the aforementioned devices, Capacitance Imaging (CI) measures
the capacitance of the microrelief structure with a 50 µm resolution [23]. The
device is called SkinChip. It has a 1.8cm x 1.28cm sensor with 92,160 capacitors to
provide the fine resolution. Grayscale capacitance maps illustrate the density of the
microrelief structure, which provides information about skin surface hydration. A
drawback is that CI is sensitive to patients applying moisturizers right before imaging.
CI can also be used to analyze photoaging, revealing heterogeneous
patches of dry and hydrated skin.
Skin hydration has been measured in a number of ways. The Corneometer [13] uses
the change in dielectric constant resulting from changing capacitance to measure skin
surface hydration over an area of 49 mm². It is accurate to within 3%. It weighs 41 g and
is 11 cm in length [6]. Skin hydration may also be affected by the vascular structure
[24]. To assess the vascular structure in real time, videocapillaroscopy (VCS) is used
[24].
J. Sun et al. developed a handheld camera (Figure 1-7) which was used to study
varying pigmentation patterns [37]. The system acquired six color images to study
the color variation in skin lesions. Each image was acquired with light coming from
a different direction, free from the effects of "topographical shading, shadowing and
specular reflections" [37]. The skin was imaged with LED lights, with at least three
LEDs illuminating the surface for any one image. Because of the diffuse lighting,
however, the microrelief structure was not visible and only pigmentation was analyzed.
Figure 1-7: Handheld Imaging System for Skin Color: The skin analyzer uses an
IEEE1394 digital camera, a high-resolution compact lens, and six white LEDs with
a 40° spread. Note: Image is reproduced from J. Sun et al. [37].
A colorimeter (Minolta Chromameter CR-200) was used to characterize skin color,
especially the red hues resulting from blood flow [24]. Optical coherence tomography
(OCT) was used to construct a 3D volume of the tissue by acquiring images at 2
frames per second (fps) [31]. It was used as a diagnostic tool because the 3D volume
could be used to investigate different internal features. To acquire the 3D volume, a
two-beam interferometer was used. At a specific height, an image of the object plane
was obtained; one beam scanned the object depth while the other beam scanned the
object transversally.
Since the microscopic level also provides insight into skin health (young skin has
a thicker epidermis than aged skin [42]), a microscope can also be used to image skin.
A suitable microscope is the AM4815T Dino-Lite Edge microscope. It comes with
lights on board, has a high frame rate (30 fps), a 20x - 220x optical zoom, and an
extended depth of field [4]. A drawback is a very limited field of view (FOV). The
portability of this microscope inspired the handheld device design described in this
thesis.
Table 1.1 provides a summary of the various imaging platforms described above.
1.3.2 Imaging Methodologies
Cula et al. argue that highly specialized equipment is not necessary for analysis of skin
texture; instead, a number of camera and lighting positions are required (a method
known as bidirectional lighting) [15, 16]. They image with a high-magnification, high-
resolution Sony DFW-V500 IEEE-1394 camera on an articulated arm that allows 6
degrees of freedom (DOF) to view pores and fine wrinkles. The camera has manual
focus. Since dermatologists use a regular digital camera and image in ambient lighting,
a DC-regulated light source is used without special filters or polar lenses, which
mirrors the clinical lighting conditions. The camera must be calibrated before each
measurement since the camera location with respect to the imaging surface changes
each time the camera is removed from the articulated arm [15]. This system works
best for planar and rigid objects.
Setaro and Sparavigna addressed the problem of non-planar surfaces by making
and imaging skin replicas [33]. They cleaned and applied silicone polymer mixed
with catalyser to the skin surface to create the replicas. Silicone rubber skin replicas
were 5cm x 5cm and were imaged with a stereomicroscope (Olympus Sz404STR).
Repeatability was ensured by putting the replica on a horizontal plane, using a 7x
objective lens, and two illumination sources with a 45° incidence. Silicone rubber skin
surfaces were also used for skin relief measurements in [20]. A laser probe with an
optical measuring head was used to analyze the skin. It consisted of a transmitting
laser (to image the surface) and photodiodes (to receive the reflected signals).

Table 1.1: Imaging Platforms

Platform | Application | Benefits | Drawbacks
FKP | Finger knuckle | Repeatable images; consistent lighting; ergonomic | Images only the finger knuckle
USB Skin Detector | Microrelief | Images at 50x zoom; pigment, pore, and microrelief imaged | Shadows; specular reflections
Skin Visiometer | Microrelief | 5MP camera; light absorption used to get skin topography | Skin replica required
Video Microscopy | Microrelief | 100x zoom | No lighting changes
CLSM | Microrelief and diagnosis | Lighting can be controlled | Small image area
CI | Microrelief and hydration | High resolution | Sensitive to moisturizer use
Corneometer | Skin hydration | Accurate to 3% | Uses capacitance changes to measure hydration, so cannot image other features
Colorimeter | Skin color | Characterizes red color resulting from blood flow | RGB data used for skin color only
Our handheld device | Microrelief structure; skin color; hair follicles; vascular structure | Uniform lighting; portable | USB connection; AC light source
1.4 Thesis Outline
Although numerous technologies, both hardware and imaging methodologies, exist
to image skin, there is no single device that images across feature length scales (µm to
cm) in the visible spectrum. The hardware that has been developed for this thesis
images pigment variation, hair follicles, microrelief structure, and superficial veins.
It is hoped that by observing feature evolution and variability of these skin features,
overall skin health may be assessed.
This thesis focuses on systems developed to study skin features. In Chapter 2, a
fixed mechanical scanning system is used for validating skin based body registration
algorithms. Chapter 3 follows with a description of a handheld scanning device used
to image the four features and shows preliminary experimental data. Chapter 4
concludes with ongoing work and recommendations for future work.
Chapter 2
Ground Based Mechanical Scanning
System for Evaluating Skin Based
Body Registration Algorithms
A skin-based mapping algorithm was created by Dr. Shih-Yu Sun in which the skin
surface was mapped while simultaneously estimating camera motion by tracking skin
features [38]. The skin features that were tracked were extracted by Matlab's built-in
SIFT (Scale Invariant Feature Transform) algorithm [38]. These were artificial
skin features. Skin features that are robust for tracking can be artificial or natural.
Artificial skin features can be achieved by feature extraction (as described above)
or by a high-contrast pattern "tattoo" (discussed further in Section 2.4.1).
Natural skin
features are those that were discussed in Section 1.2 (moles, hair follicles, microrelief
structure).
We developed a 5-axis mechanical scanning system to serve as the ground truth
to validate the accuracy of the mapping algorithms. Important parameters which influence algorithm performance include lighting variations and camera viewing angles.
Thus, the platform had to incorporate methods to determine the effects of these two
variables (described in Section 2.4). The algorithm performance itself would be evaluated by the accuracy of motion estimation and quality of reconstruction compared
to the prescribed and known motion of the test platform.
Motion was prescribed and controlled by a 5-axis mechanical scanning system,
which was created from a 3-axis translational Computer Numerical Control (CNC)
system, two added servo motors, and a camera. The experimental intent was twofold:
first, replicate and extend the results obtained by Dr. Sun with both artificial and
natural skin features and second, evaluate significant parameters (i.e. linear distance
traveled, distance between skin surface and camera) in order to quantify algorithm
performance.
This chapter discusses the design of the scanning system with a focus on the
various hardware components (Section 2.1), describes the calibration of the scanning system (Section 2.3), outlines the experimental procedure, data collection, and
analysis (Section 2.4), and summarizes results (Section 2.5).
2.1 Mechanical System Hardware
The mechanical system consists of: (1) a 3-axis translational stage from a CNC mill,
(2) two additional servo motors for rotational degrees of freedom, and (3) a camera.
The 3-axis translational stage was purchased from Zen Toolworks and assembled
according to the instruction manual [10]. Mach3 was used to control the 3-axis linear
stage. Moving away from Mach3 and controlling the three stepper motors (motion
in x, y, z axes) with LabView was challenging. LabView was chosen as the software
platform to simultaneously control the 3-axis linear stage, servo motors for rotation,
and camera with one program.
For the clinical applications of ultrasound-camera
based scanning, 5 degrees of freedom (DOF) are evaluated. The DOF identified for
the applications were: 3 linear axes - along the longitudinal length of the scan region,
transverse to the scan region, and compression into the scan region; 2 rotational axes
- rotation about the transverse axis and rotation about the longitudinal axis. These
axes can be seen in Figure 2-1. Rotation on the plane of the skin surface (scan region)
was neglected since images could be re-oriented afterwards.
The 3-axis linear stage system re-purposed from a CNC mill enabled the translational motions. To account for the 2 additional rotational DOF in the mechanical
Figure 2-1: Longitudinal vs Transverse Axes as Shown on the Scan Region: The
ultrasound probe, scan region, transverse axis, and longitudinal axis are labeled.
system, two servo motors were added; these were also controlled with LabView.
Skin images were acquired with a Macally IceCam2 USB webcam, also controlled
with LabView. Each of these components is discussed at greater length in the
following sections.
2.1.1 3-Axis CNC Mill
The 3-axis desktop CNC mill provided 3 translational DOF. It was assembled as
outlined in [10]. High density PVC boards and steel guide rods make up the frame.
M8x1.25 stainless steel leadscrews are used for each axis. Anti-backlash brass nuts
ensure backlash is reduced. Backlash reduction is important for repeatability of experiments and calibrating the mechanical system (see Section 2.3). The x and z axes
have 7" of travel, the y axis has 2" (Figure 2-2). These travel capacities are considered
sufficient because, based on observation, many clinical applications (i.e. scanning the
thyroid, biceps, or kidney) do not require more than 7" of travel.
One system integration challenge was controlling the five axes of motion. Mach3
was the software provided for the CNC mill from Zen Toolworks. A single, scalable
Figure 2-2: CNC Axes: The 3-axis commercially available CNC mill is equipped with
3 NEMA 17 stepper motors. Note: the image is adapted and modified from [10].
software platform is preferred; Arduino and LabView are considered. The specifications and power requirements of the motor are needed for switching software platforms. The open source Arduino codes to control stepper motors were used for early
experiments. Finally, all motors were controlled with LabView (see Section 2.2.1).
Choosing a motor with an appropriate resolution for the clinical application is
important. Assuming a 0.5 mm/s probe travel speed during scans [38], the finest
resolution required is 0.0125 mm. This is double the resolution (the smallest
increment at which images can theoretically be captured) of the NEMA 17 stepper
motor, which is rated for 1.8° per step (see Equation 2.1c).

360° / 1.8° per step = 200 steps per revolution        (2.1a)

200 steps/rev x 16 microsteps/step = 3200 microsteps/rev        (2.1b)

1.25 mm/rev x (1 rev / 200 steps) = 0.00625 mm/step        (2.1c)
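As a quick numerical check, the conversions in Equations 2.1a-2.1c can be reproduced in a few lines of Python; this sketch is illustrative only and is not part of the thesis tooling.

```python
STEP_ANGLE_DEG = 1.8   # NEMA 17 rated step angle
MICROSTEPS = 16        # driver microstepping setting
PITCH_MM = 1.25        # leadscrew linear travel per revolution

steps_per_rev = 360.0 / STEP_ANGLE_DEG            # Eq. 2.1a -> 200 steps/rev
microsteps_per_rev = steps_per_rev * MICROSTEPS   # Eq. 2.1b -> 3200 microsteps/rev
mm_per_full_step = PITCH_MM / steps_per_rev       # Eq. 2.1c -> 0.00625 mm/step
```

The last line is the theoretical linear resolution of the stage under full stepping; microstepping would divide it by a further factor of 16.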
A pulse signal is supplied to drive the stepper motors, providing signals for a
step and a direction. SparkFun Big Easy motor drivers are used to drive the motors
since the microcontroller (myRio) cannot provide sufficient power. Each driver requires 1.5 A from the power supply (1.5 x 3 = 4.5 A total current draw from all three
motors during operation) and 2 A when stalled.
Since the motors are not stalled
during operation, a 5 A, 12 V power supply is sufficient. Details of the LabView code
to control the stepper motors are found in Section 2.2.1.
2.1.2 Servo Motors
The servo motors are used to: (1) rotate a 0.8 oz camera (Section 2.1.3), so they
must have a sufficient torque rating (4 oz-in), and (2) provide the 2 rotational DOF
of the mechanical system to model the clinical case in which there is rotation about the
longitudinal or transverse axis during scans. They must operate in the rotational
range of 0° to 45° as required in clinical procedures (this range was determined after
observing a professional radiologist, Dr. Anthony Samir, performing clinical procedures).
The servo motors were selected by their ability to supply torque and the allowable
rotational range.
The two servo models selected from Servocity were: (1) the SPG5485A Standard
Rotation system (provides rotation about the longitudinal axis) and (2) the SPT400
Tilt System (provides rotation about the transverse axis). These servos operate with
the HiTEC HS5485HB motor, which has a 623 oz-in torque rating. Although this
torque rating is much greater than required, the allowable rotational range of 0° to
65° makes it appropriate for the application. 0° is the ideal case, representing no
rotation about the longitudinal or transverse axes. By observation, the rotation about
the longitudinal or transverse axes will not be more than 45°, so an upper bound of
65° is sufficient.
A pulse width modulation (PWM) signal is sent to control the servos, varying
duty cycle and pulse frequency. Torque increases with voltage; the servos operate in
the range 4.8 V to 6 V. In order to support the stall current, the current supplied to
the motors must be at least 3 A. An adjustable 5 A, 12 V DC power source is used
to supply power. Details of the LabView code to control the servo motors are found
in Section 2.2.2.
2.1.3 Webcam
A USB (universal serial bus) camera (Macally IceCam2 webcam) is used to capture
skin images; this is the same camera used in Dr. Sun's experiments [38]. This model
is selected because it allows for manual focus and has a minimum focal distance of
2 cm, which is appropriate for the clinical applications as designed in Dr. Sun's work.
The camera must be calibrated at the working distance for optimum focused images
(described below under 'Camera Calibration'). The Macally IceCam2 is also DirectShow
compatible (a common, filter-based framework that allows media devices to be controlled
from many programming languages), so camera control can be integrated with LabView
(see Section 2.2). The spatial resolution of the camera is 640 x 480 pixels, and for
a selected focus and working distance is mapped to a 28 mm x 21 mm field of view
(FOV) on the skin. As shown in Figure 2-3, the camera has: (1) a special 3D printed
backing and (2) the lens focus "fixed" with glue after being set to the correct working
distance.
Camera Calibration
To ensure quality images, the camera has to be calibrated against radial distortion,
translational error, and motion blur at a set working distance. The final working
distance of 5 cm is set based on three factors: (1) ultrasound probe geometry limitations,
(2) findings from Dr. Sun, and (3) clinical observations. With the camera housing
Figure 2-3: USB Webcam Used for Experiments: The Macally IceCam2 shown here
has a 3D printed back, which is specifically designed for kinematic coupling to the
5-axis translational stage (see Section 2.1.4). The lens has been manually focused
and fixed in place with hot glue at a distance of 5 cm.
mounted on the ultrasound probe, the camera is always at least 2 cm away from the
skin surface. However, at such small distances, skin surface deformation is significant.
Findings from Dr. Sun concluded that skin surface deformation is insignificant when
the camera is more than 4 cm away from the skin surface, so the working distance
must be at least 4 cm [38]. In clinical observations of ultrasound-camera system imaging
of the thyroid and of the forearm for Duchenne Muscular Dystrophy monitoring (two clinical
applications in which the skin-based registration algorithms are used), the camera is
5 cm away from the patient's skin surface. Therefore, to model the real-world clinical
case and limit effects of skin surface deformation, a 5 cm working distance is set.
The camera calibration procedure is used to calculate the camera's intrinsic
parameters (focal lengths and principal points) and radial lens distortion coefficients
[3, 38]. The calibration procedure is outlined below.
A black-and-white checkerboard pattern of known dimension is used. There are
13 squares, each 24 pixels x 24 pixels (or 1mm x 1mm when printed at 600 DPI), along
one side of the pattern. The inner 11 squares are used for calibration, with the outer
squares making corner localization more accurate and reliable [38]. A small white
dot is placed in the middle of the top-left black square, which serves as the reference
when extracting grid corners across all images, since extraction always starts with
the square with the white dot.
Next, the grid is imaged at least 15 times at a variety of angles and orientations,
to encompass the entire scanning space. The webcam is held at the working distance
(Figure 2-4a). A sample calibration image is provided in Figure 2-4b.
Once the images are acquired, the Camera Calibration Toolbox in Matlab is used
to determine the camera intrinsic parameters (used to determine the homogeneous
projection matrix) and the radial lens distortion coefficients [3, 38].

1. Load images as .PNG, .TIF, or .JPG

2. Manually select grid corners; the first one is at the reference square

3. The toolbox algorithmically determines the inner boundary of the calibration grid
(should be 11 x 11)

4. Enter the size of each square so that Matlab can extract the grid corners (this
attempts to calculate the distortion coefficient, valued between -1 and 1)

The calibration procedure generates information about the image coordinates, 3D
grid coordinates, and grid sizes and saves them in a file, calib_data.mat. Matlab also
calculates and reports the calibration parameters: the focal length, principal point,
skew, distortion, and pixel error.
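To illustrate what the recovered intrinsic parameters and distortion coefficient mean, the standard pinhole model with a single radial term can be sketched as below. This is a simplified illustration, not the toolbox's actual code, and the numeric parameter values are made up for the example.

```python
def project(x, y, z, fx, fy, cx, cy, k1):
    """Project a 3D camera-frame point to pixel coordinates using focal
    lengths (fx, fy), a principal point (cx, cy), and a single radial
    distortion coefficient k1 (valued between -1 and 1)."""
    xn, yn = x / z, y / z        # normalized image coordinates
    r2 = xn * xn + yn * yn       # squared radius from the optical axis
    d = 1.0 + k1 * r2            # first-order radial distortion factor
    return fx * xn * d + cx, fy * yn * d + cy

# With no distortion (k1 = 0), a point on the optical axis lands exactly
# on the principal point:
u, v = project(0.0, 0.0, 5.0, 800.0, 800.0, 320.0, 240.0, 0.0)
# u == 320.0, v == 240.0
```

A negative k1 models barrel distortion: off-axis points are pulled toward the principal point, which is why the coefficient must be estimated before images can be used for metric reconstruction.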
2.1.4 Integration of Hardware
The components described above (3-axis translation stage, servo motors for rotational
motion, and camera) are integrated into a single experimental platform.
Integrating Servo Motors to the 3-Axis Linear Stage
The two rotational servos are mounted serially to one another, as shown in Figure 2-6c.
A bracket connects the servo motors to the y axis of the 5-axis scanning platform
(Figure 2-6a). Mounting on the y axis allows for images along the transverse and
(a) Camera Calibration Set Up: Images are taken at various orientations with the webcam
fixed at a predetermined working distance (5 cm). The calibration grid is secured to a solid
surface.
(b) Sample Image obtained during Camera Calibration: The image shows some rotation
about the plane of the image as well rotation about an axis perpendicular to the axis.
Figure 2-4: USB Camera Calibration
longitudinal axes of the limb as shown in Figure 2-5.
The y axis is chosen as the mounting axis for multiple reasons:
1. It keeps the z axis stage free for patient limb placement, which is required for
experiments: the camera is on an elevated axis from the z axis plate
2. It saves space on the system with an already limited range:
the camera is
mounted separately and does not interfere with the leadscrew motion, so all
axes can travel the full capacity: 7" for x, z and 2" for y
3. It allows for imaging in all three translational directions at once (x, y, z) without
changing the location of the camera: the y axis is coupled to the x axis, so the
camera can move in one or both axes while the z stage is moving
Figure 2-5: Camera Scanning Arm While on Platform: By mounting on the y axis of
the desktop platform, the transverse and longitudinal axes of the arm can be scanned
simultaneously (moving in x and y axes of the 3-axis translational stage).
The final bracket is shown in Figures 2-6 and 2-7. For the CAD drawing, please
see Figure A-1.
Figure 2-6: Servo Motor Connection to 3-Axis Linear Stage. (a) Y Axis: The hole
pattern where the bracket can be mounted is visible. (b) Servo Mounted to Y Axis
of Platform: The bracket lines up with the y axis mount. It does not prohibit
leadscrew motion because it is sufficiently offset from the y axis frame. (c) Servos
Mounted One on Top of Another: This orientation allows for the critical rotational
DOF to be imaged. It also conserves space.
Figure 2-7: 3D Printed Bracket to Connect Rotation Servo to 3-Axis Translational
Stage. (a) Servo in Mount, Top View: The hole pattern for the Rotation servo is
used to connect the servo to the mount with screws. (b) Servo in Mount, Side View:
The Rotation servo nests comfortably in the bracket, with just enough clearance in
the back for the wires connecting to the microcontroller.
Integrating the Webcam to the Servo Motors
A magnetic kinematic coupling mechanism is used to connect the webcam to the pan
and tilt servo, which is inspired by the ultrasound probe housing to camera mount as
Figure 2-8: 3D CAD Model of 5-Axis Scanning System
designed by Dr. Matthew Gilbertson [38]. A press-fit magnet on the kinematic
coupling mechanism makes it easy to repeatably attach the webcam to the mount.
Using the existing hole pattern on the mounting plate of the pan and tilt servo, the
camera mount can be secured. The dimensioned drawing is found in Figure A-2.
Figure 2-8 shows the CAD model of the 5-Axis scanning system with all the
components. The actual, developed machine is illustrated in Figure 2-9.
2.1.5 Ergonomic Considerations
The ergonomics of the system were a consideration during design. The platform should
be comfortable enough to keep the scan region stationary for a five-minute scan. For
all experiments presented in this thesis, the scan region was the arm. To support the
elbow and wrist, a Belkin gel pack is used. The gel pack is almost as long as the z
axis platform, which is the ideal length for scanning the forearm, and molds to the
forearm for optimum comfort. It is also simple to incorporate into the structure as
Figure 2-9: 3-Axis Translational Stage with Servo Motors and Camera: The desktop
3-axis translational stage is seen here with two additional servos (pan and tilt and
rotation systems), a 3D printed servo bracket, a USB camera, and a 3D printed
camera mount. A Belkin gel pack is put on the z axis platform for patient comfort
during scans (see Section 2.1.5).
shown in Figure 2-9. Note that during experimentation, the gel pack is put on top of
elevated surfaces so that the arm on the gel pack is in the line-of-sight of the camera.
Several ideas were considered for an elevated, ergonomic support (see Table 2.1).
Eventually none were implemented, as the design moved towards a freehand scanning
system (see Chapter 3).
2.2 Mechanical System Control Using LabView
Simultaneous control of the 3-axis linear stage, servo motors, and camera image capture is done with LabView and the myRio microcontroller. The myRio can control
the five motors (three steppers and two servos) and the USB camera with an extendable USB port hub. LabView was selected because it is compatible with the myRio,
Table 2.1: Ergonomic Platform Choices

Idea | Benefits | Drawbacks
Belkin gel pack | Appropriate length for forearm scanning; easy to integrate into platform; molds to forearm | Not the appropriate height for placing the forearm in the line of sight of the camera; does not allow for repeatable experiments
Half metal cylinder lined with foam | Foam molds to forearm | Bulky; cannot scan other regions (neck and thigh are bigger than forearm)
Vertical arm hold for subject to hold onto | Space saver | Does not allow for repeatable experiments; cannot scan other regions
Adjustable table/chair | Space saver; brings camera and scan region to the same level | Extensive adjustment
can be easily programmed to control the motors, and has a Vision Express VI that
can be used for image capture.
User inputs to the LabView Front Panel are simple (as shown in Figure 2-10): the
desired travel distance (in mm) for the three Cartesian directions (x, y, z) and the
desired angle (between 0° and 65°) for the two rotational DOF. The block diagram
is where the important parameters are calculated and from which the signals are
sent. The details of the block diagram are described in the following sections. First,
controlling the stepper motors is discussed; next, controlling the servo motors is
outlined. Note that this is a relative motion system, so distances traveled are relative
to the starting point.
2.2.1 Stepper Motor Control
The stepper motors are used to move a desired linear distance from any starting point.
Operating under open-loop control (i.e., there is no feedback system to determine the
relative distance traveled), it is imperative that no steps are skipped, since counting
steps gives the distance traveled. This assumption is verified in Section 2.3.
To ensure no steps are skipped, the correct step mode and the frequency at which
Figure 2-10: Front Panel Inputs to Control Mechanical System Using LabView
to send the pulse must be determined. After much experimentation, the optimum
parameters to avoid skipping steps are: frequency (f) = 400 Hz and full stepping
(200 steps/rev). Using these parameters, the LabView code is constructed (Figure 2-13).
Figure 2-11 provides the high-level calculations for automating the stepper motors.
While Figure 2-11 outlines the procedure to move a distance that is an even
Distance to move (mm) ÷ 1.25 mm/rev → number of revolutions to turn
Revolutions × (200 steps/rev × MS microsteps/step) → number of microsteps = number of pulses for the desired distance
Number of pulses ÷ f Hz → time for the loop to run
Figure 2-11: Stepper Automation Code Flow Chart: This is the pseudo code that is
implemented in LabView (see Figure 2-13). It takes the user input linear distance
and determines the number of revolutions the leadscrew must turn to achieve that
distance. The number of revolutions is converted to the number of steps required,
which provides the time the pulse is sent.
Turn on myRio to start sending the pulse → send the signal for the required duration →
stop sending the pulse to myRio → wait to take an image with the Vision Express VI
Figure 2-12: Flat Sequence Loop to turn the leadscrew 1 revolution. When the
microcontroller (myRio) is sending a pulse (i.e. steps), the leadscrew is turning and
the linear stage is moving linearly. After the desired distance is traveled (determined
by the time the signal is sent), no more signals are sent to the microcontroller and
the leadscrew stops moving. This sequence is placed in a For Loop to allow for more
than 1 revolution.
multiple of 1.25 mm (leadscrew pitch), the LabView code can support any distance
input. The quotient resulting from the first step in Figure 2-11 provides the number
of full revolutions for the For Loop. The procedure is then repeated for the remaining
distance, as seen in Figure 2-14.
A flat sequence loop structure is used to turn the lead screw and take a picture
with the camera as illustrated in Figure 2-12. In LabView, this algorithmic structure
is implemented as shown in Figures 2-13 and 2-14. Although shown for the y axis,
this code is implemented for all x, y, and z axes.
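The full-revolution/remainder logic of Figures 2-11 through 2-14 can be sketched in a few lines. The following Python sketch is illustrative only (the actual implementation is a LabView block diagram) and uses the parameters stated above: 1.25 mm leadscrew pitch, full stepping at 200 steps/rev, and a 400 Hz pulse frequency.

```python
PITCH_MM = 1.25        # leadscrew pitch: linear travel per revolution
STEPS_PER_REV = 200    # full stepping on the NEMA 17 (1.8 deg/step)
PULSE_HZ = 400         # pulse frequency found to avoid skipped steps

def plan_move(distance_mm):
    """Split a requested travel distance into full leadscrew revolutions
    plus a remainder, and compute the total pulse duration."""
    full_revs = int(distance_mm // PITCH_MM)           # For Loop iterations
    remainder_mm = distance_mm - full_revs * PITCH_MM  # handled afterwards
    remainder_steps = round(remainder_mm / PITCH_MM * STEPS_PER_REV)
    total_steps = full_revs * STEPS_PER_REV + remainder_steps
    duration_s = total_steps / PULSE_HZ                # time the pulse is sent
    return full_revs, remainder_steps, duration_s

# e.g. plan_move(3.0) -> 2 full revolutions, 80 remainder steps,
# and 1.2 s of pulses at 400 Hz
```

Because the control is open loop, the distance actually traveled is correct only if every commanded pulse produces a step, which is why the no-skipped-steps assumption is verified in Section 2.3.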
2.2.2 Servo Motor Control
Servo motors provide the 2 rotational DOF on the 3-axis linear stage and are used to
move to the desired relative angular position. Built-in potentiometers on the servo
motors are used for measuring and providing feedback for the relative angular position
by correlating voltage to angle. The resolution, or the smallest degree increment that
the servo motor can move, is expressed in volts (or duty cycle) per degree.
Using the resolution of each servo motor, the voltage required to move to the
desired location from the initial position is calculated. A sample calculation to rotate
to a desired angle (0) on the pan and tilt system is provided in Equation 2.2. Given
the geometry of the servo motors and gears, the desired angle must be in the range
of 0° to 65°. This is more than sufficient for the clinical application as described in
Figure 2-13: LabView Code for Translational Axis-Full Cycles: This code is implemented first. It allows the leadscrew to move
a distance that is an even multiple of 1.25 mm. The input distance is divided by the leadscrew pitch (1.25 mm) and the quotient
indicates the number of times the For Loop will be executed. Parameters for the number of steps per revolution (full stepping
- 200 steps/rev) and pulse frequency (400 Hz) are optimized and set.
Figure 2-14: LabView Code for Translational Axis-Remaining Cycles: This code is implemented immediately afterwards if
there is a remainder when dividing the input distance by 1.25 mm. Parameters for the number of steps per revolution (full
stepping - 200 steps/rev) and pulse frequency (400 Hz) are optimized and set.
Section 2.1.2.
θ × 0.017 (resolution of servo, in V/°) = voltage required to move θ from starting position
(2.2)
A feedback system is implemented in the LabView code to compare the current
angular position to the desired one. The algorithm flow is outlined below, followed by a
snapshot of the LabView code that implements this algorithm (Figure 2-15).
1. Take current voltage reading from potentiometer
2. Subtract initial reading
3. Multiply the resulting voltage by the resolution to obtain the current angle
4. Subtract current angle from desired angle
5. Multiply by the resolution (Duty Cycle)
6. Add to previous duty cycle
7. Make sure duty cycle is between 0.03 and 0.09
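The seven steps above can be sketched as one update function. This is an illustrative Python sketch, not the LabView code; the direction of the resolution conversion (volts per degree vs. degrees per volt) and the gain `duty_per_deg` are assumptions.

```python
# Hedged sketch of the servo feedback loop above (names are illustrative).
V_PER_DEGREE = 0.017   # assumed volts-per-degree resolution, per Equation 2.2

def update_duty_cycle(v_now, v_initial, desired_deg, prev_duty,
                      duty_per_deg, lo=0.03, hi=0.09):
    current_deg = (v_now - v_initial) / V_PER_DEGREE   # steps 1-3
    error_deg = desired_deg - current_deg              # step 4
    duty = prev_duty + error_deg * duty_per_deg        # steps 5-6
    return min(max(duty, lo), hi)                      # step 7: clamp duty cycle
```

Clamping to the 0.03-0.09 range (step 7) keeps the PWM command inside the band the servo hardware accepts, regardless of how large the angular error is.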
2.3
Characterizing the Mechanical System
Design considerations greatly improve the performance of the mechanical platform.
Since the mechanical platform is used as a ground-truth system for validating skin
mapping algorithms, the set up must be repeatable and consistent between experiments. The camera is secured in place and the tension in the
cable is eliminated. The camera captures images of a pattern that represents artificial
skin features (called a "tattoo"). The tattoo is mounted to a rigid surface, which is
kept vertically upright by a rigid block behind the tattoo. The plane of the lens and
the tattoo are parallel to one another. This set up is seen in Figure 2-16.
Figure 2-15: LabView Code for Rotational Axes: This code is used to control the servo motors. A pulse width modulation
(PWM) signal is sent to the servos. The feedback system uses the potentiometers on the hardware and is indicated by level
shifters in the code. After the While Loop executes, an image is captured.
Figure 2-16: Rigid Support Behind Tattoo: The tattoo is mounted to a rigid surface,
which is kept vertically upright by a rigid block and metallic sheet. This helps keep
the webcam and tattoo parallel to each other, separated by a distance of
5.88 cm. The camera is secured to the platform and the cable tension is eliminated
to avoid moving the camera as the platform moves.
With this rigid, repeatable set up, the mechanical platform is characterized. Characterizing the mechanical system is two-fold: determining the resolution of the system
and quantifying the error in the system. Each is described in further detail in the
following sections.
2.3.1
Resolution of the 3-Axis Linear Stage
The resolution of the mechanical system should be at least one order of magnitude
better than the algorithm performance since it serves as the ground truth platform.
There are two metrics that characterize the resolution of the system: bias and variance. The algorithm bias is the difference between the mean distance traveled as
estimated by the algorithm and the mean distance traveled by the 3-axis linear stage.
The bias is on the order of mm. The variance describes how the algorithm estimate differs from the mean estimate and is on the order of sub-mm [38]. Thus, the
resolution of the mechanical system should be at most sub-mm.
2.3.2
Quantifying Error of the System
To quantify the error in the set up, the one direction (move forward) and two direction (move forward and backward) repeatability are tested. This is an important
calculation since the linear distance traveled is determined by the number of steps.
Each is discussed further in the following sections. Note that since a feedback system
is incorporated in the LabView code for the servo motor motion (Section 2.2.2), only
the repeatability of the stepper motors has to be tested.
One Direction Repeatability
The one direction repeatability experiments quantify the open loop errors by determining if the actual distance traveled in one direction is the desired distance traveled.
Multiple parameters (frequency, step mode, travel distance) were varied in the LabView code to determine how these variables affect repeatability.
A high precision glass position grid is used for the experiment. The grid has 10 mm separation between
squares and markers with fine resolution near the square center. Figure 2-17 shows
the experimental set up. The following is a step-by-step procedure describing the one
direction repeatability experiment for a single axis of the 3-axis linear stage.
1. Capture image at starting point, with the center of the square in the center of
the image (image center has least amount of distortion)
2. Turn x revolutions (where x × 1.25 mm is the desired linear distance in mm)
3. Capture image at this new location
4. Vary x to determine how distance affects mechanical platform performance;
keep some trials with same x revolutions to characterize repeatability
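The two error metrics reported in the repeatability tables can be sketched as follows. This is an illustrative Python sketch; the definitions (signed difference, percent of desired) are inferred from the table columns.

```python
# Sketch of the error metrics reported in Table 2.2 (assumed definitions).
def repeatability_metrics(desired_mm, actual_mm):
    diff = actual_mm - desired_mm            # negative = travel fell short
    pct = 100.0 * actual_mm / desired_mm     # "Actual is % of Desired"
    return diff, pct

diff, pct = repeatability_metrics(5.0, 4.93)   # e.g. a 5 mm trial
# diff = -0.07 mm, pct = 98.6%
```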
The results of the one direction repeatability experiments are found in Table 2.2.
Figure 2-17: Precision Grid Used for Measuring Repeatability: The webcam is adjusted to the height of the center of one of the squares on the glass precision grid. It
moves x × 1.25 mm. The initial and final images are compared.
Table 2.2: Characterizing Repeatability with Position Grid: Note that a negative
distance difference means that the actual travel distance fell short of the desired
distance.

Trial  Distance    Revs  Actual   Distance  Actual is %  PWM Freq  Steps/Rev
       Input [mm]        Travel   Diff      of Desired   [Hz]
                         [mm]     [mm]
1      10          8     10.000    0.000    100.000      400       200
2      5           4      4.930   -0.070     98.600      400       200
3      2.5         2      2.407   -0.093     96.296      400       200
4      10          8      9.858   -0.142     98.578      400       200
5      10          8      9.905   -0.095     99.048      200       200
6      5           4      4.929   -0.071     98.578      200       200
7      10          8      9.953   -0.047     99.533      800       3200
8      5           4      4.884   -0.116     97.674      800       3200
9      10          8      9.815   -0.185     98.148      400       3200
10     5           4      4.884   -0.116     97.674      400       3200
11     10          8      9.862   -0.138     98.618      200       3200
12     5           4      4.815   -0.185     96.296      200       3200
Table 2.2 indicates that changing PWM frequency and step mode (full stepping
vs. microstepping) has no bearing on the error in open loop motion. Desired travel
distances of 10 mm (trials 1, 4, 5, 7, 9, 11) provide the lowest absolute distance
difference. This is expected since the grid is created with a 10 mm separation between
the squares. Across all trials, the distance differences are consistently about 0.1 mm.
These errors can be attributed to at least one of the following:

* Webcam quality: the images are not high enough resolution to appropriately
pinpoint the center of the square

* Inaccuracies in determining the square center: when sufficiently zoomed into
the image, multiple pixels make up the target feature

* Distortion around the side of the lens: could lead to an inaccurate pixel count,
which is used to determine the actual distance traveled
While the results are consistent, 0.1 mm errors are too high for a precision control
machine. To check if the errors are a consequence of a poor quality camera, a Basler
high-speed, black-and-white camera is used. The camera is mounted via double stick
tape to the y axis mount, looking down on the glass position grid. Four trials are
conducted with images captured at the initial and final positions. The results are
summarized in Table 2.3.
Table 2.3: Characterizing Repeatability with Position Grid and High Precision Camera

Trial  Distance    Revs  Actual   Distance  Actual is %  PWM Freq  Steps/Rev
       Input [mm]        Travel   Diff      of Desired   [Hz]
                         [mm]     [mm]
1      1.25        1      1.187   -0.063     94.964      400       200
2      5           4      4.964   -0.036     99.281      400       200
3      10          8     10.000    0.000    100.000      400       200
4      7.5         6      7.518    0.018    100.240      400       200
These images confirmed the results obtained by the webcam. At 10 mm, the
position accuracy is very good, displaying only a 0.01 mm - 0.03 mm variation. This
is expected since the separation between the centers of the two squares is 10 mm.
The short travel distance of 1.25 mm still has a large error; even though the actual
distance is only 0.06 mm from the desired, the error is a larger percentage of the
overall distance. This was comparable to the shortest distance analyzed with the
webcam previously (2.5 mm). The variability across all tested distances ranged from
0.018 mm - 0.063 mm, generally much lower than the results obtained by the webcam.
This further confirms that characterization of the 3-axis linear stage performance is
limited by webcam resolution.
Two Direction Repeatability
The two direction repeatability experiments quantify the closed loop errors by determining if the camera can return to the original position after traveling forward
a set distance and backward by the same distance. The following is a step-by-step
procedure describing the two direction repeatability experiment for a single axis of
the 3-axis linear stage. Figure 2-18 shows the experimental set up.
1. Capture image at starting point
2. Turn 1 revolution (1.25 mm linear travel)
3. Capture image at this new location
4. Turn 1 revolution backwards (to get back to starting location)
5. Capture image at this final location
The first and third images are compared. If no steps are skipped, the two images
will be identical. As expected, the two images have slight variations, confirming that
stepper motors are susceptible to open loop errors (no feedback). This may result
from: (1) backlash errors, or (2) moving too fast, missing steps. Effects of each can
be reduced. An anti-backlash nut (built into the system) reduces backlash. Moving
at a slow enough pace avoids skipping steps, which is verified with the algorithm
(see Section 2.5). Therefore, a more costly solution of incorporating an encoder for
absolute position is not required.
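The first/third image comparison can be sketched with a simple pixel-difference check. This is an illustrative sketch; the thesis compares the images visually, and the array representation and tolerance here are assumptions.

```python
# Minimal sketch of the forward/backward return check: if no steps are
# skipped, the first and third images should match (grayscale numpy arrays).
import numpy as np

def return_error(img_start, img_return, tol=2.0):
    """Mean absolute pixel difference between start and return images."""
    diff = np.mean(np.abs(img_start.astype(float) - img_return.astype(float)))
    return diff, diff <= tol   # small residual -> no skipped steps / backlash

a = np.zeros((4, 4))
err, ok = return_error(a, a)   # identical images give zero error
```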
Figure 2-18: Hardware Set Up-Quantifying Two Direction Errors: The camera and
pattern are parallel to each other. The pattern is mounted to a rigid surface to ensure
it is vertically upright. The camera captures an image, moves forward 1.25 mm,
captures an image, moves backward 1.25 mm, and captures a final image.
2.4
Experiments
To validate the mechanical test system performance against the algorithm performance, the experiments of Dr. Sun were replicated [38]. The algorithm performance
was tested against artificial skin features (also called a tattoo) and natural skin features. This section starts with a description of the tattoo and how it is used to mimic
variable lighting conditions. The experimental set up for three parameters is outlined:
linear motion traveled, defocus blur, and motion blur. This section concludes with an
extension to underwater experiments using the translational scanning system. The
results of these experiments are found in Section 2.5.
2.4.1
Lighting and Artificial Skin Features
In his experiments, Dr. Sun uses a pattern (or tattoo) that mimics skin features.
A Matlab script generates a high-contrast binary random pattern with a square of
known dimensions in the top left corner (Figure 2-19).
The pattern is printed on
white paper for high contrast.
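The pattern generation can be sketched as follows. The thesis uses a Matlab script; this is an equivalent Python/numpy sketch, and the image size and square size are illustrative, not the thesis values.

```python
# Sketch of the tattoo: a binary random field with a solid reference square
# of known size in the top-left corner (dimensions here are assumptions).
import numpy as np

def make_tattoo(h=400, w=400, square_px=60, seed=0):
    rng = np.random.default_rng(seed)
    pattern = rng.integers(0, 2, size=(h, w))   # high-contrast binary noise
    pattern[:square_px, :square_px] = 0         # known square for scale reference
    return pattern

tattoo = make_tattoo()
```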
Illumination intensity effects can be mimicked by modifying the contrast of the
Figure 2-19: Tattoo: binary random pattern with a square of known dimensions
(3 mm x 3 mm) in upper left corner. The square of known dimensions is critical as
it aids the algorithm in determining the distance traveled, so it should be visible in
at least two images.
tattoo. Low contrast patterns correspond to low light conditions and vice versa for
high contrast patterns. A "losscontrast" parameter is introduced that varies from 0
to 1, where 0 corresponds to a full contrast pattern and 1 corresponds to no contrast.
The square of known dimensions is kept in high contrast throughout for measurement
purposes.
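The "losscontrast" mapping can be sketched as a linear pull toward mid-gray. The exact mapping used in the thesis is not given, so the linear form here is an assumption; it does reproduce the stated behavior that losscontrast = 0.6 leaves 40% contrast.

```python
# Sketch of the "losscontrast" parameter: 0 keeps full contrast, 1 removes it
# (linear mapping assumed; pixel values in [0, 1]).
import numpy as np

def apply_losscontrast(img, losscontrast):
    """Shrink contrast linearly toward mid-gray as losscontrast -> 1."""
    return 0.5 + (img - 0.5) * (1.0 - losscontrast)

binary = np.array([0.0, 1.0])
apply_losscontrast(binary, 0.0)   # unchanged: [0.0, 1.0]
apply_losscontrast(binary, 0.6)   # 40% contrast: [0.3, 0.7]
```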
Natural Skin Features
Dr. Sun found that the camera could resolve various skin features (such as melanin
and hemoglobin pigments) at sufficiently short distances (27 mm) from the skin surface
[38]. To test the limits at which skin features can be resolved with the 3-axis
linear scanning system, multiple trials are conducted with varying camera to forearm
separation distances: 5.75 cm and 2.75 cm (the latter closest to Dr. Sun's set up). The travel
distance is 40 cm (approximately the length of the forearm).
Lighting effects are
observed by varying ambient lighting: using fluorescent room lighting, a flashlight,
and a fluorescent desk lamp. Figure 2-20 shows some sample images obtained during
experiments.
(a) Natural Skin Features, camera is 5.75 cm away from pattern, scan covers 40 mm distance: This is a sample experimental image from the first trial. The image does not appear to be in focus, perhaps due to the large working distance. A flashlight was used to illuminate the forearm.
(b) Natural Skin Features, camera is 2.75 cm away from pattern, scan covers 40 mm distance: This is a sample experimental image from the second trial. The image is in focus. A flashlight was used to illuminate the forearm.
(c) Natural Skin Features, camera is 2.75 cm away from pattern, scan covers 40 mm distance, desktop lamp is used: This is a sample experimental image from the third trial. This image is in focus (hair follicles can be made out), but is over-saturated. The desktop lamp provides more uniform lighting.
Figure 2-20: Experimental Images from Initial Experiments
2.4.2
Linear Motion Experiments
The distance traveled in the linear axis by the system is used to validate the distance
traveled as estimated by the algorithm. The experimental set up for translational motion is modeled after Dr. Sun's experimental procedure [38]. The calibrated webcam
is 27 mm away from the tattoo. An image is captured every 1.25 mm (1 revolution)
for 12.5 mm (10 revolutions). The square of known dimensions appears in at least two
frames. This process is repeated for each pattern with the losscontrast parameter
(Section 2.4.1) ranging from 0 to 0.7 in increments of 0.1, giving a total of eight different contrast patterns. Since the images are taken consecutively, consistent lighting
is ensured.
2.4.3
Defocus Blur
It is important to know how well the volume registration algorithm performs in the
presence of defocus blur. Defocus blur occurs when the camera is out of focus, when
the camera is rotated about the longitudinal or transverse axes (i.e. camera is not
perpendicular to the skin surface), or if the curved surface of the scan region (i.e.
curve of forearm) is in the image frame.
There are multiple ways to induce defocus blur in the experimental platform. The
camera can be rotated such that the plane of the lens is not perpendicular to the
pattern. Another method (which was implemented) uses only the translational axes
to induce defocus blur by moving closer to and further away from the pattern so that
the image is out of focus. The pattern is printed with the losscontrast parameter set
to 0.6, which means the pattern is printed with only 40% contrast. The experimental
procedure follows:
1. Take a picture of the pattern at zero offset (27 mm away from pattern)
2. Take a picture every 1.25 mm, moving towards the pattern (forward direction)
3. Repeat step 2 seven times
62
4. Repeat steps 1-3 after moving back to the original position (27 mm from pattern), this time moving away from the pattern (backward direction). Image
every 3.75 mm instead of 1.25 mm to get non-focused images.
Using the above procedure, a total range of 8.75 mm is traveled in the forward
direction (towards pattern) and 26.25 mm is traveled in the backward direction (away
from pattern). The pattern starts to lose focus when the camera is 5 mm away from
the pattern and when the camera is 18.75 mm away from the pattern. This indicates
that the camera focus is better when it is further from the pattern than when it is closer
to the pattern. As the camera moves closer to the pattern, shadows are cast on the
pattern; these effects are incorporated into the results.
In practice, the radiologist starts imaging in focus, but throughout the scan, the
images start losing focus. To better capture the clinical scenario of some images in
focus and some blurred, a slight modification to the experimental procedure is made:
take the first 2 images fully focused and the remaining 9 images at the desired defocus
level (see Figure 2-21 for a schematic of the experimental set up).
Figure 2-21: Graphic of Set Up for Defocus Blur Experiments Mimicking Clinical
Situations: Shown here is the procedure for forward defocus blur, with the first 2
images in focus and the remaining 9 out of focus. Camera travels 13.75 mm in z and
(27 + d) mm or (27 - d) mm in y (depending on if camera is moving away from or
towards pattern).
2.4.4
Motion Blur
Motion blur occurs when the probe mounted camera moves faster than the camera
exposure time allows [38]. Experimentally, motion blur can be induced by overlaying two
images which are taken a short distance apart. The distance between images is set to
0.5 mm based on clinical observations. This distance satisfies the constraint that the
separation distance must be a multiple of 0.00625 mm (the smallest linear distance
the leadscrew can travel when full stepping). With the camera 27 mm away from the
pattern, 12 images are taken every 0.5 mm, corresponding to 11 image pairs (or 11
blur images). The algorithm averages the images to provide 10 data points.
Motion blur can also be introduced in the algorithm itself. Convolution of the
image with a variable kernel introduces motion blur. This was the preferred method
for validating the algorithm for motion blur.
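The convolution approach can be sketched in one dimension. This is an illustrative numpy sketch of blurring with a boxcar (averaging) kernel along the motion direction; kernel lengths are the values used later in the results (3, 5, 7 pixels).

```python
# Sketch of synthetic motion blur: convolve each image row with a boxcar
# kernel whose length sets the blur extent (1-D version for clarity).
import numpy as np

def motion_blur_1d(row, kernel_len):
    kernel = np.ones(kernel_len) / kernel_len   # boxcar (moving average)
    return np.convolve(row, kernel, mode='same')

edge = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
blurred = motion_blur_1d(edge, 3)   # sharp edge smeared over 3 pixels
```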
2.4.5
Underwater Experiments
Being able to accurately scan underwater has clinical applications, such as in prosthetic fitting as well as tissue imaging [17]. The challenges with underwater imaging
are protecting the electronics and keeping the optical target in focus. Overcoming
these challenges is discussed in the following sections.
Waterproof Webcam
To protect the electronics, the plastic back cover of the webcam is removed and
silicone poured between the front and back covers. Since the silicone secures the lens
in place, the lens must be adjusted to the right focal length before this process. When
plugged in for a sufficiently long time, the silicone gets warmer from heat generated
by the electrical components.
A circular glass piece (18 mm diameter) is cut on the waterjet and used to cover
the lens. Fast drying epoxy secures the glass to the webcam rim. Mounting in a cool
place is important to prevent condensation in the air gap between the lens and glass.
Silicone sealant is layered to prevent water leaking in the air gap between the lens
64
and glass. The final result is shown in Figure 2-22.
(a) Water Proof Camera Front View: Layers of silicone sealant to prevent water from
entering the airgap between lens and glass is
visible.
(b) Water Proof Camera Side View:
Silicone in between the front and
back covers to protect the electronics.
Figure 2-22: Waterproof Camera Used for Underwater Experiments
Experimental Set Up for Underwater Experiments
For the experimental set up, a waterproof camera and a tank that can hold water are
required. The camera has to be re-calibrated since the camera intrinsic parameters
are a function of the fluid medium [3]. The grid pattern used for camera calibration
is printed on a transparent, waterproof polymer. Figure 2-23 shows the experimental
set up.
The image quality is dependent upon water temperature. If the water temperature
is too hot, the images are of poorer quality (not focused). At room temperature, the
images are clearer. If the pattern is submerged underwater and the camera is above
the water surface, the images are crisp as illustrated in Figure 2-24.
When both camera and pattern are submerged, covering the lens appropriately
is important. Although ultimately a glass piece was used to cover the lens, other
ideas were considered and are briefly mentioned here.
Clinical sheaths (used by
(b) Set Up for Underwater Experiments: Camera Below Surface of Water
(a) Set Up for Underwater Experiments: Camera Above Surface of Water
Figure 2-23: Underwater Experimental Set Up: Plastic bucket filled with room temperature water at the working distance depth (5 cm). The grid is secured on a
transparent plate and submerged underwater.
Figure 2-24: Pattern Submerged and Camera Above Surface: The pattern is printed
on a transparent polymer so that it is waterproof. With the webcam above the water
surface, the image is clear.
sonographers to protect the ultrasound probe from the gel) are too opaque, making
it impossible to see the pattern. Clear bags are not rigid; the creases affect image
quality and result in unfocused images. Condensation forms in the air gap between
the webcam lens and a transparent lens cover.
Two lens covers were considered:
circular transparent polymer mounted with hot glue over the lens and packing tape
(Figures 2-25a and 2-25b).
(a) Lens Covered with a Circular Transparency: Hot Glue is Used to Mount on Rim
(b) Lens Covered in Packing Tape: Condensation forms in the air gap between lens and packing tape when lowered into the water
(c) Sample Image of Underwater Experiment: Both camera and pattern are underwater. Lens is covered with packing tape.
Figure 2-25: Water-proofing the Webcam: Figures 2-25a and 2-25b show two different
methods to cover the webcam lens. Figure 2-25c shows the image obtained with the
camera shown in Figure 2-25b.
Using the waterproof webcam and the same procedure outlined in Section 2.1.3,
the underwater camera calibration error is 1.7 pixels (0.07 mm). This is nearly triple
the calibration errors of the webcam in air.
However, 1 - 2 pixel errors for the
underwater results are sufficient because the fluid properties are so different from those in
air.
2.5
Validation Results
The images obtained during experimentation are sent to Dr. Sun for validation with
his algorithm. The algorithm flow that describes the reconstruction process is reproduced from Dr. Sun's thesis and can be found in Figure 2-26 [38]. The results from
the linear motion, defocus blur, and motion blur experiments are summarized below.
Figure 2-26: Reconstruction Algorithm Flow. Note image is reproduced from Dr. Sun's PhD Thesis 1381.
2.5.1
Linear Motion
When comparing stage motion and algorithm results, the algorithms provide a translational error of 1 mm per cm scanned. This is a 10% error per 1 cm translation when
evaluating the algorithms against natural skin features. Compared to the translational
error obtained with the artificial pattern (2% - 3%), natural skin features yield an
error five times that of the artificial pattern. However, freehand scanning also
incurs some rotational error (0.6° error per cm scanned) since freehand scanning is
not strictly linear.
Contrast
The pattern contrasts are varied linearly with the "losscontrast" parameter varying
from 0 to 0.7 in increments of 0.1. Reconstruction serves to estimate the frame-by-frame distance, which is 1.25 mm nominally. A plot of the travel distance estimate
as a function of contrast is provided in Figure 2-27.
Figure 2-27: Algorithm performance of travel distance compared to the nominal value
of 1.25 mm with varying contrast patterns
As seen from Figure 2-27, the mean error is less than 2%, indicating that the
influence of contrast is not statistically significant. This confirms the algorithm can
be used against low-contrast patterns (such as natural skin features).
2.5.2
Defocus Blur
The performance of the algorithm against defocus blur is dependent upon the feature
sizes in the pattern since the different kernels span the features. As seen clearly in
Figure 2-28, the algorithm is not robust beyond σ = 1.5 pixels, which is related to the
major feature size. At σ values less than 1.5 pixels, there is little variation between
the kernel sizes, indicating the algorithm performs sufficiently well for small defocus
blur. The bias is noted at less than 2%.
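The blurred test images are generated by Gaussian smoothing (Figure 2-28). A minimal 1-D numpy sketch follows; the truncated, normalized discrete kernel here is an assumption about the exact implementation, which the thesis does not spell out.

```python
# Sketch of defocus-blur generation: Gaussian smoothing with a kernel
# of fixed odd size (3, 5, or 7 pixels) and varying sigma.
import numpy as np

def gaussian_kernel_1d(sigma, size):
    """Normalized 1-D Gaussian kernel of odd length `size`."""
    x = np.arange(size) - size // 2
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def defocus_row(row, sigma, size):
    return np.convolve(row, gaussian_kernel_1d(sigma, size), mode='same')

row = np.zeros(9); row[4] = 1.0
spread = defocus_row(row, sigma=1.5, size=5)   # impulse spread over 5 pixels
```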
Figure 2-28: Algorithm Estimates of the Travel Distance with Varying σ and Pixel
Size: Gaussian smoothing is applied to generate the blurred images, with the size of
the kernel changing between 3, 5, and 7 pixels.
2.5.3
Motion Blur
To introduce motion blur in the algorithm, a "boxcar filter" is used in the horizontal
and perpendicular directions [38]. The kernel is varied between 3, 5, and 7 pixels.
The graphs in Figure 2-29 indicate the algorithm performance is not robust to
motion blur in either the parallel or perpendicular camera motion directions. With
increasing kernel length, the variance increases. Bias is determined by the scale
calibration and has less than 2% error. Beyond 5 pixels, a major feature size, the
algorithm performs especially poorly.
2.6
Summary
In summary, a translational scanning system has been developed and characterized
for skin based body registration algorithms. Errors are on the order of sub-mm (maximum error of 0.063 mm) and depend on the distance traveled. By using this scanning
system to validate the algorithms, the algorithm performance can be characterized.
It is concluded that the algorithm can correctly perform skin based body registration
of volumes in the presence of lighting effects and defocus blur, but it is not robust to
the effects of motion blur.
(a) Motion Blur Results with Kernel of 3 Pixels
(b) Motion Blur Results with Kernel of 5 Pixels
(c) Motion Blur Results with Kernel of 7 Pixels
Figure 2-29: Results of Motion Blur: Algorithm vs. Experimental Results
Chapter 3
Handheld Skin Scanning Device
The impetus for studying the stability of skin features all over the body with a handheld device arose from a natural extension of the mechanical scanning system, which
is convenient for only select limbs. After characterizing the error of the platform and
validating the skin based body registration algorithms, it was important to determine
the stability of skin features over time. If stable, natural skin features can be used to
aid in longitudinal (over time) reconstructions.
We intend to study the stability of skin features at various length and time scales
over many regions of the body. At present, not much literature exists to describe
the feature stability in specific individuals over time. The feature length scales to be
investigated are:
1. Melanin/Pigment variation/Moles (order of 10 mm [37])

2. Hair follicles (order of 50 µm - 160 µm [28])

3. Micro-Relief structures (order of 20 µm - 8 mm [44])

4. Vascular structure (order of 2 mm - 0.25 cm [35])
We intend to study them over the period of hours, days, weeks, months, and years.
The translational scanning system, while great for controlled scans, is limited in
its ability to vary parameters as well as in the areas that can be scanned. With a small,
elevated platform, it is difficult to get larger limbs (e.g. the thigh or abdomen) or
curved surfaces (e.g. the neck) onto the machine. Furthermore, the distance from the
camera and the lighting are parameters that cannot be varied on the existing platform, but
are important variables for getting high resolution images. These challenges are overcome
by creating a handheld scanning device.
This chapter discusses the design and fabrication of the handheld skin scanning
platform. Section 3.1 provides an in depth analysis of the camera selection process
and the important optical parameters, Section 3.2 showcases the design decisions
between the various iterations to get the final prototype, and Section 3.3 describes the
challenges in controlling and acquiring uniform lighting for the system. The chapter
concludes with an outline of the experimental procedure and closing comments on
current work.
3.1
Camera
A camera that has sufficient resolution to resolve the various desired features is required. It must also have a manually adjustable focus lens to better estimate the
focal length compared to an auto-focusing lens [38]. The notion of using non-USB
interface cameras was entertained (such as the GoPro or smaller cell phone cameras),
but quickly discarded due to incompatibilities with the existing LabView code.
In order to use the myRio and existing LabView code, the camera must be "DirectShow" compatible [27]. To use the Vision Acquisition Module and connect to
the myRio, the camera needs to have USB3 Vision support running on a USB2.0
port. USB Video Device Class (USB UVC) cameras are also supported by the Vision
Acquisition Module (Section 3.1.2).
The camera sensor and lens resolution is a critical parameter since resolving the
desired skin features depends on the resolution. The Basler black-and-white USB
camera discussed in Section 2.3 was used as a starting point in the camera search to
determine the resolution required for imaging skin features. The camera was mounted
to the 5-axis mechanical system (described in Chapter 2) and captured images of the
forearm. In these initial images, the center of the forearm was focused to the lens
center. The edges of the images, corresponding to the curved surfaces of the forearm,
were less in focus. A sample image is shown in Figure 3-1. Since microrelief structure
and pigment variations were visible, the camera for the handheld device would have
to be at least 1.3 MP (the resolution of the Basler black-and-white camera).
Center of forearm is
aligned with lens center
and in focus
Figure 3-1: Initial Image Acquired with Basler Camera: The center of the image is
in focus, but the edges of the image are not.
The Basler color camera (acA2040-90uc) with a C-Mount Mitutoyo lens from
Computar was chosen. Some key features are highlighted here:
• The resolution of the camera is 4 MP, 2048 pixels x 2048 pixels (2x the desired
resolution), with each pixel being 5.5 µm x 5.5 µm

• The camera is small compared to other scientific cameras at 29.3 mm x 29 mm x 29 mm

• The camera requires a simple, lightweight USB to micro-USB connection cable

• It is a color camera, capturing RGB data in 3 channels (used for determining
pigment variations)
3.1.1
Variable Optical Parameters
Varying optical parameters, such as depth of field, field of view, object-to-camera distance (working distance), and optical zoom, is important for adequately imaging skin features. The resolution of the camera is also critical. Each parameter is defined below. Figure 3-2 shows the parameters with respect to one another, and Figure 3-3 shows them as they pertain to the application. For the first experiments, we imaged the forearm.
" Field of View: The viewable area of object under inspection (fills sensor)
" Working Distance: The distance from the front of the lens to the object
" Depth of Field: The amount an object is expected to move while still maintaining focus
" Resolution: The minimum feature size that can be distinguished by the camera
Figure 3-2: Optical Parameters (angular field of view, field of view (FOV), and working distance (WD))
Determining Field of View: The FOV is important in the y direction (the height of the forearm/scan region) since the translation is in the x direction (along the length of the forearm) (Figure 3-3).

Figure 3-3: Optical Parameters as They Pertain to the Setup

After imaging the forearms of various subjects and determining the appropriate imaged region where skin features are visible, the FOV was estimated to be 3 in (approximately 8 cm). Note that although this estimates the ideal FOV, the actual FOV was much smaller since the camera had to satisfy all the other parameters as well. The usable, experimental FOV was 0.077 sq. in. (or 0.33" in the y direction).
Determining Working Distance: For the handheld scanning device, the working distance is the distance from the camera to the skin surface at which skin features can be adequately resolved. The working distance must be at least 5 cm, corresponding to the clinical application in which the camera is mounted on an ultrasound probe. Experimentally, using the Basler acA2040-90uc camera and the Mitutoyo lens, the working distance was 6.8 cm.
Determining Depth of Field: Depth of field is harder to estimate since the amount of rotation of the scan region is region and patient dependent. The forearm is an example of region dependence: because of its small diameter, the forearm may rotate more than a scan region with a larger diameter (e.g., the bicep). Patient variability is expected, as some patients are better able to hold their limb stationary than others. For this reason, a 10° rotation (which accounts for involuntary patient movement) is considered when calculating the depth of field (DOF). The corresponding geometry is seen in Figure 3-4, and the calculations leading to a DOF range of 0.55 cm - 3.2 cm are provided (Equations 3.1a, 3.1b, 3.1c).
Figure 3-4: Geometry of the Setup Used to Determine Depth of Field (DOF)
tan(θ) = (DOF/2) / WD    (3.1a)

Convert the angle from degrees to radians: α = θ × π/180    (3.1b)

DOF = 2 × WD × tan(α)    (3.1c)
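The geometry above can be checked numerically. A minimal sketch, using the 6.8 cm working distance found experimentally:

```python
import math

def depth_of_field(working_distance_cm: float, rotation_deg: float) -> float:
    """DOF from the geometry of Figure 3-4: a rotation of the limb by
    rotation_deg moves the skin surface by roughly DOF/2 on either side
    of the focal plane, so DOF = 2 * WD * tan(angle)."""
    alpha = math.radians(rotation_deg)                  # Equation 3.1b
    return 2.0 * working_distance_cm * math.tan(alpha)  # Equation 3.1c

dof = depth_of_field(6.8, 10.0)  # ~2.4 cm for a 10 degree involuntary rotation
```

At the 6.8 cm working distance, a 10° rotation requires roughly 2.4 cm of depth of field, which falls inside the 0.55-3.2 cm range quoted above.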
Determining Resolution: The required resolution was also obtained via experimentation. As a Basler (1.3 MP resolution, 1280x1024) camera had been used to
distinguish the microrelief structures, the resolution of the final camera should be at
least 2 MP.
As mentioned in Section 3.1, the Basler camera model acA2040-90uc satisfied all
these requirements.
3.1.2 Camera Control with LabView
When continuously streaming, the frame rate ranges from 1.9 fps to 50.7 fps, varying inversely with the exposure time. However, since we are only interested in image capture, the video capture rate is not important. The camera is controlled with LabView as seen in Figure 3-5. Pseudocode is provided below.
1. Open and initialize the camera
2. Continuously stream the image to the front panel for user feedback
3. When the user hits 'stop,' come out of the While Loop, close the camera
4. Save image to specified location as a PNG file
Figure 3-5: LabView Code for Acquiring Skin Images: The camera is selected and
initialized. A While Loop is used for continuous streaming. A time delay is inserted
to mitigate errors arising from having an AC light source (see Section 3.3). When the
user hits 'stop,' the camera is closed and the image is saved to a specified location as
a PNG file.
3.2 Handheld Scanning Device Design

The purpose of designing an all-inclusive scanning mechanism is to control the parameters that influence image quality, as described in Section 3.1.1. There are also other requirements for the handheld scanning device, which are listed below. The device:
• Needs to be ergonomic: this is achieved by the ring stand, which provides an even surface when the handheld device is pressed against the skin (see Section 3.2.3)
• Needs to be portable to scan over many parts of the body: this is achieved by having a compact system, with a small camera, that can be used to image various parts of the body
• Must allow the user to perform manual, freehand scans repeatably: this is achieved by (1) framing the image with the ring stand and (2) the experimental procedure (see Sections 3.2.3 and 3.4)
• Must take high quality images: this is achieved by the hardware (camera) and the design - (1) the device is constructed to keep the lens perpendicular to the scan region so that the center of the scan region is in focus, and (2) the camera-lens system is an appropriate working distance away from the skin surface to adequately resolve the features, which is accomplished by stiff, optomechanical rods (Section 3.2.2)
• Should prevent unrealistic deformation of the skin surface: this is accomplished by the ring stand (Section 3.2.3) and by imaging at a working distance greater than 4 cm, where surface deformation is negligible [38]
The final design is seen in Figure 3-6. The details of individual features (frame,
rods, ring stand) of the scanning device and the various iterations of the device are
detailed in the following sections.
Figure 3-6: Handheld Scanning Device-Frame, LED Light Ring, Optomechanical
Rods, and Camera
3.2.1 Frame
The frame design had to incorporate an external LED light ring (see Section 3.3)
and the camera. Filleted flanges were designed to keep the light ring in place. The
camera requirements (hardware and optical) provided the geometrical limitations of
the mount, which are listed below. Due to the intricacies of the mount design, the
mount was 3D printed.
1. Hardware:
• the diameter had to be wide enough to encompass the objective lens with the set screws used for adjusting optical zoom and exposure
• the length of the frame was limited by the length of the objective lens (at maximum zoom, the length is 32.77 mm, or approximately 3.3 cm)
2. Optical Requirements: body length determined by the working distance
3.2.2 Set Working Distance
The camera lens had to be a set distance away from the skin surface in order to properly resolve the skin features. To set the working distance, an optical target was used. The camera exposure was experimentally set to f/8, and the length of the stiff, optomechanical metal rods was adjusted until the target center was in focus. Threaded on both ends, the rods were screwed into the side of the camera mount and ring stand with 4-40 screws. Shown in Figure 3-7 are the rods screwed into the ring stand.
Figure 3-7: Optomechanical stiff rods screwed into ring stand to prevent excessive
bending while still providing the appropriate working distance
The mount was then used to image the skin surface and the working distance fine
tuned such that the microrelief structure was in focus. The working distance was
68.81 mm.
3.2.3 Ring Stand
A 3D printed ring was fabricated to allow for even distribution of forces across the
skin surface when scanning (compared to the alternative of three rods poking into the
skin, locally deforming only a triad of regions). It also served to mount the reflective
lining (described in Section 3.3.1). Furthermore, the ring helped with repeatability
as the scan region could be approximately centered within the ring during each scan.
Holes in the ring stand were used to connect the ring stand to the prongs for the
first iteration (see Section 3.2.4). The holes were dimensioned to exactly match the dimensions of the prongs in the 3D model and filed to size for a press fit. Figure 3-8
shows the ring stand attached to the rest of the frame for the first iteration.
Figure 3-8: Handheld Scanning Device V1-Ring fits perfectly onto the tripod extrusions with a press fit
The third iteration of the ring was matched to fit the stiff rods. Instead of having
cutouts, holes were aligned with the rods and countersunk to allow the screws to lie
flush with the ring. From an ergonomic standpoint, this was an important design
decision because the patient would be more comfortable during scanning. The ring
also served to ensure the ends of all the rods would lie on the same plane (see Figure
3-9).
3.2.4 Iteration 1 of Handheld Scanning Device
The first iteration of the device included three appropriately sized extrusions to provide the correct working distance. The tips were domed for ergonomic comfort during scans. However, the 3D printed extrusions were fairly compliant (acting as cantilevered beams), affecting the repeatability of images. They were not considered for future iterations.

Figure 3-9: Ring Stand for Iterations 2-4 of Frame: Countersunk holes for ergonomic benefits and better assembly
Figure 3-10: Version 1 of handheld scanning device-the tripod extensions with domed
tips are now highlighted. The working distance is 75.98 mm.
A tight-fitting hole for the light ring cable added a constraint to secure the light in place (see Figure 3-11). In this iteration, the hole was not cut deep enough in the back for the light to lie flat. This was rectified in the next iteration of the mount (see Section 3.2.5).
Figure 3-11: Version 1 of handheld scanning device with features, such as the flange
to hold the light in place and the cutout for the cable, highlighted
3.2.5 Iteration 2 of Handheld Scanning Device
Major changes in the second iteration allowed for a better assembly. These are outlined below.
• The extruded cut for the cable was moved to a different location on the circumference of the frame, allowing the device to take images right side up when resting on the table.
• All the extruded parts were now filleted (removing stress concentrations).
• The length of the snap fit flanges was reduced (while keeping the light ring secure).
• Pockets to insert rods of varying lengths were designed (see Figure 3-12a). This allowed for various working distances, since rods of different lengths could be inserted. The pockets also prevented torquing of the rods, which were cantilevered out. However, as the rods are not used to transmit force, torquing was not a major concern.
A drawback of this iteration was the pockets: once printed and the rods inserted, the thin walls of the holder began to plastically deform (see Figure 3-12b). The wall thicknesses would have to be increased to support inserting the rods.
3.2.6 Iteration 3 of Handheld Scanning Device
Iteration 3 incorporated many features that made the system more robust to changes.
These are outlined below.
" Slot cuts were made into the frame to allow for real time adjustment of the
exposure and optical zoom.
Benefit: Real time adjustments help to obtain
better quality images not only by allowing in more light (changing exposure),
but also to experimentally determine the best working distance at which to
resolve skin features (changing optical zoom).
" The side pockets, which were originally tunnel shaped to keep the overall shape
of the camera mount, were changed to circular cutouts. Benefit: This allows for
rod length adjustment, which means the working distance can change, creating
an overall robust structure.
" Thicker walls (recommendation from Iteration 2 to prevent plastic deformation):
Using the rule of thumb that 5 threads need to be engaged per 4-40 set screw,
the thickness of the light rod extrusions was determined to be 3.175 mm.
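The 3.175 mm figure follows directly from the thread pitch. A minimal check (a 4-40 screw has 40 threads per inch):

```python
# Wall thickness needed to engage 5 threads of a 4-40 set screw.
threads_per_inch = 40        # the "40" in the 4-40 designation
engaged_threads = 5          # rule of thumb from the text

wall_in = engaged_threads / threads_per_inch   # 0.125 in
wall_mm = wall_in * 25.4                       # 3.175 mm
```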
A major design decision in this iteration was placing the rod extrusions. If placed right behind the flanges that hold the light in place, the structural integrity of the rods would be increased (reduced cantilevered effects), but the compliance of the flanges would be decreased (a problem for putting the light ring in place). If placed right below the start of the flanges, longer rods are required, which would have to be specially manufactured (an expensive solution) to accommodate the working distance. Eventually, the halfway point of the flange was selected to keep the appropriate working distance with the existing rods while also keeping the structural integrity. The third iteration of the frame is seen in Figure 3-13.

(a) Handheld Scanning Device Iteration 2: Pockets, Shorter Flanges, Bigger Extruded Cut for Cable
(b) Plastic Deformation of Pockets after Inserting Stiff Rods
Figure 3-12: Handheld Scanning Device Version 2

Figure 3-13: Handheld Scanning Device Version 3: Circular Extrusions, Thicker Walls, Cutouts for Easy Adjustment of Optical Zoom and Exposure Settings
3.2.7 Final Design of Handheld Scanning Device
The most recent version of the mount, although allowing for better image capture, does not address an issue common to all the mount designs: design for assembly. With screws of different lengths required to mount the camera to the frame, the system is not very modular. Using a fully encircled design around the camera, while aesthetically pleasing, makes it difficult to attach the camera to the mount. However, the frame slot cuts have been modified for easier access to real-time adjustments of the exposure and optical zoom. Similar in geometry to Iteration 3, the final design can be seen in Figure 3-14.
(a) Handheld Scanning Device Final Version-Side View
(b) Handheld Scanning Device Final Version-Front View
Figure 3-14: Handheld Scanning Device-Final Version
Assembly Instructions
Assembling the handheld scanning device is fairly intuitive. However, mounting the camera and rods to the frame is slightly more challenging. Step-by-step assembly instructions are provided below to ensure proper alignment:
1. Camera: hold front of lens and back of camera to align the camera holes with
the screw holes of the mount; insert all screws loosely, then tighten in place
2. Optical Rods: start by placing one end of each rod flush against the top of the rod holder and hold it in place with a set screw (the correct working distance will be set afterwards by imaging and by using the ring stand to keep all rods on the same plane)
3. Ring Stand: align the holes of the ring stand with the rods and secure in place with screws
4. Light Ring: insert the light ring sideways between two rods and snap in place
by aligning the cable with the cutout
5. Light Cover: insert the cover to the light from the opening created by the ring
stand and snap in place
6. Reflective Lining: Insert from the top into the inner diameter of the ring, sliding
it over the light holder flanges; tape to ring stand to keep it secure
3.3 Lighting
Ambient lighting greatly influences image quality and the ability to resolve skin features. For example, taking images in a dark room late at night provides more focused
images in which the skin features are more prominent. However, the handheld scanning system should not depend on ambient light (sunlight, fluorescent lights or white
lights in the room) to obtain quality images. Therefore, an external, repeatable light
source is needed to both enhance the skin features and mitigate ambient lighting
variation that results from taking images at various times of day.
In the first set of experiments, a desktop light was used to illuminate the skin surface, attempting to provide uniform lighting (see Figure 3-15). This highlighted the areas of the forearm that were more planar and cast the curved edges of the forearm in shadow.
Figure 3-15: Desktop Lamp Used During Initial Experiment to Illuminate the Skin
Surface: The center of the forearm (which is viewed as planar) is saturated by the
light source. The curved edges of the forearm are shadowed.
Having a lighting system that integrates with the handheld device (instead of the external desktop light) provides for consistent data collection. A white LED light ring was purchased from Mainland Mart Corp. Although its inner diameter is slightly larger than the diameter of the objective lens, the low price point made it an attractive solution. However, a drawback is the AC voltage, which manifests as flickering light when continuously streaming images. This is compensated for in the LabView code, which incorporates a time delay between images to prevent the flickers from showing up in the image. The learning curve and time required to fabricate a DC LED light ring with the appropriate geometry did not justify engineering a light source from scratch for an initial prototype.
3.3.1 Uniform Lighting
When the light ring is used, a strong, circular shadow forms in the center of the scan region (see Figure 3-16). For accurate RGB data and high quality images, this shadow needs to be eliminated.
Figure 3-16: Circular Shadow on Scan Region Resulting from Light Ring
Shadows
The positioning of the hardware affects the induced shadow and influences the design of the handheld device. If the objective lens is flush with or behind the LED light ring (Figure 3-16), the shadow is not affected by the objective lens. If the objective lens is in front of the light ring, the shadow induced on the scan region is darker, since it is caused by both the objective lens and the light ring.
Reflective Lining
In addition to an external light source, a reflective lining is used to control image quality by creating more diffuse lighting and eliminating shadows. A thin, reflective material lines the inner circumference of the ring stand and rods, trapping the light emitted by the light ring (see Figure 3-17). The opaque polymer, with an enhanced-reflectivity substrate, is used to reflect ambient light. The other options considered are seen in Table 3.1.
Table 3.1: Reflective Lining Choices

Aluminum Foil
  Benefits: Reflective; allows for Lambertian scattering due to wrinkles (uniform light)
  Drawbacks: Wrinkles too easily; high and low points on the scan region are prominent

Printer Paper
  Benefits: Smooth, so no hot spots are identified
  Drawbacks: Ambient light filters through, affecting RGB data

Metallic polymer layered on paper
  Benefits: Polymer prevents filtering of ambient light and reflects light from outside; printer paper on inner surface provides uniform lighting
  Drawbacks: Two materials required (paper and metallic polymer)
(a) Reflective Lining - Metallic Side
(b) Reflective Lining - Paper Side
Figure 3-17: Reflective Lining: The paper side traps the light from the light ring, and the metallic polymer keeps ambient light from filtering through the lining.
Figuring out how to attach the reflective lining to the handheld scanning device was a challenge. Functionally, attaching the lining to the device for each scan had to be repeatable. From the design perspective, it had to be aesthetically pleasing when integrated with the entire device. The latter constraint was satisfied by cutting strips of paper and metallic polymer to the appropriate dimensions and minimizing the visibility of attachment tape. For repeatability, the inner diameter of the ring (see Section 3.2.3) and the overall support structure were used to keep the reflective foil in place (Figure 3-18). Lining a tube with this material may be an elegant solution for future iterations.
Figure 3-18: Reflective Foil Attached to Mount in Pleasing Manner: Sleek, Tape Not
Visible
Light Cover
In addition to a reflective lining, a light cover provides uniform lighting over the scan
region, which provides better images. The light cover acts as a diffuser, scattering
light from the LED point source. The different light cover options are summarized in
Table 3.2 and pictured in Figure 3-19.
Table 3.2: Light Cover Choices

Semi-Opaque Plastic
  Benefits: Semi-opaque, so no light intensity is lost
  Drawbacks: Wrinkles easily (not uniform thickness across all bulbs); difficult to attach to the light source; shadow still apparent on the scan region

Paper
  Benefits: Uniform lighting achieved; papers of various thicknesses are available (printer paper, cardstock, tissue paper); exposure can be increased to allow in more light (risking less focused images), but a way to adjust exposure in real time is needed (Section 3.2.6)
  Drawbacks: Opaque, so light intensity is modified (dark images); difficult to achieve uniform thickness since paper strips overlap

(a) Plastic Bag Light Cover
(b) Paper Light Cover
Figure 3-19: Light Cover Options

The final design is inspired by light-bending principles. The underlying idea is to distribute the light from the point source, bending or scattering the light [11]. A cover is designed and 3D printed to snap fit onto the light ring. Its thickness is experimentally determined by comparing light intensity with and without the cover; we do not want to lose too much light intensity by applying a cover. The minimum thickness is 0.05" (the resolution of the 3D printer). The rough texture of the 3D printed part enables light scattering, which provides more uniform lighting. The final light cover is seen in Figure 3-20.
(a) 3D Printed Light Cover
(b) Light Cover on Light Ring: Snap fits to the ring, with cutouts for a precision fit
Figure 3-20: Light Cover Final Design
For a more polished look, future iterations of the light cover may include thermoformed plastics of varying opacity, thickness, and materials.
3.3.2 Directional Lighting
Low-angle, directional lighting casts shadows over the valleys, accentuating the microrelief structure. This lighting scheme cannot be used for round objects, such as the hand and fingers [22]. Since the middle of the forearm is considered planar [38], directional lighting can be used to accentuate features (verified by the initial experiments with a fluorescent desk lamp, as described in Section 3.3). But there is a trade-off with directional lighting, as it adversely highlights other areas of the image (see Figure 3-22). A desk lamp alone is not intense enough to illuminate all the features; a light ring is still needed. To properly illuminate the scan region, the optimal angle between a light source and the camera is 45° (see Figure 3-21) [33].
Figure 3-21: Directional Light Schematic: Optimal Angle of 45° Between Light Source and Camera
Figure 3-22: Directional Light Applied: The microrelief structure is visible, but hot spots affect the RGB data of the overall image
3.3.3 Calibrating the Light
To quantify the RGB data of the image (which will be used to provide information
about the melanin content in the skin), the "whiteness" of the light source is evaluated.
When using just the LED light ring, the images indicated a clear "blue" bias. When
imaging with the LED light ring covered by the 3D printed light cover, the images
had more "yellow" hues. For comparison, a medical light source with multiple ring
lights was purchased.
Experimentally, each light source (medical light ring and LED light ring) was used to image a piece of white printer paper. The light sources were enclosed by an opaque paper lining at a set distance away from a stack of printer paper (see Figure 3-23). A stack was required to ensure the paper was "white," preventing color from the underlying surface from bleeding through.
(a) Medical Light Calibration Set Up
(b) LED Light Calibration Set Up
Figure 3-23: Light Calibration Set Up: The light is a set distance away from a stack of printer paper to ensure "whiteness." Each light is encircled with an opaque lining.
The images were then processed in Matlab (see Appendix B-5), obtaining the
mean and standard deviation for each channel (R, G, B). These parameters were
used to determine which light source is better. A higher mean value indicates bias
towards that particular channel. With more variance across the image, the standard
deviation increases (not desirable). The results are seen in Table 3.3.
The results indicate that the medical light ring and LED light ring with cover give
Table 3.3: Comparison of Light Sources

                         Red               Green             Blue
                         Mean    Std Dev   Mean    Std Dev   Mean    Std Dev
Medical Light            194.74  45.71     249.51  9.87      253.98  2.42
LED Light                152.37  16.46     238.52  18.46     254.18  1.99
LED Light with Cover     113.35  6.06      197.60  6.74      162.16  5.49
more uniform lighting than the LED light ring alone. As expected, the higher mean
values for the green and blue channels suggest these colors are more pronounced in
the images illuminated by the medical light and LED light. The LED light with cover
is more uniform across the color spectrum, with similar mean values for all channels.
The standard deviations are high, suggesting that the color is not uniform throughout
the image. This may be due to the variations in the white paper, which is not the
same as a true calibration target. Thus, results of this experiment provide some
insight, but more work is still required to fully characterize the light. A calibration
target has been purchased for this reason.
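The per-channel statistics in Table 3.3 were computed in Matlab (Appendix B-5); an equivalent sketch in Python/numpy is shown below (a hypothetical helper, not the thesis code):

```python
import numpy as np

def channel_stats(img):
    """Mean and standard deviation of each channel (R, G, B) of an
    H x W x 3 image array, as used to compare the light sources."""
    flat = np.asarray(img, dtype=float).reshape(-1, 3)
    return flat.mean(axis=0), flat.std(axis=0)

# Synthetic blue-biased frame: each channel is flat, so the per-channel
# standard deviations are zero and the means reveal the bias directly.
frame = np.zeros((4, 4, 3))
frame[..., 0], frame[..., 1], frame[..., 2] = 150.0, 240.0, 254.0
means, stds = channel_stats(frame)  # means -> [150. 240. 254.], stds -> [0. 0. 0.]
```

A real frame of "white" paper would ideally show three similar, high means with low standard deviations.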
3.4 Skin Scanning Experiments
The scanning device geometry and structure allows for little variation in how the
images can be taken; the ring stand must always be flush against the skin surface to
capture an image. The freedom of the subject during the scans introduces variability
in the data collection, affecting repeatability. Since the device is handheld, the subject
has freedom in how they place their limb to get imaged, which may not be consistent
from one image to the next.
In the first experiments, we focus on imaging the skin features on the forearm. Minimizing the effects of individual variability from one test to another is accomplished by
having the subject place their forearm on a hard surface in a way that is comfortable
for them. Individual comfort allows consistency from one scan to the next. The ring
of the scanning device is then rotated and elevated until it touches the skin surface
and the lens is perpendicular to the skin surface (see Figure 3-24a). The real time
streaming display in LabView is monitored until the image is roughly in the center of
the display. For testing purposes, a Belkin gel hand rest is used under the forearm,
preventing extraneous movement and rotation. This is especially important for subjects with small wrists (see Figure 3-24b). Without the support, in order to see the
entire wrist, the hand has to be kept mid-air to be in the center of the image. Small
motions are perceived in the image frame and lead to blurred images (i.e. microrelief
structure is not visible).
(a) Skin Scanning Experimental Setup: The subject places the arm in a comfortable position on top of a solid surface. The device is rotated and aligned with the scan region such that the ring stand is flush against the skin surface.
(b) Skin Scanning Experiment on a Small Wrist: A hand rest is needed to support the arm and prevent minor movements.
Figure 3-24: Skin Scanning Experimental Setups
Wrists were imaged and scans were acquired by taking a series of overlapping
images from the wrist to the elbow. A sample experimental image is provided in
Figure 3-25.
For particularly hairy subjects, scan regions with less hair were initially scanned in hopes of providing better quality images. However, this approach was quickly abandoned after realizing that not all subjects would be hairless and a robust image processing algorithm would still need to be developed. Thus, in developing a hair removing algorithm, the hairy subjects were imaged four times with the hair in four different orientations (up, down, left, and right), as shown in Figure 3-27. This allows the underlying microrelief structure to surface for future image processing.

Figure 3-25: Sample Image Acquired from Skin Scanning Experiment: Uniform lighting, microrelief, hair follicles, and pigment variations are visible. The image is centered within the frame.
The effects of the following parameters will be studied. Note that experimental
images of some of the parameters (imprints, stretching the skin, light, hand rest) have
been acquired, but further image processing is required to understand the effects.
• Imprints (nail, watch): to analyze how long it takes for marks to disappear
• Goosebumps: to analyze effects similar to those of imprints
• Stretching the skin in lateral and vertical directions/flexing: to analyze how local microrelief structures change, if at all
• Dehydration: to analyze vascular structure changes, if any
• Showering/scrubbing with a pumice stone: to analyze whether the microrelief structure becomes less pronounced
• Light: sunlight streaming through the window affects RGB data, but not feature recognition; take images in the dark, at night, with room lights off, so that only the light ring has an effect
• Hand rest: to test for further deformations and repeatability, and to mitigate the effects of minor movement

Figure 3-26: Sample Images Acquired from Skin Scanning Experiments: Skin features identified are hair follicles, moles, pigment variations, vascular structure, and microrelief structure. Note: the two images are from two different subjects.
3.5 Preliminary Image Analysis
Preliminary image analysis is made possible by SSIM (Structural Similarity Index),
a built-in Matlab image algorithm [2]. It is used to analyze the day to day stability of
the various feature points. Two images are mapped on top of each other, and points are compared based on their grayscale values. Based on these results, it can be assumed that skin features are not stable with time. The highest correlation between images (largest R value) was obtained with a one-day difference between image captures, whereas the lowest R value resulted between two images that were five days apart (see Table 3.4).

(a) Hair Orientation Down
(b) Hair Orientation Left
(c) Hair Orientation Right
(d) Hair Orientation Up
Figure 3-27: Sample Images for the Hair Removing Algorithm: Hair oriented in four different directions to highlight the underlying microrelief structure. Hair does not stay in the combed direction.
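The R values in Table 3.4 express correlation between the grayscale values of two aligned images. A minimal numpy sketch of such a comparison is shown below for illustration only: it uses a plain Pearson correlation and synthetic data, whereas the thesis analysis used Matlab's built-in SSIM machinery:

```python
import numpy as np

def image_correlation(img_a, img_b):
    """Pearson correlation (an 'R value') between the grayscale values
    of two aligned, equally sized images."""
    a = np.asarray(img_a, dtype=float).ravel()
    b = np.asarray(img_b, dtype=float).ravel()
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(0)
day0 = rng.random((32, 32))                 # stand-in for a grayscale skin image
day1 = day0 + 0.05 * rng.random((32, 32))   # same region, slight change

r_same = image_correlation(day0, day0)  # identical images -> R = 1.0
r_diff = image_correlation(day0, day1)  # high, but below 1.0
```

Even this simple metric degrades with misalignment and lighting changes, which foreshadows the sensitivity issues discussed below.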
Table 3.4: SSIM Tests for Feature Stability: Shows the correlation between two images taken on different days. A "Days Between Scans" value of 0 indicates images taken on the same day. Repeatability of taking images is critical.

Subject   Days Between Scans   Correlation Between Images (R Value)
Ina       0                    0.7308
Ina       1                    0.8724
Ina       4                    0.8284
Ina       5                    0.8122
Steve     0                    0.8293
Nigel     1                    0.8856
However, no definite conclusion can be made based purely on these results, as the algorithm is extremely sensitive to ambient lighting conditions and to the repeatability of image capture (repeatability of lighting, arm orientation, and camera position in the experimental setups). Thus, SSIM is an unsuitable package for this application. Instead, a more rigorous image processing algorithm must be used, as described in Section 3.6.
3.6 Skin Studies: Closing Comments and Ongoing Work
The objective of this investigation is to observe the stability of skin features over time and assess overall skin health. To adequately address this issue, both the hardware and the image processing algorithms have to be developed. This thesis work focused on developing the imaging platforms. Through successive iterations, a handheld scanning device has been developed that can adequately image the four feature points (pigment variation, hair follicles, microrelief structure, and vascular structure) in the visible spectrum.
Now the image processing algorithms have to be developed and implemented to
address the longitudinal stability of skin features and their correlation (if any) to
overall skin health. In a more rigorous image processing algorithm, the experimental
images are first preprocessed and then registered and mapped using matching
algorithms. During preprocessing, the RGB images are converted to grayscale images
and the microrelief furrows are identified. To date, Dr. Brian Anthony and Dr. Xian
Du have been able to preprocess the images and correctly determine the microrelief
furrows (Figure 3-28). The intersection points of the valleys, called the "bifurcation
points," have also been identified. The challenge now lies in the skin deformation
measurement: how can we globally register the images and locally map the feature
points? To address this challenge, constellation mapping techniques will be used, as
described in [18], for accurate registration and robust mapping of skin features. The
process flow is outlined in Figure 3-29. This process is commonly used for imaging
biometrics, such as fingerprints, which rely on the image quality and on accurately
extracting and detecting the feature landmarks to guide prealignment [21].
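The preprocessing stage (grayscale conversion followed by furrow identification) can be sketched in Python. This is only an illustration under simple assumptions: the actual algorithms are Dr. Du's, and the window radius and darkness threshold below are arbitrary choices, not values from this work. Furrows are treated as pixels darker than their local neighborhood.

```python
import numpy as np

def rgb_to_gray(img):
    """Convert an HxWx3 float image in [0, 1] to grayscale (BT.601 luma weights)."""
    return img @ np.array([0.299, 0.587, 0.114])

def box_mean(gray, r):
    """Local mean over a (2r+1) x (2r+1) window, computed via an integral image."""
    pad = np.pad(gray, r, mode="edge")
    ii = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    ii[1:, 1:] = pad.cumsum(axis=0).cumsum(axis=1)
    w = 2 * r + 1
    return (ii[w:, w:] - ii[:-w, w:] - ii[w:, :-w] + ii[:-w, :-w]) / (w * w)

# Microrelief furrows appear as valleys: pixels darker than their neighbourhood.
rng = np.random.default_rng(1)
img = rng.random((64, 64, 3))                # stand-in for an RGB skin image
gray = rgb_to_gray(img)
furrows = gray < box_mean(gray, r=7) - 0.15  # boolean furrow mask
```

The integral-image trick keeps the local-mean filter O(1) per pixel regardless of window size, which matters for the large image areas the scanning platforms produce.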
Figure 3-28: Skin Image Preprocessing: Convert RGB images to grayscale and identify
the microrelief furrows. (a) RGB image of skin microreliefs converted to a grayscale
image; (b) skin microreliefs identified, shown with red dots (scale bar: 2.1945 mm).
Figure 3-29: Skin Analysis Work Flow: Optical Imaging Technique for Pathological
Skin Monitoring (stages: skin images and ROIs, valleys, bifurcation points).
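The registration-and-mapping stage pairs feature landmarks (such as bifurcation points) between two images. The constellation techniques of [18] are considerably more robust, but the core pairing step can be sketched as mutual nearest-neighbour matching; the point sets and distance gate below are invented for illustration.

```python
import numpy as np

def match_points(a, b, max_dist):
    """Pair points of set `a` with points of set `b` that are mutual
    nearest neighbours and closer than `max_dist`."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    nn_ab = d.argmin(axis=1)  # nearest point in b for each point in a
    nn_ba = d.argmin(axis=0)  # nearest point in a for each point in b
    return [(i, int(j)) for i, j in enumerate(nn_ab)
            if nn_ba[j] == i and d[i, j] <= max_dist]

# Toy "bifurcation point" constellations: the second is a slightly deformed copy,
# mimicking skin that has stretched a little between captures.
a = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 1.0]])
b = a + 0.3
pairs = match_points(a, b, max_dist=1.0)
```

The mutual-nearest-neighbour constraint rejects one-sided matches, and the distance gate discards landmarks that moved implausibly far; grid-based constellation matching adds global geometric consistency on top of this.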
Chapter 4
Conclusion
This thesis focused on the design and development of hardware that can be used to
image skin, with two applications: (1) using skin features for skin-based body
registration algorithms and (2) studying the longitudinal stability of skin features
at various length scales to assess overall skin health. The hardware for tracking
skin features has been developed, both as a controlled-environment platform (the
5-axis scanning system) and as a freehand scanning device.
The handheld scanning system can still be modified to ensure high-quality images
and a compact design. A small DC light source will be used instead of the AC LED
light ring to eliminate the effects of flickering light on the images. For better
assembly, the geometry of the camera insertion point will be changed to incorporate
a square cutout.
While iterations can still be made on the handheld system for a more polished
and compact look, there are many possible experiments that can be carried out with
the current system as described in Section 4.1.
4.1
Future Work
In order to assess the stability of skin features over time and its impact on overall
skin health, more diverse data are required. Currently, only thirteen subjects have
been imaged, and there is little variance among them: nearly all are in their early to
mid-twenties, so the only diversity in the current population lies in gender and
ethnicity. A larger sample size would provide increased variance in age, race, and
ethnicity, allowing for broader observations across different demographics.
Studying the influence of various environmental factors on skin is also an
interesting future study. In clinical practice, this means observing patients who are
subject to therapies (such as laser or radiation) that penetrate the skin [24]. Other
environmental effects include allergens and pollution [12].
Yet another area of further investigation is the study of goosebumps. The Karmanos
breast imaging group has observed the emergence of goosebumps in their patients when
submerged in water. Studying the relaxation time of the skin (i.e., how quickly the
goosebumps disappear) and stimulating goosebumps are both areas of experimental
interest. Preliminary research and experimentation have indicated that goosebumps
occur at times of intense pleasure or emotion, or when the body core is cold. This
means that submerging the hand in ice water or locally cooling a limb is insufficient
to induce goosebumps. Finding a repeatable way to induce goosebumps and imaging
the relaxation time is an interesting challenge.
Currently, the handheld device (and the 5-axis scanning platform) use optical
imaging. The next iteration of the device could incorporate a near-infrared (NIR)
camera for subdermal imaging. NIR images are also used in many diagnostic devices
(e.g., skin cancer detection), so this would be a logical next step as the project
moves toward transforming the handheld device into a diagnostic tool. While a
powerful change, this would not require many modifications to the existing frame
design, as the NIR version of the camera has a geometry similar to that of the
Basler acA2040-90uc.
The applications of these devices are far-reaching: from clinicians, who are
interested in the diagnostic and reconstruction applications, to the cosmetic
industry, which is more focused on anti-aging skin health and hydration. With
continued iterations on the mechanical structure and optical hardware, this research
can have a significant impact.
Appendix A
Figures
The hole patterns of the y-axis and rotation servos are mapped onto the bracket. The
rotation servo nests in the bracket to optimize the x-axis travel distance. As shown
in the dimensioned drawings in Figure A-1, the bracket is as long as the servo to hold
it in place. It was 3D printed with high-density settings, since a solid structure is
required to hold the 8.2 oz motor. Post machining of the mount included drilling and
tapping the 1/4-20 holes to mount the rotation servo. The mount is connected to the
y axis with 1"-long screws and secured with steel nuts.
Figure A-1: CAD Model of Servo Connection to CNC: 3D printed part (solid print, with
post machining for holes) that connects to the Y axis. Post machining of the 6-32
holes for connection to the Y axis and the 1/4-20 holes for the servo connection.
(Drawing title: "CNC Servo Connection"; dimensions in inches: 3.60, 3.47; hole notes:
6x #27 hole, 2 cm depth; 4x #27 hole through all; scale 1:2.)
Figure A-2: CAD Model of Webcam Mount to CNC: 3D printed part (a kinematic coupling)
that connects to the pan and tilt servo. Tapping the hole is required. (Drawing
title: "Camera Mount"; dimensions in inches; scale 2:1.)
Appendix B
Matlab Codes
% Image Processing of Skin Images for Longitudinal Skin Study
% Make Images Gray Scale
clear all
close all
clc
% Read the image using the file path in imread
% Make the image grayscale using rgb2gray
% Show the image using imshow
steve = imread(['C:\Users\Ina\Dropbox (MIT)\Research\Longitudinal Skin ' ...
    'Study\Experimental Images\01-21-Skin Results Day 5\steve10x2.png'], ...
    'png');
S = rgb2gray(steve);
%figure('Name','Steve'),imshow(S)
%title(gca,'Steve grayscale')
ina = imread(['C:\Users\Ina\Dropbox (MIT)\Research\Longitudinal Skin ' ...
    'Study\Experimental Images\01-21-Skin Results Day 5\ina10x.png'], ...
    'png');
I = rgb2gray(ina);
%figure('Name','Ina'),imshow(I)
%title(gca,'Ina grayscale')
% Average over the grayscale image to get the "quality" of the image to
% relate to melanin (and order images accordingly)
%SteveMelanin = mean(mean(S(1:end,1:end)))
%InaMelanin = mean(mean(I(1:end,1:end)))
% Show the color map (higher number = lighter skin)
imshow(2*ones(100,100), [0 255]);
imshow(255*ones(100,100), [0 255]);
% User input to get region of interest and the position
figure('Name','Ina'),imshow(I);
title(gca,'Ina grayscale')
h = imrect;
posI = h.getPosition();
figure('Name','Steve'),imshow(S);
title(gca,'Steve grayscale')
hS = imrect;
posS = hS.getPosition();
% Note: getPosition returns [xmin ymin width height] with non-integer values,
% so the indexing below triggers the "integer operands" warning and yields NaN
% (see Figure B-2); the correct form would index
% I(round(posI(2)):round(posI(2)+posI(4)), round(posI(1)):round(posI(1)+posI(3)))
Ina_Melanin = mean(mean(I(posI(:,1):posI(:,2),posI(:,3):posI(:,4))))
% Compare image qualities
ref = imread(['C:\Users\Ina\Dropbox (MIT)\Research\Longitudinal Skin ' ...
    'Study\Experimental Images\01-15-Skin Results Day2\Ina1x.png']);
ID3 = imread(['C:\Users\Ina\Dropbox (MIT)\Research\Longitudinal Skin ' ...
    'Study\Experimental Images\01-16-Skin Results Day 3\Ina.png']);
Figure B-1: Quantifying Melanin Code: This code was utilized to determine the
effects of lighting on the skin and to characterize the melanin content in the skin. The
RGB color images were changed to grayscale images for the melanin characterizations.
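The melanin proxy used above (the mean grayscale intensity of a user-selected ROI) can be illustrated with a short Python sketch. The function names and the synthetic test image are invented for illustration; the crop helper also demonstrates the [x, y, width, height] rectangle convention that MATLAB's imrect returns, whose fractional coordinates must be rounded before indexing.

```python
import numpy as np

def crop_roi(gray, x, y, w, h):
    """Crop a grayscale image with an [x, y, width, height] rectangle,
    rounding the fractional coordinates a GUI selection can return."""
    r0, r1 = int(round(y)), int(round(y + h))
    c0, c1 = int(round(x)), int(round(x + w))
    return gray[r0:r1, c0:c1]

def melanin_proxy(gray_roi):
    """Mean intensity of the ROI; lower values (darker skin) suggest more melanin."""
    return float(gray_roi.mean())

gray = np.linspace(0.0, 255.0, 100 * 100).reshape(100, 100)  # synthetic gradient image
roi = crop_roi(gray, x=10.4, y=20.7, w=30, h=30)
score = melanin_proxy(roi)
```

Rounding the rectangle before indexing avoids exactly the non-integer-index failure recorded in the output of Figure B-2.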
subplot(1,2,1); imshow(ref); title('Reference Image Ina 1x Day2');
subplot(1,2,2); imshow(ID3); title('Image Ina 1x Day3');
[ssimval, ssimmap] = ssim(ID3, ref);
fprintf('The SSIM value is %0.4f.\n', ssimval)
figure, imshow(ssimmap, []);
title(sprintf('SSIM Index Map - Mean SSIM Value is %0.4f', ssimval))

Warning: Image is too big to fit on screen; displaying at 33%
(warning repeated for each displayed image)
Warning: Integer operands are required for colon operator when used as index

Ina_Melanin =
    NaN

The SSIM value is 0.9126.
Figure B-2: Skin Image Comparisons Using SSIM: This code was utilized to determine
if the skin features were stable by comparing two images that were days apart.
(a) Skin Image Grayscale: Ina
(b) Skin Image Grayscale: Steve
Figure B-3: Grayscale Skin Images: Used to understand the reflective spots for various
lighting choices and to quantify the melanin levels in skin (darker spots indicate more
melanin).
(a) Skin Images Between Days: the first ("Reference Image Ina 1x Day2") is the
reference image for the SSIM algorithm; the second is "Image Ina 1x Day3".
(b) SSIM Index Map (mean SSIM value 0.9126): two images mapped on top of each other.
The white spots indicate the parts that map identically; darker spots are differences
between the images.
Figure B-4: SSIM for Skin Images
% Quantifying lighting of images
% For use with longitudinal skin study experiments

% Clear all variables, close all windows, clear command window
clear all
close all
clc

% Read the image from the appropriate location
promptImageName = ['What is the image name? (Include folder destination ' ...
    'and file extension) '];
ImageName = input(promptImageName, 's');
ImageLocation = ['C:\Users\Ina\Dropbox (MIT)\Research\Longitudinal ' ...
    'Skin Study\Experimental Images' ImageName];
Image = imread(ImageLocation);

% Pop up the image for user to select the appropriate ROI
imagesc(Image);

% Prompt user for edges of ROI
promptY1 = 'What is the Y pixel coordinate for the first location? ';
promptX1 = 'What is the X pixel coordinate for the first location? ';
promptY2 = 'What is the Y pixel coordinate for the second location? ';
promptX2 = 'What is the X pixel coordinate for the second location? ';
Y1 = input(promptY1);
X1 = input(promptX1);
Y2 = input(promptY2);
X2 = input(promptX2);
ROI = Image(Y1:Y2, X1:X2, :);
imagesc(ROI);
Figure B-5: Quantifying Lighting Code: This code was utilized to determine the
appropriate region of interest (ROI) among images during the various light
calibrations. The idea is to select the ROI such that the two images have similar
lighting conditions. Furthermore, due to the nature of the handheld device (with a
circular ring visible in the image), the ROI of the skin would be similar to the ROI
selected when calibrating the lights.
Bibliography

[1] Shiseido discovered that dryness-induced marked irregularity of the skin microrelief is attributable to shrinking of cornified cells of the skin | Research and Development Topics | Research and Development | Shiseido group website, 2013.

[2] SSIM: Structural Similarity Index (SSIM) for measuring image quality, 1994.

[3] Camera Calibration Toolbox for Matlab, December 2013.

[4] AM4815t Dino-Lite Edge, 2014.

[5] Scientific Devices: Skin-Visiometer SV 700 USB, May 2014.

[6] Scientific Devices: Skin-Visiometer SV 700 USB, May 2014.

[7] CBS Skin Analyzer, 2015.

[8] Hair follicle anatomy: MedlinePlus Medical Encyclopedia Image, May 2015.

[9] Melanin: MedlinePlus Medical Encyclopedia Image, May 2015.

[10] Zen Toolworks CNC DIY Kit 7x7, 2015.

[11] David Bailey and Edwin Wright. Practical Fiber Optics. Newnes, August 2003.

[12] Robert A. Barbee, Walter Kaltenborn, Michael D. Lebowitz, and Benjamin Burrows. Longitudinal changes in allergen skin test reactivity in a community population sample. Journal of Allergy and Clinical Immunology, 79(1):16-24, January 1987.

[13] A. Barel, M. Calomme, A. Timchenko, K. De Paepe, N. Demeester, V. Rogiers, P. Clarys, and D. Vanden Berghe. Effect of oral intake of choline-stabilized orthosilicic acid on skin, nails and hair in women with photodamaged skin. Archives of Dermatological Research, 297(4):147-153, October 2005.

[14] Adrian Buganza-Tepole and Ellen Kuhl. Systems-based approaches toward wound healing. Pediatric Research, 73(0):553-563, April 2013.

[15] Oana G. Cula, Kristin J. Dana, Frank P. Murphy, and Babar K. Rao. Skin Texture Modeling. International Journal of Computer Vision, 62(1-2):97-119, April 2005.

[16] O. G. Cula, K. J. Dana, F. P. Murphy, and B. K. Rao. Bidirectional imaging and modeling of skin texture. IEEE Transactions on Biomedical Engineering, 51(12):2148-2159, December 2004.

[17] Tania Douglas, Stephan Solomonidis, William Sandham, and William Spence. Ultrasound imaging in lower limb prosthetics. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 10(1):11-21, March 2002.

[18] Xian Du and Brian W. Anthony. Grid-based matching for full-field large-area deformation measurement. Optics and Lasers in Engineering, 66, 2015.

[19] E. A. Grice, H. H. Kong, S. Conlan, C. B. Deming, J. Davis, A. C. Young, NISC Comparative Sequencing Program, G. G. Bouffard, R. W. Blakesley, P. R. Murray, E. D. Green, M. L. Turner, and J. A. Segre. Topographical and Temporal Diversity of the Human Skin Microbiome. Science, 324(5931):1190-1192, May 2009.

[20] Philippe G. Humbert, Marek Haftek, Pierre Creidi, Charles Lapiere, Betty Nusgens, Alain Richard, Daniel Schmitt, Andre Rougier, and Hassan Zahouani. Topical ascorbic acid on photoaged skin. Clinical, topographical and ultrastructural evaluation: double-blind study vs. placebo. Experimental Dermatology, 12(3):237-244, June 2003.

[21] Joshua Abraham and Paul Kwan. Fingerprint Matching using A Hybrid Shape and Orientation Descriptor. pages 25-56, 2011.

[22] Robert Krupinski. Small-Size Skin Features for Motion Tracking. Przeglad Elektrotechniczny, 1(2):46-48, February 2015.

[23] Jean-Luc Leveque, Emmanuelle Xhauflaire-Uhoda, and Gerard Pierard. Skin capacitance imaging, a new technique for investigating the skin surface. European Journal of Dermatology, 16(5):500-506, October 2006.

[24] Li Li, Sophie Mac-Mary, David Marsaut, Jean Marie Sainthillier, Stephanie Nouveau, Tijani Gharbi, Olivier de Lacharriere, and Philippe Humbert. Age-related changes in skin topography and microcirculation. Archives of Dermatological Research, 297(9):412-416, December 2005.

[25] Kenneth C. Littrell, James M. Gallas, Gerry W. Zajac, and Pappannan Thiyagarajan. Structural Studies of Bleached Melanin by Synchrotron Small-angle X-ray Scattering. Photochemistry and Photobiology, 77(2):115-120, February 2003.

[26] Mark E. Lockhart, Michelle L. Robbin, Naomi S. Fineberg, Charles G. Wells, and Michael Allon. Cephalic Vein Measurement Before Forearm Fistula Creation: Does Use of a Tourniquet to Meet the Venous Diameter Threshold Increase the Number of Usable Fistulas? Journal of Ultrasound in Medicine, 25(12):1541-1545, December 2006.

[27] Thomas Niewiara. (Reference #7427795) What USB cameras are suitable with Vision Acquisition module for myRio?, October 2014.

[28] Nina Otberg, Heike Richter, Hans Schaefer, Ulrike Blume-Peytavi, Wolfram Sterry, and Jurgen Lademann. Variations of Hair Follicle Size and Distribution in Different Body Sites. Journal of Investigative Dermatology, 122(1):14-19, January 2004.

[29] Gerald E. Pierard, Isabelle Uhoda, and Claudine Pierard-Franchimont. From skin microrelief to wrinkles. An area ripe for investigation. Journal of Cosmetic Dermatology, 2(1):21-28, January 2003.

[30] C. Pierard-Franchimont, F. Cornil, J. Dehavay, F. Deleixhe-Mauhin, B. Letot, and G. E. Pierard. Climacteric skin ageing of the face - a prospective longitudinal comparative trial on the effect of oral hormone replacement therapy. Maturitas, 32(2):87-93, June 1999.

[31] Adrian Podoleanu, J. Rogers, David Jackson, and Shane Dunne. Three dimensional OCT images from retina and skin. Optics Express, 7(9):292, October 2000.

[32] P. Quatresooz, L. Thirion, C. Pierard-Franchimont, and G. E. Pierard. The riddle of genuine skin microrelief and wrinkles. International Journal of Cosmetic Science, 28(6):389-395, December 2006.

[33] Michele Setaro and Adele Sparavigna. Irregularity skin index (ISI): a tool to evaluate skin surface texture. Skin Research and Technology, 7(3):159-163, August 2001.

[34] Steven P. Sparagana and E. Steve Roach. Tuberous sclerosis complex. [Review]. Current Opinion in Neurology, 13(2):115-119, April 2000.

[35] Dan E. Spivack, Patrick Kelly, John P. Gaughan, and Paul S. van Bemmelen. Mapping of Superficial Extremity Veins: Normal Diameters and Trends in a Vascular Patient-Population. Ultrasound in Medicine & Biology, 38(2):190-194, February 2012.

[36] Georgios N. Stamatas, Janeta Nikolovski, Michael A. Luedtke, Nikiforos Kollias, and Benjamin C. Wiegand. Infant Skin Microstructure Assessed In Vivo Differs from Adult Skin in Organization and at the Cellular Level. Pediatric Dermatology, 27(2):125-131, March 2010.

[37] Jiuai Sun, Melvyn Smith, Lyndon Smith, Louise Coutts, Rasha Dabis, Christopher Harland, and Jeffrey Bamber. Reflectance of human skin using colour photometric stereo: with particular application to pigmented lesion analysis. Skin Research and Technology, 14(2):173-179, May 2008.

[38] Shih-Yu Sun. Ultrasound probe localization by tracking skin features. Thesis, Massachusetts Institute of Technology, 2014.

[39] Adrian Buganza Tepole, Michael Gart, Chad A. Purnell, Arun K. Gosain, and Ellen Kuhl. Multi-view stereo analysis reveals anisotropy of prestrain, deformation, and growth in living skin. Biomechanics and Modeling in Mechanobiology, pages 1-13, January 2015.

[40] John R. Vacca. Handbook of Sensor Networking: Advanced Technologies and Applications. CRC Press, January 2015.

[41] Patricia Casarolli Valery, Rachel Neale, Gail Williams, Nirmala Pandeya, Greg Siller, and Adele Green. The Effect of Skin Examination Surveys on the Incidence of Basal Cell Carcinoma in a Queensland Community Sample: A 10-Year Longitudinal Study. Journal of Investigative Dermatology Symposium Proceedings, 9(2):148-151, March 2004.

[42] Jeanette M. Waller and Howard I. Maibach. Age and skin structure and function, a quantitative approach (I): blood flow, pH, thickness, and ultrasound echogenicity. Skin Research and Technology, 11(4):221-235, November 2005.

[43] Lin Zhang, Lei Zhang, David Zhang, and Hailong Zhu. Online finger-knuckle-print verification for personal authentication. Pattern Recognition, 43(7):2560-2571, July 2010.

[44] Yaobin Zou, Enmin Song, and Renchao Jin. Age-dependent changes in skin surface assessed by a novel two-dimensional image analysis. Skin Research and Technology, 15(4):399-406, November 2009.

[45] Yaobin Zou, Enmin Song, Guokuan Li, and Renchao Jin. Automatic Detection of Fine Hairs in Skin Aging Analysis. In The 2nd International Conference on Bioinformatics and Biomedical Engineering (ICBBE 2008), pages 2349-2352, May 2008.