SIM UNIVERSITY
SCHOOL OF SCIENCE AND TECHNOLOGY
AUTOMATED DETECTION OF DIABETIC
RETINOPATHY USING DIGITAL FUNDUS
IMAGES
STUDENT: E0604276 (PI NO.)
SUPERVISOR: DR RAJENDRA ACHARYA UDYAVARA
PROJECT CODE: JAN2010/BME/0016
A project report submitted to SIM University
in partial fulfilment of the requirements for the degree of
Bachelor of Biomedical Engineering
November 2010
TABLE OF CONTENTS

ABSTRACT
ACKNOWLEDGEMENTS
LIST OF FIGURES
LIST OF TABLES

CHAPTER ONE: AIMS AND INTRODUCTION
1.1 Background
1.2 Objectives
1.3 Scope

CHAPTER TWO: LITERATURE
2.1 Anatomy structure of the human eye
2.1.1 The Cornea
2.1.2 The Aqueous Humor
2.1.3 The Iris
2.1.4 The Pupil
2.1.5 The Lens
2.1.6 The Vitreous Humor
2.1.7 The Sclera
2.1.8 The Optic Disc
2.1.9 The Retina
2.1.10 Macula
2.1.11 Fovea
2.2 Diabetic Retinopathy (DR) and stages
2.3 Diabetic Retinopathy (DR) features
2.3.1 Blood Vessels
2.3.2 Microaneurysms
2.3.3 Exudates
2.4 Diabetic Retinopathy (DR) examination methods
2.4.1 Ophthalmoscopy (Indirect and Direct)
2.4.2 Fluorescein Angiography
2.4.3 Fundus Photography
2.5 Diabetic Retinopathy (DR) treatment
2.5.1 Scatter Laser treatment
2.5.2 Vitrectomy
2.5.3 Focal Laser treatment
2.5.4 Laser Photocoagulation

CHAPTER THREE: METHODS AND MATERIALS
3.1 System block diagram
3.2 Image processing techniques
3.2.1 Image preprocessing
3.2.2 Structuring Element
3.2.2.1 Disk shaped Structuring Element (SE)
3.2.2.2 Ball shaped Structuring Element (SE)
3.2.2.3 Octagon shaped Structuring Element (SE)
3.2.3 Morphological image processing
3.2.3.1 Morphological operations
3.2.3.2 Dilation and Erosion
3.2.3.3 Dilation
3.2.3.4 Erosion
3.2.3.5 Opening and Closing
3.2.4 Thresholding
3.2.5 Edge detection
3.2.6 Median filtering
3.3 Feature extraction
3.3.1 Blood vessels detection
3.3.2 Microaneurysms detection
3.3.3 Exudates detection
3.3.4 Texture analysis
3.4 Significance test
3.4.1 Student's t-test
3.5 Classification
3.5.1 Fuzzy
3.5.2 Gaussian Mixture Model (GMM)

CHAPTER FOUR: RESULTS
4.1 Graphical User Interface (GUI)

CHAPTER FIVE: CONCLUSION AND RECOMMENDATION

CHAPTER SIX: REFLECTIONS

REFERENCES

APPENDIX A: BOX PLOT FOR FEATURES (AREA)
APPENDIX B: BLOOD VESSELS MATLAB CODE
APPENDIX C: MICROANEURYSMS MATLAB CODE
APPENDIX D: EXUDATES MATLAB CODE
APPENDIX E: TEXTURES MATLAB CODE
APPENDIX F: MEETING LOGS
APPENDIX G: GANTT CHART
ABSTRACT
Diabetic retinopathy (DR) is a cause of blindness resulting from diabetes. The main aim of this project is to develop a system that automates the detection of DR using fundus images. The fundus images are first processed with morphological processing techniques and texture analysis to extract features such as the areas of blood vessels, exudates and microaneurysms, together with texture measures. A significance test is applied to the features to determine which of them are statistically significant (p ≤ 0.05). The selected features are then input to fuzzy and GMM classifiers for automatic classification. Based on a percentage of correct data of 85.2% and an average classification rate of 85.2%, the better classifier is then used for the final graphical user interface (GUI).
ACKNOWLEDGEMENTS
I would like to thank my family for their support and encouragement.
I would like to thank the National University Hospital (NUH) of Singapore for providing me with the fundus images for this project.
I would like to thank Unisim for the school facilities.
I would like to thank Fabian Pang for his patience and guidance on fuzzy and GMM
classification.
I would like to thank Jacqueline Tham, Vicky Goh, Mabel Loh, Brenda Ang and
Audrey Tan for their moral support and encouragement.
Last but most importantly, I would like to thank my project supervisor, Dr Rajendra
Acharya Udyavara for his kindness, patience, guidance, advice and enlightenment.
LIST OF FIGURES

Figure 2.1: Anatomy structure of the eye
Figure 2.1.10: Location of macula, fovea and optic disc
Figure 2.3.1: Retinal blood vessels
Figure 2.3.2: Microaneurysms in DR
Figure 2.3.3: Exudates in DR
Figure 3.1: System block diagram for the detection and classification of diabetic retinopathy
Figure 3.2.1a: Original image (left) and its histogram (right)
Figure 3.2.1b: Image after CLAHE (left) and its histogram (right)
Figure 3.2.2.1: Disk shaped structuring element
Figure 3.2.2.2: Ball shaped structuring element (nonflat ellipsoid)
Figure 3.2.2.3: Octagon shaped structuring element
Figure 3.2.3.3a: Original image
Figure 3.2.3.3b: Image after dilation with disk shaped SE
Figure 3.2.3.4a: Original image
Figure 3.2.3.4b: Image after erosion with disk shaped SE
Figure 3.2.3.5a: Opening operation with disk shaped SE
Figure 3.2.3.5b: Closing operation with disk shaped SE
Figure 3.2.4a: Original image
Figure 3.2.4b: Image with too high threshold value
Figure 3.2.4c: Image with too low threshold value
Figure 3.2.5a: Original image
Figure 3.2.5b: Sobel
Figure 3.2.5c: Prewitt
Figure 3.2.5d: Roberts
Figure 3.2.5e: Laplacian of Gaussian (LoG)
Figure 3.2.5f: Canny
Figure 3.2.6a: Illustration of a 3 x 3 median filter
Figure 3.2.6b: Original image (left) and image after median filtering (right)
Figure 3.3.1a: System block diagram for detecting blood vessels
Figure 3.3.1b: Normal retinal fundus image
Figure 3.3.1c: Green component
Figure 3.3.1d: Inverted green component
Figure 3.3.1e: Image after CLAHE
Figure 3.3.1f: Image after opening operation
Figure 3.3.1g: Image after subtraction
Figure 3.3.1h: Image after thresholding
Figure 3.3.1i: Image after median filtering
Figure 3.3.1j: Final image
Figure 3.3.1k: Final image (inverted)
Figure 3.3.2a: System block diagram for detecting microaneurysms
Figure 3.3.2b: Abnormal retinal fundus image
Figure 3.3.2c: Red component
Figure 3.3.2d: Inverted red component
Figure 3.3.2e: Image after Canny edge detection
Figure 3.3.2f: Image with boundary
Figure 3.3.2g: Image after boundary subtraction
Figure 3.3.2h: Image after filling up the holes or gaps
Figure 3.3.2i: Image after subtraction
Figure 3.3.2j: Blood vessels detection
Figure 3.3.2k: Blood vessels after edge detection
Figure 3.3.2l: Image after subtraction
Figure 3.3.2m: Image after filling holes or gaps
Figure 3.3.2n: Final image
Figure 3.3.3a: System block diagram for detecting exudates
Figure 3.3.3b: Abnormal retinal fundus image
Figure 3.3.3c: Green component
Figure 3.3.3d: Image after closing operation
Figure 3.3.3e: Image after column wise neighbourhood operation
Figure 3.3.3f: Image after thresholding
Figure 3.3.3g: Image after morphological closing
Figure 3.3.3h: Image after Canny edge detection
Figure 3.3.3i: Image after ROI
Figure 3.3.3j: Image after removing optic disc
Figure 3.3.3k: Image after removing border
Figure 3.3.3l: Final image
Figure 3.5: Block diagram of training and testing data
Figure 3.5.2: Block diagram of GMM method
Figure 4: Graphical plot for average percentage classification results from two classifiers
Figure 4.1: GUI
LIST OF TABLES

Table 2.2: Summary of the features of diabetic retinopathy
Table 3.2.3.2: Rules for dilation and erosion
Table 3.2.5: Methods and description of various edge detection algorithms
Table 3.4.1: Student's t-test results
Table 3.5.1a: testing1, testing2 and testing3 data output using fuzzy classifier
Table 3.5.1b: testing1 data output calculation using fuzzy classifier
Table 3.5.1c: testing2 data output calculation using fuzzy classifier
Table 3.5.1d: testing3 data output calculation using fuzzy classifier
Table 3.5.2a: testing1, testing2 and testing3 data output using GMM classifier
Table 3.5.2b: testing1 data output calculation using GMM classifier
Table 3.5.2c: testing2 data output calculation using GMM classifier
Table 3.5.2d: testing3 data output calculation using GMM classifier
Table 4a: Fuzzy classification results
Table 4b: GMM classification results
CHAPTER ONE
AIMS AND INTRODUCTION
1.1 BACKGROUND
Diabetes mellitus, commonly known as diabetes, is a chronic systemic disease of disordered metabolism of carbohydrate, protein and fat[21]. It is most notably known as a condition in which a person has a high blood sugar (glucose) level, as a result of the body either not being able to produce enough insulin (type 1, insulin-dependent diabetes mellitus or IDDM[48]) or being resistant to insulin (type 2, non-insulin-dependent diabetes mellitus or NIDDM[48]). Diabetes imposes a considerable disease burden[32], especially in developed countries. According to the Ministry of Health (MOH) in Singapore, 8.2% of the total population suffered from diabetes in 2004[32].
Diabetic retinopathy (DR) is one of the complications resulting from prolonged diabetes, usually appearing after ten to fifteen years of the disease. In DR, the high glucose level, or hyperglycemia, damages the tiny blood vessels inside the retina. These tiny blood vessels leak blood and fluid onto the retina, forming features such as microaneurysms, haemorrhages, hard exudates, cotton wool spots or venous loops[47]. In Singapore, DR affects about 60% of patients who have had diabetes for 15 years or more, and a percentage of these are at risk of developing blindness[44]. Despite these intimidating statistics, research indicates that at least 90% of new cases could be prevented with proper and vigilant treatment and monitoring of the eyes[50].
Laser photocoagulation is an example of a surgical method that can reduce the risk of blindness in people who have proliferative retinopathy[9]. Nevertheless, it is of vital importance for diabetic patients to have regular eye checkups. Current examination methods used to detect and grade retinopathy include ophthalmoscopy (indirect and direct)[23], photography (fundus images) and fluorescein angiography. These methods of detection and assessment of diabetic retinopathy are manual, expensive and require trained ophthalmologists.
Therefore, it is important to have an automatic method that detects diabetic retinopathy at an early stage, so that its progression can be retarded and blindness prevented, thus encouraging improvement in diabetic control. It can also significantly reduce the total annual economic cost of diabetes.
1.2 OBJECTIVES
The objective of this project is to implement automated detection of diabetic retinopathy (DR) using digital fundus images. MATLAB is used to extract and detect features such as blood vessels, microaneurysms, exudates and textures, which determine two general classes: normal or abnormal (DR) eye. Early detection of diabetic retinopathy enables medication or laser therapy to be performed to prevent or delay visual loss.
1.3 SCOPE
The scope of this project involves using various MATLAB imaging techniques (e.g. converting the image to binary format, erosion, dilation, boundary detection) to obtain the desired final image and the area of each feature (blood vessels, microaneurysms, exudates and textures), and then applying a significance test (Student's t-test) to the extracted values to determine which of them are statistically significant. Next, the features selected by the Student's t-test are fed into the classifiers (fuzzy and Gaussian Mixture Model, or GMM) to obtain the average classification rate, sensitivity and specificity, and to classify the images into normal and abnormal classes.

Lastly, the data collected is used to develop a graphical user interface (GUI) that displays whether an eye image is normal or abnormal (DR), based on the best classifier.
CHAPTER TWO
LITERATURE
This chapter discusses the structure of the eye, the definition and stages of diabetic retinopathy (DR), examination and treatment methods, and DR features.
2.1 ANATOMY STRUCTURE OF THE HUMAN EYE
The eye is a hollow, spherical organ about 2.5 cm in diameter. Its wall is composed of three layers, and its interior spaces are filled with fluids that support the walls and maintain the shape of the eye[45]. Figure 2.1 shows the cross-sectional structure of the eye. The eyes are so important that four-fifths of all the information the brain receives comes from them. Section 2.1 explains some of the important parts of the eye.
Figure 2.1: Anatomy structure of the eye[3]
2.1.1 THE CORNEA
The cornea is a transparent medium at the front of the eye, covering the iris, pupil and anterior chamber, that helps to focus incoming light[20]; it has a water content of 78%[38]. The cornea is elliptical in shape, with vertical and horizontal diameters of 11 mm and 12 mm, respectively[38]. It is supplied with oxygen and nutrients through the tear fluid rather than through blood vessels[28]; therefore, it contains no blood vessels. The function of the cornea is to refract and transmit light[38].
2.1.2 THE AQUEOUS HUMOR
The aqueous humor contains aqueous fluid in the front part of the eye between the lens
and the cornea. The aqueous fluid’s main function is to supply the cornea and the lens with
nutrients and oxygen[28].
2.1.3 THE IRIS
The iris is a thin, pigmented, circular structure in the eye which regulates the amount of light that enters the eye[28]. The function of the iris is to control the size of the pupil, adjusting it to the intensity of the lighting conditions[38]. By expanding the pupil, more light can enter; this reflex, known as the accommodation reflex[28], expands the pupil to allow more light to enter when focusing on distant objects or in darkness.
2.1.4 THE PUPIL
The pupil is a hole in the center of the iris. The size of the pupil determines the amount
of light that enters the eye. The pupil size is controlled by the dilator and sphincter muscles
of the iris[42]. It appears black because most of the light entering the pupil is absorbed by
the tissues inside the eye[36].
2.1.5 THE LENS
The lens is a transparent, biconvex structure in the eye that, along with the cornea, helps to refract light so that it is focused on the retina[27]. By changing its shape, the lens can change the focal distance of the eye so that it can focus on objects at different distances, thus allowing a sharp image to form on the retina.
2.1.6 THE VITREOUS HUMOR
The vitreous humor is the clear fluid that fills the eyeball between the lens and the retina. It is the largest domain of the human eye, and the fluid consists of more than 95% water.
2.1.7 THE SCLERA
The sclera is the white, opaque tissue that acts as the eye's protective outer coat. Six tiny muscles connect to it around the eye and control the eye's movements. The optic nerve is attached to the sclera at the very back of the eye[42].
2.1.8 THE OPTIC DISC
The optic disc, also known as the optic nerve head or the blind spot, is where the optic nerve attaches to the eye[28]. There are no light-sensitive rods or cones to respond to a light stimulus at this point. This causes a break in the visual field called "the blind spot" or the "physiological blind spot"[35]. Figure 2.1.10 shows the location of the optic disc.
2.1.9 THE RETINA
The retina is a thin layer of neural cells[38] that lines the inner back of the eye. It is light sensitive and absorbs light, and the image signals it receives are sent to the brain. The retina contains two kinds of light receptors: rods and cones. The rods absorb light in black and white and are responsible for night vision. The cones are colour sensitive, absorb stronger light, and are responsible for colour vision.
2.1.10 MACULA
The macula is the area around the fovea[28]. It is an oval-shaped highly pigmented
yellow spot near the center of the retina[31] as shown in Figure 2.1.10. It is a small and
highly sensitive part of the retina responsible for detailed central vision.
Figure 2.1.10: Location of macula, fovea and optic disc
2.1.11 FOVEA
The fovea is the most central part of the macula. The visual cells located in the fovea are packed the most tightly, resulting in optimal sharpness of vision. Unlike the rest of the retina, it has no blood vessels to interfere with the passage of light striking the foveal cone mosaic[15]. Figure 2.1.10 shows the location of the fovea.
2.2 DIABETIC RETINOPATHY (DR) AND STAGES
Diabetes is a chronic state caused by an abnormal increase in the glucose level in the blood, which damages the blood vessels. The tiny blood vessels that nourish the retina are damaged by the increased glucose level[47]. Diabetic retinopathy (DR) is one of the complications that affect the retinal capillaries: the arterial walls thicken and blood flow to the eye becomes blocked.

DR can be broadly classified into non-proliferative diabetic retinopathy (NPDR) and proliferative diabetic retinopathy (PDR)[47], as shown in Figure 2.2. There are four DR stages:
1. Stage 1 – Background diabetic retinopathy (also termed mild or moderate non-proliferative retinopathy). At least one microaneurysm, with or without retinal haemorrhages, hard exudates, cotton wool spots or venous loops, will be present[6,7].
2. Stage 2 – Moderate non-proliferative retinopathy. Numerous microaneurysms and
retinal haemorrhages will be present. Cotton wool spots and a limited amount of
venous beading can also be seen[47]. Some blood vessels are starting to become
blocked.
3. Stage 3 – Severe non-proliferative retinopathy. Many features, such as haemorrhages and microaneurysms, are present in the retina. The other features are also present, but there is as yet little growth of new blood vessels; many more blood vessels are now blocked, and these areas of the retina start to send signals to the body to grow new blood vessels for nourishment[38].
4. Stage 4 – Proliferative retinopathy. PDR is the advanced stage, where the signals sent by the retina for nourishment trigger the growth of new blood vessels[22]. The main blood vessels become stiff and blood flow becomes blocked. Small pockets of blood begin to form around the boundary of the main blood vessels. These fragile blood vessels have thin walls, and when the walls burst, blood spatters form. Exudates (proteins and other lipids) and blood from the leakage form around the retina and, in some cases, the leakage may cover the fovea, resulting in sudden severe vision loss and blindness.
Figure 2.2: Stages of DR fundus images[51]
The features of each stage are summarised in Table 2.2.
Background diabetic retinopathy (mild/moderate non-proliferative diabetic retinopathy): haemorrhages, oedema, microaneurysms, exudates, cotton wool spots, dilated veins.

Pre-proliferative diabetic retinopathy (severe/very severe non-proliferative retinopathy): deep retinal haemorrhages in four quadrants, venous abnormalities, intraretinal microvascular abnormalities (IRMA), multiple cotton wool spots.

Proliferative diabetic retinopathy (PDR): new vessels on optic disc, new vessels elsewhere.

Advanced diabetic eye disease (complications of proliferative diabetic retinopathy): vitreous haemorrhage, retinal detachment, neovascular glaucoma.

Table 2.2: Summary of the features of diabetic retinopathy[18]
2.3 DIABETIC RETINOPATHY (DR) FEATURES
There are many features present in a DR eye. However, since the main objective of this project is an automated system for early DR detection based on a subset of extracted features, only blood vessels, microaneurysms, exudates and textures (covered in the feature extraction section) will be discussed.
2.3.1 BLOOD VESSELS
In the normal retina, the main function of the blood vessels is to carry nutrients such as oxygen and blood to the eye (Figure 2.3.1). In DR, the stimulation of the growth of new, fragile blood vessels is due to the blockage and thickening of the main blood vessels: when the main blood vessels are blocked, new vessels are triggered to grow in an attempt to send oxygen and nourishment to the eye. However, these new blood vessels are very fragile and abnormal. They are prone to rupture and leak fluids (proteins and lipids) and blood into the eye. This may not hinder the patient's sight if the leakage does not occur on the fovea or macula. However, if the blood spatters happen to fall on the fovea or macula, sudden loss of vision in that eye occurs, as the spatters block all light entering the eye.
Figure 2.3.1: Retinal blood vessels
2.3.2 MICROANEURYSMS
Microaneurysms are small, sac-like outpouchings of the small vessels and capillaries[25], as shown in Figure 2.3.2. They are an early feature of DR and appear as tiny red dots in fundus photographs, owing to the ballooning of the capillaries. They represent a small weakness in the retinal capillary wall that leaks blood and serum[18].
Figure 2.3.2: Microaneurysms in DR
2.3.3 EXUDATES
Exudates, often described as hard exudates, are deposits of extravasated plasma proteins, especially lipoproteins, as shown in Figure 2.3.3. They leak into the retinal tissue with serum and are left behind as the oedema fluid is absorbed; eventually, exudates are cleared from the retina by macrophages[18]. They appear as yellow-white dots within the retina, and the yellow deposits may be seen either as individual spots or in clusters[25], usually near the optic disc.

Sometimes the exudates form on the macula or fovea; as a result, there is sudden loss of vision in that eye, regardless of the diabetic retinopathy stage.
Figure 2.3.3: Exudates in DR
2.4 DIABETIC RETINOPATHY (DR) EXAMINATION METHODS
There are a few types of DR examination methods, mainly ophthalmoscopy (indirect and direct), fluorescein angiography and fundus photography.
2.4.1 OPHTHALMOSCOPY (INDIRECT AND DIRECT)
Direct ophthalmoscopy is an examination method performed by a specialist in a dark room. A beam of light is shone through the pupil using an ophthalmoscope, allowing the specialist to view the back of the eyeball.

Indirect ophthalmoscopy is performed with a head- or spectacle-mounted source of illumination positioned in the middle of the forehead[26]. A bright light is shone into the eye using the instrument on the forehead, and a condensing lens is placed before the eye to intercept the fundus reflex. A real, inverted image of the fundus forms between the examiner and the patient[26].
2.4.2 FLUORESCEIN ANGIOGRAPHY
Fluorescein angiography is a test that allows the blood vessels at the back of the eye to be photographed as a fluorescent dye is injected into the bloodstream via the hand or arm[49]. The pupils are dilated with eye drops and the yellow dye (fluorescein sodium) is injected into a vein in the arm[49]. The test is used to examine the blood circulation of the retina by this dye-tracing method.
2.4.3 FUNDUS PHOTOGRAPHY
Fundus photography is the use of a fundus camera to photograph the regions of the vitreous, retina, choroid and optic nerve[16]. Fundus photographs are only considered medically necessary where the results may influence the management of the patient. In general, fundus photography is performed to evaluate abnormalities in the fundus, follow the progress of a disease, plan the treatment of a disease, and assess the therapeutic effect of recent surgery[16]. In this report, the images used for image processing were taken with a fundus camera.
2.5 DIABETIC RETINOPATHY (DR) TREATMENT
Treatment of diabetic retinopathy varies depending on the extent of the disease[10].
During the early stages of DR, no treatment is needed unless macular oedema is present.
However, for advanced DR such as proliferative diabetic retinopathy, surgery is necessary.
2.5.1 SCATTER LASER TREATMENT
Advanced-stage diabetic retinopathy is treated by performing scatter laser treatment. During scatter laser treatment, an ophthalmologist uses a laser to "scatter" many small burns across the retina, which causes leaking and abnormal blood vessels to shrink[10]. This surgical method is used to reduce vision loss. However, if there is a significant amount of haemorrhage, scatter laser treatment is not suitable.
2.5.2 VITRECTOMY
A vitrectomy is performed under either local or general anesthesia. An
ophthalmologist makes a tiny incision in the eye and carefully removes the vitreous gel that
is clouded with blood. After the vitreous gel is removed from the eye, a clear salt solution is
injected to replace the contents[10].
2.5.3 FOCAL LASER TREATMENT
Leakage of fluid from blood vessels can sometimes lead to macular oedema, or
swelling of the retina. Focal laser treatment is performed to treat macular oedema. Several
hundred small burns are placed around the macula in order to reduce the amount of fluid
build-up in the macula[10].
2.5.4 LASER PHOTOCOAGULATION
Laser photocoagulation uses a powerful beam of light which, combined with ophthalmic equipment and lenses, can be focused on the retina[41]. Small bursts of the laser are used to seal leaky blood vessels, destroy abnormal blood vessels, seal retinal tears, and destroy abnormal tissue at the back of the eye[41]. This procedure is used to treat patients in the proliferative diabetic retinopathy stage. The main advantages of this surgical method are the short surgical duration and the fact that the patient can usually resume activities immediately.
CHAPTER THREE
METHODS AND MATERIALS
A total of 60 fundus images from various demographics are used in this project. These fundus images were taken at the ophthalmology department of the National University Hospital (NUH) of Singapore, at a resolution of 720 x 576 pixels.
3.1 SYSTEM BLOCK DIAGRAM
Figure 3.1 shows the system block diagram for the identification of diabetic retinopathy. The input image is processed with image processing techniques in MATLAB, and features such as the areas of blood vessels, microaneurysms, exudates and textures are extracted. The extracted features are then put through Student's t-test to assess their significance (the probability of true significance). The features with high significance are fed to the classifiers (fuzzy and Gaussian Mixture Model, or GMM), which generate the average classification rate, sensitivity, specificity, etc. Lastly, the results generated by the classifiers are used to determine the diabetic retinopathy (DR) class: normal or abnormal.
[Block diagram: input image → image processing techniques → feature extraction (areas of 1. blood vessels, 2. microaneurysms, 3. exudates, 4. textures) → significance test (Student's t-test) → classification (fuzzy and GMM classifiers) → normal / abnormal]

Figure 3.1: System block diagram for the detection and classification of diabetic retinopathy
3.2 IMAGE PROCESSING TECHNIQUES
Image processing techniques are used to enhance the images before morphological image processing and texture analysis. They are also used to reduce image noise, adjust contrast and invert the images.
3.2.1 IMAGE PREPROCESSING
Before image processing is carried out, the fundus images need to be preprocessed to remove the non-uniform background. Non-uniform brightness and variation across the fundus images are the main causes of this non-uniformity. The error is therefore corrected by applying contrast-limited adaptive histogram equalization (CLAHE) to the image before the image processing operations[22].
A histogram is a graph indicating the number of times each gray level occurs in an image. In bright images, for example, the gray levels are clustered at the upper end of the graph, while in darker images they sit at the lower end. When the gray levels are evenly spread across the histogram, the image is well contrasted. CLAHE operates on small regions in the image, called tiles. Each tile's contrast is enhanced, so that the histogram of the output region approximately matches a specified histogram[2]. Figure 3.2.1a shows the fundus image before CLAHE; its histogram shows more bright-level regions than dark-level regions. Figure 3.2.1b shows the fundus image after CLAHE; its histogram shows an evenly distributed brightness.
Figure 3.2.1a: Original image (left) and its histogram (right)
Figure 3.2.1b: Image after CLAHE (left) and its histogram (right)
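As a minimal sketch of this preprocessing step (the file name fundus.png and the choice of the green channel are illustrative assumptions, not the report's exact code), CLAHE can be applied in MATLAB with adapthisteq:

    % Minimal CLAHE sketch; 'fundus.png' is a placeholder file name.
    rgb   = imread('fundus.png');      % 24-bit fundus image
    green = rgb(:,:,2);                % green channel (used later for vessels)
    eq    = adapthisteq(green);        % contrast-limited adaptive hist. eq.
    subplot(2,2,1), imshow(green), subplot(2,2,2), imhist(green)
    subplot(2,2,3), imshow(eq),    subplot(2,2,4), imhist(eq)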
3.2.2 STRUCTURING ELEMENT
A structuring element (SE) is a small binary pattern used in morphology to probe the image. It is a matrix consisting only of 0s and 1s that can have any arbitrary shape and size; the pixels with the value 1 define the neighbourhood[34]. There are two types of SE: a two-dimensional or flat SE is usually specified by an origin, a radius and an approximation value N, while a three-dimensional or nonflat SE is usually specified by a radius (in the x-y plane), a height and an approximation value N. There are many SE shapes, but in this project disk shaped, ball shaped and octagon shaped SEs are used.
3.2.2.1 DISK SHAPED STRUCTURING ELEMENT (SE)
Disk shaped SE, SE = strel('disk', R, N) creates a flat, disk shaped structuring
element, where R specifies the radius. R must be a nonnegative integer. N must be 0, 4, 6,
or 8[8]. Figure 3.2.2.1 shows a disk shaped SE with radius 3 and its centre of origin.
Figure 3.2.2.1: Disk shaped structuring element
3.2.2.2 BALL SHAPED STRUCTURING ELEMENT (SE)
Ball shaped SE, SE = strel('ball', R, H, N) creates a nonflat, ball-shaped structuring
element (actually an ellipsoid) whose radius in the X-Y plane is R and whose height is H.
Note that R must be a nonnegative integer, H must be a real scalar, and N must be an even
nonnegative integer[8]. Figure 3.2.2.2 shows a ball shaped SE with x-y axis as radius and z
axis as height.
Figure 3.2.2.2: Ball shaped structuring element (nonflat ellipsoid)
3.2.2.3 OCTAGON SHAPED STRUCTURING ELEMENT (SE)
Octagon shaped SE, SE = strel('octagon', R) creates a flat, octagonal structuring
element, where R specifies the distance from the structuring element origin to the sides of
the octagon, as measured along the horizontal and vertical axes. R must be a nonnegative
multiple of 3[8]. Figure 3.2.2.3 shows an octagon shaped SE with radius 3 and its centre of
origin.
Figure 3.2.2.3: Octagon shaped structuring element
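The three structuring elements above can be created directly with MATLAB's strel, using the sizes that appear later in this chapter (a sketch; the report's appendices contain the actual calls):

    se_disk = strel('disk', 8);       % flat disk, radius 8
    se_ball = strel('ball', 8, 8);    % nonflat ellipsoid, radius 8, height 8
    se_oct  = strel('octagon', 9);    % flat octagon, R a multiple of 3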
3.2.3 MORPHOLOGICAL IMAGE PROCESSING
Morphological image processing is a branch of image processing that is particularly
useful for analyzing shapes in images[3]. Mathematical morphology is the foundation of
morphological image processing, which consists of a set of operators that transform images
according to size, shape, connectivity, etc.
3.2.3.1 MORPHOLOGICAL OPERATIONS
Morphological operations are used to understand the structure or form of an
image. This usually means identifying objects or boundaries within an image. Morphological
operations play a key role in applications such as machine vision and automatic object
detection[33].
Morphological operations apply a structuring element to an input image, creating an
output image of the same size. In a morphological operation, the value of each pixel in the
output image is based on a comparison of the corresponding pixel in the input image with its
neighbours. By choosing the size and shape of the neighborhood, a morphological operation
can be created that is sensitive to specific shapes in the input image[34]. There are many types
of morphological operations such as dilation, erosion, opening and closing.
3.2.3.2 DILATION AND EROSION
Dilation and erosion are basic morphological processing operations. They are defined in terms of more elementary set operations, but are employed as the basic elements of many algorithms. Both dilation and erosion are produced by the interaction of a structuring element with a set of pixels of interest in the image[19].
Dilation adds pixels to the boundaries of objects in an image, while erosion removes
pixels on object boundaries. The number of pixels added or removed from the objects in an
image depends on the size and shape of the structuring element used to process the image.
In the morphological dilation and erosion operations, the state of any given pixel in the
output image is determined by applying a rule to the corresponding pixel and its neighbours
in the input image. The rule used to process the pixels defines the operation as a dilation or
an erosion[34]. Table 3.2.3.2 shows the operations and the rules.
Dilation: The value of the output pixel is the maximum value of all the pixels in the input pixel's neighborhood. In a binary image, if any of the pixels is set to the value 1, the output pixel is set to 1.

Erosion: The value of the output pixel is the minimum value of all the pixels in the input pixel's neighborhood. In a binary image, if any of the pixels is set to 0, the output pixel is set to 0.

Table 3.2.3.2: Rules for dilation and erosion[34]
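As a small illustration of these rules (grayscale case; the file name is a placeholder assumption):

    % Dilation takes the neighbourhood maximum, erosion the minimum.
    rgb  = imread('fundus.png');
    gray = rgb(:,:,2);                 % e.g. the green component
    se   = strel('disk', 8);
    dil  = imdilate(gray, se);         % bright regions grow
    ero  = imerode(gray, se);          % bright regions shrink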
3.2.3.3 DILATION

Suppose A and B are sets of pixels. Then the dilation of A by B, denoted A ⊕ B, is defined as $A \oplus B = \bigcup_{x \in B} A_x$. This means that A is translated by the coordinates of every point x ∈ B. An equivalent definition is $A \oplus B = \{(x,y)+(u,v) : (x,y) \in A,\ (u,v) \in B\}$. Dilation is commutative: A ⊕ B = B ⊕ A[3]. Figure 3.2.3.3a shows an original fundus image before dilation, and Figure 3.2.3.3b shows the same image after dilation with a disk shaped SE of radius 8: the optic disc becomes more prominent, and exudates can also be seen near the macula.
Figure 3.2.3.3a: Original image
Figure 3.2.3.3b: Image after dilation with disk shaped SE

3.2.3.4 EROSION
Given sets A and B, the erosion of A by B, written A ⊖ B, is defined as $A \ominus B = \{w : B_w \subseteq A\}$[3]. Figure 3.2.3.4a shows an original fundus image before erosion, and Figure 3.2.3.4b shows the same image after erosion with a disk shaped SE of radius 8: the blood vessels become more prominent.
Figure 3.2.3.4a: Original image
Figure 3.2.3.4b: Image after erosion with disk shaped SE

3.2.3.5 OPENING AND CLOSING
Dilation and erosion are often used in combination to implement image processing operations[34]. Erosion followed by dilation is called an opening operation. Opening an image smoothes the contour of an object, breaks narrow isthmuses ("bridges") and eliminates thin protrusions[12]. Dilation followed by erosion is called a closing operation. Closing an image smoothes sections of contours, fuses narrow breaks and long thin gulfs, eliminates small holes in contours and fills gaps in contours[12].

The opening of an image is defined as $A \circ B = (A \ominus B) \oplus B$[3]. Since opening consists of erosion followed by dilation, it can also be defined as $A \circ B = \bigcup \{B_w : B_w \subseteq A\}$[3]. The closing of an image is defined as $A \bullet B = (A \oplus B) \ominus B$[3]. Figure 3.2.3.5a and Figure 3.2.3.5b show the difference between the opening and closing operations on fundus images.
Figure 3.2.3.5a: Opening operation with disk shaped SE
Figure 3.2.3.5b: Closing operation with disk shaped SE
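In MATLAB the two compound operations are available directly; a sketch using the same disk SE (the input file name is assumed):

    % Opening = erosion then dilation; closing = dilation then erosion.
    rgb  = imread('fundus.png');
    gray = rgb(:,:,2);
    se   = strel('disk', 8);
    op   = imopen(gray, se);      % removes thin bright detail
    cl   = imclose(gray, se);     % fills small dark gaps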
3.2.4 THRESHOLDING
Thresholding turns a colour or grayscale image into a 1-bit binary image by setting every pixel in the image to either black or white, depending on its value. The pivotal value used to decide whether any given pixel becomes black or white is the threshold[17].
Thresholding is useful for removing unnecessary detail from an image in order to concentrate on essentials[3]. In the case of the fundus image, removing all gray level information reduces the blood vessels to binary pixels; the aim is to distinguish the blood vessel foreground from the background information. Thresholding can also be used to bring out hidden detail, which is very useful in image regions obscured by similar gray levels.

Choosing an appropriate threshold value is therefore important: a low value may shrink some of the objects or reduce their number, while a high value may include extra background information. Figure 3.2.4a shows the original fundus image (after CLAHE) before thresholding. Figure 3.2.4b shows the same image with too high a threshold value, resulting in too much background information. Figure 3.2.4c shows the same image with too low a threshold value, resulting in missing foreground information.
Figure 3.2.4a: Original image
Figure 3.2.4b: Image with too high threshold value
Figure 3.2.4c: Image with too low threshold value
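A minimal thresholding sketch (the level 0.1 matches the value used later for the blood vessel images; the input file name is assumed):

    % Convert grayscale to binary at a normalized threshold level.
    g  = rgb2gray(imread('fundus.png'));   % placeholder file name
    bw = im2bw(g, 0.1);                    % pixels above 0.1 become white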
3.2.5 EDGE DETECTION
In an image, an edge is a curve that follows a path of rapid change in image intensity. Edges are often associated with the boundaries of objects in a scene[4]. Edge detection is the process of identifying and locating sharp discontinuities in an image[39]. Edges make it possible to measure the size of objects in an image, isolate particular objects from their background, and recognize or classify objects[3].

There are generally six edge detection algorithms: Sobel, Prewitt, Roberts, Laplacian of Gaussian (LoG), zero-cross and Canny. Table 3.2.5 shows the six edge detection methods and their descriptions.
Sobel: Finds edges using the Sobel approximation to the derivative. It returns edges at those points where the gradient of I is maximum.

Prewitt: Finds edges using the Prewitt approximation to the derivative. It returns edges at those points where the gradient of I is maximum.

Roberts: Finds edges using the Roberts approximation to the derivative. It returns edges at those points where the gradient of I is maximum.

Laplacian of Gaussian (LoG): Finds edges by looking for zero crossings after filtering I with a Laplacian of Gaussian filter.

Zero-cross: Finds edges by looking for zero crossings after filtering I with a filter the user specifies.

Canny: Finds edges by looking for local maxima of the gradient of I. The gradient is calculated using the derivative of a Gaussian filter. The method uses two thresholds to detect strong and weak edges, and includes the weak edges in the output only if they are connected to strong edges. This method is therefore less likely than the others to be fooled by noise, and more likely to detect true weak edges.

Table 3.2.5: Methods and description of various edge detection algorithms[14]
Comparing all six edge detection algorithms, the Canny method performs better than the others because it uses two thresholds to detect strong and weak edges; for this reason, the Canny algorithm is chosen for edge detection in this project. Figures 3.2.5a-f show the original image and the Sobel, Prewitt, Roberts, Laplacian of Gaussian (LoG) and Canny edge detection results, respectively. It is apparent that the Canny edge detection method can detect the weak, fine blood vessels.
Figure 3.2.5a: Original image
Figure 3.2.5b: Sobel
Figure 3.2.5c: Prewitt
Figure 3.2.5d: Roberts
Figure 3.2.5e: Laplacian of Gaussian (LoG)
BME499 ENG499 MTD499 ICT499 MTH499 CAPSTONE PROJECT REPORT
34
Figure 3.2.5f: Canny
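A sketch of how such a comparison can be run on the inverted green channel (the file name is an assumption):

    % Compare five of the edge detectors on the same image.
    rgb = imread('fundus.png');
    g   = imcomplement(rgb(:,:,2));        % inverted green component
    for m = {'sobel','prewitt','roberts','log','canny'}
        figure, imshow(edge(g, m{1})), title(m{1})
    end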
3.2.6 MEDIAN FILTERING
Median filtering is a nonlinear operation often used in image processing to reduce "salt and pepper" noise. A median filter is more effective than convolution when the goal is to simultaneously reduce noise and preserve edges[1]. The median of a set is the middle value when the values are sorted; for an even number of values, the median is the mean of the middle two[3].
Figure 3.2.6a shows an illustration of a 3 x 3 median filter for a set of sorted values to
obtain the median value.
    55  70  57
    68 260  63        →  sorted: 55 57 62 63 65 66 68 70 260  →  median: 65
    66  65  62

Figure 3.2.6a: Illustration of a 3 x 3 median filter
This way of obtaining the median value means that very large or very small (noisy) values are replaced by a value closer to their surroundings. Figure 3.2.6b shows the difference before and after applying median filtering: the "salt and pepper" noise in the original image is clearly reduced.
Figure 3.2.6b: Original image (left) and image after median filtering (right)
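The median filtering step can be reproduced with medfilt2 (a sketch with an assumed input file):

    % 3 x 3 median filter: each pixel becomes the median of its 3 x 3 block.
    g  = rgb2gray(imread('fundus.png'));   % placeholder file name
    mf = medfilt2(g, [3 3]);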
3.3 FEATURE EXTRACTION

Features, namely blood vessels, microaneurysms, exudates and textures, are extracted. The steps are explained below.
3.3.1 BLOOD VESSELS DETECTION
Figure 3.3.1a shows the system block diagram of blood vessels detection. The
detailed steps are explained below.
[Block diagram: original image → green component of original image → invert intensity of green component → edge detection (Canny) → border detection → morphological opening using disk SE of radius 8 → image with boundary obtained (after subtracting image with border) → perform CLAHE (adaptive histogram equalization) → morphological opening using ball SE of radius and height 8 → subtraction → thresholding → perform median filtering → fill holes and remove boundary → blood vessels detected → final image and area extracted]

Figure 3.3.1a: System block diagram for detecting blood vessels
All coloured images consist of the RGB (red, green, blue) primary colour channels, and each pixel gets its particular colour from its amounts of red, green and blue. If each colour component has a range of 0-255, the three components together give more than 16 million colours. Each pixel then occupies 24 bits, giving a 24-bit colour image. The fundus images used in this project are 24-bit, 720 x 576 pixels. A normal image, as shown in Figure 3.3.1b, basically consists of blood vessels, the optic disc and the macula, without any other abnormal features. Blood vessel detection is important in the identification of diabetic retinopathy (DR) through image processing techniques.
Figure 3.3.1b: Normal retinal fundus image
Firstly, as part of the image preprocessing step, the green component of the image is extracted, as shown in Figure 3.3.1c, and its intensity is inverted, as shown in Figure 3.3.1d.
Figure 3.3.1c: Green component
Figure 3.3.1d: Inverted green component
After inverting the green component's intensity, edge detection is performed using the Canny method. The border is then detected: a disk shaped structuring element (SE) of radius 8 is created and a morphological opening operation (erosion then dilation) is applied. Next, the eroded image is subtracted from the original image to obtain the border, or boundary.
Afterwards, adaptive histogram equalization is performed to improve the contrast of
the image and to correct uneven illumination as shown in Figure 3.3.1e. A morphological
opening operation (erosion then dilation) is performed using the ball shaped structuring
element (SE) to smooth the background and to highlight the blood vessels as shown in
Figure 3.3.1f.
Figure 3.3.1e: Image after CLAHE
Figure 3.3.1f: Image after opening operation
The opened image is then subtracted from the adaptive-histogram-equalized (CLAHE) image. As shown in Figure 3.3.1g, the resulting image has higher intensity in the foreground (blood vessels) than in the background, providing contrast.
Figure 3.3.1g: Image after subtraction
From the subtracted image, the image is converted from grayscale to binary by thresholding at a value of 0.1, as shown in Figure 3.3.1h. Median filtering is performed to remove "salt and pepper" noise, as shown in Figure 3.3.1i. The boundary is then obtained by subtracting the disk-SE border from the median-filtered image.
Figure 3.3.1h: Image after thresholding
Figure 3.3.1i: Image after median filtering
The border is then eliminated, after filling the holes that do not touch the edge, to obtain the final image shown in Figure 3.3.1j. The pixel values of the image are inverted to show only the blood vessels on a black background, as in Figure 3.3.1k. The detailed MATLAB code is attached in Appendix B.
Figure 3.3.1j: Final image
Figure 3.3.1k: Final image (inverted)
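A condensed sketch of this pipeline is shown below; the full code is in Appendix B, and this version substitutes imclearborder for the report's explicit border subtraction, so details may differ:

    % Condensed blood vessel detection sketch.
    rgb = imread('fundus.png');                 % placeholder file name
    g   = imcomplement(rgb(:,:,2));             % inverted green component
    geq = adapthisteq(g);                       % CLAHE
    bg  = imopen(geq, strel('ball', 8, 8));     % smoothed background (ball SE)
    d   = imsubtract(geq, bg);                  % vessels stand out
    bw  = im2bw(d, 0.1);                        % threshold at 0.1
    bw  = medfilt2(bw, [3 3]);                  % remove salt-and-pepper noise
    bw  = imclearborder(bw);                    % drop border-touching structures
    vesselArea = sum(bw(:));                    % area feature (pixel count)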
3.3.2 MICROANEURYSMS DETECTION
Figure 3.3.2a shows the system block diagram of microaneurysms detection. The
detailed steps are explained below.
[Block diagram: original image → red component of original image → invert intensity of red component → edge detection (Canny) → border detection → morphological opening using disk SE of radius 8 → remove boundary → fill holes → subtract image with holes from image with filled holes → blood vessels detection → edge detection (Canny) on blood vessels → subtract image without boundary with blood vessels after edge detection → fill holes → subtract image with filled holes from the image with microaneurysms and unwanted artifacts → final image and area extracted]

Figure 3.3.2a: System block diagram for detecting microaneurysms
Microaneurysms appear as tiny red dots on the retinal fundus image, as shown in Figure 3.3.2b; therefore the red component of the RGB image is used to identify them, as shown in Figure 3.3.2c. Next, the intensity is inverted, as shown in Figure 3.3.2d. As in blood vessel detection, the Canny method is used for edge detection, as shown in Figure 3.3.2e.
Figure 3.3.2b: Abnormal retinal fundus image
Figure 3.3.2c: Red component
Figure 3.3.2d: Inverted red component
The boundary is detected by filling the holes, and a disk shaped structuring element (SE) of radius 8 is used in a morphological opening operation (erosion then dilation), as shown in Figure 3.3.2f. The edge-detected image is then subtracted from the image with boundary to obtain the image without boundary, as shown in Figure 3.3.2g.
Figure 3.3.2e: Image after Canny edge detection
Figure 3.3.2f: Image with boundary
Figure 3.3.2g: Image after boundary subtraction
After this, the holes or gaps are filled, leaving the microaneurysms and other unwanted artifacts, as shown in Figure 3.3.2h. The image before hole filling is then subtracted from the image with the filled holes; the resulting image contains the microaneurysms and other unwanted artifacts without the edges, as shown in Figure 3.3.2i.
Figure 3.3.2h: Image after filling up the holes or gaps
Figure 3.3.2i: Image after subtraction
The blood vessels are detected using the method described in Section 3.3.1; Figure 3.3.2j shows the detected blood vessels. The Canny edge detection method is then applied to the blood vessel image, as shown in Figure 3.3.2k. This image is then subtracted from the image after boundary subtraction (Figure 3.3.2g); the result is shown in Figure 3.3.2l.
Figure 3.3.2j: Blood vessels detection
Figure 3.3.2k: Blood vessels after edge detection
Figure 3.3.2l: Image after subtraction
Finally, after filling the holes or gaps, as shown in Figure 3.3.2m, this image is subtracted from the image containing the microaneurysms and unwanted artifacts to obtain the final image with only the microaneurysms, as shown in Figure 3.3.2n. The detailed MATLAB code is attached in Appendix C.
Figure 3.3.2m: Image after filling holes or gaps
Figure 3.3.2n: Final image
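A condensed sketch of the core of this pipeline (the full code is in Appendix C; the file name, the imclearborder substitution and the vesselEdges variable are assumptions):

    % Condensed microaneurysm detection sketch.
    rgb  = imread('fundus.png');           % placeholder file name
    r    = imcomplement(rgb(:,:,1));       % inverted red component
    e    = edge(r, 'canny');               % edges of the small red dots
    f    = imfill(e, 'holes');             % fill enclosed regions
    dots = f & ~e;                         % region interiors without edges
    dots = imclearborder(dots);            % stand-in for border subtraction
    % dots = dots & ~vesselEdges;          % would remove vessel edges (3.3.1)
    maArea = sum(dots(:));                 % microaneurysm area feature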
3.3.3 EXUDATES DETECTION
Figure 3.3.3a shows the system block diagram of exudates detection. The detailed
steps are explained below.
[Block diagram: original image → green component of original image → morphological closing using octagon shaped SE of radius 9 → column wise neighbourhood operation → thresholding → morphological closing using disk SE of radius 10 → edge detection (Canny) → ROI of radius 82 → remove optic disc → remove border → morphological erosion using disk shaped SE of radius 3 → final image and area extracted]

Figure 3.3.3a: System block diagram for detecting exudates
Exudates appear as yellowish dots in the fundus images, as shown in Figure 3.3.3b, and are easier to spot than microaneurysms. To detect exudates, the green component of the RGB image is first extracted (as in blood vessel detection), as shown in Figure 3.3.3c, and an octagon shaped structuring element (SE) of size 9 is created. A morphological closing is performed with this SE, as shown in Figure 3.3.3d. The exudates clearly become more prominent than the background, although the optic disc also remains, as its grey levels are similar.
Figure 3.3.3b: Abnormal retinal fundus image
Figure 3.3.3c: Green component
Figure 3.3.3d: Image after closing operation
A column-wise neighbourhood operation is performed, which first rearranges the image into columns; the parameter 'sliding' indicates that overlapping neighbourhoods are used[3]. This operation removes most of the unwanted artifacts, leaving only the border, the exudates and the optic disc, as shown in Figure 3.3.3e.
Figure 3.3.3e: Image after column wise neighbourhood operation
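One plausible form of this step uses colfilt with the 'sliding' option (the 5 x 5 window, the per-neighbourhood statistic and the file name are assumptions; the report's code is in Appendix D):

    % Column-wise neighbourhood operation over sliding blocks.
    rgb = imread('fundus.png');                    % placeholder file name
    gc  = double(imclose(rgb(:,:,2), strel('octagon', 9)));
    cw  = colfilt(gc, [5 5], 'sliding', @std);     % std over each 5x5 block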
Next, thresholding is performed on the image with a threshold value of 0.7, as shown in Figure 3.3.3f. Morphological closing with a disk shaped structuring element (SE) of size 10 is then used to fill up the holes or gaps in the exudates, as shown in Figure 3.3.3g.
Figure 3.3.3f: Image after thresholding
Figure 3.3.3g: Image after morphological closing
The optic disc contains the highest pixel values in the image. Therefore, to remove the optic disc, edge detection with the Canny method (Figure 3.3.3h) is used together with a region of interest (ROI). First, a radius of 82 is defined, since most optic discs are about 80 x 80 pixels in size, as shown in Figure 3.3.3i. Next, the optic disc is removed, together with the border, as shown in Figure 3.3.3j and Figure 3.3.3k.
Figure 3.3.3h: Image after Canny edge detection
Figure 3.3.3i: Image after ROI
Figure 3.3.3j: Image after removing optic disc
Figure 3.3.3k: Image after removing border
Finally, a morphological erosion operation with a disk shaped structuring element (SE) of size 3 is performed to obtain the final image containing only the exudates, as shown in Figure 3.3.3l. The detailed MATLAB code is attached in Appendix D.
Figure 3.3.3l: Final image
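A condensed sketch of this pipeline (full code in Appendix D; the optic disc ROI removal is omitted here, and the window, statistic and file name are assumptions carried over from the previous sketch):

    % Condensed exudate detection sketch.
    rgb = imread('fundus.png');                    % placeholder file name
    gc  = double(imclose(rgb(:,:,2), strel('octagon', 9)));
    cw  = colfilt(gc, [5 5], 'sliding', @std);     % as in the sketch above
    bw  = im2bw(mat2gray(cw), 0.7);                % threshold at 0.7
    bw  = imclose(bw, strel('disk', 10));          % fill gaps inside exudates
    bw  = imclearborder(bw);                       % remove the border
    bw  = imerode(bw, strel('disk', 3));           % final erosion
    exArea = sum(bw(:));                           % exudate area feature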
3.3.4 TEXTURE ANALYSIS
Texture describes the physical structure characteristic of a material such as
smoothness and coarseness. It is a spatial concept indicating what, apart from color and the
level of gray, characterizes the visual homogeneity of a given zone of an image[24]. Texture
analysis of an image is the study of mutual relationship among intensity values of
neighbouring pixels repeated over an area larger than the size of the relationship [22]. The
main types of texture analysis are structural, statistical and spectral.
Mean, standard deviation, third moment and entropy are statistical measures. Mean, standard deviation and third moment are concerned with the properties of individual pixels. The mean is defined as

$\mu_1 = \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} i\,P_{i,j}$ [6]

and the standard deviation is defined as

$\sigma_1 = \sqrt{\sum_{i=0}^{N-1}\sum_{j=0}^{N-1} P_{i,j}\,(i-\mu_1)^2}$ [6]

The third moment is a measure of the skewness of the histogram[37] and is defined as

$\mu_3(z) = \sum_{i=0}^{L-1} (z_i - m)^3\, p(z_i)$

Entropy is a statistical texture measure of the randomness in an image texture. An image that is perfectly flat has an entropy of zero; consequently, such images can be compressed to a relatively small size. On the other hand, high-entropy images, such as an image of heavily cratered areas on the moon, have a great deal of contrast from one pixel to the next and consequently cannot be compressed as much as low-entropy images[7]. Entropy is defined as

$-\sum P \log_2 P$

The texture features used in
this project are mean, standard deviation, third moment and entropy. The detailed
MATLAB code is attached in Appendix E.
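A sketch of how these four measures can be computed from the gray-level histogram (one plausible form; the report's exact code is in Appendix E, and the file name is an assumption):

    % Histogram-based texture features of a grayscale image.
    g  = rgb2gray(imread('fundus.png'));     % placeholder file name
    p  = imhist(g) / numel(g);               % normalized histogram p(z_i)
    z  = (0:255)';                           % gray levels
    m  = sum(z .* p);                        % mean
    sd = sqrt(sum(((z - m).^2) .* p));       % standard deviation
    m3 = sum(((z - m).^3) .* p);             % third moment (skewness)
    H  = -sum(p(p > 0) .* log2(p(p > 0)));   % entropy, skipping empty bins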
3.4 SIGNIFICANCE TEST
A significance test calculates statistically whether a set of data arose by chance or reflects a true effect, and at what level. The significance level is expressed as a p-value: the lower the p-value, the more statistically significant a set of data is. For example, if set A has a p-value of 0.1 and set B has a p-value of 0.05, then set B is more statistically significant than set A, because there is only a 5% chance that its result occurred by chance or coincidence, whereas for set A that chance is 10%. The typical level of significance is 5%, i.e. a p-value ≤ 0.05. The significance test is done prior to classification.
3.4.1 STUDENT’S T-TEST
Student's t-test deals with the problems associated with inference based on "small" samples[46]. When independent samples are available from each population, the procedure is often known as the independent-samples t-test, and the test statistic is

$t = \dfrac{\bar{x}_1 - \bar{x}_2}{s\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}$

where $\bar{x}_1$ and $\bar{x}_2$ are the means of samples of size $n_1$ and $n_2$ taken from each population[5].
The areas of the features (blood vessels, microaneurysms and exudates) and the texture measures (mean, standard deviation, third moment and entropy) are put through Student's t-test to generate the significance results. Appendix A shows the box plots for the various features (area) with their high, median and low values. Table 3.4.1 shows the p-value of each feature; the rows marked with an asterisk indicate the data treated as statistically significant. Therefore, only these features are used in the classification (i.e. blood vessels, microaneurysms, mean and third moment).

After selecting the features, the data is normalized prior to classification. Normalization is done by dividing each value of a particular feature by the highest value of that feature. This ensures that each value lies in the interval (0, 1], which improves the classification as the data is less spread out.
    Features                        Normal (Mean ± SD)   Abnormal (Mean ± SD)   P-Value
    Blood Vessels *                 31170 ± 7989         35950 ± 10430          0.051
    Exudates                        1909 ± 1224          1477 ± 957             0.13
    Microaneurysms *                330 ± 238            884 ± 564              <0.0001
    Textures: Mean *                74.1 ± 17.0          83.3 ± 21.4            0.072
    Textures: Standard Deviation    37.0 ± 6.42          39.5 ± 8.36            0.20
    Textures: Third Moment *        0.139 ± 0.609        -0.400 ± 0.389         0.0001
    Textures: Entropy               4.04 ± 0.320         4.13 ± 0.305           0.30

Table 3.4.1: Student's t-test results (* = features used for classification)
3.5 CLASSIFICATION
For this project, fuzzy and Gaussian Mixture Model (GMM) classifiers are used for
automatic classification of diabetic retinopathy (DR). There are 42 training data and 18
testing data. Figure 3.5 shows the block diagram of the training and testing data processing
prior to inputting to the classifier. The normalized data is first split 70% / 30% (Steps I
and II), giving 70% of the normal and abnormal data and 30% of the normal and abnormal
data, which are grouped into train1 and test1 (Step III). Train1 and test1 are then further
split into sets A, B, C and D (Step IV), which are mixed and recombined into train2/test2
and train3/test3 (Step V). Lastly, the training and testing data are exported to MATLAB as
variables to load into the classifier.
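A minimal sketch of the initial 70% / 30% split (Steps I-III), assuming matrices
normalData and abnormalData with one row of normalized features per image; the set A-D
remixing of Steps IV-V is omitted:

% Hypothetical 70/30 split of each class into train1 and test1.
normalData   = rand(30, 4);   % placeholder: 30 normal images x 4 features
abnormalData = rand(30, 4);   % placeholder: 30 abnormal images x 4 features
nTrain = round(0.7 * size(normalData, 1));          % 21 per class -> 42 training data
train1 = [normalData(1:nTrain, :);     abnormalData(1:nTrain, :)];
test1  = [normalData(nTrain+1:end, :); abnormalData(nTrain+1:end, :)];
save('trainTest1.mat', 'train1', 'test1');          % export as MATLAB variables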
Figure 3.5: Block diagram of training and testing data. (The diagram shows the normalized
data split 70% / 30% for the normal and abnormal classes: the 70% portions form train1 and
the 30% portions form test1; train1 consists of sets A (30%), B (30%) and D (10%) while
test1 is set C (30%); the sets are then remixed to give train2 (C, B, D) with test2 (A), and
train3 (C, A, D) with test3 (B).)
3.5.1 FUZZY
A fuzzy classifier is any classifier which uses fuzzy sets either during its training or
during its operation[29].
Fuzzy pattern recognition is sometimes identified with fuzzy
clustering or with fuzzy if-then systems used as classifiers[29].
In a fuzzy classification system, a case or an object can be classified by applying a
set of fuzzy rules based on the linguistic values of its attributes. Every rule has a weight,
which is a number between 0 and 1, and this is applied to the number given by the
antecedent. The process involves two distinct parts. The first part involves evaluating the
antecedent, fuzzifying the input and applying any necessary fuzzy operators[40] such as
union: $\mu_{A \cup B}(x) = \max[\mu_A(x), \mu_B(x)]$, intersection: $\mu_{A \cap B}(x) = \min[\mu_A(x), \mu_B(x)]$, and complement:
$\mu_{\bar{A}}(x) = 1 - \mu_A(x)$, where $\mu$ is the membership function[40]. The second part requires
application of that result to the consequent, known as inference. A fuzzy inference system
is a rule-based system that uses fuzzy logic, rather than Boolean logic, to reason about
data[40]. Fuzzy logic (FL) is a multivalued logic that allows intermediate values to be
defined between conventional evaluations like true/false, yes/no, high/low, etc[30]. These
fuzzy rules define the connection between input and output fuzzy variables[40].
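A minimal sketch making the max / min / complement operators concrete on two
hypothetical membership grades:

% Fuzzy operators on membership grades in [0, 1].
muA = 0.7;  muB = 0.4;          % hypothetical membership values
unionAB        = max(muA, muB); % union         -> 0.7
intersectionAB = min(muA, muB); % intersection  -> 0.4
complementA    = 1 - muA;       % complement    -> 0.3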
Table 3.5.1a shows the output of the three sets of testing data from the fuzzy
classifier. The correct (true) Boolean output for nos 1-9 is supposed to be [0, 1] (true
positives for normal data), so [1, 0] (false positives) is incorrect (false); therefore there
are some errors. Likewise, for nos 10-18 the correct (true) Boolean output is supposed to
be [1, 0] (true negatives for abnormal data), so [0, 1] (false negatives) is incorrect (false);
there are some errors here too. Label 1 denotes normal data and label 2 denotes abnormal
data, so the correct labelling is 1 for nos 1-9 and 2 for nos 10-18.
Tables 3.5.1b-d show the fuzzy testing data used for the positive predictive value,
negative predictive value, sensitivity and specificity calculations. TP denotes true positives,
TN denotes true negatives, FP denotes false positives and FN denotes false negatives. The
formulas used are: Specificity = TN / (TN + FP) × 100% [43] and Sensitivity = TP / (TP + FN)
× 100% [43]. A specificity of 100% means that the test recognizes all actual negatives[43]
and a sensitivity of 100% means that the test recognizes all actual positives[43]. Positive
predictive value denotes positive test results which are correctly diagnosed, and negative
predictive value denotes negative test results which are correctly diagnosed.
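A minimal sketch computing these four metrics from the confusion counts; the numbers
reproduce Table 3.5.1b (fuzzy classifier, testing1):

% Performance metrics from confusion counts (values from Table 3.5.1b).
TP = 5;  FN = 1;  FP = 4;  TN = 8;
ppv         = TP / (TP + FP) * 100;   % positive predictive value -> 55.6%
npv         = TN / (FN + TN) * 100;   % negative predictive value -> 88.9%
sensitivity = TP / (TP + FN) * 100;   % -> 83.3%
specificity = TN / (FP + TN) * 100;   % -> 66.7%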
No | Fuzzy comparing testing1 | Label | Error | Fuzzy comparing testing2 | Label | Error | Fuzzy comparing testing3 | Label | Error
 1 | [0, 1] | 1 |       | [0, 1] | 1 |       | [0, 1] | 1 |
 2 | [0, 1] | 1 |       | [0, 1] | 1 |       | [0, 1] | 1 |
 3 | [1, 0] | 2 | Error | [0, 1] | 1 |       | [0, 1] | 1 |
 4 | [0, 1] | 1 |       | [0, 1] | 1 |       | [0, 1] | 1 |
 5 | [0, 1] | 1 |       | [1, 0] | 2 | Error | [0, 1] | 1 |
 6 | [1, 0] | 2 | Error | [0, 1] | 1 |       | [1, 0] | 2 | Error
 7 | [0, 1] | 1 |       | [0, 1] | 1 |       | [0, 1] | 1 |
 8 | [1, 0] | 2 | Error | [0, 1] | 1 |       | [0, 1] | 1 |
 9 | [1, 0] | 2 | Error | [0, 1] | 1 |       | [0, 1] | 1 |
10 | [1, 0] | 2 |       | [1, 0] | 2 |       | [1, 0] | 2 |
11 | [1, 0] | 2 |       | [1, 0] | 2 |       | [1, 0] | 2 |
12 | [1, 0] | 2 |       | [1, 0] | 2 |       | [1, 0] | 2 |
13 | [1, 0] | 2 |       | [1, 0] | 2 |       | [1, 0] | 2 |
14 | [0, 1] | 1 | Error | [1, 0] | 2 |       | [1, 0] | 2 |
15 | [1, 0] | 2 |       | [0, 1] | 1 | Error | [1, 0] | 2 |
16 | [1, 0] | 2 |       | [1, 0] | 2 |       | [0, 1] | 1 | Error
17 | [1, 0] | 2 |       | [1, 0] | 2 |       | [1, 0] | 2 |
18 | [1, 0] | 2 |       | [1, 0] | 2 |       | [0, 1] | 1 | Error

Table 3.5.1a: testing1, testing2 and testing3 data output using fuzzy classifier
Fuzzy comparing testing1:
            POSITIVE   NEGATIVE
POSITIVE    TP = 5     FN = 1
NEGATIVE    FP = 4     TN = 8

Positive predictive value = TP / (TP + FP) = 5 / (5 + 4) = 5 / 9 = 55.6%
Negative predictive value = TN / (FN + TN) = 8 / (1 + 8) = 8 / 9 = 88.9%
Sensitivity = TP / (TP + FN) = 5 / (5 + 1) = 5 / 6 = 83.3%
Specificity = TN / (FP + TN) = 8 / (4 + 8) = 8 / 12 = 66.7%

Table 3.5.1b: testing1 data output calculation using fuzzy classifier
Fuzzy comparing testing2:
            POSITIVE   NEGATIVE
POSITIVE    TP = 8     FN = 1
NEGATIVE    FP = 1     TN = 8

Positive predictive value = TP / (TP + FP) = 8 / (8 + 1) = 8 / 9 = 88.9%
Negative predictive value = TN / (FN + TN) = 8 / (1 + 8) = 8 / 9 = 88.9%
Sensitivity = TP / (TP + FN) = 8 / (8 + 1) = 8 / 9 = 88.9%
Specificity = TN / (FP + TN) = 8 / (1 + 8) = 8 / 9 = 88.9%

Table 3.5.1c: testing2 data output calculation using fuzzy classifier
Fuzzy comparing testing3:
            POSITIVE   NEGATIVE
POSITIVE    TP = 8     FN = 2
NEGATIVE    FP = 1     TN = 7

Positive predictive value = TP / (TP + FP) = 8 / (8 + 1) = 8 / 9 = 88.9%
Negative predictive value = TN / (FN + TN) = 7 / (2 + 7) = 7 / 9 = 77.8%
Sensitivity = TP / (TP + FN) = 8 / (8 + 2) = 8 / 10 = 80%
Specificity = TN / (FP + TN) = 7 / (1 + 7) = 7 / 8 = 87.5%

Table 3.5.1d: testing3 data output calculation using fuzzy classifier
3.5.2 GAUSSIAN MIXTURE MODEL (GMM)
A Gaussian Mixture Model (GMM) is a parametric probability density function
represented as a weighted sum of Gaussian component densities. GMMs are commonly
used as a parametric model of the probability distribution of continuous measurements or
features in a biometric system[11]. A GMM is a weighted sum of $M$ component Gaussian
densities as given by the equation: $p(x|\lambda) = \sum_{i=1}^{M} w_i\, g(x|\mu_i, \Sigma_i)$, where $x$ is a
D-dimensional continuous-valued data vector, $w_i$, $i = 1, \dots, M$, are the mixture weights, and
$g(x|\mu_i, \Sigma_i)$, $i = 1, \dots, M$, are the component Gaussian densities. Each component density is
a D-variate Gaussian function of the form:
$g(x|\mu_i, \Sigma_i) = \frac{1}{(2\pi)^{D/2}\,|\Sigma_i|^{1/2}} \exp\{-\frac{1}{2}(x - \mu_i)'\,\Sigma_i^{-1}\,(x - \mu_i)\}$,
with mean vector $\mu_i$ and covariance matrix $\Sigma_i$. The mixture weights satisfy the constraint
$\sum_{i=1}^{M} w_i = 1$ [11].
Figure 3.5.2 shows the GMM classification method. Table 3.5.2a shows the output
of the three sets of testing data from the GMM classifier. The column 'No of incorrect
normal data' denotes false positives; there are 2 incorrect normal data in testing1. The
column 'No of incorrect abnormal data' denotes false negatives; there are 3 incorrect
abnormal data in each of testing1 and testing3. The classification rate denotes the
percentage of correct data: the higher the classification rate, the higher the accuracy.
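As an illustration of this train-then-test flow, the sketch below fits one GMM per class
and labels a test sample by the higher likelihood. This is not the project's actual classifier
code: fitgmdist is the current Statistics Toolbox routine for GMM fitting (older releases
used gmdistribution.fit), and all data here is placeholder.

% Minimal sketch: one GMM per class, classification by likelihood.
trainNormal   = rand(21, 4);        % placeholder: 21 images x 4 features
trainAbnormal = rand(21, 4) + 0.3;  % placeholder
testSample    = rand(1, 4);         % one normalized feature vector

M = 2;  % number of Gaussian components per class
gmN = fitgmdist(trainNormal,   M, 'RegularizationValue', 1e-3);
gmA = fitgmdist(trainAbnormal, M, 'RegularizationValue', 1e-3);

% Label 1 = normal, label 2 = abnormal (as in Table 3.5.1a).
if pdf(gmN, testSample) >= pdf(gmA, testSample)
    label = 1;
else
    label = 2;
end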
Tables 3.5.2b-d show the GMM testing data used for the positive predictive value,
negative predictive value, sensitivity and specificity calculations. TP denotes true positives,
TN denotes true negatives, FP denotes false positives and FN denotes false negatives. The
formulas used are: Specificity = TN / (TN + FP) × 100% [43] and Sensitivity = TP / (TP + FN)
× 100% [43]. A specificity of 100% means that the test recognizes all actual negatives[43]
and a sensitivity of 100% means that the test recognizes all actual positives[43]. Positive
predictive value denotes positive test results which are correctly diagnosed, and negative
predictive value denotes negative test results which are correctly diagnosed.
Figure 3.5.2: Block diagram of the GMM method. (The diagram shows the normalized data
split into training and testing sets; the training data trains the GMM classifier, which then
produces the output for the test data.)
                         No of correct   No of incorrect   No of correct    No of incorrect   Classification
                         normal data     normal data       abnormal data    abnormal data     rate
GMM comparing testing1        7               2                 6                3               72.2%
GMM comparing testing2        9               0                 9                0               100%
GMM comparing testing3        9               0                 6                3               83.3%

Average classification rate = (testing1 + testing2 + testing3) / 3 = 85.2%

Table 3.5.2a: testing1, testing2 and testing3 data output using GMM classifier
GMM comparing testing1:
            POSITIVE   NEGATIVE
POSITIVE    TP = 7     FN = 3
NEGATIVE    FP = 2     TN = 6

Positive predictive value = TP / (TP + FP) = 7 / (7 + 2) = 7 / 9 = 77.8%
Negative predictive value = TN / (FN + TN) = 6 / (3 + 6) = 6 / 9 = 66.7%
Sensitivity = TP / (TP + FN) = 7 / (7 + 3) = 7 / 10 = 70%
Specificity = TN / (FP + TN) = 6 / (2 + 6) = 6 / 8 = 75%

Table 3.5.2b: testing1 data output calculation using GMM classifier
GMM comparing testing2:
            POSITIVE   NEGATIVE
POSITIVE    TP = 9     FN = 0
NEGATIVE    FP = 0     TN = 9

Positive predictive value = TP / (TP + FP) = 9 / (9 + 0) = 9 / 9 = 100%
Negative predictive value = TN / (FN + TN) = 9 / (0 + 9) = 9 / 9 = 100%
Sensitivity = TP / (TP + FN) = 9 / (9 + 0) = 9 / 9 = 100%
Specificity = TN / (FP + TN) = 9 / (0 + 9) = 9 / 9 = 100%

Table 3.5.2c: testing2 data output calculation using GMM classifier
GMM comparing testing3:
            POSITIVE   NEGATIVE
POSITIVE    TP = 9     FN = 3
NEGATIVE    FP = 0     TN = 6

Positive predictive value = TP / (TP + FP) = 9 / (9 + 0) = 9 / 9 = 100%
Negative predictive value = TN / (FN + TN) = 6 / (3 + 6) = 6 / 9 = 66.7%
Sensitivity = TP / (TP + FN) = 9 / (9 + 3) = 9 / 12 = 75%
Specificity = TN / (FP + TN) = 6 / (0 + 6) = 6 / 6 = 100%

Table 3.5.2d: testing3 data output calculation using GMM classifier
CHAPTER FOUR
RESULTS
The features, namely the blood vessels area, microaneurysms area, exudates area and
the corresponding texture features, were extracted using the proposed algorithms and
methods. Table 4a shows the results of the fuzzy classification and Table 4b shows the
results of the GMM classification. The percentage of correct data over total data for the
fuzzy classification is a reasonably high 81.5%, which would make it a good classifier
choice for the final graphical user interface (GUI). However, Table 3.5.2a shows an average
GMM classification rate of 85.2% over the three testing sets, which makes GMM an even
better choice than fuzzy. Therefore, the GMM classifier is used for the final GUI. Figure 4
shows the graphical plot of the average percentage classification results for the fuzzy and
GMM classifiers.
                             Testing1   Testing2   Testing3   Average
No of correct data              13         16         15      Total: 44
No of incorrect data             5          2          3      Total: 10
Positive predictive value     55.6%      88.9%      88.9%      77.8%
Negative predictive value     88.9%      88.9%      77.8%      85.2%
Sensitivity                   83.3%      88.9%      80%        84.1%
Specificity                   66.7%      88.9%      87.5%      81%

% of correct data over total data = (44 / 54) * 100 = 81.5%

Table 4a: Fuzzy classification results
                             Testing1   Testing2   Testing3   Average
No of correct data              13         18         15      Total: 46
No of incorrect data             5          0          3      Total: 8
Classification rate           72.2%      100%       83.3%      85.2%
Positive predictive value     77.8%      100%       100%       92.6%
Negative predictive value     66.7%      100%       66.7%      77.8%
Sensitivity                   70%        100%       75%        81.7%
Specificity                   75%        100%       100%       91.7%

% of correct data over total data = (46 / 54) * 100 = 85.2%

Table 4b: GMM classification results
Figure 4: Graphical plot of the average percentage classification results (classification
rate, positive predictive value, negative predictive value, sensitivity and specificity) for the
fuzzy and GMM classifiers, as summarized in Tables 4a and 4b.
4.1 GRAPHICAL USER INTERFACE (GUI)
Figure 4.1: GUI
A graphical user interface (GUI) is a type of user interface that allows users to
interact with the program by clicking or typing. It displays the extracted image features for
both normal and abnormal classifications.

Figure 4.1 shows a screenshot of the GUI; the list box shows the list of fundus
images. Clicking the 'Extract Features' button displays the blood vessels, microaneurysms,
texture mean and texture third moment together with their values. The corresponding
patient's data can be shown for every fundus image by clicking 'Patient's Data', and
clicking the 'Diagnosis' button displays either a normal or an abnormal classification.
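A minimal programmatic sketch of such an interface (not the project's actual GUI; the
component names, callbacks and image names are illustrative only):

% Hypothetical skeleton: a list box of fundus images and a Diagnosis button.
f = figure('Name', 'DR Detection GUI', 'MenuBar', 'none');
uicontrol(f, 'Style', 'listbox', 'Position', [20 60 180 300], ...
          'String', {'image01.jpg', 'image02.jpg'});        % placeholder names
uicontrol(f, 'Style', 'pushbutton', 'String', 'Diagnosis', ...
          'Position', [20 20 100 30], ...
          'Callback', @(src, evt) disp('run GMM classifier here'));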
CHAPTER FIVE
CONCLUSION AND RECOMMENDATION
In this report, the system developed has demonstrated a reasonably accurate average
classification rate of 85.2% (GMM average classification rate), with a sensitivity and
specificity of 81.7% and 91.7% respectively (GMM average sensitivity and specificity). The
algorithms and methods used for the significance test and classification were fairly fast in
computation, making them a good choice for comparing and computing the two classes of
fundus images. The results also demonstrate that the system can help to detect diabetic
retinopathy (DR) abnormalities at an early stage. This is important for ophthalmologists,
who can then perform the necessary treatments to prevent or delay vision loss.

However, the system can be improved further by using more than two classifiers to
improve sensitivity and specificity, by using more input features and more diverse
demographics and, most importantly, by improving the quality of the original fundus
images (i.e. even background illumination) to reveal more detailed features and improve
the overall accuracy of the significance test and classification.
CHAPTER SIX
REFLECTIONS
Doing the capstone project has been a whole new, exciting and 'thrilling' experience for
me. Although I had learned quite a lot from the biomedical engineering degree, my choice
of capstone project left me awed and bewildered at the beginning. I had no prior knowledge
or experience in MATLAB programming, nor did I know anything about image processing.
I began to doubt whether I could complete my project successfully.
In order to begin my project, I first needed to know more about diabetes and diabetic
retinopathy, the complication of diabetes. By finding more information from the internet,
journals as well as books, I gained a better understanding of the disease. Most importantly,
the literature review enabled me to start on my proposal.
The greatest hurdle was starting on MATLAB programming. I needed to find materials
and information to practice programming, and had to juggle between practicing the
programming and reading the journals. Lastly, I needed to start writing the image processing
codes. I spent most of my time practicing MATLAB programming and understanding
simple debugging. It was difficult to understand all the codes, and I had to look for help
from the materials as well as from my supervisor. It was quite depressing and frustrating
to hit brick walls and get stuck at some points; however, it was very rewarding when I
resolved the problems.
Starting on image processing was not all that smooth either. Bits of problems surfaced
during this time, and I had to find solutions to solve / debug these coding problems. I also
had to find and explore the right threshold and structuring element values. After some
struggling and advice from my supervisor, I was able to finish the feature extraction codes.
Initially, I had tried and wanted to include a haemorrhages feature in my project, but I was
not able to implement it successfully. It then dawned on me that using the texture features
(together with the others) to differentiate between normal and abnormal retinas was
adequate, as normal and abnormal retinas have different texture values.
Next, I had to find out about the various significance tests and, with the advice from my
supervisor, decided to use the Student's t-test method. I also learned about the significance
p-values and which features are more significant than the others, then used the data to
generate normalized values for the classifiers.
The learning curve for creating the classifiers was confusing and frustrating. Luckily,
with the help of my supervisor and Fabian, I was able to understand how to create the
training and testing data for my classifiers. Lastly, I needed to learn to create a graphical
user interface (GUI) for my project presentation. It was a fun and enjoyable experience
which was reminiscent of my Visual Basic lessons from my poly days. All in all, the
capstone project was a priceless experience, and I am quite satisfied with my efforts and
outcomes.
REFERENCES
[1] 2-D median filtering – MATLAB
http://www.mathworks.com/help/toolbox/images/ref/medfilt2.html.
[2] Adjusting Pixel Intensity Values :: Analyzing and Enhancing Images (Image Processing
Toolbox™). http://www.mathworks.com/help/toolbox/images/f11-14011.html.
[3] Alasdair McAndrew. Introduction to Digital Image Processing With Matlab.
[4] Analyzing Images :: Analyzing and Enhancing Images (Image Processing Toolbox™).
http://www.mathworks.com/help/toolbox/images/f11-11942.html.
[5] B.S. Everitt. The Cambridge Dictionary of Statistics in the Medical Sciences.
[6] C.M.R. Caridade, A.R.S. Marcal & T. Mendonca. The use of texture for image
classification of black & white air-photographs.
[7] Cassini Lossy Compression.
http://www.astro.cornell.edu/research/projects/compression/entropy.html.
[8] Create morphological structuring element (STREL) – MATLAB.
http://www.mathworks.com/help/toolbox/images/ref/strel.html.
[9] Diabetic Retinopathy. http://www.hoptechno.com/book45.htm.
[10] Diabetic Retinopathy Treatment - Treatment of Diabetic Retinopathy.
http://vision.about.com/od/diabeticretinopathy/a/Diabetic_Retinopathy_Treatment.htm.
[11] Douglas Reynolds. Gaussian Mixture Models.
[12] Dr Hanno Coetzer. Morphological Image Processing Lecture 21.
[13] Eye - Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Eye.
[14] Find edges in grayscale image – MATLAB.
http://www.mathworks.com/help/toolbox/images/ref/edge.html.
[15] Fovea centralis - Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Fovea.
[16] Fundus Photography. http://www.aetna.com/cpb/medical/data/500_599/0539.html.
[17] generation5 - Thresholding and Segmentation.
http://www.generation5.org/content/2003/segmentation.asp.
[18] Gillian C. Vafidis. Features of diabetic eye disease.
[19] Harvey Rhody, Chester F. Carlson Center for Imaging Science, Rochester Institute of
Technology. Lecture 3: Basic Morphological Image Processing.
[20] How the Eye Works - Singapore National Eye Centre. http://www.snec.com.sg/eyeconditions-and-treatments/Pages/how-the-eye-works.aspx.
[21] Ida G. Dox, B. John Melloni, Gilbert M. Eisner, June L. Melloni. Melloni’s Illustrated
Medical Dictionary (4th ed).
[22] Jagadish Nayak, P Subbanna Bhat, Rajendra Acharya U, C M Lim, Manjunath
Kagathi. Automated Identification of Diabetic Retinopathy Stages Using Digital Fundus
Images.
[23] James L. Kinyoun, Donald C. Martin, Wilfred Y. Fujimoto, Donna L. Leonetti.
Opthalmoscopy Versus Fundus Photographs for Detecting and Grading Diabetic
Retinopathy.
[24] Jean-Pascal Aribot. Texture Segmentation.
[25] John Paul Vetter. Biomedical Photography.
[26] K R Bishai. An inexpensive method of indirect opthalmoscopy.
[27] Lens (anatomy) - Wikipedia, the free encyclopedia.
http://en.wikipedia.org/wiki/Lens_(anatomy).
[28] LensShopper. Anatomy of the eye.
[29] Ludmila Ilieva Kuncheva. Fuzzy classifier design.
[30] M. Hellmann. Fuzzy Logic Introduction.
[31] Macula of retina - Wikipedia, the free encyclopedia.
http://en.wikipedia.org/wiki/Macula.
[32] Ministry of Health: Disease Burden.
http://www.moh.gov.sg/mohcorp/statistics.aspx?id=23712.
[33] Morphological Operations.
http://www.viz.tamu.edu/faculty/parke/ends489f00/notes/sec1_9.html.
[34] Morphology Fundamentals: Dilation and Erosion :: Morphological Operations (Image
Processing Toolbox™). http://www.mathworks.com/help/toolbox/images/f18-12508.html.
[35] Optic disc - Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Optic_disc.
[36] Pupil - Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Pupil.
[37] Rafael C. Gonzalez, Richard Eugene Woods. Digital image processing.
[38] Rajendra Acharya U, Eddie Y. K. Ng, Jasjit S. Suri. Image Modeling of the Human
Eye.
[39] Raman Maini, Dr. Himanshu Aggarwal. Study and Comparison of Various Image
Edge Detection Techniques.
[40] Ravi Jain, Ajith Abraham. A Comparative Study of Fuzzy Classification Methods on
Breast Cancer Data.
[41] Retina-Vitreous Center | Procedures.
http://www.retinavitreouscenter.com/procedures_laser_photocoagulation.html.
[42] Scott & Christie and Associates Eye Diagram.
http://www.scottandchristie.com/eye.cfm?noflash=1.
[43] Sensitivity and specificity - Wikipedia, the free encyclopedia.
http://en.wikipedia.org/wiki/Sensitivity_and_specificity.
[44] Singapore Association of the Visually Handicapped.
http://www.savh.org.sg/info_cec_diseases.php.
[45] Stanley E. Gunstream. Anatomy and Physiology with Integrated Study Guide (3rd ed).
[46] Student's t-Tests. http://www.physics.csbsju.edu/stats/t-test.html.
[47] U R Acharya, C M Lim, E Y K Ng, C Chee and T Tamura. Computer-based detection
of diabetes retinopathy stages using digital fundus images.
[48] Vinod Patel. Diabetes mellitus: the disease.
[49] Wendy Strouse Watt, O.D. Fluorescein Angiogram.
[50] What is Diabetic Retinopathy? http://www.news-medical.net/health/What-is-DiabeticRetinopathy.aspx.
[51] Wong Li Yun, Rajendra Acharya U, Y V. Venkatesh, Caroline Chee, Lim Choo Min,
E.Y.K.Ng. Identification of Different Stages Of Diabetic Retinopathy Using Retinal Optical
Images.
BME499 ENG499 MTD499 ICT499 MTH499 CAPSTONE PROJECT REPORT
66
APPENDIX A
BOX PLOT FOR FEATURES (AREA)
Box plot for blood vessels, exudates and microaneurysms respectively
Box plot for mean, standard deviation and third moment respectively
Box plot for entropy
APPENDIX B
BLOOD VESSELS MATLAB CODE
clear all
clc
% Read original retinal image
b = imread('file name');
b = imresize(b,[576 720]);
% b(:,:,1) = red component, b(:,:,2) = green component, b(:,:,3) = blue
% Assigning green component to g1
g1 = b(:,:,2); % Extract green component
%============ Figure 3.3.1c =============%
% Inverting the green component
g2 = 255-g1;
%============ Figure 3.3.1d =============%
% Edge detection using canny method
ed = edge(g2, 'canny');
%============ Border detection (NEW) =============%
Border = imfill(ed,'holes');
[row col] = size(Border);
for x = 2:5
    for y = 100:650
        Border(x,y) = 0;
    end
end
for x = 573:575
    for y = 100:650
        Border(x,y) = 0;
    end
end
% Morphological opening using the disk structuring element
s1 = strel('disk',8);
e1 = imerode(Border,s1); % Perform erosion
d1 = imdilate(Border,s1); % Perform dilation
f1 = d1-e1; % Border created
%===============================================%
%============== Blood vessel from background ===========%
% Assigning new green component to g3
g3 = 255-g1; % Create new extacted green component
a = adapthisteq(g3); % Perform adaptive histogram equalization
%============ Figure 3.3.1e =============%
s2 = strel('ball',8,8); % Perform morphological opening operation with structuring element 'ball'
e2 = imerode(a,s2); % erosion followed by dilation = morphological opening
d2 = imdilate(e2,s2);
%============ Figure 3.3.1f =============%
f2 = a-d2; % Subtract from original image to show blood vessels vividly
%============ Figure 3.3.1g =============%
th = ~im2bw(f2,0.1);
%============ Figure 3.3.1h =============%
mf = medfilt2(th,[3 3]); % Perform median filtering to lessen noise
%============ Figure 3.3.1i =============%
f3 = mf-f1; % Image with boundary attained
Ifill = imfill(f3,'holes'); %Fill holes NOT touching edge
for x = 1:50 % eliminate top border
    for y = 1:80
        f3(x,y) = 1;
    end
end
%================= Calculate area =================%
H = Ifill+f1;
Final = unwanted(H); % final image; unwanted() is a user-defined helper not listed in this appendix
figure, imshow(Final);
%============ Figure 3.3.1j =============%
Final1 = ~Final;
figure, imshow(Final1);
%============ Figure 3.3.1k =============%
% Area Calculation (count the black pixels over the full image)
L = 0;
for i = 1:size(Final,1)
    for j = 1:size(Final,2)
        if Final(i,j) == 0
            L = L+1;
        end
    end
end
L
APPENDIX C
MICROANEURYSMS MATLAB CODE
clear all
clc
% Read original retinal image
mi1 = imread('file name');
mi1 = imresize(mi1,[576 720]);
% mi1(:,:,1) = red component, mi1(:,:,2) = green component, mi1(:,:,3) = blue component
r1 = mi1(:,:,1); % Extract red component
%============ Figure 3.3.2c =============%
% Inverting the red component
r2 = 255-r1;
%============ Figure 3.3.2d =============%
% Edge detection using canny method
ed = edge(r2,'canny');
%============ Figure 3.3.2e =============%
[row col] = size(ed);
for x = 2:5
    for y = 100:650
        ed(x,y) = 1;
    end
end
for x = 573:575
    for y = 100:650
        ed(x,y) = 1;
    end
end
%============= Border detection (NEW) =============%
Border = imfill(ed,'holes');
s1 = strel('disk',5);
e1 = imerode(Border,s1); % Perform erosion with disk of radius = 5
d1 = imdilate(Border,s1); % Perform dilation with disk of radius = 5
f1 = e1+(~d1); % Border created
%============ Figure 3.3.2f =============%
%===============================================%
G = f1-(~ed); % Edge detection without border
%============ Figure 3.3.2g =============%
K = imfill(G,'holes'); % Fill holes
%============ Figure 3.3.2h =============%
P = K - ed; % With unwanted artifacts
%============ Figure 3.3.2i =============%
%============= Blood vessel detection =============%
t3 = adapthisteq(r2);
se = strel('ball',8,8);
BW4 = imerode(t3,se);
BW5 = imdilate(BW4,se);
Im = t3-BW5;
BW3 = ~im2bw(Im,0.08);
B = BW3-(~f1);
Ifill = imfill(B,'holes');
%============ Figure 3.3.2j =============%
L = im2double(Ifill);
L1 = edge(L,'canny');
%============ Figure 3.3.2k =============%
%===================================================%
%================ Final improvisations =====================%
K = G-L1;
%============ Figure 3.3.2l =============%
Final = imfill(K,'holes');
%============ Figure 3.3.2m =============%
Final2 = Final-(~P);
figure, imshow(Final2);
%============ Figure 3.3.2n =============%
% Area Calculation (count the white pixels over the full image)
L = 0;
for i = 1:size(Final2,1)
    for j = 1:size(Final2,2)
        if Final2(i,j) == 1
            L = L+1;
        end
    end
end
L
APPENDIX D
EXUDATES MATLAB CODE
clear all
clc
% Read original retinal image
ex1=imread('file name');
ex1=imresize(ex1,[576 720]);
% ex1(:,:,1) = red component, ex1(:,:,2) = green component, ex1(:,:,3) = blue component
% Assigning green component to g1
g1 = ex1(:,:,2); % Extract green component
%============ Figure 3.3.3c =============%
% Morphological opening using the octagon structuring element
s1 = strel('octagon',9);
imc = imclose(g1,s1); % Morphological closing
%============ Figure 3.3.3d =============%
imc = double(imc);
fun = @var;
im2 = uint8(colfilt(imc,[11 11],'sliding',fun));
%============ Figure 3.3.3e =============%
th = im2bw(im2,0.7);
%============ Figure 3.3.3f =============%
s2 = strel('disk',10);
d1 = imdilate(th,s2); % dilation
e1 = imerode(d1,s2); % erosion
%============ Figure 3.3.3g =============%
ed = edge(uint8(e1),'canny');
%============ Figure 3.3.3h =============%
%===================================================%
G1 = rgb2gray(ex1); % Convert RGB image to grayscale
G2 = imadjust(G1); % Adjust image intensity values
% Detection of Optical Disk
max_Ie = max(max(G2)); % Finding maximum value on the image
[r, c] = find(G2 == max_Ie);
Rmed = median(r);
Cmed = median(c);
R = floor(Rmed);
C = floor(Cmed);
% Mask
IeSizeX = 576;
IeSizeY = 720;
radius = 82;
[x,y] = meshgrid(1:IeSizeY, 1:IeSizeX);
mask = sqrt((x-C).^2 + (y-R).^2) <= radius;
%============ Figure 3.3.3i =============%
% Optical Disk Removal
ex2 = imsubtract(e1, mask);
%============ Figure 3.3.3j =============%
%===================================================%
g2 = 255-g1; % Image inversion
bd = g2-225;
bde = edge(bd,'roberts'); % Edging
sq = ones(20,20);
d2 = imdilate(bde,sq); % Thickening of edges
fi = ex2-d2; % Subtracting edges
%============ Figure 3.3.3k =============%
for x = 1:10 % Eliminate top border
    for y = 1:720
        fi(x,y) = 0;
    end
end
for x = 560:576 % Eliminate bottom border
    for y = 1:720
        fi(x,y) = 0;
    end
end
for x = 1:576 % Eliminate left border
    for y = 1:10
        fi(x,y) = 0;
    end
end
for x = 1:576 % Eliminate right border
    for y = 710:720
        fi(x,y) = 0;
    end
end
s3 = strel('disk',3);
e2 = imerode(fi,s3); % Erosion
%============ Figure 3.3.3l =============%
% Area Calculation (count the white pixels over the full image)
L = 0;
for i = 1:size(fi,1)
    for j = 1:size(fi,2)
        if fi(i,j) == 1
            L = L+1;
        end
    end
end
L
APPENDIX E
TEXTURES MATLAB CODE
function [output] = firstOrderStat(image)
x = rgb2gray(image); %convert to grayscale.
%calculate mean
mean = 0; % note: this local variable shadows MATLAB's built-in mean()
for k = 1:255 % the k = 0 term adds nothing to the sum
    mean = mean + k*intenProb(k,x);
end
%calculate Standard Deviation
stddev = 0;
y = int16(x)-int16(mean); % convert from uint8 to int16 to avoid overflow
z = y.*y;
stddev = sum(sum(z));
stddev = stddev/numel(x);
stddev = sqrt(stddev);
%calculate third moment
thirdMoment = 0;
total = (double(y)./stddev).^3; % ((x(i)-mean)/stddev)^3; cast to double to avoid integer rounding
thirdMoment = sum(sum(total));
thirdMoment = thirdMoment/numel(x); %divide by number of element N;
%calculate entropy, defined as -sum(P .* log2(P))
entropy = 0;
for k = 0:255
    if intenProb(k,x) ~= 0
        entropy = entropy + intenProb(k,x)*log2(intenProb(k,x));
    end
end
entropy = -1*entropy;
%print output
output = struct('Mean',mean, 'Deviation',stddev, ...
    'Third_Moment',thirdMoment, 'Entropy',entropy);
end
function out = intenProb(i,x)
% intenProb: first order statistic h(i) - probability of intensity i in image x
numOccur = sum(sum(x==i));
out = numOccur/numel(x);
end
APPENDIX F
MEETING LOGS
Capstone project meeting log - 1
Date: 16 January 2010
Time: 12pm – 12.30pm
Duration: ½ hour
Minutes of current meeting: Overview of diabetes and diabetic retinopathy.
Action items / Targets to achieve: Find and read some related journals and online information regarding diabetic retinopathy and the stages and diabetes.
Capstone project meeting log - 2
Date: 6 February 2010
Time: 11.15am – 11.45am
Duration: ½ hour
Minutes of current meeting: Discussed further about individual DR features such as blood vessels, exudates, microaneurysms and textures of normal and abnormal (DR) retinas in different stages. Overview of detection of DR based on the features data and values using MATLAB.
Action items / Targets to achieve: Continue on literature review. Gained better understanding and had a rough idea on how to proceed on my proposal.
Capstone project meeting log - 3
Date: 13 February 2010
Time: 10.45am – 11.45am
Duration: 1 hour
Minutes of current meeting: Overview of some of the MATLAB commands. I am required to practice using the MATLAB commands to prepare for writing image processing codes. I am also required to begin on my project proposal.
Action items / Targets to achieve: Starting on proposal and practicing MATLAB commands. Submitted proposal draft to supervisor for vetting before submission.
Capstone project meeting log - 4
Date: 13 March 2010
Time: 5pm – 5.30pm
Duration: ½ hour
Minutes of current meeting: Updated my ongoing MATLAB practice progress. Starting to write blood vessels extraction MATLAB codes.
Action items / Targets to achieve: Continue on literature review. Ongoing MATLAB practice.
Capstone project meeting log - 5
Date: 17 April 2010
Time: 11am – 11.30am
Duration: ½ hour
Minutes of current meeting: Updated my ongoing MATLAB practice progress and blood vessels codes.
Action items / Targets to achieve: Continue on literature review. Ongoing MATLAB practice. Starting on interim report. Submitted interim report draft to supervisor for vetting before submission.
Capstone project meeting log - 6
Date: 8 May 2010
Time: 11am – 11.30am
Duration: ½ hour
Minutes of current meeting: Reported some problems with MATLAB codes / functions (threshold value and structuring elements) on blood vessels. Received advice on finding the threshold value.
Action items / Targets to achieve: Continue on literature review. Ongoing MATLAB practice. Discover the appropriate threshold value and SE value.
Capstone project meeting log - 7
Date: 22 May 2010
Time: 11.15am – 11.45am
Duration: ½ hour
Minutes of current meeting: Finished getting the average threshold value and structuring elements value. Problems with the area of blood vessels values were resolved.
Action items / Targets to achieve: Continue on literature review. Ongoing MATLAB practice. Starting on microaneurysms and exudates feature extraction coding.
Capstone project meeting log - 8
Date: 26 June 2010
Time: 11.30am – 12pm
Duration: ½ hour
Minutes of current meeting: Discussed the microaneurysms and exudates feature extraction codes. Overview of getting the p-values (statistical significance) for the different features on all images.
Action items / Targets to achieve: Continue on literature review. Ongoing MATLAB practice. Explore and discover the best way to obtain p-values for the different features on all images.
Capstone project meeting log - 9
Date: 10 July 2010
Time: 11.10am – 11.40am
Duration: ½ hour
Minutes of current meeting: Discussed texture feature extraction.
Action items / Targets to achieve: Continue on literature review. Ongoing MATLAB practice. Starting to write texture feature extraction codes.
Capstone project meeting log - 10
Date: 31 July 2010
Time: 11.00am – 11.30am
Duration: ½ hour
Minutes of current meeting: Discussed the texture feature extraction codes. Overview of classifiers and using different classifiers to generate results.
Action items / Targets to achieve: Continue on literature review. Ongoing MATLAB practice. Starting to write classifier codes and preparing training and testing data.
Capstone project meeting log - 11
Date: 14 August 2010
Time: 11.00am – 11.30am
Duration: ½ hour
Minutes of current meeting: Discussed the classifiers' results and overview of creating the graphical user interface (GUI).
Action items / Targets to achieve: Starting to write GUI codes.
Capstone project meeting log - 12
Date: 21 August 2010
Time: 11.00am – 11.30am
Duration: ½ hour
Minutes of current meeting: Presented the GUI to supervisor.
Action items / Targets to achieve: Preparing the materials to start writing the final report.
APPENDIX G
GANTT CHART