
DAC-1

Manipal University
Manipal – 576 104, Karnataka, India
MIT - Doctoral Advisory Committee (DAC) PhD Course Work
Progress Report
1. Name: Bhargav J Bhatkalkar
   Designation: Assistant Professor
   Department: Computer Science & Engineering
2. PhD Registration No.: 150900019
   PhD Registration Date: 30-May-2015
3. Research Topic: Detection of Idiopathic Macular Hole and its Staging using Automated Feature Extraction in Fundus Images.
4. Research Guide: Dr. Srikanth Prabhu
5. Research Co-Guide: Dr. Sulatha Bhandary
6. Subject Experts:
   1. Dr. V.S. Venkatesan
   2. Dr. Dinesh Rao
   3. Dr. U.C. Niranjan
   4. Dr. Niranjana S
   5. Dr. Keerthana Prasad
   6. Dr. P C Siddalingaswamy
7. DAC Meeting Dates: (i) 1-Feb-2016
8. Seminar Dates (For coursework DAC): (i) 11-11-2015 (ii) Dec-2015
Signature of PhD candidate
Signature of Guide
Signature of Co-guide
Signature of HOD
Contents

1. Title and Research Objectives ………………………………………………… 1
2. Status of Coursework …………………………………………………………. 2
3. Brief Description on Coursework-1: Anatomy of Retina, Macular Hole ……… 2
   3.1 Introduction ………………………………………………………………... 2
   3.2 Literature Review …………………………………………………………... 3
      3.2.1 Anatomy of Retina ……………………………………………………. 3
         3.2.1.1 Organization of Retina …………………………………………. 4
         3.2.1.2 Functions of Retinal Cell Bodies ………………………………. 5
      3.2.2 Anatomy of Macula …………………………………………………... 7
         3.2.2.1 Organization of Macula ………………………………………… 9
         3.2.2.2 Foveal Structure ………………………………………………… 10
      3.2.3 Macular Hole …………………………………………………………. 11
         3.2.3.1 Incidence of Macular Hole ……………………………………... 12
         3.2.3.2 Causes and Types of Macular Hole ……………………………. 12
         3.2.3.3 Vitreous Attachment and its Role in Idiopathic Macular Hole Formation … 13
         3.2.3.4 Staging of Idiopathic Macular Hole ……………………………. 13
         3.2.3.5 Importance of Early Detection …………………………………. 14
         3.2.3.6 Treatment Options for Idiopathic Macular Hole ………………. 14
         3.2.3.7 Prognosis of Idiopathic Macular Hole Surgery ………………... 15
4. Brief Description on Coursework-2: Advanced Research Methodology ……… 16
   4.1 Introduction ………………………………………………………………... 16
   4.2 Title of Research …………………………………………………………… 16
   4.3 Literature Review …………………………………………………………... 16
   4.4 Problem Statement and Research Questions ……………………………… 18
   4.5 Research Objectives ……………………………………………………….. 19
   4.6 Rationale for the Research …………………………………………………. 19
   4.7 Research Methodology …………………………………………………….. 19
      4.7.1 Template Based Approach …………………………………………… 20
      4.7.2 Feature Based Approach ……………………………………………... 21
   4.8 Data Collection …………………………………………………………….. 22
   4.9 Inference and Implications of Study ………………………………………. 23
   4.10 Significance of the Coursework ………………………………………….. 23
5. Plan for Future ………………………………………………………………… 23
6. References

List of Figures

1. The Structure of a Human Eye ………………………………………………… 3
2. The Retinal Structure ………………………………………………………….. 4
3. Cross Sectional View of Retinal Layer ……………………………………….. 4
4. 3-D View of Retinal Layer Cell Bodies ………………………………………. 5
5. Retinal Layer Cell Connectivity ………………………………………………. 7
6. Sub-regions of Macula ………………………………………………………… 8
7. Relationship between Retina, Macula and Fovea ……………………………... 8
8. Connectivity between Photoreceptors and Bipolar Cells ……………………... 9
9. Vertical Section of Central Retina and Peripheral Retina …………………….. 10
10. Microscopic Structure of the Fovea ………………………………………….. 11
11. Fundus Photo and OCT Image of Macula ……………………………………. 11
12. Distorted Vision due to Macular Hole ……………………………………….. 12
13. Formation of a Macular Hole due to the Shrinkage of Vitreous Jelly ……….. 13
14. Vitrectomy Surgery …………………………………………………………... 15
15. Algorithm for Face Recognition using Template Based Method …………….. 20
16. Template Based Approach ……………………………………………………. 21
17. Algorithm for Face Recognition using Feature Based Method ………………. 21
18. Feature Based Approach ……………………………………………………… 22
1. TITLE AND RESEARCH OBJECTIVES
Title of Research Work
Detection of Idiopathic Macular Hole and its Staging using Automated Feature Extraction in
Fundus Images.
Research Objectives
1. To develop a protocol for enhancing the quality of acquired fundus photographs.
2. To design a robust feature extraction algorithm for identifying the different
components of a fundus photograph, such as the optic disk, macula and fovea, for the
detection of a macular hole.
3. To design a classification model using the extracted features for staging the macular
hole.
4. To evaluate the performance of the proposed automated method against expert
diagnosis results.
The objectives of the first coursework, derived from the first research objective, are presented
below:
1. To study and understand the anatomy of the retina and its cell structures.
   Outcome - The different constituents of the retina, their properties, and their role in
   the normal functioning of the eye were studied in detail. The factors determining the
   degeneration of the retina, and of the macula in particular, are listed for the further
   investigation of the macular hole.
2. To study and understand the anatomy of the macula and its different layers.
   Outcome - The framework of the different layers of the macula was studied in detail.
   The cell bodies composing each layer, and their role in transforming light energy
   into nerve impulses from the bottom layer (the outer nuclear layer) to the innermost
   layer (the ganglion layer), were explored.
3. To understand the different subparts of the macula and their features.
   Outcome - A clear understanding of the different subparts of the macula, namely the
   foveola, fovea and parafoveolar region, was drawn from this objective. The color
   features of these subparts in the fundus photograph play an important role in
   separating them into the different regions of the macula, which is required for
   further investigation during the detection of a macular hole.
4. To classify the different stages of macular hole based on the theoretical study.
   Outcome - The different parameters required for the classification of the four stages
   of a macular hole were explored. The parameters selected for defining the features
   of a macular hole are size, color and texture. A detailed study was carried out on
   quantizing these three parameters into the four stages of a macular hole.
2. STATUS OF COURSEWORK

Coursework No.   Coursework Title                   Status      Date of Completion
1                Anatomy of Retina, Macular Hole    Completed   11-11-2015
2                Research Methodology               Completed   Dec-2015
3                Medical Image Processing           Pending     -
4                Soft Computing Techniques          Pending     -
3. BRIEF DESCRIPTION ON COURSEWORK-1: Anatomy of Retina, Macular Hole
3.1 Introduction
The structure of a normal human eye is given in Figure 1. Light enters the human eye
through the cornea, which is the eye’s outermost layer; the cornea covers the front of the eye.
The pupil is a dark circle in the middle of the iris that helps in managing the amount of light
that enters the eye. In bright light, the pupil becomes smaller to allow less light to pass
through it. In the dark, the pupil expands and hence more light passes through it [2]. The iris
is the colored part of the eye surrounding the pupil, and it controls the contraction and
expansion of the pupil. The lens is a transparent structure behind the pupil that focuses light
rays into the eye. The light continues through the vitreous humor, also called the vitreous
jelly, the clear gel that makes up about 80% of the eye volume, and then falls onto the
light-sensitive layer called the retina, which is located at the back of the eye. The retina covers
more than 60 percent of the inner surface of the eye. The retina contains photosensitive cells,
namely rods and cones, which convert the light entering the eye into signals that are
carried to the brain by the optic nerve [3]. The center of the retina is called the macula. The
macula is a yellowish, oval-shaped spot that is highly pigmented. The macula enables the eye to view
detailed central vision that is sharp, to perceive colors, and to carry out tasks such as reading
and driving. Light entering the eye is finally focused on the macula. The macula contains
special light-sensitive photoreceptor cells which are responsible for converting the light
focused on them into nerve impulses that are sent to the brain. The center part of the macula
is called the fovea. The fovea has the highest concentration of photoreceptor cells, which
provide clear and sharp central vision.
Figure 1: The structure of a human eye
Courtesy: Anatomy of human eye, Access Media Group LLC.
3.2 Literature Review
3.2.1 Anatomy of Retina
The retina is the light-sensitive tissue that lines the inside of the eye. The retina is actually an
extension of the brain, formed from neural tissue and connected to the brain by the optic
nerve. The optical elements within the eye focus an image onto the retina of the eye, initiating
a series of chemical and electrical events within the retina. Nerve fibers within the retina send
electrical signals to the brain, which then interprets these signals as visual images [4]. An
estimated 80% of all sensory information in humans is thought to be of retinal origin [5],
indicating the importance of retinal function for our ability to interact with the outside world.
The retinal layer is approximately 0.5 mm thick and covers about 70 percent of the
interior eye surface. The retina processes light through a layer of photoreceptor cells called
rods and cones. Rod cells respond only to dim light and do not respond to bright light;
a person’s vision in dim light is entirely due to the rod cells. In contrast to rod cells, the
cone cells do not respond to dim light and respond only to bright light. Cone cells are
accountable for a person’s ability to see things in fine detail and to determine their color.
The innermost layer of the retina, on which the macular region is found, is called the
Inner Limiting Membrane (ILM).
The macula is a small and highly sensitive part of the retina, responsible for central
vision and for perceiving colors. Central vision is the part of a person’s field of vision that allows us to do
our day-to-day activities such as driving, reading, estimating distances, and seeing details
sharply. Central vision is also accountable for identifying the shapes and colors of the objects
in view. In the retinal layer, the highest concentration of cone cells is found in the macular
region. The macula is responsible for sharp central vision, and damage to it may lead to loss
of central vision.
The center part of the macula is a tiny dimple known as the fovea. The fovea contains
exclusively cones, which are smaller and more closely packed than elsewhere in the retina.
The fovea is the only rod-free region in the retina. The macula is about 5.5 mm in diameter,
and its center, the fovea, is about 1.5 mm in diameter [6]. Figure 2 describes the relationship
between retina, macula and fovea.
Figure 2: The retinal structure
Courtesy: Anatomy of human eye, Access Media Group LLC.
3.2.1.1 Organization of Retina
Crosses sectional view of retina reveals the complex arrangement of interneurons in between
the outer ganglion cell layer and the inner photoreceptor layer as shown in Figure 3. The light
ray passes through all interneurons and finally falls on the photoreceptor layer. The converted
nerve impulses travel in the opposite direction from photoreceptor layer to the ganglion cell
layer which connects to the optical nerve.
Figure 3: Cross sectional view of retinal layer
Courtesy: Drelie Gelasca et al. BMC Bioinformatics
The human retina is made up of three layers of cell bodies, as shown in Figure 4. The
outermost layer is termed the outer nuclear layer. This layer contains a large volume of
photoreceptor cells, the rods and cones. The middle layer is called the inner nuclear layer.
This layer is composed of interconnecting cell bodies, namely bipolar cells, horizontal cells,
and amacrine cells. The innermost layer is the ganglion cell layer. This layer consists of
ganglion cells and displaced amacrine cells.
Figure 4: 3-D view of retinal layer cell bodies
Courtesy: webvision.med.utah.edu
3.2.1.2 Functions of Retinal Cell Bodies
There are different cell bodies composing the retinal layer. The incoming light passes through
all the layers of the retina before it finally falls on the photoreceptors. Light entering the eye
is converted to nerve impulses by the photoreceptor cells. This information, in the form of
nerve impulses, is then passed on to the bipolar cells and then on to the ganglion cells along
its path. The different layers of the retina and their composition are shown in Figure 5.
The ganglion cell layer houses thousands of ganglion cells. Ganglion cells are directly
connected to the optic nerve. The nerve impulses received all the way from the photoreceptor
cells are sent to the optic nerve by the ganglion cells. The optic nerve finally transfers
these impulses to the brain for further interpretation.
The next layer in the hierarchy is the inner plexiform layer. This layer establishes
pathways between millions of bipolar cells and amacrine cells and the ganglion cells by
interconnecting their axons and dendrites.
Next comes the inner nuclear layer, which is a composition of the connecting cell
bodies, namely the horizontal cells, amacrine cells, and bipolar cells, that link to the
photoreceptor cells.
Last in the hierarchy of layers, we find the outer segment layer. This layer contains
the photoreceptor cells, the rods and cones. The endings of these photoreceptor cells
emerging from this layer are embedded in the pigment epithelium layer, situated at the top
of the retinal layer hierarchy. The functions of the different retinal cell bodies are as follows:
• The Inner Limiting Membrane is the boundary between the retina and the vitreous
  humour.
• Ganglion cells exist in the innermost layer of the retina. They assimilate the
  information (nerve impulses) generated by the photoreceptor cells. The bipolar cells
  carry information from the photoreceptor cells to the ganglion cells, which send it to
  the brain via the optic nerve [7] [8].
• Amacrine cells are interconnecting neurons in the retina, and they function in the
  inner plexiform layer (IPL). In the IPL, the bipolar cells communicate directly with
  ganglion cells and amacrine cells. The signals coming from the bipolar cells are
  integrated at the amacrine cells and further passed on to the ganglion cells [9].
• Bipolar cells carry signals generated by the photoreceptors to the ganglion cells,
  which carry these signals to the optic nerve. One end of a bipolar cell connects to
  either multiple rods or a single cone, and the other end connects to the ganglion cells
  to deliver the processed signal. Each of the at least 13 distinct types of bipolar cells
  systematically transforms the photoreceptor input in a different way, thereby
  generating different signals representing elementary 'building blocks' from which the
  microcircuits of the inner retina derive a feature-oriented description of the visual
  world [10].
• Horizontal cells are situated in the outer plexiform layer of the retina. Horizontal
  cells form a junction between multiple photoreceptors and bipolar cells, as shown in
  Figure 5. A photoreceptor cell transfers the generated stimuli to a horizontal cell,
  which further transfers them to either other photoreceptors or bipolar cells, depending
  on the neural network in the outer plexiform layer [11].
• Photoreceptors (rods and cones) in the retina are responsible for converting light
  energy into nerve impulses. The bipolar cells then receive these impulses in the
  second layer and pass them on to the ganglion cells in the third layer [12].
Figure 5: Retinal layer cell connectivity
Courtesy: webvision.med.utah.edu
3.2.2 Anatomy of Macula
The macula is located in the center of the retina and is a highly sensitive region about
5.5 mm in diameter. It is responsible for central vision and for perceiving colors. Central
vision is the part of a person’s field of vision that allows us to do our day-to-day activities
such as driving, reading, estimating distances, and seeing details sharply. Central vision is
also accountable for identifying the shapes and colors of the objects in view. In the retinal
layer, the highest concentration of cone cells is found in the macular region. The macula is
responsible for sharp central vision, and damage to it may lead to loss of central vision. The
macula is organized into different sub-regions, namely the foveola, fovea, perifoveolar region
and parafoveolar region [13], as shown in Figure 6.
The center of the macula, which appears as a small pit on the retina, is called the fovea. The
fovea is about 1.5 mm in diameter, and it is the only area in the retina where all of the
photoreceptor cells are cones. There are no rod cells in the fovea. The fovea is responsible for
sharp central vision. As the fovea has no rod cells, it does not contribute to peripheral vision.
Seeing objects and identifying their shapes in the dark requires strong peripheral vision. To
use peripheral vision, one has to look just to one side of an object so that the light entering
the eye falls on a retinal area where rod cells are abundant. Such a retinal area resides only
outside the macular zone. Cross-sectional views of the retina show the region of the fovea as
a depression in the inner retinal surface, as shown in Figure 7.
Figure 6: Sub-regions of macula
Courtesy: wikiwand.com
Figure 7: Relationship between Retina, Macula and Fovea
Courtesy: A.K.Khurana Indu Khurana , “Anatomy And Physiology Of Eye”
The center of the fovea is called the foveola, which has a diameter of approximately
0.35 mm. The foveolar region is the thinnest segment of the retina. In the retina, the
distribution of rod cells and cone cells is not even. There are approximately 120,000 cone
cells in the foveal region, compared to only 30,000 cones in the tiny foveolar region. In the
retina, cone cells are concentrated in the foveal region, whereas rod cells dominate around
the periphery of the retina. This distribution of rods and cones makes the peripheral retina
more sensitive to light. The ratio of photoreceptors to ganglion cells is very low in the central
retina, providing high visual acuity. In the fovea, the cone cells are small in size and tightly
packed [14]. This further increases the visual acuity. Outside the fovea, the cones are larger
and have more space between them, which is filled by rod cells.
3.2.2.1 Organization of Macula
Nerve impulses generated from rod cells and cone cells reach ganglion cells either by the
direct pathway or by the indirect pathway. Direct pathway is through bipolar cells and the
indirect pathway involves horizontal cells as shown in Figure 8. In both these cases the
impulses must pass through the bipolar cells to reach the ganglion cell. At the center of the
fovea, only one photoreceptor cell is connected to a bipolar cell. But, in the peripheral retina,
thousands of photoreceptor cells are connected to a single bipolar cell.
Figure 8: Connectivity between photoreceptors and bipolar cells
Courtesy: Dowling JE. “The Retina: An Approachable Part of the Brain”
The macular region (central retina) close to the fovea is significantly thicker than the
peripheral retina. The reason for this is the greater packing density of cone cells and their
associated bipolar and ganglion cells in this region, as shown in Figure 9. The following
points elaborate on the anatomical differences between the macular region and the rest of the
peripheral retina:
• In the peripheral retina, the rod cells dominate, whereas in the central retina the cone
  cells dominate. Thus, in the central retina the cone cells are densely packed and
  closely spaced, and the rod cells are very few in number between the cones
  (Figures 9a and 9b).
• The thickness of the outer nuclear layer (ONL), which accommodates the rods and
  cones, is almost the same in the peripheral and central retina. The numerous cones in
  the central retina have their oblique axons displaced by their synaptic pedicles in the
  outer plexiform layer (OPL). These oblique axons, along with Muller cells, form an
  area known as the Henle fibre layer, which further increases the thickness of the
  central retina. The Henle fibre layer is not present in the peripheral retina.
• The inner nuclear layer (INL) in the central retina is thicker than in the peripheral
  retina. This is because the cone-connecting cells, namely the cone bipolar cells,
  horizontal cells, and amacrine cells, are more densely packed, as shown in Figure 9a.
• There is a significant difference between the central retina and the peripheral retina
  in the thickness of the nerve fibre layer, inner plexiform layer and ganglion cell
  layer, as shown in Figures 9a and 9b. In these regions, cone cells greatly outnumber
  rods. To establish pathways between the abundant cones and the ganglion cells,
  more bipolar cells, horizontal cells and amacrine cells are present in these layers.
  The greater number of ganglion cells means more synaptic interaction in a thicker
  IPL and greater numbers of ganglion cell axons coursing to the optic nerve in the
  nerve fibre layer (Figure 9a).
Figure 9: Vertical section of (a) central retina (b) peripheral retina
Courtesy: webvision.instead-technologies.com
3.2.2.2 Foveal Structure
In this area there are no rods; the cones are abundant and tightly packed, and the other
layers of the retina are very thin [37]. At the center of the fovea there is a small foveal pit,
also termed the foveola. This is the only region in the retina where all the photoreceptors are
cone cells and rod cells are absent. The foveola is covered by a thin internal limiting
membrane, as shown in Figure 10. The diameter of this central foveal pit is only 200 microns.
Using optical coherence tomography (OCT) images of a healthy eye, the cross-sectional view
of the retinal layer is well seen. A fundus photo and the corresponding OCT image of a
healthy human eye are shown in Figure 11. In Figure 11(B), the foveal pit is marked with an
arrow, and it is clearly visible that the walls of the fovea are sloping. Photoreceptor cells are
blue in color, and in the foveal pit they are primarily densely packed cones. The retinal
layering appears to thicken gradually going outward from the foveal pit towards the rim of
the fovea (parafovea). In the parafovea, the ganglion cells are heaped into six layers, making
this region the thickest portion of the entire retina.
Figure 10: Vertical section of human fovea
Courtesy: A.K.Khurana Indu Khurana , “Anatomy And Physiology Of Eye”
Figure 11: A) Fundus photograph projecting macula of a normal human B) OCT image of the
same fundus photograph (A).
Courtesy: webvision.med.utah.edu
3.2.3 Macular Hole
A macular hole is a retinal illness in which a hole forms in the macula, located in the
center of the retina [15]. The macula is responsible for our central vision, which is required
for performing day-to-day tasks such as driving, reading, distinguishing between colors and
seeing fine detail. A macular hole can occur at any age, but it is most common in people
aged 60 and above.
As the macula is responsible for central vision, a macular hole affects only the
central vision; the peripheral vision remains unaffected. Patients suffering from a
macular hole may find difficulty with reading and close work, and may observe
black or grey spots in their vision. When one eye is affected by a macular hole, the
chances of the other eye being affected increase.
Macular holes progress gradually from an early stage to a final stage. During the
initial stage, patients may not notice the minor disruptions in their central vision. The early
signs of a macular hole include distortion of central vision, blurring of central vision, and
straight lines (such as the edges of a table, the edges of a photo frame or a standing pole)
appearing bent or curvy, as shown in Figure 12. The exact location and size of a macular
hole determine how much central vision is affected.
Figure 12: A) Normal vision B) Distorted vision due to macular hole
Courtesy: www.sw.org/HealthLibrary
3.2.3.1 Incidence of Macular Hole
According to a survey, about 50% of stage-1 macular holes advance to stage 2 and about
70% of stage-2 macular holes progress to stage 3 [16]. The incidence is 3.3 per 1,000 persons
aged over 55 in the USA, and 0.14-0.17% in India [17] [18]. If a macular hole is detected in
its early stage, surgery can yield a better result than when it is detected at a later stage.
3.2.3.2 Causes and Types of Macular Hole
There are many reasons behind the formation of a macular hole. Aging, diabetes and
shrinkage of the vitreous jelly are the main causes. The different causes are listed below:
• Vitreous shrinkage and/or separation (idiopathic macular hole)
• A detached retina
• A high degree of nearsightedness (myopia)
• A diabetic eye disease
• An eye injury
• Best's disease (an inherited condition causing macular damage)
• A macular pucker, which is an extra layer of tissue that has formed on the macula.
Based on the listed causes, macular holes are classified into the following two categories:
1) Primary macular hole, caused by vitreous traction on the fovea from an abnormal
vitreous separation.
2) Secondary macular hole, caused by other pathologies not associated with vitreous
traction, such as eye injury, high myopia etc.
3.2.3.3 Vitreous Attachment and its Role in Idiopathic Macular Hole Formation
Idiopathic macular hole is an illness in which a hole forms in the central part of the retina,
the macula [19]. It is mainly caused by shrinking of the vitreous jelly. As the vitreous jelly
shrinks, it pulls on the retina and then tears it. The fibers on the retina further pull on the hole
and enlarge it. The gap left by the vitreous gel is filled by a clear fluid. This liquid may
slowly ooze through the hole, damaging the retina further, as shown in Figure 13.
Figure 13: Formation of a Macular Hole due to the Shrinkage of Vitreous Jelly
Courtesy: www.naturaleyecare.com/eye-conditions/vitreous-detachment/
3.2.3.4 Staging of Idiopathic Macular Hole
A macular hole can occur at any age, but it is most common in middle-aged and older
people. According to the Gass classification system [20], macular holes can be classified
into four different stages. In a few rare cases, a progressing macular hole may naturally
regress to a healthy state. As time elapses, the probability of an initial-stage macular hole
progressing to a full-thickness macular hole increases [21].
Stage 1A:
In the fundus photographs, it is seen as a central yellow spot in the macula with a size of
approximately 250-300 microns. The inner retinal layer starts detaching itself from the
underlying photoreceptor layer (pigment epithelium layer). An OCT image shows the
flattening of the foveal depression in this stage.
Stage 1B:
In this stage, the foveal detachment progresses and the macular hole enlarges. In the fundus
photograph, a doughnut-shaped yellow ring of approximately 200-300 microns is found
around the fovea. An OCT image shows traction at the retinal region where the fovea is
situated.
Stage 2:
In this stage, due to the further pull in the foveal region, the inner limiting membrane of the
retina is slightly torn. In the fundus photograph, a minute hole of approximate size <400
microns is seen within the yellow ring. Stage 2 macular holes can be central or eccentric,
with a red crescent- or horseshoe-shaped break at the edge of the yellow ring. An OCT
image shows a slight cut in the foveal region.
Stage 3:
In this stage, a full-thickness hole of approximate size >400 microns is formed. It is a
complete retinal defect with yellow-white pigment deposits on the bottom of the hole. An
OCT image shows a large cut in the foveal region, with or without an operculum.
Stage 4:
In this stage, the vitreous gel is completely detached from the entire macula and the optic
disk. This stage usually appears as a brick-colored round hole, often with small yellow
deposits in the center of the macula. It is usually one-third to two-thirds of the optic disk
diameter. In the OCT image, there is a large cut in the foveal region with complete
detachment of the vitreous gel from the macula.
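To illustrate how the size criterion in the staging rules above could drive an automated stage label, the following Python sketch maps a measured hole diameter and two OCT-derived flags onto the Gass stages. This is a hypothetical simplification for illustration only: the function name, its inputs, and the exact decision order are assumptions, not part of the proposed system, which additionally considers color and texture features.

```python
# Illustrative sketch only: thresholds paraphrase the Gass staging criteria
# described in Section 3.2.3.4; function name and inputs are hypothetical.

def gass_stage(diameter_um, full_thickness=False, vitreous_detached=False):
    """Approximate Gass stage from hole diameter (microns) and OCT findings."""
    if vitreous_detached:
        # Stage 4: complete vitreous detachment from the macula and optic disk
        return "Stage 4"
    if full_thickness:
        # Full-thickness defect: the 400-micron size separates stages 2 and 3
        return "Stage 2" if diameter_um < 400 else "Stage 3"
    # Impending hole: central yellow spot (~250-300 um) vs. enlarging yellow ring
    return "Stage 1A" if diameter_um <= 300 else "Stage 1B"

print(gass_stage(280))                        # Stage 1A
print(gass_stage(350, full_thickness=True))   # Stage 2
print(gass_stage(450, full_thickness=True))   # Stage 3
```

In practice such a rule would only be one component of the classification model, since stage boundaries also depend on the color and texture cues visible in the fundus photograph.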
3.2.3.5 Importance of Early Detection
An early detection of a macular hole can help the ophthalmologist educate the patient and
warn them about the progression of the disease. This creates awareness, and the patient can
follow up for regular diagnosis. According to the surveys [22], about 50% of stage-1 macular
holes advance to stage 2 and about 70% of stage-2 macular holes progress to stage 3. As per
the survey, the incidence of macular hole in the USA is 3.3 per 1,000 for people aged over
55 [22]. Another survey in India and Denmark has shown a prevalence of 0.14-0.17% [23].
In the UK, the incidence of macular hole is only 1 per 10,000 persons per year. If a macular
hole is detected in its early stage, surgery can yield a better result than when it is detected at
a later stage [23].
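As a quick back-of-the-envelope check, the progression figures quoted above can be combined, under the simplifying and purely hypothetical assumption that the stage transitions are independent, to estimate the share of stage-1 holes that eventually reach stage 3:

```python
# Back-of-the-envelope combination of the progression figures quoted above,
# assuming (hypothetically) that the stage transitions are independent.
p_1_to_2 = 0.50   # ~50% of stage-1 holes advance to stage 2 [22]
p_2_to_3 = 0.70   # ~70% of stage-2 holes advance to stage 3 [22]

p_1_to_3 = p_1_to_2 * p_2_to_3
print(f"Estimated share of stage-1 holes reaching stage 3: {p_1_to_3:.0%}")  # 35%
```

That is, roughly a third of stage-1 holes would be expected to reach stage 3 if left untreated, which underlines the value of detection at the earliest stage.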
3.2.3.6 Treatment Options for Idiopathic Macular Hole
According to a survey, around 50% of stage-1 macular holes heal naturally, but macular
holes of stage 2 and above advance to the next stage without surgery [25] [26]. The extent of
adhesion between the macular region of the retina and the vitreous jelly determines the
likelihood of spontaneous posterior vitreous detachment.
When a macular hole of stage 2 or above persists, surgery is performed. This surgery
is known as vitrectomy with internal limiting membrane (ILM) peeling. The surgery releases
the persisting vitreomacular adhesions (VMA) between the macular region and the vitreous
cortex, thereby restoring the normal macular region.
In vitrectomy surgery, the vitreous gel is first removed from the eye. The removal of
the vitreous gel gives the ophthalmologist clear access to the retina located at the back of the
eye. During the surgery, the doctor inserts tiny instruments into the eye, cuts the vitreous gel,
and suctions it out. After removing the vitreous gel, the surgeon may treat the retina by
cutting or removing fibrous or scar tissue, flattening areas where the retina has become
detached, or repairing tears or holes in the retina or macula, as shown in Figure 14.
The removal of the vitreous gel leaves a hollow eye. So, at the end of the surgery, the
ophthalmologist injects either a gas bubble or silicone oil into the eye. This helps to lightly
press the retina against the wall of the eye. If a gas bubble is used, it is automatically
replaced by newly formed vitreous gel. When silicone oil is used, a second operation is
performed to remove it from the eye, as it cannot be absorbed by the body.
According to surveys, patients treated with vitrectomy surgery are reported to have
hole closure rates of 73-95% [27] [28] [29]. Even in patients aged over 80 years who have
had a macular hole for more than 2 years, vitrectomy surgery has shown a very good
success rate.
Figure 14: Vitrectomy Surgery
Courtesy: chicagoretinavitreous.com
3.2.3.7 Prognosis of Idiopathic Macular Hole Surgery
Vision improvement after macular hole surgery may differ from patient to patient, based
on the period for which they had the hole. People who had a macular hole for less than 6
months have a very good result after surgery compared to those who have had one for a
longer period. The introduction of ILM peeling to MH surgery improved the closure rate
dramatically [31] [32]. Microincision vitrectomy surgery has decreased the operating time,
reduced postoperative inflammation and astigmatism formation, and improved patient
comfort [33]. Furthermore, multiple lines of evidence have revealed that postoperative
face-down positioning can be shortened or is even unnecessary in patients with a small MH,
which considerably alleviates patient suffering [34] [35]. With the improved closure rate and
decreased surgery-related risks and distress, the success rate of postoperative visual recovery
has become very high for modern MH surgery.
The stage of the MH is an important prognostic factor for postoperative visual acuity [32]. According to a recent literature review, a postoperative visual acuity of 6/12 or better was achieved in 65.9% of the stage 2 MH group, significantly higher than in the stage 3 and 4 MH group (15.0%) [36]. Recent studies have also concluded that the advanced stages have a worse visual outcome [37] [38].
4. BRIEF DESCRIPTION ON COURSEWORK-2: Advanced Research Methodology
4.1 Introduction
Research can be defined as a specialized form of enquiry with the exclusive goal of producing knowledge. Further, research is a scientific undertaking which, by means of logical and systematized techniques, aims to: (a) discover new facts or verify and test old facts; (b) analyze their sequences, interrelationships and causal explanations; and (c) develop new scientific tools, concepts and theories. The learnings of the coursework are demonstrated in the following by considering a research problem pertaining to the second objective of the proposed research.
4.2 Title of Research
Biometric Face Detection System for Forensic Investigation using Distributed Computing.
4.3 Literature Review
Biometrics can be defined as measurable characteristics of an individual, based on physiological features or behavioral patterns, that can be used to recognize or verify identity. Biometric technologies attempt to automate the measurement and comparison of such characteristics for recognizing individuals [39]. Many different technologies have recently been developed for person recognition and identity authentication; examples include measures based on handwriting (especially signatures), fingerprints, the face, voice, retina, iris, and hand or ear shape.
Biometric systems operate in two distinct phases: enrollment and verification/identification. In the first phase, identity information from users is added to the system. In the second phase, live biometric information from users is compared with the stored records.
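The two-phase structure can be illustrated with a minimal sketch; the class, the fixed threshold and the Euclidean-distance comparison are illustrative assumptions, not part of the proposed system:

```python
import numpy as np

class BiometricSystem:
    """Minimal two-phase biometric system: enroll, then verify."""

    def __init__(self, threshold=0.5):
        self.templates = {}          # user_id -> stored feature vector
        self.threshold = threshold   # maximum allowed distance for a match

    def enroll(self, user_id, feature_vector):
        """Phase 1: add a user's identity information to the system."""
        self.templates[user_id] = np.asarray(feature_vector, dtype=float)

    def verify(self, user_id, live_vector):
        """Phase 2: compare live biometric data against the stored record."""
        stored = self.templates.get(user_id)
        if stored is None:
            return False
        live = np.asarray(live_vector, dtype=float)
        return bool(np.linalg.norm(stored - live) <= self.threshold)

system = BiometricSystem(threshold=0.5)
system.enroll("user42", [0.1, 0.9, 0.3])
print(system.verify("user42", [0.12, 0.88, 0.31]))  # small deviation -> True
print(system.verify("user42", [0.9, 0.1, 0.9]))     # different pattern -> False
```

A real system would replace the raw feature vectors with the face representations discussed below.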
Machine face recognition has been studied since the 1970s and is currently an active and important research area. One of the popular approaches is principal component analysis (PCA), or the eigenface method [40]. The authors of [41] first proposed using PCA to represent human faces. The performance of this method on aligned and scaled human faces is very good, but it degrades dramatically for non-aligned faces, mainly because the PCA basis functions are a global representation.
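The PCA face representation referred to above can be sketched as follows; the random face data and the SVD route to the principal components are illustrative implementation choices, not the method of the cited papers:

```python
import numpy as np

def pca_basis(faces, k):
    """Top-k PCA basis ("eigenfaces") from row-stacked, flattened faces."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of vt are the principal directions of the centered data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, basis):
    """Represent a face by its coordinates in the eigenface subspace."""
    return basis @ (face - mean)

rng = np.random.default_rng(0)
faces = rng.random((10, 64))           # 10 hypothetical 8x8 faces, flattened
mean, basis = pca_basis(faces, k=3)
coeffs = project(faces[0], mean, basis)
print(coeffs.shape)                    # (3,): a 64-pixel face in 3 numbers
```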
To overcome the limitation of the global PCA representation, a better approach is to find basis functions that are local and give a good representation of face images. Along this line, the authors of [42] proposed using independent component analysis (ICA) for face representation. Theoretically, ICA has a number of advantages over PCA.
First, ICA de-correlates higher-order statistics of the training signals, while PCA de-correlates only up to second-order statistics. Second, ICA basis vectors are more spatially local than PCA basis vectors, and local features give a better face representation. ICA is usually related to local descriptions such as edges [43], sparse coding and wavelets [44]. This property is particularly useful for recognition.
As the face is a non-rigid object, a local representation of faces reduces sensitivity to face variations caused by different facial expressions, small occlusions and pose variations; that is, some independent components (ICs) are less sensitive to such variations. The authors of [45] also demonstrated, on 200 face images, that recognition accuracy with ICA basis vectors is higher than with PCA basis vectors. They also found that the ICA representation of faces has greater invariance to changes in pose and small changes in illumination.
Face recognition plays an important role in many applications such as building and store access control, suspect identification and surveillance. These promising applications have led to a significant increase in research activity in this area over the past few years [46]. However, few systems achieve completely reliable performance. The difficulty arises from having to distinguish different individuals with approximately the same facial configuration while coping with wide variations in the appearance of a particular face due to changes in pose, lighting, facial makeup and facial expression.
For 2D face recognition, Elastic Graph Matching (EGM) based on Gabor filter responses [47] and the eigenface method are two successful approaches, and further studies based on them have been developed recently [48]. The fiducial points on a face are described by sets of Gabor response components (jets); a set of jets referring to one feature point is called a bunch. The goal of EGM on a test image is to find the fiducial points and thus extract from the image a graph that maximizes the similarity. The localization process uses a coarse-to-fine approach but is extremely time consuming, so narrowing the search range is a good way to improve the efficiency of this algorithm. In [49], the eigenface method is presented based on principal component analysis (PCA). The basic idea of PCA is to construct a subspace that can represent the query image data with lower-dimensional feature vectors. Using the raw facial image as input to PCA requires strict, limiting constraints on image alignment, illumination and local variations of the face. To a limited extent, extracting Gabor filter responses at carefully chosen fiducial points and using these responses as inputs to PCA helps relax these limitations.
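A jet of Gabor responses at one fiducial point, as used by EGM, can be sketched as follows; the kernel parameters, patch size and fiducial point are illustrative assumptions, not values from the cited papers:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor filter: a sinusoid under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate to orientation theta
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def jet(image, point, wavelengths, orientations, size=9, sigma=2.0):
    """Gabor jet: responses at one fiducial point over several
    (wavelength, orientation) pairs."""
    r, c = point
    half = size // 2
    patch = image[r - half:r + half + 1, c - half:c + half + 1]
    return np.array([np.sum(patch * gabor_kernel(size, w, t, sigma))
                     for w in wavelengths
                     for t in orientations])

rng = np.random.default_rng(1)
img = rng.random((32, 32))
j = jet(img, point=(16, 16), wavelengths=[4, 8], orientations=[0, np.pi / 2])
print(j.shape)  # (4,): one response per (wavelength, orientation) pair
```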
In the 3D domain, many researchers have handled the 3D face recognition problem using differential-geometry tools, but the computation of curvature is neither accurate nor reliable. The Point Signature represents each face and treats face recognition as a 3D recognition problem on non-rigid surfaces. Point Signature is a simple yet effective point representation, so we will use it to describe the facial shape in this system.
Very few research groups [50] focus on face recognition from both 2D and 3D facial images; almost all existing recognition systems rely on a single type of facial information: 2D intensity (color) images or 3D range data. Range images can represent 3D shape explicitly and can compensate for the lack of depth information in a 2D image. 3D shape is also invariant under changes of color (such as a change in a person's facial makeup) or reflectance properties (such as a change in the ambient lighting). On the other hand, the variety of gray-level information provided by different persons gives more detailed information for interpreting facial images, despite its dependence on color and reflectance properties. Therefore, integrating 2D and 3D sensory information is a key factor for achieving a significant improvement in performance over systems that rely on a single type of sensory data.
Distributed computing works by splitting a larger task into smaller chunks that can be performed at the same time, independently of each other [51]. In the minimal distributed computing model, the two main entities are the server and the many clients. A central computer, the server, generates work packages, which are passed on to worker clients. Each client performs the task detailed in its work package and, when it has finished, passes the completed work package back to the server.
A popular distributed model is the three-tier model. This model splits the server task into two layers: one to generate work packages and another to act as a broker communicating with the clients. This is simply a means to manage load and to protect against failure [52]. According to Moore's law [53], a modern computer will complete a work package twice as fast as a computer 18 months old, so the amount of time taken by different clients will vary a lot; flexibility here is useful but not required.
4.4 Problem Statement and Research Questions
Problem Statement
Face recognition involves computer recognition of personal identity based on geometric or statistical features derived from face images [54]. The human face is not a unique, rigid object; numerous factors cause its appearance to vary. The sources of variation in facial appearance can be categorized into two groups: intrinsic and extrinsic factors. (A) Intrinsic factors are due purely to the physical nature of the face and are independent of the observer. These can be further divided into two classes: intrapersonal and interpersonal [55]. Intrapersonal factors vary the facial appearance of the same person; examples are age, facial expression and facial paraphernalia (facial hair, glasses, cosmetics, etc.). Interpersonal factors, however, are responsible for differences in the facial appearance of different people; examples are ethnicity and gender. (B) Extrinsic factors alter the appearance of the face through the interaction of light with the face and the observer. These factors include illumination, pose, scale and imaging parameters (e.g., resolution, focus and noise).
Research Questions
1. How to detect facial features accurately when there are age, illumination and pose variations?
2. How to adapt template matching and feature matching algorithms for face detection?
3. How to use a distributed computing model in forensic investigation?
4.5 Research Objectives
The following three objectives are set to be addressed during the course of this research:
1. To design a template matching algorithm based on the energies method to recognize the face independently of facial orientation.
2. To design a feature matching algorithm based on feature points to extract the facial features independently of pose variations.
3. To develop a distributed computing model for the face recognition system.
4.6 Rationale for the Research
This research uses a template-based approach, in which the main idea is to find different templates of faces and compare the templates of the query image with the templates of all the images in the database. Features such as the eyes, nose, eyebrows, moustache and lips are extracted from the query image, and the sum of the energies of these features is calculated. The image whose energies most closely match those of the query image is the identified image. This method can match the face in the query image independently of facial orientation.
To overcome the problem of extracting facial features independently of pose variations, the feature points in this research are chosen so that they have exact, meaningful positions (such as mouth corners and eye corners) and are very distinctive (such as the nose tip in a range facial image). Choosing feature points in this way makes both the initial manual localization and the automatic detection more accurate and easier. Feature points are also chosen to ensure minimal variation caused by different facial expressions, backgrounds and viewpoints; for example, no feature points are chosen near the contour of the face.
Distributed computing works by splitting a larger task into smaller chunks that can be performed at the same time, independently of each other. Each computing entity in this architecture is called a worker. In a distributed image recognition system, a database holding millions of criminal records is scattered over a vast geographical area. Each worker node is associated with a particular region and is responsible for matching the query image against the facial images stored in its local database. A concurrent search for a given query image is thus performed at all worker nodes, which greatly reduces the response time of the system as a whole. Further, the failure of a single worker node will not bring down the entire system, which improves reliability.
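The concurrent search across worker nodes can be sketched as follows. In practice each partition would reside at a remote site; this sketch simulates the worker nodes with a thread pool, and the record identifiers and feature vectors are hypothetical:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def search_partition(partition, query):
    """One worker node: closest stored face in its local database partition."""
    best_id, best_dist = None, float("inf")
    for face_id, vec in partition.items():
        d = np.linalg.norm(vec - query)
        if d < best_dist:
            best_id, best_dist = face_id, d
    return best_id, best_dist

def distributed_search(partitions, query):
    """Broadcast the query to all workers and merge their local best matches."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda p: search_partition(p, query), partitions)
    return min(results, key=lambda r: r[1])

rng = np.random.default_rng(2)
# Three regional partitions of a hypothetical criminal-record face database.
partitions = [{f"p{i}_{j}": rng.random(8) for j in range(100)} for i in range(3)]
query = partitions[1]["p1_7"].copy()       # query is an exact stored record
match_id, dist = distributed_search(partitions, query)
print(match_id, dist)                      # p1_7 0.0
```

Because each worker only touches its own partition, a failed worker removes one region's records from the search without stopping the others.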
4.7 Research Methodology
Automatic human face recognition, a technique that locates and identifies human faces in an image and determines who is who from a database, has been gaining increasing attention in computer vision and pattern recognition over the last two decades. The problem involves several important steps: detection, representation and identification. Based on the representation used, the various approaches can be grouped into
(1) Image-based Techniques
(2) Feature-based Techniques.
4.7.1 Template-Based Approach
The main idea in the template-based approach is to find different templates of faces and compare the templates of the query image with the templates of all the images in the database to obtain the identified image. This research proposes a novel approach based on the concept of energies. In digital signal processing, the energy of a particular pixel is defined as the square of its intensity. The template-based approach uses the following three steps for efficient face recognition:
o Image Enhancement (low-pass filtering)
o Feature Extraction (application of the Sobel, Prewitt and Kirsch filters)
o Face Recognition (comparison of energies)
In the image enhancement step, noise is removed. In the feature extraction step, features (eyes, nose, mouth, eyebrows) are found as masks. In the face recognition step, the energy of each feature is computed as the sum of the squared intensities of its pixels, and the energy values of all database images are stored in the database. The energy value of the query image is then compared with these stored values, and the image whose energy value matches that of the query image is the identified image.
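The feature-extraction step can be sketched with a Sobel mask (a Prewitt or Kirsch mask would simply replace the kernel); the hand-rolled convolution and the random test image are illustrative assumptions:

```python
import numpy as np

# Sobel mask for horizontal gradients; Prewitt/Kirsch masks are drop-in swaps.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

def convolve2d(image, kernel):
    """Plain valid-region 2D convolution, enough for a 3x3 edge mask."""
    kh, kw = kernel.shape
    h, w = image.shape
    flipped = kernel[::-1, ::-1]
    out = np.zeros((h - kh + 1, w - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * flipped)
    return out

rng = np.random.default_rng(4)
img = rng.random((32, 32))
edges = convolve2d(img, SOBEL_X)   # gradient feature map of the face image
print(edges.shape)                 # (30, 30)
```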
The following algorithm lists the steps planned for face recognition using the template-based method.
1. Acquire the query image.
2. Extract the features of the query image using the template-based method.
3. Energy matching (where energy = (pixel intensity)²): calculate the energy of each pixel for both the query image and all the images in the database.
4. Sum the energies over all the pixels.
5. The database image that is closest to the query image in terms of energy is the identified image.
Fig. 15 Algorithm for Face Recognition using the Template-Based Method.
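The energy-matching steps above can be sketched as follows, assuming grayscale images stored as arrays; the database contents and image size are illustrative assumptions:

```python
import numpy as np

def energy(image):
    """Total energy of an image: sum over pixels of squared intensity."""
    img = np.asarray(image, dtype=float)
    return float(np.sum(img ** 2))

def identify(query, database):
    """Database image whose total energy is closest to the query's energy."""
    q = energy(query)
    return min(database, key=lambda name: abs(energy(database[name]) - q))

rng = np.random.default_rng(3)
database = {f"face{i}": rng.random((16, 16)) for i in range(5)}
query = database["face2"]          # an exact copy has zero energy difference
print(identify(query, database))   # face2
```

In practice the stored energies would be precomputed once per database image, and the query would be an enhanced, feature-masked image rather than a raw one.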
Figure 16 shows the template-based approach.
Fig. 16 Template Based Approach
4.7.2 Feature-Based Approach
In the feature-based approach, feature detection is defined as the extraction of feature points. Here the features are the eyebrows, eyes, nostrils, mouth, cheeks, chin and moustache. Of these, the eyes, nostrils and lips are low-intensity features, while the eyebrows and moustache are high-intensity features. Extraction of the eyes, nostrils and mouth is based on minima analysis; extraction of the cheeks and chin is done using Hough transforms; and extraction of the eyebrows and moustache is done by template matching. The distances between feature points are computed and stored in a database. The database image whose distances are most similar to those of the query image is the identified image. The following algorithm lists the steps planned for face recognition using the feature-based method.
1. Mark all the N feature points in the given query image.
2. Given the N feature points, find the energy = (amplitude)² between all the feature points, for both the query image and all the images in the database.
3. Add all the energies of the feature points in the test image and in the database images.
4. The database image with the maximum similarity in energies to the query image is the identified image.
Fig. 17 Algorithm for Face Recognition using the Feature-Based Method
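The feature-point matching described above can be sketched via pairwise distances between the marked points; the point coordinates and names below are hypothetical:

```python
import numpy as np
from itertools import combinations

def distance_signature(points):
    """Pairwise distances between the N feature points, in a fixed order."""
    pts = np.asarray(points, dtype=float)
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i, j in combinations(range(len(pts)), 2)])

def identify(query_points, database):
    """Database face whose distance signature best matches the query's."""
    q = distance_signature(query_points)
    return min(database,
               key=lambda name: np.linalg.norm(distance_signature(database[name]) - q))

# Hypothetical feature points: eye corners, nose tip, mouth corners.
database = {
    "alice": [(10, 20), (10, 40), (25, 30), (35, 22), (35, 38)],
    "bob":   [(12, 18), (12, 44), (28, 31), (38, 20), (38, 42)],
}
query = [(11, 20), (10, 41), (25, 30), (36, 22), (35, 39)]  # near alice's points
print(identify(query, database))  # alice
```

Using inter-point distances rather than raw coordinates makes the signature invariant to translation of the face within the image.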
The feature-based approach is summarized in Figure 18.
Fig. 18 Feature Based Approach
4.8 Data Collection
The data required for the pilot study is collected from well-reputed secondary databases. Many researchers working in this field have used the following databases to test the accuracy of their systems. The following public databases are used in the proposed system, and a few sample records from the test databases are shown below:
a) Yale database
b) Olivetti database
c) MIT database
MIT Dataset:
Yale and Olivetti Datasets:
4.9 Inferences and Implications of the Study
Biometric technologies attempt to automate the measurement and comparison of characteristics for recognizing individuals. These technologies will provide important components for regulating and monitoring access to valuable resources. Significant application areas include electronic commerce, security monitoring, database access, border control and immigration, forensic investigation and telemedicine.
The proposed system carries out a “one-to-many” search of its stored models of individuals’ identities. It can therefore be used in security applications, for example to detect criminals, fraud or intrusion. Compared to traditional biometric systems, the proposed system works faster because it uses a distributed computational model: the database is divided into multiple sets, these sets are distributed to different locations, and the query image is then sent to all the distributed sites, where it is compared with the stored images concurrently. Face recognition using the energies of the pixels in the image is a novel approach that has not been attempted in the literature.
4.10 Significance of the Coursework
The coursework provides a strong foundation in the fundamental concepts of research, such as problem formulation, hypothesis testing, research design and design of experiments. It aims to strengthen both the technical skills and the soft skills of researchers: topics such as experimental research, research design, hypothesis testing and sampling methods teach us to adopt a scientific and objective approach to research, while topics such as ethics in research and writing journal papers build the soft skills needed to succeed in research. Lastly, the coursework provides a platform to share and demonstrate a sample research work, wherein the learnings are both presented to, and evaluated by, an expert.
5. PLAN FOR FUTURE
The following action items have been planned for the future:
1. To complete the remaining coursework (Coursework III and IV)
2. To publish papers in conferences and journals
6. REFERENCES
[1] A.K.Khurana Indu Khurana , “Anatomy And Physiology Of Eye”, 131, CBS Publishers
& Distributors, 2008.
[2] Athanasios Nikolaidis, Ioannis Pitas, “Facial feature extraction and pose determination”, Pattern Recognition, Vol. 33, 1783–1791, 2000.
[3] Anatomy of human eye, website available at: http://www.allaboutvision.com/resources,
Access Media Group LLC. © 2000-2014 Access Media Group LLC, 2014.
[4] Gross anatomy of the eye, website available at: http://webvision.med.utah.edu/book/part-i-foundations/gross-anatomy-of-the-ey/, Helga Kolb, © 2010-2013 Webvision, 2013.
[5] Principles physical of human eyes structure, website available at: http://www.academia.edu/7204292/principles_physical_of_human_eyes_structure/, Ibrahim Avci, © 2010-2015 Academia.edu, 2015.
[6] Kolb H, Fernandez E, Nelson R, “The Organization of the Retina and Visual System”,
Salt Lake City, UT: National Library of Medicine, National Institutes of Health; 1995.
[7] Sharma, R.K., Ehinger, B.E.J., “Development and structure of the retina” Mosby, St
Louis Adler’s Physiology of the Eye, 10th edn, 319–347, 2013.
[8] Provis J. M., Dubis A. M., Maddess T, Carroll J, “Adaptation of the central retina for high
acuity vision: cones, the fovea and the avascular zone”, Journal of Progress in Retinal and
Eye Research, Vol. 35, 63–81, Elsevier, 2013.
[9] Ronen Segev, Jason Puchalla, Michael J. Berry II, “Functional Organization of Ganglion Cells in the Salamander Retina”, Journal of Neurophysiology, Vol. 95, 2277–2292, 2006.
[10] Sanes JR, Masland RH,”The types of retinal ganglion cells: current status and
implications for neuronal classification”, Annual Review of Neuroscience, Vol. 38,
2015.
[11] Richard H. Masland, “The Neuronal Organization of the Retina”, Neuron, Vol. 76, Issue
2, 266–280, Elsevier, 2012.
[12] Wei W, Hamby AM, Zhou K, Feller MB, “Development of asymmetric inhibition underlying direction selectivity in the retina”, Nature, Vol. 469, 402–406, 2011.
[13] H. Sebastian Seung, Uygar Sümbül, “Neuronal Cell Types and Connectivity: Lessons
from the Retina”, Neuron, Vol. 83, Issue 6, 1262–1272, Elsevier, 2014.
[14] Lamb T. D, “Evolution of photo transduction, vertebrate photoreceptors and
retina”, Progress in Retinal and Eye Research, Vol. 36, 52–119, Elsevier, 2013.
[15] Kishi S, “The vitreous and the macula”, Nippon Ganka Gakkai Zasshi, Vol. 119, Issue 3, 117–143, PMID: 25854107, 2015.
[16] Chui TYP, Song H, Clark CA, Papay JA, Burns SA, Elsner AE, “Cone photoreceptor
packing density and the outer nuclear layer thickness in healthy subjects”, Investigative
Ophthalmology & Visual Science, Vol. 53, Issue 7, 3545–3553, ARVO Journal, 2012.
[17] Macular Holes, website available at: http://www.eyeinstitute.co.nz/the-eye/eye-diseases-and-conditions/macular-hole.htm, Eye Institute, © Eye Institute Auckland NZ, 2014.
[18] Parveen Sen, Arun Bhargava, Lingam Vijaya and Ronnie George “Prevalence of
idiopathic macular hole in adult rural and urban south Indian population” Journal of
Clinical Experimental Ophthalmology, Vol. 36, 257-260, 2009.
[19] McCannel CA, Ensminger JL, Diehl NN, Hodge DN “Population based incidence of
macular holes”, Journal of Ophthalmology, Vol. 116, Issue 7, 1366–1369, Elsevier,
2009.
[20] Stacy M. Meuer, Chelsea Myers, Ronald Klein, Barbara E. Klein, “The 20-Year Incidence of Macular Holes and Associated Risk Factors: The Beaver Dam Eye Study”, Investigative Ophthalmology & Visual Science, Vol. 53, Issue 14, 6348, ARVO Journal, 2012.
[21] Steel DH, Lotery AJ, “Idiopathic vitreomacular traction and macular hole: a comprehensive review of pathophysiology, diagnosis, and treatment”, Eye, Vol. 27, Issue 3, 212, 2013.
[22] Gass JDM, “Idiopathic senile macular hole: its early stages and pathogenesis”, Arch Ophthalmol, Vol. 106, 629–639, 1988.
[23] Hikichi T, Yoshida A, Akiba J, Trempe CL, “Natural outcomes of stage 1, 2, 3, and 4 idiopathic macular holes”, Br J Ophthalmol, Vol. 79, Issue 6, 517–520, 1995.
[24] Oh KT et al, Macular Hole, Medscape, Jul 2011.
[25] la Cour M, Friis J, “Macular holes: classification, epidemiology, natural history and treatment”, Acta Ophthalmol Scand, Vol. 80, Issue 6, 579–587, 2002.
[26] Hikichi T, Yoshida A, Trempe CL, “Course of vitreomacular traction syndrome”, Am J Ophthalmol, Vol. 119, 55–61, 1995.
[27] Weinand F, Jung A, Becker R, Pavlovic, “Spontaneous resolution of vitreomacular
traction syndrome”, Ophthalmologe, Vol. 106, 44-46, 2009.
[28] Haritoglou C, Reiniger IW, Schaumberger M, “Five-year follow-up of macular hole
surgery with peeling of the internal limiting membrane: update of a prospective study”,
Retina, Vol. 26, Issue 6, 618-622, 2006.
[29] Chew EY, Sperduto RD, Hiller R, “Clinical course of macular holes: the Eye Disease Case-Control Study”, Arch Ophthalmol, Vol. 117, Issue 2, 242–246, 1999.
[30] Freeman WR, Azen SP, Kim JW, “Vitrectomy for the treatment of full-thickness stage 3 or 4 macular holes: results of a multicentered randomized clinical trial”, Arch Ophthalmol, Vol. 115, Issue 1, 11–21, 1997.
[31] Alpatov S, Shchuko A, Malyshev V, “A new method of treating macular holes”, Eur J
Ophthalmol, Vol. 17, Issue 2, 246-52, 2007.
[32] Park DW, Sipperley JO, Sneed SR, Dugel PU, Jacobsen J, “Macular hole surgery with
internal-limiting membrane peeling and intravitreous air”, Ophthalmology,
Vol.106,1392-1397,2000.
[33] Brooks HL Jr, “Macular hole surgery with and without internal limiting membrane
peeling”, Ophthalmology, Vol.107, 1939-1948, 2000.
[34] Fabian ID, Moisseiev J, “Sutureless vitrectomy: evolution and current practices” Br J
Ophthalmology, Vol.95, 318-324, 2011.
[35] Tatham A, Banerjee S, “Face-down posturing after macular hole surgery: a meta-analysis”, Br J Ophthalmology, Vol. 94, 626–631, 2010.
[36] Chandra A, Charteris DG, Yorston D, “Posturing after macular hole surgery: a review”,
Ophthalmologica, Vol. 226, Issue 1, 3-9, 2011.
[37] Kang HK, Chang AA, Beaumont PE, “The macular hole: report of an Australian surgical series and meta-analysis of the literature”, Clin Experiment Ophthalmology, Vol. 28, 298–308, 2000.
[38] Ezra E, Gregor ZJ, “Surgery for idiopathic full-thickness macular hole: two-year results
of a randomized clinical trial comparing natural history, vitrectomy, and vitrectomy plus
autologous serum: Moorfields Macular Hole Study Group report No 1”, Arch
Ophthalmology, Vol. 122, 224-236, 2004.
[39] Athanasios Nikolaidis, Ioannis Pitas, “Facial feature extraction and pose determination”, Pattern Recognition, Vol. 33, 1783–1791, 2000.
[40] Pong C. Yuen, J.H. Lai, “Face representation using independent component analysis”, Pattern Recognition, Vol. 35, 1247–1257, 2002.
[41] Yingjie Wang, Chin-Seng Chua, Yeong-Khing Ho, “Facial feature detection and face recognition from 2D and 3D images”, Pattern Recognition Letters, Vol. 23, 1191–1202, 2002.
[42] G. Yang, T.S. Huang, “Human face detection in a complex background”, Pattern Recognition, Vol. 27, Issue 1, 53–63, 1994.
[43] R. Chellappa, C.L. Wilson, S. Sirohey, “Human and machine recognition of faces: a survey”, Proceedings of the IEEE, Vol. 83, Issue 5, 705–740, 1995.
[44] J. Illingworth, J. Kittler, “The adaptive Hough transform”, IEEE Trans. Pattern Anal. Mach. Intell., Vol. 9, Issue 5, 690–698, 1987.
[45] X. Li, N. Roeder, “Face contour extraction from front-view images”, Pattern Recognition, Vol. 28, Issue 8, 1167–1179, 1995.
[46] Liu, C., Wechsler, H., “A Gabor feature classifier for face recognition”, Eighth IEEE Internat. Conf. on Computer Vision, pp. 270–275, 2001.
[47] Wiskott, L., Fellous, J., Kruger, N., von der Malsburg, C., “Face recognition by elastic bunch graph matching”, IEEE Trans. Pattern Anal. Mach. Intell., Vol. 19, Issue 7, 775–779, 1997.
[48] De la Torre, F., Black, M.J., “Robust principal component analysis for computer vision”, Eighth IEEE Internat. Conf. on Computer Vision, pp. 362–368, 2001.
[49] Turk, M., Pentland, A., “Eigenfaces for recognition”, Journal of Cognitive Neuroscience, Vol. 3, Issue 1, 71–86, 1991.
[50] Tsutsumi, S., Kikuchi, S., Nakajima, M., “Face identification using a 3D gray-scale image – a method for lessening restrictions on facial directions”, Third IEEE Internat. Conf. on Automatic Face and Gesture Recognition, pp. 306–311, 1998.
[51] J.A. Bannister, K.S. Trivedi, “Task allocation in fault-tolerant distributed systems”, Acta Informatica, Vol. 20, 261–281, 1983.
[52] T.C.K. Chou, J.A. Abraham, “Load balancing in distributed systems”, IEEE Trans. Software Eng., Vol. SE-8, Issue 4, 401–412, July 1982.
[53] Schaller R.R., “Moore's law: past, present and future”, IEEE Spectrum, Vol. 34, Issue 6, 1997.
[54] Chung, K.C., Kee, S.C., Kim, S.R., “Face recognition using principal component analysis of Gabor filter responses”, Internat. Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, pp. 53–57, 1999.
[55] A. Bell, T.J. Sejnowski, “Edges are the independent components of natural scenes”, NIPS 96, Denver, Colorado, 1996.