From: AAAI Technical Report SS-94-05. Compilation copyright © 1994, AAAI (www.aaai.org). All rights reserved.
Diagnosis of Diabetic Retinopathy by Computer Vision
by
Samuel C. Lee, Ph.D
School of Electrical Engineering
University of Oklahoma
Norman, OK 73072
Vivian S. Lee, M.D., Ph.D
Department of Radiology
Duke University
Durham, NC
Elisa T. Lee, Ph.D
Department of Biostatistics and Epidemiology
University of Oklahoma
Oklahoma City, OK
Over one-third of adult Native Americans in Oklahoma have noninsulin-dependent diabetes
mellitus, and hence this population is at significant risk for diabetic eye disease and subsequent visual
impairment. One of the leading causes of this visual impairment is diabetic retinopathy. A large-scale
epidemiologic study of diabetes (including diabetic retinopathy through eye examinations and fundus
photography) in American Indians in Oklahoma was conducted by Kelly West [1] and Elisa Lee [2] over
a period of 30 years (1972-1992) involving 1,012 Indians. In this study, 824 fundus photographs were
taken from the subjects and manually examined and graded.
The long-term objective of this project is to develop a low-cost, real-time computer vision expert
system to analyze and grade retinal images and to diagnose diabetic retinopathy. To achieve it, we
propose to develop an expert system that will detect and quantitate retinal lesions and grade the extent
of diabetic retinopathy based on fundus photographs (or images). Applications for this system include
large-scale retinopathy screening, epidemiologic studies, and clinical trials; in routine clinical settings,
it may provide a useful quantitative index of disease for ophthalmologists and primary care providers.
The development of such a system will require achieving the following specific aims: (1) to
determine the gradability of the image; (2) to apply and develop image processing techniques for image
enhancement and restoration, such as for images with poor focus due to cataracts or vitreous hemorrhages;
(3) to establish standard chromatic characteristics for fundus images and to develop standardization
procedures; and (4) to develop computer vision methods for the identification of essential retinal features (the
optic disc, the macula, and the blood vessels) and for the detection and differentiation of specific retinal
lesions and vascular abnormalities. Based on the detection of retinal lesions, computer vision will then be used to
grade the severity of diabetic retinopathy according to developed classification criteria, and the
computer grading results will be compared with grading results by retinal specialists. In addition, the technical feasibility
of a low-cost, real-time computer vision diagnostic system for diabetic retinopathy will be assessed.
Chromatic and geometric analyses of retinal lesions/features and deterministic and statistical
pattern recognition techniques will be applied to the development of the expert system. A readily
available set of 824 fundus images will be used to develop the expert system as well as to test the
reliability of the system.
1. Color Analysis of Fundus Images
The first question which arose was whether chromatic information of a fundus image alone is
sufficient to discriminate the essential retinal features, e.g., optic disc, macula, vessels, etc., and lesions,
e.g., hemorrhages and microaneurysms, hard exudates, cotton-wool spots, etc. It has been found that (1) the
chromatic information of fundus images is approximately represented by the u-coordinate of the UCS
color coordinate system, and the retina is roughly separated from blood vessels in that coordinate, and (2)
arteries and veins cannot be completely distinguished in any coordinate, but if the area in the image is
limited, the V-values (indicating intensity) are useful for discriminating arteries from veins.
To verify their findings, we conducted a chromatic analysis of a number of fundus images using
35mm color transparencies. Some of the transparencies were standard images used for comparison in
the Modified Airlie House Classification System for Diabetic Retinopathy. The slides were projected onto
a screen and the images captured by a CCD color camera (Sony CCD-G5), which was connected to an
80386 personal computer through an image digitization board (Professional Image Board from Atronics).
Images were digitized at a resolution of 512 x 256 pixels. Each pixel in the digitized image occupied 15
bits, which consisted of red (R), green (G), and blue (B) components, each assigned 5 bits
corresponding to 32 gray levels.
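As a concrete illustration of this pixel format, packing and unpacking a 15-bit pixel takes only a few lines. The R-G-B bit ordering below is an assumption for illustration; the digitization board's actual layout is not stated in the text.

```python
# Sketch of the 15-bit pixel format described above: three 5-bit
# components, 32 gray levels each.  The R-G-B bit order is assumed.

def pack_rgb555(r, g, b):
    """Pack 5-bit R, G, B components into one 15-bit value."""
    assert all(0 <= c < 32 for c in (r, g, b))
    return (r << 10) | (g << 5) | b

def unpack_rgb555(pixel):
    """Recover the (R, G, B) components from a packed pixel."""
    return (pixel >> 10) & 0x1F, (pixel >> 5) & 0x1F, pixel & 0x1F

p = pack_rgb555(20, 7, 3)
print(p, unpack_rgb555(p))   # round-trips back to (20, 7, 3)
```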
We sought to obtain chromatic signal information on the following retinal features/lesions: (1)
optic disc, (2) macula, (3) vessels (arteries and veins), (4) hemorrhages, (5) hard exudates, (6) cotton-wool
spots, (7) drusen, (8) laser photocoagulation scars, (9) fibrous proliferation, and (10) background
retina. The digitized data for sample areas of each of the above features were plotted in the UCS color
coordinate system (u,v,V) to see if these features could be distinguished. We found that, except for
retinal images from normal young adults or persons with early stage nonproliferative retinopathy, the
background retina cannot be separated from blood vessels by the u-coordinate. In fact, they cannot be
separated from each other by any single or combined coordinates of the UCS color coordinate system.
Furthermore, because the color of the background retina may differ substantially from one fundus image
to another, none of the essential features or lesions could be separated from the background retina by a
single or multiple color threshold. In addition, we found that there is a considerable amount of overlap
among the features/lesions in the u-v and u-V spaces; none of the essential retinal features and lesions
could be distinguished from one another by an ordinary global thresholding method.
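For reference, the mapping from an RGB pixel into the (u,v,V) coordinates used above can be sketched as follows. The RGB-to-XYZ matrix is the common sRGB/D65 one, which is an assumption here (the camera calibration is not given), and luminance Y stands in for the intensity V.

```python
# Hedged sketch: map an RGB triple (components in [0, 1]) to CIE 1960
# UCS chromaticity (u, v) plus an intensity value V.  The RGB->XYZ
# matrix is the standard sRGB/D65 matrix -- an assumption, since the
# actual camera calibration is not specified.

RGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def rgb_to_ucs(rgb):
    x, y, z = (sum(m * c for m, c in zip(row, rgb)) for row in RGB_TO_XYZ)
    denom = x + 15.0 * y + 3.0 * z
    if denom == 0.0:                 # black pixel: chromaticity undefined
        return 0.0, 0.0, 0.0
    u = 4.0 * x / denom              # CIE 1960 UCS chromaticity coordinates
    v = 6.0 * y / denom
    return u, v, y                   # luminance Y serves as the intensity V

# A reddish background-retina pixel vs. a bright, near-white disc pixel:
print(rgb_to_ucs((0.8, 0.3, 0.2)))
print(rgb_to_ucs((0.95, 0.9, 0.85)))
```

Plotting many such (u, v, V) points for labeled sample areas reproduces, in spirit, the overlap analysis described above.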
We also found that, among the three components u, v, and V, the V component can best separate
the retinal features and lesions. The results of using the V intensity values can be enhanced if an
appropriate color filter is employed. Since the major color component of the fundus image is red, a
green color filter produces an image with the highest contrast in intensity, which results in sharper
edges than those of the original image. By doing so, we found that not only could the vessels be
separated from the other features/lesions much more easily than in the unfiltered original image, but
the arteries and veins in the entire image could also be distinguished.
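The effect of the green filter can be illustrated numerically. The pixel values below are hypothetical, chosen only to show why, in a predominantly red image, the green channel carries most of the vessel/background contrast:

```python
# Hypothetical (R, G, B) values for a reddish background-retina pixel
# and a blood-vessel pixel.  Hemoglobin absorbs green light strongly,
# so the green component drops far more over a vessel than the red one.
background = (200, 120, 60)
vessel     = (150,  40, 30)

red_contrast   = abs(background[0] - vessel[0])
green_contrast = abs(background[1] - vessel[1])

print(red_contrast, green_contrast)   # the green channel separates better
```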
From these results, we conclude that the chromatic characteristics of the same feature/lesion
differ significantly both within a given image and between images, and that color analysis alone is not
sufficient to distinguish among retinal features/lesions. However, the usefulness of color filters was
encouraging. More refined procedures were attempted using color filters together with information about
the geometry of the features/lesions (shape, size, orientation, etc.). In other words, the principles of pattern recognition should be
applied.
2. Pattern Recognition
Based on the optical and spatial properties of the features/lesions to be recognized, we
constructed a filter with four different templates to seek edges in all four directions. This multi-template
matched filter approach combines edge detection and shape detection into a single computational step.
In the actual implementation of the algorithm, twelve or possibly more templates should be considered,
depending on how refined the lines and shapes are desired to be. An edge is detected when the convolution
of the image data with the template whose direction is the same as the edge line exceeds a certain threshold
value. A round spot of high (low) intensity is detected when the convolutions of the image data with all
of the templates are greater (less) than a certain threshold value. For this reason, the multi-template
matched filter technique can be designed to extract any line edges, for example, edges of the optic disc
and vessels, and any high or low intensity spots, such as hemorrhages and microaneurysms (HMAs), and
exudates. However, this method cannot distinguish between hard and soft exudates. An example
comparing the detection of HMAs and exudates in a fundus image by an ophthalmologist and by our
pattern recognition method is shown in Figure 1. It should be noted that while detecting HMAs and
exudates by the matched-filter method, edges of vessels and the optic disc were also detected. Since our
interest is in the detection of HMAs and exudates, edges of other features/lesions were eliminated in this
display. We see that without using a green filter, only five out of eight HMAs and three out of seven
exudates were detected. The spots illuminated in Figure 1(b-2) and (c-2) indicate the lesions detected.
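The four-direction scheme can be sketched as follows. The 3x3 template values and the threshold are illustrative only; the paper does not give its actual kernels, and only the bright-spot case of the spot test is shown:

```python
# Sketch of a four-direction multi-template matched filter.  Each 3x3
# template responds to edges in one orientation; a pixel is an edge
# point if any single response exceeds the threshold, and a bright
# round spot if *all* four responses do.  (Template values are
# illustrative, not the paper's.)

TEMPLATES = {
    "horizontal": [[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]],
    "vertical":   [[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]],
    "diag_45":    [[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]],
    "diag_135":   [[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]],
}

def responses(image, r, c):
    """Correlate each template with the 3x3 patch centred at (r, c)."""
    out = {}
    for name, kernel in TEMPLATES.items():
        out[name] = sum(kernel[i][j] * image[r - 1 + i][c - 1 + j]
                        for i in range(3) for j in range(3))
    return out

def classify(image, r, c, threshold):
    resp = list(responses(image, r, c).values())
    if all(v > threshold for v in resp):
        return "spot"          # high response in every direction
    if any(v > threshold for v in resp):
        return "edge"          # high response along one direction only
    return "background"

# Tiny example: an isolated bright pixel (microaneurysm-like spot) at
# (2, 2) and a bright vertical line (vessel-like) in column 5.
img = [
    [0, 0, 0, 0, 0, 9, 0, 0, 0],
    [0, 0, 0, 0, 0, 9, 0, 0, 0],
    [0, 0, 8, 0, 0, 9, 0, 0, 0],
    [0, 0, 0, 0, 0, 9, 0, 0, 0],
    [0, 0, 0, 0, 0, 9, 0, 0, 0],
]
print(classify(img, 2, 2, 4))   # the isolated bright pixel
print(classify(img, 2, 5, 4))   # a point on the vertical line
print(classify(img, 2, 7, 4))   # empty background
```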
Applying a green filter to the feature extraction of the transparencies, we obtained the
feature/lesion detection results shown in Figure 2. The figures on the left-hand side are results obtained
from a fundus image taken without a green filter, whereas those on the right-hand side show the results
obtained with a green filter. Figures 2(b-2) and (c-3) show that (1) the HMAs and exudates which
were previously missed are now detected, and (2) vessel edges which were broken are now connected.
In fact, all eight HMAs were correctly detected. A ninth HMA, located on the lower left side of
the optic disc, which was missed by the human eye, was detected by this method. A comparison of the
vessel and optic disc edges taken without and with a green filter is shown in Figure 3. It is seen that
edges in the image taken with the filter are much clearer and display less disrupted lines.
In summary, from the preliminary study we learned that:
(1) a green filter enhances the detectability;
(2) the matched filter pattern recognition method provides good accuracy in the detection of
HMAs and exudates; and
(3) based solely on the shape and size of the lesion, this method cannot distinguish between
hard and soft exudates.
3. Knowledge-Based Expert System
The above two studies indicated that, due to various possible sources of noise and artifacts and
the inherent close resemblance of some retinal features and lesions, chromatic and geometric analyses
are not sufficient to distinguish them without the use of biological/pathological information. We must
incorporate the available and necessary biological/pathological knowledge normally exercised by the
ophthalmologist in every step of the decision making process. With this in mind, we initiated a
preliminary study to detect/locate the optic disc using a knowledge-based color analysis/pattern
recognition (CAPR) method. Because it is required as an essential element by every existing grading
criterion, the optic disc was selected as our first target. The transparencies we have are single-field
fundus photographs, each including both the optic disc and the macula, which are roughly symmetrical on
the two sides of the center line. Once we can locate the optic disc, the tasks of locating the macula and
the landmark vessel reference points become easier.
In our studies, we observed the following two distinct features of the optic disc with respect to
its color and intensity: (1) the disc contains an area of distinctly high intensity (very bright and white
in color); (2) the average intensity of the optic disc is usually higher than that of its surrounding area,
although surrounding yellow-white atrophy may confound the picture. The shape of the disc is always
round, and its size varies slightly from individual to individual. Using these facts, we developed the
knowledge-based CAPR algorithm described below for detecting and locating the optic disc.
1. Define the permissible regions of the disc. If a retinal image including the optic disc is
divided into four regions by two mutually perpendicular diagonal lines, the two regions
on the right and left are defined as the permissible regions of the disc.
2. Find the intensity histogram of the permissible regions of the contrast-enhanced intensity
image taken with a green filter.
3. Find the areas in these regions whose intensity corresponds to the first high-intensity peak
in the histogram and whose sizes are less than the size of the disc.
4. Find the geometric center of each of these areas.
5. For each center, create a square area with a size of 2Do, where Do denotes the diameter
of the disc (see Figure 3(b)). Whenever the square areas exceed the boundaries of the
disc permissible regions, use the disc permissible region boundaries as their boundaries.
These newly defined areas are referred to as disc-search areas.
6. Apply the multi-template matched filter to each disc-search area to detect all the edge
points in the region.
7. Apply the Hough transform to the edge points to detect circles with their sizes close to
that of the disc.
8. If such circles are found, proceed with the following verification process. Compute the
average intensity of the pixels inside the circle and the average intensities of the
surrounding circular areas of the same size. If the former is greater than the latter, the
circle found is confirmed to be the optic disc.
9. If, in step 8, none of the circles is verified as the disc, go back to step 2, find
additional disc-search areas from the second high-intensity peak in the histogram, and
repeat steps 3-7, and so on. When all the areas of the disc permissible regions have been
searched and yet no disc is found, we conclude that the retinal image under study
does not contain the disc or is out of focus.
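Steps 6-8 above can be sketched with a fixed-radius Hough transform over detected edge points, followed by the intensity-based verification. The synthetic image, grid size, radius, and angle count below are illustrative choices, not values from the paper:

```python
import math
from collections import Counter

# Sketch of steps 6-8: each edge point votes for every candidate circle
# centre at distance R from it; the best-voted centre is then verified
# by comparing average intensity inside the circle with its surround.

def hough_circle_centers(edge_points, radius, n_angles=64):
    votes = Counter()
    for (r, c) in edge_points:
        for k in range(n_angles):
            a = 2.0 * math.pi * k / n_angles
            votes[(round(r - radius * math.sin(a)),
                   round(c - radius * math.cos(a)))] += 1
    return votes

def verify_disc(image, center, radius):
    """Accept the circle only if it is brighter inside than around it."""
    inside, around = [], []
    for r, row in enumerate(image):
        for c, val in enumerate(row):
            d = math.hypot(r - center[0], c - center[1])
            if d <= radius:
                inside.append(val)
            elif d <= 2 * radius:
                around.append(val)
    return sum(inside) / len(inside) > sum(around) / len(around)

# Synthetic 15x15 image with a bright disc of radius 3 centred at (7, 7).
R = 3
img = [[9 if math.hypot(r - 7, c - 7) <= R else 1 for c in range(15)]
       for r in range(15)]
edges = [(r, c) for r in range(15) for c in range(15)
         if abs(math.hypot(r - 7, c - 7) - R) < 0.5]

votes = hough_circle_centers(edges, R)
best = max(votes, key=votes.get)
print(best, verify_disc(img, best, R))
```

In the full algorithm, the multi-template matched filter of Section 2 would supply the edge points, and a failed verification would send the search back to the next histogram peak.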
A preliminary version of the algorithm has been implemented. The result is shown in
Figure 4. Figures 1-4 will be presented at the Poster Presentation.