
Supplementary material and methods:

Table 1: Comparison of the specifications of the OCT devices. Stratus OCT (Carl Zeiss Meditec), RTVue-100 (Optovue) and Spectralis OCT (Heidelberg Engineering) were included in this study and compared. TD: Time Domain; SD: Spectral Domain.

Stratus OCT (Carl Zeiss Meditec)
Acquisition method: TD-OCT
Scan beam wavelength: 820 nm
Detector: Single detector
Axial resolution: 10 µm
Transversal resolution: 20 µm
Maximum A-scans per B-scan: 512
Acquisition time per single line scan: 1.28 s
Scan depth: 2 mm
Scanning speed (A-scans per second): 400

RTVue-100 (Optovue)
Acquisition method: SD-OCT
Scan beam wavelength: 840 nm
Detector: Spectrometer
Axial resolution: 5 µm
Transversal resolution: 10-20 µm
Maximum A-scans per B-scan: 4096
Acquisition time per single line scan: 0.038 s
Scan depth: 2-2.3 mm
Scanning speed (A-scans per second): 26,000

Spectralis OCT (Heidelberg Engineering)
Acquisition method: SD-OCT
Scan beam wavelength: 840 nm
Detector: Spectrometer
Axial resolution: 4 µm
Transversal resolution: 14 µm
Maximum A-scans per B-scan: 1536
Acquisition time per single line scan: 0.025 s
Scan depth: 1.9 mm
Scanning speed (A-scans per second): 40,000

Table 2: Scan settings for the OCT devices tested. EMM5: Enhanced Macula Map 5; ART: Automatic Real Time (Eye Tracking).

Stratus OCT (Carl Zeiss Meditec)
Scan protocol: Radial Lines
Covered area: 6 x 6 mm
A-scans per B-scan: 512
B-scans per scan: 6
Interval: 30°
Scan averaging: -
Individual settings: -

RTVue-100 (Optovue)
Scan protocol: EMM5
Covered area: 5 x 5 mm
A-scans per B-scan: 668
B-scans per scan: 11 horizontal (outer region)
Interval: 0.5 mm
Scan averaging: -
Individual settings: -

Spectralis OCT (Heidelberg Engineering), Volume Scan
Scan protocol: Volume Scan
Covered area: 6 x 6 mm
A-scans per B-scan: 1024
B-scans per scan: 25 horizontal
Interval: 0.25 mm
Scan averaging: 10
Individual settings: ART, Scan averaging

Spectralis OCT (Heidelberg Engineering), Radial Lines
Scan protocol: Radial Lines
Covered area: 6 x 6 mm
A-scans per B-scan: 1024
B-scans per scan: 6
Interval: 30°
Scan averaging: 10
Individual settings: ART, Scan averaging

Solving the layer segmentation problem using graph-theoretic optimization

Once the cost function has been calculated at the beginning of each segmentation step, the Dijkstra algorithm solves the graph-theoretic optimization problem and extracts the layer boundaries. To solve the single-source shortest path problem, the Dijkstra algorithm needs a single source, the so-called seed point. For the seed point definition we modified the approach described by Chui and colleagues [47]. Two additional columns are added to the OCT B-scan image, one on the right side and one on the left side. These additional columns are assigned a cost value of zero. We define the seed point as a vertex in the top left corner of the image. After the seed-point-specific cost graph has been calculated, the extraction of the layer boundary starts at the pixel in the bottom right corner.
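This construction can be made concrete with a short Python sketch. It is a minimal illustration only, assuming a precomputed per-pixel cost image (e.g. a gradient-based cost): the function name, the neighbourhood choice and the use of node costs instead of edge weights are assumptions for readability, not the exact implementation.

```python
import heapq
import numpy as np

def extract_boundary(cost):
    """Extract one layer boundary from a per-pixel cost image via Dijkstra.

    Two zero-cost columns are appended on the left and right, the seed point
    is the top left corner and the path is traced back from the bottom right
    corner, as described above. Returns one row index per image column.
    """
    h, w = cost.shape
    padded = np.hstack([np.zeros((h, 1)), cost, np.zeros((h, 1))])
    H, W = padded.shape
    seed, target = (0, 0), (H - 1, W - 1)

    dist = np.full((H, W), np.inf)
    prev = {}                                   # back-pointers for the path
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    moves = [(-1, 1), (0, 1), (1, 1), (-1, 0), (1, 0)]  # never step left
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue
        if (r, c) == target:
            break
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                nd = d + padded[nr, nc]         # node-weighted path cost
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))

    # walk back from the bottom right corner; drop the two padded columns
    boundary = np.zeros(w, dtype=int)
    node = target
    while node != seed:
        r, c = node
        if 1 <= c <= w:
            boundary[c - 1] = r
        node = prev[node]
    return boundary
```

Because the two padded columns cost nothing, the path is free to enter and leave the image at whichever rows suit the boundary, which is the purpose of the zero-cost side columns described above.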

The layer boundary extraction is an iterative process. In each iteration, the layer boundary with the highest reflectivity is extracted and the cost function for the next extraction step is prepared.

First, the upper and lower contours of the retina are the targets for segmentation. However, the most visible and therefore easiest boundaries to find are those between the Vitreous and the NFL and between the IS and the ISe (Fig. 1A). Thus, the first iteration step is to extract these two boundaries. The algorithm then verifies whether the two boundaries cross over. If the boundaries intersect, a separator line between the two layers is calculated. Below the separator line, the IS/ISe boundary is extracted; above the separator line, the Vitreous/NFL boundary is extracted. If the boundaries do not cross, the first segmentation step is successful.
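A hedged Python sketch of the crossing check follows; the exact construction of the separator line is not specified here, so the per-column mean of the two paths is used purely as an illustrative choice.

```python
import numpy as np

def separator_if_crossing(upper, lower):
    """Return None if the two extracted boundaries do not cross; otherwise
    return a separator line (one row index per column) lying between them.
    `upper` and `lower` are the per-column row indices of the two paths."""
    upper, lower = np.asarray(upper), np.asarray(lower)
    if np.all(upper < lower):
        return None                    # no intersection: step was successful
    # illustrative separator: the per-column mean of the two paths; the
    # Vitreous/NFL boundary is then re-extracted above this line and the
    # IS/ISe boundary below it
    return (upper + lower) // 2
```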

Next, the algorithm generates a sub-image from the IS/ISe boundary to the bottom of the image to reduce the calculation time for the cost function and the Dijkstra algorithm. In this sub-image, the RPE/Choroid boundary is defined. As a result of the first two steps, the upper boundary of the retina, the Vitreous/NFL border, and the lower boundary of the retina, the RPE/Choroid border, are obtained (Fig. 1B).
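One way to form such a sub-image is to simply crop the cost image below the highest row reached by the IS/ISe boundary; the helper below is an assumed sketch, not the exact implementation.

```python
import numpy as np

def subimage_below(cost, boundary, margin=0):
    """Crop the cost image to the region below an already-extracted boundary
    (here: below IS/ISe) so that the next cost-function calculation and
    Dijkstra run, e.g. for RPE/Choroid, only process the lower part of the
    B-scan. Returns the cropped image and the row offset to map results back."""
    top = max(int(np.min(boundary)) - margin, 0)
    return cost[top:, :], top
```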

After the segmentation of the entire scan, the segmentation of the highly reflective layer boundaries NFL/GCL, IS/ISe, ISe/OS and OS/RPE (Fig. 1C-E) follows. The algorithm starts with the extraction of the IS/ISe and OS/RPE boundaries (Fig. 1C). On the basis of the already defined Vitreous/NFL and RPE/Choroid contours, a new sub-image is calculated. The pixels along the boundaries that have already been identified are assigned very high cost values to prevent their re-identification by the algorithm. During the extraction of the ISe, it is possible that the shortest path deviates from the IS/ISe boundary and jumps to the OS/RPE boundary. This happens when the OS/RPE boundary has a higher contrast in some areas of the OCT image than the IS/ISe boundary. In this situation the crossing technique is applied to ensure robust segmentation of the IS/ISe and OS/RPE boundaries (Fig. 1D). On the basis of the segmented IS/ISe and OS/RPE boundaries, a new sub-image is calculated, in which the ISe/OS boundary is extracted (Fig. 1E). The next highly reflective boundary, the boundary between the NFL and the GCL, has a unique characteristic: nasal to the macula it is highly reflective, while temporal to the macula it is only weakly reflective. For robust extraction of this boundary, we restrict the detection area temporally and centrally (fovea) to 20 µm; nasally, no such restriction is applied. This restriction is necessary to prevent the algorithm from jumping to the IPL/INL boundary. Finally, the NFL/GCL boundary is extracted (Fig. 1F).
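Two small helpers sketch the masking of already-identified boundaries and the restricted detection area used for NFL/GCL; the helper names, the penalty value and the conversion of the 20 µm limit into pixels are assumptions.

```python
import numpy as np

def mask_found_boundaries(cost, boundaries, penalty=1e6):
    """Assign very high cost values to pixels along already-identified
    boundaries so the shortest-path search cannot re-identify them."""
    masked = cost.copy()
    for boundary in boundaries:
        for col, row in enumerate(boundary):
            masked[row, col] = penalty
    return masked

def restrict_band_below(cost, reference, limit_px, cols, penalty=1e6):
    """Limit the detection area in the given columns (e.g. the temporal and
    central/foveal columns for NFL/GCL) to a band of `limit_px` pixels below
    a reference boundary; columns not listed (nasal) stay unrestricted."""
    restricted = cost.copy()
    h = cost.shape[0]
    for col in cols:
        restricted[min(reference[col] + limit_px, h):, col] = penalty
    return restricted
```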

Next, the weakly reflective boundaries GCL/IPL, IPL/INL, INL/OPL, OPL/ONL, ONL/ELM and ELM/IS are segmented. The algorithm starts with the segmentation of the two boundaries IPL/INL and OPL/ONL. The range for the calculation of the seed-point-specific graph is based on the boundaries NFL/GCL and IS/ISe. In OCT B-scans through the fovea, the inner retinal layer boundaries run together in the foveal region. Here, we allow the boundaries to run together by manipulating the cost function before the extraction (Fig. 1G).
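The cost manipulation that lets boundaries merge in the foveal region could, for instance, lower the cost along an already-extracted reference boundary within the foveal columns; the column range, the reference boundary and the reduced cost value below are assumptions, not the exact implementation.

```python
import numpy as np

def allow_merge_in_fovea(cost, merge_boundary, fovea_cols, low_cost=0.0):
    """Within the foveal columns, lower the cost along an already-extracted
    boundary so that the next extracted boundary is allowed to run together
    with it, as described above. `fovea_cols` is a (start, stop) column range."""
    adjusted = cost.copy()
    start, stop = fovea_cols
    for col in range(start, stop):
        adjusted[merge_boundary[col], col] = low_cost
    return adjusted
```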

In subsequent steps, the GCL/IPL boundary is extracted between the NFL/GCL and IPL/INL boundaries (Fig. 1H), and the INL/OPL boundary between the IPL/INL and OPL/ONL boundaries (Fig. 1I). In the last two steps, the algorithm extracts the ONL/ELM boundary between the OPL/ONL and IS/ISe boundaries, and the ELM/IS boundary between the ONL/ELM and IS/ISe boundaries.
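Each of these extractions is confined to the band between two already-known boundaries; the sketch below shows one way to express that restriction (helper name and penalty value are assumptions).

```python
import numpy as np

def band_between(cost, upper, lower, penalty=1e6):
    """Keep only the band between two previously extracted boundaries (e.g.
    GCL/IPL is searched between NFL/GCL and IPL/INL); pixels outside the
    band are made prohibitively expensive for the shortest-path search."""
    banded = cost.copy()
    for col, (u, l) in enumerate(zip(upper, lower)):
        banded[:u + 1, col] = penalty   # block rows above the upper boundary
        banded[l:, col] = penalty       # block rows at and below the lower boundary
    return banded
```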
