

A Neural Network Approach for Classifying TACS

Mike Smith

ECE 539 – Final Report

Introduction

The goal of this project is to create an artificial neural network to recognize certain collagen structures known as TACS, or tumor-associated collagen signatures. It has been shown that one of the greatest risk factors for developing breast cancer is dense breast tissue, and this dense tissue has also been associated with an increase in collagen. By analyzing the buildup of collagen surrounding a tumor in mammary tissue, it can be observed that the collagen structure changes in both density and alignment. These changes in collagen have been classified into signatures which can be used to determine the current stage of the tumor. It has also been shown that the collagen signatures begin to form even before tumor cells appear. This means that if we can detect the early stages of these collagen signatures, we can determine whether a tumor will form in that area, which can lead to early detection of cancer and an increased chance of successful recovery.

I gathered data taken from scans of mouse mammary glands that were known to have terminal cancer. I then labeled the collagen signatures associated with the tumors and attempted to classify them in several ways. First, a naïve approach using simple threshold classification was taken in order to create a baseline against which the results of a more sophisticated technique could be compared. Next, I attempted to classify the signatures using an artificial neural network. I varied the structure of the neural network in terms of the number of hidden layers, the number of neurons in the hidden layers, the momentum, and the learning rate, and then compared the results in terms of classification rate.

The classification rate of the neural networks was not as high as I would have liked. Given more time to try different techniques I think these rates could improve, but unfortunately that was outside the time frame for this semester.

TACS

Tumor associated collagen signatures (TACS) have been observed to form around the area of tumors. As the tumor progresses through its stages, the collagen signatures also change in both density and alignment. These changes have been classified into three categories: TACS-1, TACS-2, and TACS-3, as shown in Figure 1.

Figure 1: TACS stages. Taken from "Characterization of Collagen Changes"

TACS-1 is classified as a local increase in collagen density surrounding the tumor. In this stage the collagen has not yet aligned itself to the tumor, but the amount of collagen is increased.

TACS-2 is classified by the alignment of collagen to the tumor itself at an angle of 0 degrees, that is, parallel to the tumor boundary. This alignment is most likely due to the tumor growing and pulling the surrounding collagen taut, causing it to align to the tumor.

TACS-3 is also classified by an alignment of collagen to the tumor, but this time at an angle of 90 degrees rather than 0. At this stage cells begin to travel outward along the newly aligned collagen fibers.

For this project in particular I did not try to distinguish between the different types of TACS; I decided to simply classify the collagen surrounding a tumor.

Gathering of Data

To gather the images used for this project, a technique known as multi-photon microscopy was used. Multi-photon microscopy uses a laser with a wavelength that is twice as long, and therefore carries half the energy per photon, compared to traditional laser scanning microscopy. Using lower-energy photons makes the laser less harmful to the sample and also allows for deeper imaging. Multi-photon microscopy relies on two-photon events occurring to deliver the same energy that a shorter-wavelength, higher-energy laser would provide. Furthermore, the probability of a two-photon event occurring is only realistic at the focal point, which means excitation happens only at the point being focused on. Collagen has the intrinsic property of exhibiting an effect known as second-harmonic generation: when a laser is shone on it, it emits new photons with twice the energy of the incoming photons. This makes collagen ideal for imaging with a multi-photon laser scanning microscope.

The imaged data is from mouse mammary tissue that was known to be cancerous. The images were saved as 1024x1024 grayscale TIFFs using software known as WiscScan. An example image is shown in Figure 2.

Figure 2: An example slide

Methods

To create training and testing data, each image was first segmented into discrete 32x32-pixel areas, each containing a total of 1024 pixels. Each area was used as a feature vector and was labeled 1 if it contained a collagen structure we are trying to classify, or 0 if it was not part of something we wanted to classify. Since the images did not come pre-classified, I classified them by hand, giving the x and y coordinates of an area to a MATLAB file called creategrid.m, which then analyzed the image and wrote each 1024-element input and its classification to a file. Similar files were made that took a summary of each section's pixel values and used that as a single feature instead of 1024 separate features, in order to make computation quicker.
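A minimal sketch of this segmentation and labeling step is shown below. The file names, the hand-entered coordinate list, and the output layout (1024 pixel values followed by the class label on each row) are my assumptions; the original creategrid.m is not reproduced in this report.

    % Sketch of the segmentation/labeling step (hypothetical names and layout).
    img = double(imread('slide1.tif'));     % 1024x1024 grayscale image
    blockSize = 32;                         % each section is 32x32 = 1024 pixels
    nBlocks   = size(img, 1) / blockSize;   % 32 sections per side

    % Hand-labeled (x, y) section coordinates that contain aligned collagen
    positives = [5 12; 6 12; 6 13];         % example values only

    data = [];                              % one row per section: 1024 pixels + label
    for by = 1:nBlocks
        for bx = 1:nBlocks
            rows  = (by-1)*blockSize + (1:blockSize);
            cols  = (bx-1)*blockSize + (1:blockSize);
            block = img(rows, cols);
            label = double(ismember([bx by], positives, 'rows'));
            data  = [data; block(:)' label];
        end
    end
    dlmwrite('Test1.txt', data, ' ');       % 1024 features + class per line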

Three images were chosen for classification and were run through creategrid.m. Their results are shown in the next three figures.

Figure 3: The first image, corresponding to Test1

Figure 4: The second image, corresponding to Test2

Figure 5: The third image, corresponding to Test3

In each figure the image shown on the right is the original, and the image shown on the left shows each individual section. The red sections correspond to class 1, while the non-red sections are class 0. Collagen was labeled class 1 if it surrounded, and was aligned to, the tumor.

To obtain a baseline performance I first developed a MATLAB file called threshold.m. To classify each section, threshold.m took the average of each feature vector and found the minimum and maximum of these averages among the sections labeled class 1. threshold.m then re-analyzed the file, this time classifying each feature vector as 0 if its average fell outside the minimum-to-maximum range found earlier, and 1 if it fell inside that range. This was the simplest method I could think of; it was not meant to be an exercise in neural networks but rather a means of getting a classification rate that a neural network could beat. For each image, a training classification rate was found, as well as a testing classification rate for the other images.
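A sketch of this baseline is given below (variable names and the file layout are my assumptions; threshold.m itself is not listed in this report). The idea is simply to bracket the mean intensities of the class-1 sections and label anything inside that bracket as class 1:

    % Sketch of the threshold baseline (assumed layout: 1024 pixel values
    % followed by the class label on each row).
    trainData = dlmread('Test1.txt');
    testData  = dlmread('Test2.txt');

    trainMeans  = mean(trainData(:, 1:end-1), 2);   % average intensity per section
    trainLabels = trainData(:, end);

    % Range of mean intensities seen for class-1 (aligned collagen) sections
    lo = min(trainMeans(trainLabels == 1));
    hi = max(trainMeans(trainLabels == 1));

    testMeans = mean(testData(:, 1:end-1), 2);
    predicted = double(testMeans >= lo & testMeans <= hi);

    rate = 100 * mean(predicted == testData(:, end));
    fprintf('Classification rate: %.4f%%\n', rate);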

Next I created a series of tests to try to classify the data with neural networks of different structures. This time, instead of using the average intensity values, each of the 1024 pixels contained in one of the 32x32 sections was used as an input to the neural network; the classifications were kept the same. For each of the three images, I first trained and tested using all of the 1024 feature vectors. I also tried training on subsets of these feature vectors, since there were often many more feature vectors of class 0 than of class 1, as shown in the sketch below. Each neural network was trained with either the full set of training data or a subset of it and then tested against the full data of each individual image.
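The subsetting can be done with a simple random draw from the class-0 sections; a sketch follows (the file names are assumptions, and the layout matches the earlier sketches):

    % Sketch of building a class-balanced training subset.
    full = dlmread('Test1.txt');
    pos  = full(full(:, end) == 1, :);              % all class-1 sections
    neg  = full(full(:, end) == 0, :);              % all class-0 sections

    % Randomly keep only as many class-0 sections as there are class-1 sections
    idx    = randperm(size(neg, 1), size(pos, 1));
    subset = [pos; neg(idx, :)];
    subset = subset(randperm(size(subset, 1)), :);  % shuffle the rows

    dlmwrite('Train1_balanced.txt', subset, ' ');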

Several different structures of neural nets were tested. Each structure had the following parameters in common: an input dimension of 1024 and an output dimension of 1; inputs scaled between -5 and 5; outputs scaled between 0.2 and 0.8; the same number of neurons in every hidden layer; a hyperbolic tangent activation function for each hidden-layer perceptron and a sigmoidal activation function for the output perceptron; and training for 100 epochs using all of the training data at each epoch, with convergence checked after 10 epochs.

The neural network structure was varied in a number of ways. Each network was run with either 1 or 3 hidden layers, with 10, 100, or 500 neurons in each hidden layer, with the learning rate alpha set to either 0.05 or 0.1, and with the momentum variable set to either 0 or 0.8.
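The report does not state which implementation was used; the sketch below shows one way a single configuration from this sweep could be set up using MATLAB's Neural Network Toolbox. Treat it as illustrative only: the toolbox, the file name, and the exact scaling and thresholding choices are assumptions.

    % Sketch of one configuration from the parameter sweep (assumes the
    % Neural Network Toolbox; the original implementation is not given here).
    data = dlmread('Train1_balanced.txt');       % assumed file name
    Xraw = data(:, 1:end-1)';                    % 1024 pixel inputs per column
    Traw = data(:, end)';                        % 0/1 class labels

    X = mapminmax(Xraw, -5, 5);                  % inputs scaled to [-5, 5]
    T = mapminmax(Traw, 0.2, 0.8);               % targets scaled to [0.2, 0.8]

    nHiddenLayers = 1;                           % swept over 1 and 3
    nNeurons      = 100;                         % swept over 10, 100, and 500
    net = feedforwardnet(repmat(nNeurons, 1, nHiddenLayers), 'traingdm');

    net.trainParam.lr     = 0.05;                % alpha, swept over 0.05 and 0.1
    net.trainParam.mc     = 0.8;                 % momentum, swept over 0 and 0.8
    net.trainParam.epochs = 100;
    net.divideFcn         = 'dividetrain';       % use all training data each epoch

    for k = 1:nHiddenLayers                      % tanh hidden layers
        net.layers{k}.transferFcn = 'tansig';
    end
    net.layers{end}.transferFcn = 'logsig';      % sigmoidal output

    net.inputs{1}.processFcns    = {};           % data already scaled above
    net.outputs{end}.processFcns = {};

    net  = train(net, X, T);
    pred = sim(net, X) > 0.5;                    % threshold the output at 0.5
    fprintf('Training classification rate: %.2f%%\n', 100 * mean(pred == Traw));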

Results

Tables 1-3 show the classification rate of the threshold program.

Filename    Classification Rate (%)
Test1       60.9375
Test2       58.9844
Test3       50.8794

Table 1: Train on Test1

Filename    Classification Rate (%)
Test1       62.0117
Test2       59.9609
Test3       53.1250

Table 2: Train on Test2

Filename    Classification Rate (%)
Test1       62.7930
Test2       59.5703
Test3       55.1758

Table 3: Train on Test3

The results show that training on Test3 gave the best classification rates across all of the images.

Tables 4-9 show the results for the variously structured neural networks. Each neural net was trained on a subset of an image's data and then tested on that image's entire data set.

Hidden Neurons   Alpha   Momentum 0   Momentum 0.8
10               0.05    83.11        85.64
10               0.1     86.04        87.30
100              0.05    11.72        85.94
100              0.1     88.28        11.72
500              0.05    88.28        88.28
500              0.1     11.72        88.28

Table 4: One Hidden Layer, Train with Train1, Test on Test1 (classification rate, %)

Hidden Neurons   Alpha   Momentum 0   Momentum 0.8
10               0.05    88.28        87.70
10               0.1     68.95        86.91
100              0.05    85.84        58.20
100              0.1     75.85        87.40
500              0.05    11.82        11.82
500              0.1     11.72        11.72

Table 5: Three Hidden Layers, Train with Train1, Test on Test1 (classification rate, %)

Hidden Neurons   Alpha   Momentum 0   Momentum 0.8
10               0.05    6.93         91.50
10               0.1     95.51        4.30
100              0.05    4.10         83.50
100              0.1     95.70        95.70
500              0.05    4.20         95.70
500              0.1     95.80        4.20

Table 6: One Hidden Layer, Train with Train2, Test on Test2 (classification rate, %)

Hidden Neurons   Alpha   Momentum 0   Momentum 0.8
10               0.05    93.75        94.43
10               0.1     96.09        7.62
100              0.05    93.07        93.16
100              0.1     95.70        4.10
500              0.05    95.90        95.90
500              0.1     4.30         95.90

Table 7: Three Hidden Layers, Train with Train2, Test on Test2 (classification rate, %)

Hidden Neurons   Alpha   Momentum 0   Momentum 0.8
10               0.05    8.89         9.38
10               0.1     8.89         8.59
100              0.05    9.18         87.60
100              0.1     9.18         90.82
500              0.05    90.82        91.11
500              0.1     9.18         9.18

Table 8: One Hidden Layer, Train with Train3, Test on Test3 (classification rate, %)

Hidden Neurons   Alpha   Momentum 0   Momentum 0.8
10               0.05    8.98         9.08
10               0.1     91.02        88.38
100              0.05    82.81        79.69
100              0.1     91.11        90.82
500              0.05    8.89         9.08
500              0.1     91.02        9.08

Table 9: Three Hidden Layers, Train with Train3, Test on Test3 (classification rate, %)

The results shown in the tables are misleading. Although it seems like many of the neural networks did a good job at classification, this is because of the disproportionate amount of data representing class 0 versus class 1. For Test1 the amount of class 1 data was 11.82% while the amount of class 0 was 88.28%; similarly, for Test2 there was 4% class 1 and 96% class 0, and for Test3 there was 9% class 1 and 91% class 0. No classifier was able to perform better than what is effectively guessing the majority class.
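One way to make this visible is to report the per-class rates alongside the overall rate. A short sketch follows; pred and labels are assumed to be the 0/1 prediction and label vectors from one of the tests above:

    % Sketch: per-class rates expose classifiers that ignore the minority class.
    overall = 100 * mean(pred == labels);
    rate0   = 100 * mean(pred(labels == 0) == 0);   % class-0 sections correct
    rate1   = 100 * mean(pred(labels == 1) == 1);   % class-1 sections correct
    fprintf('Overall %.2f%%, class 0 %.2f%%, class 1 %.2f%%\n', overall, rate0, rate1);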

Problems

Several problems arose while testing on the data. At first I had planned to train the neural net on one image using all of its 32x32 segments and then test it on the other two. This did not work out; even during training the neural net was unable to predict any of the correct classifications and, similar to the results shown above, it would classify every segment as all 0's or all 1's. At first I thought this was a result of having a disproportionate amount of class 0 data versus class 1 data, so I created training files that contained every class 1 feature vector and an equal number of class 0 feature vectors randomly selected from the file. This did not help the classification, though. I kept narrowing down the amount of training data, and I was finally able to start classifying things when only 10 training examples were given. This is far less data to train on, but at least the network occasionally learned something.

The problem is most likely that the data should not be represented the way it is. There is no structure to the data in each segment; it is just a collection of pixels. In one segment the collagen signature might run in one direction, while in another it might run in the completely opposite direction. This means that if the pixel values of each segment are used directly as inputs to the classifier, sometimes inputs [0-359] might contain a collagen signature and sometimes inputs [890-1000] might contain it, and the same inputs might look identical for a random segment we do not want to classify. Also, since I classified each image by hand, I might have incorrectly labeled a section as containing a certain signature when in fact it did not, or the other way around. This could also have contributed to the poor classification results.

Conclusion/Future Work

The results from the neural network did not come out how I would have liked. Simply defining a threshold and classifying based on it outperformed any neural network structure I was able to come up with. This is most likely due to the way the data was represented; in the future I would like to find a better way to store the data to make classification easier. Also, since I am classifying images, each pixel has only 2^8 (256) possible values, whereas the images were obtained through a 12-bit ADC, which gives much better resolution because 4096 possible values are obtainable. In the future, if I were able to create a classifier that worked reasonably well, I would also like to have it classify the different types of collagen signatures to see if it can indicate the stage of a tumor, or perhaps detect signatures that might lead to a tumor but have not yet.

References

• P. P. Provenzano, et al., "Collagen reorganization at the tumor-stromal interface facilitates local invasion." BMC Med. 4, 38 (2006).

• P. P. Provenzano, et al., "Collagen density promotes mammary tumor initiation and progression." BMC Med. (2008).

• www.loci.wisc.edu
