International Journal of Application or Innovation in Engineering & Management (IJAIEM)
Volume 2, Issue 1, January 2013, ISSN 2319-4847

Automatic Segmentation of Brain Tumour from Multiple Images of Brain MRI
Samir Kumar Bandhyopadhyay, Tuhin Utsab Paul
Department of Computer Science and Engineering, University of Calcutta, India

ABSTRACT
Computational applications are gaining significant importance in day-to-day life. In particular, computer-aided systems for biomedical applications have been explored to a great extent. Brain tumour is among the most common causes of fatality in the present health care scenario, and automated diagnosis of brain disorders from MR images is one important medical image analysis methodology. Image segmentation is used to extract the abnormal tumour region of the brain. Information fusion has been a very active research domain in recent years: because of the increasing diversity of image acquisition techniques, the medical image segmentation applications we are interested in often require the fusion of several data sources to obtain information of high quality. In this paper we propose a system of image registration and data fusion adapted to the segmentation of MR images, which provides an efficient and fast way to diagnose brain tumours. The proposed system consists of multiple phases. The first phase registers multiple MR images of the brain taken along adjacent layers of the brain. In the second phase, these registered images are fused to produce a high-quality image for segmentation. Finally, segmentation is performed by an improved K-means algorithm with a dual localization methodology. Application to a brain model shows very promising results on simulated data and a close concordance between the true segmentation and the output of the proposed system.

Keywords: Brain MRI, Segmentation, K-Means, Brain Tumour

1. INTRODUCTION
The brain is the kernel of the body and has a very complex structure. It is a soft, delicate, non-replaceable and spongy mass of tissue, and a stable place for patterns to enter and stabilize among each other. The brain is hidden from direct view by the protective skull, which shields it from injury but also hinders the study of its function in both health and disease. The brain can, however, be affected by problems that change its normal structure and behaviour. A tumour is the name for a neoplasm or a solid lesion formed by an abnormal growth of cells, which appears as a swelling; it is a mass of tissue that grows out of control of the normal forces that regulate growth [30]. A brain tumour is a group of abnormal cells that grows inside or around the brain. Tumours can directly destroy healthy brain cells; they can also damage healthy cells indirectly by crowding other parts of the brain and causing inflammation, brain swelling and pressure within the skull. Tumour is not synonymous with cancer: a tumour can be benign, pre-malignant or malignant, whereas cancer is by definition malignant. Over the last 20 years the overall incidence of cancer, including brain cancer, has increased by more than 10%, as reported in the National Cancer Institute statistics (NCIS).
The National Brain Tumour Foundation (NBTF) for research in the United States estimates that 29,000 people in the U.S. are diagnosed with primary brain tumours each year, and nearly 13,000 people die of them. In children, brain tumours are the cause of one quarter of all cancer deaths. The overall annual incidence of primary brain tumours in the U.S. is 11 to 12 per 100,000 people; for primary malignant brain tumours the rate is 6 to 7 per 100,000. In the UK, over 4,200 people are diagnosed with a brain tumour every year (2007 estimates), and about 200 other types of tumours are diagnosed in the UK each year. About 16 out of every 1,000 cancers diagnosed in the UK are in the brain (1.6%). In India, a total of 80,271 people are affected by various types of tumour (2007 estimates). A brain tumour causes abnormal growth of cells in the brain. The cells of the arteries supplying the brain are tightly bound together, so routine laboratory tests are inadequate to analyse the chemistry of the brain. Computed tomography and magnetic resonance imaging are two imaging modalities that allow doctors and researchers to study the brain non-invasively [31].

Magnetic Resonance Imaging (MRI) is a medical imaging technique used by radiologists to visualize the internal structure of the body. MRI provides rich information about the anatomy of human soft tissues and helps in the diagnosis of brain tumours; images obtained by MRI are used for analysing and studying the behaviour of the brain. Image intensity in MRI depends on four parameters: proton density (PD), which is determined by the relative concentration of water molecules, and the T1, T2 and T2* relaxation times, which reflect different features of the local environment of individual protons.

Segmentation is performed on the images to extract Grey Matter (GM), White Matter (WM), Cerebrospinal Fluid (CSF) and the tumour region. Image segmentation is the process of partitioning an image into homogeneous regions, so that meaningful information about the image can be obtained and further analysis can be performed on the segmented image. Extraction of the brain tumour region requires segmenting the brain MR image into two segments: one segment contains the normal brain cells, consisting of GM, WM and CSF, and the other contains the tumorous cells of the brain. Correct segmentation of MR images is very important because MR images often have low contrast, so these segments can easily overlap with each other. To obtain high-contrast MR images, we therefore propose two additional phases: registration of MR images of adjacent layers, and fusion of the registered images to produce a single high-quality image. This image is then segmented using an advanced K-means algorithm with an extended dual-phase localization. Experimental results on MR image data sets obtained from an online patient image database show promising results.

The paper is organized as follows. Related work is reviewed in Section 2. Details of the proposed method are described in Section 3. Section 4 contains details of the experimentation and results. Conclusions and future prospects of the research are presented in Section 5.
References are cited in the last section.

2. RELATED WORK
Image segmentation is the process of separating image areas based on different attributes such as texture, grey-value range, etc. The main objective in image processing applications is the extraction of important image features from the image data, which eventually leads to automatic computerized description, interpretation and analysis of the scene. Manual segmentation of brain tumours from magnetic resonance images by medical experts is a very time-consuming, tiresome and error-prone task. Several segmentation methods have been proposed by the digital image processing community, many of which are ad hoc [6]. Four of the most common methods are: 1) amplitude thresholding, 2) texture segmentation, 3) template matching, and 4) region-growing segmentation. Segmentation is particularly important for detecting necrotic tissue, edema and tumours. Various segmentation algorithms [16,17,18,19,20,21,22] have been suggested by several authors. Siyal et al. described a new method based on fuzzy C-means for segmentation [7]. Phillips, W.E. et al. described the "Application of fuzzy C-Means Segmentation Technique for tissue Differentiation in MR Images of a hemorrhagic Glioblastoma Multiforme" [15]. S. Murugavalli et al. proposed "A high speed parallel fuzzy c-mean algorithm for brain tumor segmentation" [14] and "An Improved Implementation of Brain Tumor Detection Using Segmentation Based on Neuro Fuzzy Technique" [13]. Vaidyanathan, M. et al. described a "Comparison of Supervised MRI Segmentation methods for Tumor Volume Determination during Therapy" [12]. Jayaram K. Udupa et al. described "Fuzzy Connectedness and Image Segmentation" [11]. Kannan et al. described the "Segmentation of MRI Using New Unsupervised Fuzzy C mean Algorithm" [10]. Ruspini, E. described "Numerical methods for fuzzy clustering" [9]. Dunn, J.C. described "A fuzzy relative of the ISODATA process and its use in detecting compact, well separated clusters" [6], and Bezdek, J.C. described "Cluster validity with fuzzy sets" [8]. Other methods, such as learning vector quantization, watershed, hybrid SOM and graph-cut based approaches, have also been proposed in the literature.

3. PROPOSED WORK
Accurate detection and segmentation of a brain tumour in MRI is a challenging task. Magnetic resonance imaging (MRI) is a medical imaging technique used in radiology to visualize detailed internal structures of different body parts. MRI makes use of the properties of nuclear magnetic resonance (NMR) to image the nuclei of atoms within the human body, and in clinical practice it is used to differentiate pathologic tissue (such as a brain tumour) from normal tissue [23]. One of the benefits of an MRI scan is that it is safe for the patient: it uses strong magnetic fields and non-ionizing emission in the radio-frequency range, unlike CT scans and traditional X-rays, which both use ionizing radiation. MRI produces high-contrast images of regular and irregular tissues, which helps to distinguish overlapping structures. Segmentation of images occupies a significant position in present-day image processing. The aim of segmentation is to simplify and/or change the representation of an image into something more meaningful and easier to analyse. Image segmentation is typically used to locate objects and boundaries in images [24].
In computer vision, segmentation refers to the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). More specifically, image segmentation is the process of assigning a label to each pixel of an image such that pixels with the same label share certain visual characteristics. All of the pixels in a region are similar with respect to some computed property or characteristic, such as colour, intensity or texture, while neighbouring regions differ significantly with respect to the same characteristic(s). The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image [24]. Segmentation becomes even more vital when dealing with medical images, where pre-surgery and post-surgery comparisons are necessary to initiate and speed up the recovery process. Manual segmentation of these abnormal tissues cannot compete with today's high-speed computing machines, which allow us to visually inspect the volume and location of unwanted tissue. Computer-aided detection of abnormal growth of tissue is motivated primarily by the need to achieve the maximum possible accuracy. The advances in the application of information technology have totally changed the world; the obvious motives for introducing computer systems are simplicity, accuracy, ease of use and reliability. In medical imaging, an image is captured, digitized and processed for segmentation and for extracting vital information [25]. There is therefore a need for robust, well-organized computer-based systems that accurately delineate the boundaries of brain tissues while minimizing the amount of user interaction with the system. Additionally, manual segmentation procedures take at least three hours to complete, and conventional methods for computing tumour volume are neither dependable nor robust to error.

The steps of the proposed workflow are as follows.

3.1 Pre-Processing: Image Enhancement
In this paper the image enhancement is done using a 3-by-3 'unsharp' contrast-enhancement filter. An 'unsharp' filter is generated from the negative of the Laplacian filter with parameter 'alpha', where 'alpha' controls the shape of the Laplacian and must lie in the range 0.0 to 1.0; we used the default value of 0.2. With this filter the image is sharpened by subtracting a blurred version of the image from itself. A two-dimensional array H is created as the filter, and each element of the output image is computed in double-precision floating point. Since the input image pixel values are integers, output elements that exceed the integer range are truncated and fractional values are rounded.

3.2 Registering Multiple Images
Here we work with multiple MRI images from different adjacent layers. In the pre-processing stage two operations are performed: first, the multiple images of the brain are registered with respect to one base image, and second, the registered images are fused. A minimal code sketch of the enhancement step is given below, before the registration procedure is described in detail.
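As an illustration of the enhancement step of Section 3.1, the following is a minimal MATLAB sketch (MATLAB with the Image Processing Toolbox, the environment named in Section 4). The file name is a placeholder; the alpha value is the default of 0.2 stated above.

```matlab
% Minimal sketch of the Section 3.1 enhancement step (requires Image Processing Toolbox).
% 'slice1.png' is a placeholder file name for one MRI slice.
I = imread('slice1.png');               % grey-level MRI slice
if size(I, 3) == 3
    I = rgb2gray(I);                    % ensure a single-channel image
end
H = fspecial('unsharp', 0.2);           % 3-by-3 unsharp filter built from the negative Laplacian, alpha = 0.2
enhanced = imfilter(I, H, 'replicate'); % sharpen; integer outputs are rounded and clipped
```

Each adjacent-layer slice is enhanced in this way before being passed to the registration step.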
In medical imaging, registration is used, for example, to compare the patient's image with digital anatomical atlases and for specimen classification. Due to the diversity of images to be registered and the various types of degradation, it is impossible to design a universal method applicable to all registration tasks. Every method should take into account not only the assumed type of geometric deformation between the images but also radiometric deformations and noise corruption, the required registration accuracy and application-dependent data characteristics. Nevertheless, the majority of registration methods consist of the following four steps.

Feature detection. Salient and distinctive objects (closed-boundary regions, edges, contours, line intersections, corners, etc.) are automatically detected by histogram analysis and by moving a 9-by-9 spatial mask over the region of interest. For further processing, these features are represented by point representatives (their centres of gravity in this case), which are called control points (CPs) in the literature. At least three control points are set up for this purpose.

Feature matching. In this step, the correspondence between the features detected in the sensed image and those detected in the reference image is established. Different feature descriptors and similarity measures, along with spatial relationships among the features, are used for this purpose.

Transform model estimation. The type and parameters of the so-called mapping functions, which align the sensed image with the reference image, are estimated. The parameters of the mapping functions are computed by means of the established feature correspondences. In this case a rigid geometric transformation model is used: rotations of up to 90 degrees are allowed, and the image is translated to align it with the base image.

Image resampling and transformation. The sensed image is transformed by means of the mapping functions. Image values at non-integer coordinates are computed by an appropriate interpolation technique; in this case, nearest-neighbour interpolation is used for the zoom in/out transformation.

3.3 Fusing Registered Images
The most important issue in image fusion is how to combine the sensor images. One of the simplest fusion methods simply takes the pixel-by-pixel grey-level average of the source images. The developed system fuses two or more registered images into a single image and supports both grey-level and colour images. In the designed system an Alpha Factor can be varied to control the proportion in which the images are mixed: with Alpha Factor = 0.5 the two images are mixed equally, with Alpha Factor < 0.5 the contribution of the background image is larger, and with Alpha Factor > 0.5 the contribution of the foreground image is larger.

3.4 Skull Stripping
The skull is stripped using a mask generated from the original image. For mask generation, Otsu's method is used to automatically perform histogram-shape-based image thresholding [26,27], i.e. to reduce the grey-level image to a binary image. Otsu's method assumes that the image to be thresholded contains two classes of pixels, i.e. has a bi-modal histogram (e.g. foreground and background), and then calculates the optimum threshold separating those two classes so that their combined spread, i.e. the intra-class variance, is minimal.
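For clarity, this criterion can be written as a short formula (a standard formulation of Otsu's rule, added here for illustration; the paper states it only in words):

```latex
\sigma_w^2(t) = w_0(t)\,\sigma_0^2(t) + w_1(t)\,\sigma_1^2(t),
\qquad t^{*} = \arg\min_{t}\ \sigma_w^2(t)
```

where w0(t) and w1(t) are the fractions of pixels falling below and above the candidate threshold t (the background and foreground weights) and sigma0^2(t), sigma1^2(t) are the corresponding grey-level variances.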
For the image we calculate (weight of foreground × variance of foreground) + (weight of background × variance of background) for each candidate threshold, and the threshold giving the minimum value is accepted. After successful thresholding we obtain a binary image with the skull as the main outline and some small islands of dark regions corresponding to cerebrospinal fluid. To remove those patches of cerebrospinal fluid, the region enclosed by the skull in the binary image is filled using a flood-fill algorithm. Finally, the negative of the binary image is used as the skull mask. This generated skull mask is applied to the original image: the original image is masked by the skull mask, the skull is removed, and a skull-stripped image is produced as output.

3.5 Automatic Segmentation
The segmentation process proposed here is a three-step refining process. The three steps are: K-means based segmentation; local-standard-deviation guided, grid-based coarse-grain localization; and local-standard-deviation guided, grid-based fine-grain localization. The problem of image segmentation is essentially a classical clustering problem, in which the range of image grey values is clustered into some fixed number of grey-value clusters. One of the simplest unsupervised learning algorithms that solves the well-known clustering problem is K-means [28]. The procedure follows a straightforward way to partition a given data set into a certain number of clusters (say k clusters) fixed a priori. The initial centroids must be chosen carefully, because different locations lead to different results; a good strategy is to place them as far away from each other as possible. The main idea is to define k centroids, one for every cluster. The next step is to take every point of the data set and associate it with the nearest centroid. When no point is left, the first step is finished and an initial grouping is done. At this point we re-calculate k new centroids of the clusters resulting from the previous step. After computing these k new centroids, a new binding is made between the data-set points and the nearest new centroid, and the procedure is repeated. As a result of this loop the k centroids change their positions step by step until no further changes occur, i.e. the centroids do not move any more. Finally, the algorithm aims at minimizing an objective function, in this case a squared-error function: the squared distance between each data point and its cluster centre, which indicates how far the n data points lie from their respective cluster centres. The algorithm runs through the following steps:
1. Place K points into the space represented by the objects being clustered; these points represent the initial group centroids.
2. Assign each object to the group with the closest centroid.
3. After all objects have been assigned, recalculate the positions of the K centroids.
4. Repeat Steps 2 and 3 until the centroids no longer move.
This produces a separation of the objects into groups from which the metric to be minimized can be calculated. A minimal code sketch of the skull stripping and of this initial K-means clustering is given after this list.
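The sketch below (MATLAB, Image Processing and Statistics Toolboxes assumed) illustrates the skull-stripping step of Section 3.4 followed by the initial K-means clustering of Section 3.5, applied to the fused image. The variable names, the value k = 4 and the exact way the mask is applied are illustrative assumptions, not values fixed by the paper.

```matlab
% Minimal sketch of Sections 3.4-3.5 (requires Image Processing and Statistics Toolboxes).
% 'fused' is the enhanced, registered and fused grey-level image from Section 3.3.
fused = im2double(fused);

% --- Skull stripping (Section 3.4) ---
level    = graythresh(fused);            % Otsu threshold, normalized to [0,1]
bw       = im2bw(fused, level);          % binary image separating bright and dark pixels
filled   = imfill(bw, 'holes');          % flood-fill the region enclosed by the skull outline
skullmsk = ~filled;                      % negative of the filled binary image, used as the mask
stripped = fused;
stripped(skullmsk) = 0;                  % zero everything outside the filled head region
% NOTE: the paper describes the masking step only verbally; keeping the filled head
% region is one plausible reading, not necessarily the authors' exact rule.

% --- Initial K-means segmentation (Section 3.5, step 1) ---
k = 4;                                   % assumed number of clusters (e.g. background, GM, WM, tumour)
pixels = stripped(:);                    % cluster the grey values
[idx, centres] = kmeans(pixels, k, 'Replicates', 3, 'EmptyAction', 'singleton');
labels = reshape(idx, size(stripped));   % label image refined by the localization steps below
```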
Although it can be shown that the procedure always terminates, the K-means algorithm does not necessarily find the optimal configuration corresponding to the global minimum of the objective function, and it is also significantly sensitive to the initial, arbitrarily chosen cluster centres. So, to obtain an optimal segmentation cluster configuration, we propose a local-standard-deviation guided, grid-based coarse-grain localization after the K-means segmentation. Here we first calculate the local standard deviation of the K-means segmented image. The image is mapped onto a large grid layout, ideally with each grid cell of size 8-by-8, i.e. 64 pixels. The local standard deviation of each pixel is calculated from the values of the 64 pixels of its grid cell. The histogram of each grid cell is then calculated and, based on the local standard deviation and histogram of each cell, the segmentation boundaries in that cell are re-evaluated to generate a more optimal segmentation. Choosing a large grid dimension helps to reduce the effect of noise when segmenting the cell but, conversely, it also ignores finer anatomical details such as twists and turns in the boundary of the tumour or overlapping regions of grey and white matter in the brain. So, to obtain the most optimal segmentation, we process the above segmented image once more using the same concept of local-standard-deviation guided, grid-based localization at a fine grain, with the grid size ideally chosen to be 3-by-3. Choosing a small grid helps in zeroing in on the finer anatomical details of the MRI image that need to be preserved; this helps in restoring the fine details of the tumour boundary and allows finer analysis of the region where grey matter and white matter overlap.

3.6 Post-processing: Extracting WM, GM and Tumour
After the successful completion of the three-step segmentation process described above, the histogram of the segmented image is calculated. The analysis of this histogram shows distinct peaks at three different grey values, corresponding to grey matter, white matter and tumour. Based on this histogram, the corresponding grey matter, white matter and tumour regions are extracted.

3.7 Post-processing: Analysis of Tumour Dimension
A line-scan method is applied to the image of the extracted tumour after the segmentation process, and the maximum breadth and length of the tumour along the x-axis and y-axis are estimated.

4. TEST CASE OUTPUT
We have three types of MRI images: sagittal view images (side view), coronal view images (back view) and axial view images (top view). The quality of the images, i.e. intensity, gradient, brightness, presence of noise, etc., depends entirely on the resolution of the MRI magnet assembly. Our proposed system works on MRI images taken along all three axes. The computational analysis was carried out on a DELL Inspiron system with an Intel® Core™ i3 CPU @ 2.13 GHz and 4.00 GB RAM, running the Windows 7 Home Basic 64-bit operating system. The analysis software used is MATLAB R2010a, 64-bit version. In order to evaluate the performance of our algorithms and methodology, the experiments were conducted on an MRI data set. A minimal sketch of the post-processing steps of Sections 3.6 and 3.7, as they could be expressed in this environment, is given below.
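The following MATLAB sketch illustrates the tumour extraction and line-scan measurement of Sections 3.6 and 3.7. It assumes the variables 'labels' and 'centres' from the earlier K-means sketch, and it assumes that the tumour corresponds to the cluster with the highest mean intensity; the paper instead selects the region from the histogram peaks, so this is only an illustrative stand-in.

```matlab
% Minimal sketch of Sections 3.6-3.7 (post-processing).
% Assumption: 'labels' is the refined label image and 'centres' the K-means cluster
% centres; the tumour is taken as the brightest cluster (illustrative assumption only).
[~, order]  = sort(centres, 'descend');
tumourLabel = order(1);                      % assumed: brightest cluster = tumour
tumourMask  = (labels == tumourLabel);

% Line scan along rows and columns to estimate tumour width and height in pixels.
cols = find(any(tumourMask, 1));             % columns containing tumour pixels
rows = find(any(tumourMask, 2));             % rows containing tumour pixels
widthPx  = max(cols) - min(cols) + 1;        % extent along the x-axis
heightPx = max(rows) - min(rows) + 1;        % extent along the y-axis
fprintf('Tumour dimension: width %d px, height %d px\n', widthPx, heightPx);
```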
Test case MRI images were downloaded from the MRI image data bank of Radiopaedia [29]. The results of a test case in which the MRI is in the coronal view are shown in the following figures:

Figure 1a. Input Image; Figure 1b. Enhanced Image
Figure 2. Image Registered
Figure 3. Image Fused
Figure 4a. Skull Mask; Figure 4b. Skull Removed; Figure 4c. K-Means Segmented
Figure 4d. Local Standard Deviation; Figure 4e. White Matter; Figure 4f. Gray Matter
Figure 5. Tumour

Time taken: 12.36 seconds. Tumour dimension: width 206 pixels, height 182 pixels. Similar results were also obtained for the other views of the MRI images. Test cases were run on nearly one hundred MRI images, and successful tumour detection was achieved in more than 97% of the cases.

5. CONCLUSION
The segmentation technique proposed here is constrained by the fact that the images need to come from adjacent imaging layers, and only a rigid geometric transformation is used; within these constraints, however, the registration gives satisfactory results. The registration could be made more precise by using affine transformations. The image fusion technique gives good results when fusing multiple images, but in certain cases it leads to a loss of intensity; this could be overcome by performing the fusion in the frequency domain. The segmentation method proposed here overcomes the drawbacks of the conventional K-means algorithm and gives very satisfactory results from both a qualitative and a quantitative perspective. The segmentation results given above are on par with current medical standards. Moreover, the success rate of the segmentation on brain MRI images taken from all three views is high and satisfactory. The execution time in the test cases is less than 12 seconds in almost all cases, which is good by current industry standards and far less than the manual process. Based on the above discussion, the approach can therefore be considered a novel segmentation approach within the family of unsupervised clustering approaches. Future scope includes 3-D modelling and volume analysis of the brain and tumour, and classification of the tumour based on this segmentation approach.

REFERENCES
[1] Chang PL, Teng WG (2007) "Exploiting the self-organizing map for medical image segmentation". In: Twentieth IEEE International Symposium on Computer-Based Medical Systems, pp 281-288.
[2] Zhang Y et al. (2007) "A novel medical image segmentation method using dynamic programming". In: International Conference on Medical Information Visualisation - BioMedical Visualisation, pp 69-74.
[3] Hall LO, Bensaid AM, Clarke LP, Velthuizen RP, Silbiger MS, Bezdek JC (1992) "A comparison of neural network and fuzzy clustering techniques in segmenting magnetic resonance images of the brain".
IEEE Transactions on Neural Networks 3:672-682.
[4] Tian D, Fan L (2007) "A brain MR images segmentation method based on SOM neural network". In: The 1st International Conference on Bioinformatics and Biomedical Engineering, pp 686-689.
[5] Han X, Fischl B (2007) "Atlas renormalization for improved brain MR image segmentation across scanner platforms". IEEE Transactions on Medical Imaging 26(4):479-486.
[6] Dunn JC (1973) "A fuzzy relative of the ISODATA process and its use in detecting compact, well separated clusters". Journal of Cybernetics 3:32-57.
[7] Tsai C, Manjunath BS, Jagadeesan R (1995) "Automated segmentation of brain MR images". Pattern Recognition 28(12).
[8] Bezdek JC (1974) "Cluster validity with fuzzy sets". Journal of Cybernetics 3:58-73.
[9] Ruspini E (1970) "Numerical methods for fuzzy clustering". Information Sciences 2:319-350.
[10] Kannan SR (2005) "Segmentation of MRI using new unsupervised fuzzy C mean algorithm". ICGST.
[11] Udupa JK, Saha PK (2003) "Fuzzy connectedness and image segmentation". Proceedings of the IEEE 91(10).
[12] Vaidyanathan M, Clarke LP, Velthuizen RP, Phuphanich S, Bensaid AM, Hall LO, Bezdek JC, Greenberg H, Trotti A, Silbiger M (1995) "Comparison of supervised MRI segmentation methods for tumor volume determination during therapy". Magnetic Resonance Imaging 13(5):719-728.
[13] Murugavalli S, Rajamani V (2006) "A high speed parallel fuzzy c-mean algorithm for brain tumor segmentation". BIME Journal 6(1), Dec. 2006.
[14] Murugavalli S, Rajamani V (2007) "An improved implementation of brain tumor detection using segmentation based on neuro fuzzy technique". Journal of Computer Science 3(11):841-846.
[15] Parra CA, Iftekharuddin K, Kozma R (2003) "Automated brain tumor segmentation and pattern recognition using ANN". Computational Intelligence, Robotics and Autonomous Systems.
[16] Vaidyanathan M, Clarke LP, Velthuizen RP, Phuphanich S, Bensaid AM, Hall LO, Bezdek JC, Greenberg H, Trotti A, Silbiger M (1995) "Comparison of supervised MRI segmentation methods for tumor volume determination during therapy". Magnetic Resonance Imaging 13(5):719-728.
[17] Jiang C, Zhang X, Huang W, Meinel C (2008) "Segmentation and quantification of brain tumor". In: IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems, USA, 12-14 July 2008.
[18] Shen H, Sandham W, Granat M, Sterr A (2005) "MRI fuzzy segmentation of brain tissue using neighbourhood attraction with neural-network optimization". IEEE Transactions on Information Technology in Biomedicine 9(3).
[19] Phillips WE, Velthuizen RP, Phuphanich S, Hall LO, Clarke LP, Silbiger M (1995) "Application of fuzzy C-means segmentation technique for tissue differentiation in MR images of a hemorrhagic glioblastoma multiforme". Magnetic Resonance Imaging 13.
[20] Kohonen T (1990) "The self-organizing map". Proceedings of the IEEE 78(9):1464-1480.
[21] Pal NR, Pal SK (1993) "A review on image segmentation techniques". Pattern Recognition 26(9):1277-1294.
[22] Phillips WE, Velthuizen RP, Phuphanich S, Hall LO, Clarke LP, Silbiger M (1995) "Application of fuzzy C-means segmentation technique for tissue differentiation in MR images of a hemorrhagic glioblastoma multiforme". Magnetic Resonance Imaging 13.
[23] Squire LF, Novelline RA (1997) Squire's Fundamentals of Radiology (5th ed.). Harvard University Press.
[24] Shapiro LG, Stockman GC (2001) Computer Vision, pp 279-325. Prentice-Hall, New Jersey.
[25] Warfield S, Dengler J, Zaers J, Guttmann C, Gil W, Ettinger J, Hiller J, Kikinis R (1995) "Automatic identification of grey matter structures from MRI to improve the segmentation of white matter lesions". Journal of Image Guided Surgery 1(6):326-338.
[26] Sezgin M, Sankur B (2004) "Survey over image thresholding techniques and quantitative performance evaluation". Journal of Electronic Imaging 13(1):146-165.
[27] Otsu N (1979) "A threshold selection method from gray-level histograms". IEEE Transactions on Systems, Man, and Cybernetics 9(1):62-66.
[28] MacQueen JB (1967) "Some methods for classification and analysis of multivariate observations". In: Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, University of California Press, Berkeley, 1:281-297.
[29] Test case images from http://radiopaedia.org/.
[30] Lin W, Tsao E, Chen C (1991) "Constraint satisfaction neural networks for image segmentation". In: Kohonen T, Mkisara K, Simula O, Kangas J (eds), Artificial Neural Networks, Elsevier Science Publishers, pp 1087-1090.
[31] Sonka M, Tadikonda SK, Collins SM (1996) "Knowledge-based interpretation of MR brain images". IEEE Transactions on Medical Imaging 15(4):443-452.
[32] Bandhyopadhyay SK, Paul TU (2012) "Segmentation of brain MRI image - a review". IJARCSSE 2(3):409-413.
[33] Paul TU, Bandhyopadhyay SK "Segmentation of brain tumor from brain MRI images reintroducing K-means with advanced dual localization method". IJERA 2(3):226-231.

AUTHOR
Dr. Samir K. Bandyopadhyay, B.E., M.Tech., Ph.D. (Computer Science & Engineering), C.Engg., D.Engg., FIE, FIETE, is a Professor of Computer Science & Engineering at the University of Calcutta and a visiting faculty member at the Dept. of Computer Science, Southern Illinois University, USA, MIT, the California Institute of Technology, etc. His research interests include biomedical engineering, image processing, pattern recognition, graph theory and software engineering. He has 25 years of postgraduate and undergraduate teaching and research experience at the University of Calcutta. He has received several academic distinctions, recognitions and awards from various prestigious institutes and organizations. He has published more than 300 research papers in international and Indian journals and 5 leading textbooks for computer science and engineering, and has travelled around the globe for academic and research purposes. He is currently serving as Vice-Chancellor of the West Bengal University of Technology, India.

Tuhin Utsab Paul received his B.Sc. (Hons) in Computer Science in 2008, his M.Sc. in Computer and Information Science in 2010, and his M.Tech. in Computer Science and Engineering from the University of Calcutta, India.
He is currently pursuing his Ph.D. in Computer Science and Engineering, specializing in biomedical imaging, at the University of Calcutta. His research interests include image processing, pattern recognition, image steganography and image cryptography. He has published a number of research papers in international and Indian journals and conferences. He has been an IEEE member since 2009.