International Journal of Engineering Trends and Technology (IJETT) – Volume 9 Number 6 - Mar 2014 (ISSN: 2231-5381)

Directive Contrast Based Multimodal Medical Image Fusion in NSCT with DWT Domain

Senthil V, PG Scholar, RVS College of Engineering and Technology, Coimbatore, Tamil Nadu
Prof. B. Rajesh Kumar, M.E., (Ph.D.), RVS College of Engineering and Technology, Coimbatore, Tamil Nadu

Abstract—Multimodal medical image fusion, a powerful tool for clinical applications, has developed with the advent of various imaging modalities in medical imaging. Its main motivation is to capture the most relevant information from the source images in a single output, which plays an important role in medical diagnosis. In this paper, a novel fusion framework is proposed for multimodal medical images based on the non-subsampled contourlet transform (NSCT). The source medical images are first transformed by the NSCT, after which the low- and high-frequency components are combined. Two fusion rules, based on phase congruency and directive contrast respectively, are proposed and used to fuse the low- and high-frequency coefficients. Finally, the fused image is constructed by applying the inverse NSCT to all composite coefficients. Experimental results and a comparative study show that the proposed fusion framework provides an effective way to enable more accurate analysis of multimodality images. Further, the applicability of the proposed framework is demonstrated on three clinical examples of patients affected by Alzheimer's disease, subacute stroke and recurrent tumor.

Keywords—Low-frequency fusion, high-frequency fusion, inverse NSCT implementation.

INTRODUCTION

Medical imaging has attracted increasing attention due to its critical role in health care. However, different imaging techniques such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI) and magnetic resonance angiography (MRA) provide information of which some is common and some is unique.
For example, X-ray and computed tomography (CT) can image dense structures like bones and implants with little distortion, but they cannot detect physiological changes. Similarly, normal and pathological soft tissue is better visualized by MRI, whereas PET provides better information on blood flow and metabolic activity, albeit with low spatial resolution. As a result, anatomical and functional medical images need to be combined for a comprehensive view. For this purpose, multimodal medical image fusion has been identified as a promising solution: it aims to integrate information from multiple modality images to obtain a more complete and accurate description of the same object. Multimodal medical image fusion not only helps in diagnosing diseases, but also reduces storage cost by storing a single fused image instead of multiple source images. So far, extensive work has been done on image fusion, with various techniques dedicated to multimodal medical image fusion. These techniques are categorized into three levels according to the merging stage: pixel-level, feature-level and decision-level fusion. Medical image fusion usually employs pixel-level fusion because it retains the original measured quantities, is easy to implement and is computationally efficient. Hence, in this paper we concentrate our efforts on pixel-level fusion, and the terms image fusion or fusion are used for pixel-level fusion. The well-known pixel-level fusion methods are based on principal component analysis (PCA), independent component analysis (ICA), contrast pyramid (CP), gradient pyramid (GP) filtering, etc. However, the image features to which the human visual system is sensitive exist at different scales.
Therefore, these methods are not highly suitable for medical image fusion. Recently, with the development of multiscale decomposition, the wavelet transform has been identified as an ideal method for image fusion. However, it is argued that wavelet decomposition is good at representing isolated discontinuities, but not edges and textured regions. Image registration, also referred to as image superimposition, matching or merging, is the process of aligning two images so that corresponding coordinate points in the two images correspond to the same physical region of the scene being imaged.
It is a classical problem in many applications where it is necessary to match two or more images of the same scene. The images to be registered might be acquired with different sensors (e.g., sensitive to different parts or different tissues of the human body) or with the same sensor at different times. Image registration has applications in many fields; medical image registration is one of the most successful of them. Medical images are widely used in healthcare and biomedical research. Their applications occur not only within clinical diagnostic settings, but also prominently in the planning, execution and evaluation of surgical and radiotherapeutic procedures. A wide range of imaging modalities is available, including X-ray Computed Tomography (CT), Single Photon Emission Computed Tomography (SPECT), Positron Emission Tomography (PET), Magnetic Resonance Imaging (MRI), Nuclear Medicine, Ultrasonic Imaging, Endoscopy and surgical microscopy. These and other imaging technologies provide rich information on the physical properties and biological function of the tissues.

2. METHODOLOGY

A new image fusion framework for multimodal medical images is proposed which relies on the NSCT domain. Two different fusion rules are proposed for combining the low- and high-frequency coefficients. For fusing the low-frequency coefficients, a phase congruency based model is used. The main benefit of phase congruency is that it selects and combines the contrast- and brightness-invariant representation contained in the low-frequency coefficients. In contrast, a new definition of directive contrast in the NSCT domain is proposed and used to combine the high-frequency coefficients. Using directive contrast, the most prominent texture and edge information is selected from the high-frequency coefficients and combined into the fused ones.
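Structurally, the framework above is a multiscale split, a feature-guided low-frequency rule, a contrast-guided high-frequency rule, and an inverse transform. The sketch below illustrates only that structure under stated stand-ins: a single-level box-blur low/high split replaces the NSCT, local variance replaces phase congruency, and detail magnitude replaces directive contrast. It is not the paper's actual transform.

```python
import numpy as np

def box_blur(img, r=2):
    """Simple box blur used as a stand-in low-pass filter."""
    p = np.pad(img, r, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += p[r + dy:p.shape[0] - r + dy, r + dx:p.shape[1] - r + dx]
    return out / (2 * r + 1) ** 2

def decompose(img):
    """One-level low/high split (stand-in for the NSCT decomposition).
    Like the NSP, both bands keep the size of the source image."""
    low = box_blur(img)
    return low, img - low

def fuse(img_a, img_b):
    """Two-rule fusion: feature-guided selection for the low-frequency band,
    maximum absolute coefficient for the high-frequency band."""
    low_a, high_a = decompose(img_a)
    low_b, high_b = decompose(img_b)
    # Low-frequency rule -- local variance stands in for phase congruency.
    feat_a = box_blur(high_a ** 2)
    feat_b = box_blur(high_b ** 2)
    fused_low = np.where(feat_a >= feat_b, low_a, low_b)
    # High-frequency rule -- detail magnitude stands in for directive contrast.
    fused_high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    # The "inverse transform" of a one-level split is just the sum of the bands.
    return fused_low + fused_high
```

Because the split is additive and shift-invariant, the two bands reconstruct the source exactly, mirroring the perfect-reconstruction property the NSCT provides.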
The definition of directive contrast is consolidated by incorporating a visual constant into the SML-based (sum-modified-Laplacian) definition of directive contrast, which provides a richer representation of the contrast.

LOW FREQUENCY ALGORITHM

The coefficients in the low-frequency sub-images represent the approximation component of the source images. The simplest approach is to use conventional averaging to produce the composite bands. However, averaging cannot give a fused low-frequency component of high quality for medical images because it reduces the contrast in the fused image. Therefore, a new criterion is proposed here based on phase congruency. The complete process is described as follows. Phase congruency is a measure of feature perception in images which provides an illumination- and contrast-invariant feature extraction method. This approach is based on the Local Energy Model, which postulates that significant features can be found at points in an image where the Fourier components are maximally in phase. Furthermore, the angle at which phase congruency occurs signifies the feature type. The phase congruency approach to feature perception has been used for feature detection. First, logarithmic Gabor filter banks at different discrete orientations are applied to the image, and the local amplitude and phase at each point are obtained. The main properties that motivate the use of phase congruency for multimodal fusion are as follows.

• Phase congruency is invariant to different pixel intensity mappings. Images captured with different modalities have significantly different pixel mappings even if the object is the same. Therefore, a feature that is free of the pixel mapping is to be preferred.
• The phase congruency feature is invariant to illumination and contrast changes. The capturing environments of different modalities vary, resulting in changes of illumination and contrast. Therefore, multimodal fusion benefits from an illumination- and contrast-invariant feature.

• The edges and corners in the images are identified by collecting the frequency components of the image that are in phase. Phase congruency gives the Fourier components that are maximally in phase, so it provides improved localization of image features, which leads to efficient fusion.

First, the features are extracted from the low-frequency sub-images using the phase congruency extractor, denoted by PC_A and PC_B respectively. The low-frequency sub-images are then fused using these feature maps.

HIGH FREQUENCY ALGORITHM

The coefficients in the high-frequency sub-images usually contain the detail component of the source image. It is noteworthy that noise also appears in the high frequencies and may cause a miscalculation of the sharpness value, thereby affecting the fusion performance. Therefore, a new criterion is proposed here based on directive contrast.
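The phase-congruency-guided selection of low-frequency coefficients described above can be sketched as follows. The combination step is an assumption (the paper does not spell it out): the coefficient with the clearly stronger feature response is selected, and near-ties are averaged. Here pc_a and pc_b stand for the phase-congruency maps PC_A and PC_B, though any salience map would work.

```python
import numpy as np

def fuse_low(low_a, low_b, pc_a, pc_b, tol=1e-3):
    """Fuse two low-frequency sub-images guided by feature maps pc_a / pc_b.
    Where one feature map is clearly stronger, its coefficient is selected;
    near-ties are averaged.  The tie handling is an assumption, not taken
    from the paper."""
    fused = 0.5 * (low_a + low_b)            # default: average on near-ties
    pick_a = pc_a > pc_b + tol
    pick_b = pc_b > pc_a + tol
    fused[pick_a] = low_a[pick_a]            # clearly stronger feature in A
    fused[pick_b] = low_b[pick_b]            # clearly stronger feature in B
    return fused
```

Averaging only near-ties avoids the overall contrast loss of plain averaging while preventing hard seams where the two feature maps are nearly equal.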
The whole process is described as follows. First, the directive contrast for the NSCT high-frequency sub-images is computed at each scale and orientation, denoted by DC_A and DC_B; the high-frequency sub-images are then fused accordingly.

Fig 1.0. Multimodal medical image fusion framework.

2a. EXISTING SYSTEM

Multiresolution (MR) transforms are a widespread tool for image fusion, and MR image fusion has been studied with different wavelet families. Additionally, the existing work revises the Multi-size Windows (MW) technique as a general approach for MR frameworks that exploits the advantages of different window sizes.

2b. PROPOSED SYSTEM

Decomposition using NSCT: The non-subsampled contourlet transform (NSCT), based on the theory of the contourlet transform, is a multi-scale and multi-direction computational framework for discrete images. It can be divided into two stages: the non-subsampled pyramid (NSP) and the non-subsampled directional filter bank (NSDFB). The former stage ensures the multiscale property by using a two-channel non-subsampled filter bank; one low-frequency image and one high-frequency image are produced at each NSP decomposition level. Subsequent NSP decomposition stages iteratively decompose the low-frequency component to capture the singularities in the image. As a result, an NSP with k decomposition levels yields k+1 sub-images, consisting of one low-frequency image and k high-frequency images, all of the same size as the source image. The NSDFB is a two-channel non-subsampled filter bank constructed by combining directional fan filter banks.
The NSDFB performs directional decomposition in several stages on the high-frequency images obtained from the NSP at each scale, producing directional sub-images with the same size as the source image. The NSDFB therefore endows the NSCT with the multi-direction property and provides more precise directional detail information.

Phase Congruency: Phase congruency, described above for the low-frequency fusion rule, provides an illumination- and contrast-invariant measure of feature perception based on the Local Energy Model, and is computed by applying logarithmic Gabor filter banks at different discrete orientations and collecting the frequency components that are maximally in phase.

Directive Contrast: The contrast feature measures the difference between the intensity value at a pixel and that of its neighboring pixels. The human visual system is highly sensitive to intensity contrast rather than to the intensity value itself; the same intensity value can look different depending on the intensity values of the neighboring pixels. Local contrast is defined as

C = (L − L_B) / L_B,

where L is the local luminance and L_B is the luminance of the local background. Generally, L_B is regarded as the local low frequency, and hence L − L_B is treated as the local high frequency. This definition is further extended to directive contrast for multimodal image fusion. These contrast extensions take the high frequency as the pixel value in the multiresolution domain. However, a single pixel is insufficient to determine whether pixels come from clear parts or not. Therefore, the directive contrast is integrated with the sum-modified-Laplacian to obtain more accurate salient features.

Inverse NSCT Implementation: Perform the n-level inverse NSCT on the fused low- and high-frequency sub-images to obtain the fused image.

3. RESULT AND DISCUSSION

In this method, an image fusion framework is proposed for multimodal medical images based on the non-subsampled contourlet transform and directive contrast. For fusion, two different rules are used, by which more information can be preserved in the fused image with improved quality. The low-frequency bands are fused by considering phase congruency, whereas directive contrast is adopted as the fusion measurement for the high-frequency bands.
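The sum-modified-Laplacian underlying the high-frequency rule can be sketched as follows. The SML itself is standard; the directive-contrast formula shown (SML of the high band normalized by the magnitude of the low band) is one plausible form, and the paper's visual-constant variant is not reproduced here.

```python
import numpy as np

def sum_modified_laplacian(band, win=1):
    """Sum-modified-Laplacian (SML): the modified Laplacian
    |2*I - I_left - I_right| + |2*I - I_up - I_down|
    summed over a (2*win+1) x (2*win+1) window around each pixel."""
    p = np.pad(band, 1, mode="edge")
    ml = (np.abs(2 * p[1:-1, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:]) +
          np.abs(2 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]))
    q = np.pad(ml, win, mode="edge")
    out = np.zeros_like(ml)
    for dy in range(-win, win + 1):           # box sum over the window
        for dx in range(-win, win + 1):
            out += q[win + dy:q.shape[0] - win + dy,
                     win + dx:q.shape[1] - win + dx]
    return out

def directive_contrast(high, low, eps=1e-6):
    """One plausible form of directive contrast: SML of the high-frequency
    band normalized by the magnitude of the corresponding low-frequency band."""
    return sum_modified_laplacian(high) / (np.abs(low) + eps)

def fuse_high(high_a, high_b, low_a, low_b):
    """Per-pixel selection of the high-frequency coefficient whose
    directive contrast is larger."""
    dc_a = directive_contrast(high_a, low_a)
    dc_b = directive_contrast(high_b, low_b)
    return np.where(dc_a >= dc_b, high_a, high_b)
```

Summing the modified Laplacian over a window is what makes the measure robust: a single noisy pixel contributes little, while genuine texture raises the SML over its whole neighborhood.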
In our experiment, two groups of CT/MRI and two groups of MR-T1/MR-T2 images are fused using conventional fusion algorithms and the proposed framework.

3a. SCREENSHOT

Figure 3a.1. Input image for the NSCT (human brain), which is divided into two segments, i.e., a high-frequency and a low-frequency segment.

Figure 3a.2. Input MRI image (human brain), likewise divided into high-frequency and low-frequency segments.

Figure 3a.3. CT low-frequency image: the low-frequency segment of the input CT image.

Figure 3a.4. CT high-frequency image: the high-frequency segment of the input CT image.

Figure 3a.5. MRI low-frequency image: the low-frequency segment of the input MRI image.

Figure 3a.6. MRI high-frequency image: the high-frequency segment of the input MRI image.

Figure 3a.7. Fused images: the low-frequency segments of both input images are fused together, the same is done with the high-frequency segments, and finally the inverse transform is applied to the fused sub-images to obtain the fused image.

CONCLUSION

A novel image fusion framework is proposed for multimodal medical images based on the non-subsampled contourlet transform and directive contrast. For fusion, two different rules are used, by which more information can be preserved in the fused image with improved quality. The low-frequency bands are fused by considering phase congruency, whereas directive contrast is adopted as the fusion measurement for the high-frequency bands. In our experiment, two groups of CT/MRI and two groups of MR-T1/MR-T2 images are fused using conventional fusion algorithms and the proposed framework. The visual and statistical comparisons demonstrate that the proposed algorithm can enhance the details of the fused image and can improve the visual effect with much less information distortion than its competitors; the statistical assessment findings agree with the visual assessment. Further, to show the practical applicability of the proposed method, three clinical examples are also considered, involving analysis of the brains of patients with Alzheimer's disease, subacute stroke and recurrent tumor.