

Page 1: A Novel Hybrid Multi Spectral Image Fusion Method Using Contourlet Transform -- TENCON 2009, IEEE

A Novel Hybrid Multispectral Image Fusion Method using Contourlet Transform

Tanish Zaveri, Ishit Makwana
Electronics and Communication Engineering Department

Institute of Technology, Nirma University
Ahmedabad, Gujarat, India

[email protected]

Mukesh Zaveri
Computer Engineering Department

Sardar Vallabhbhai National Institute of Technology
Surat, Gujarat, India

[email protected]

Abstract—Standard Pan-sharpening methods do not allow control of the spatial and spectral quality of the fused image, and color distortion is the most significant problem of standard Pan-sharpening methods. In this paper a novel hybrid Pan-sharpening method using the contourlet transform is proposed, which provides a novel tradeoff between spectral and spatial fidelity and preserves more detailed spectral and spatial information. New hybrid image fusion rules are also proposed. The proposed method is applied to a number of registered Panchromatic and Multispectral images, and the simulation results are compared using standard image fusion parameters. The simulation results of the proposed method are also compared with six different standard and recently proposed Pan-sharpening methods. It has been observed that the proposed algorithm preserves more detailed spatial and spectral information and gives better visual quality than earlier reported methods.

I. INTRODUCTION

In recent years image fusion has become an important research area because of its wide application in many image analysis tasks such as target recognition, remote sensing, wireless sensor networks and medical image processing. Image fusion is a procedure that aims at the integration of disparate and complementary data to enhance the information present in the source images as well as to increase the reliability of the interpretation; this process leads to more accurate data interpretation and utility. According to Piella [1], the fusion process is a combination of salient information that synthesizes an image with more information than any individual source image, one that is more suitable for visual perception. In this paper, we focus on multispectral image fusion, an important research area in the field of remote sensing. The synthesis of multispectral (MS) images to the higher spatial resolution of the Panchromatic (Pan) image is called Pan sharpening. A Pan-sharpened multispectral image is a fusion product in which the MS bands are sharpened by the higher-resolution Pan image.

Most earth resource satellites, such as SPOT, IRS, Landsat 7, IKONOS, QuickBird and OrbView, plus some modern airborne sensors, such as the Leica ADS40, provide both Pan images at a higher spatial resolution and MS images at a lower spatial resolution. We assume that the Pan and MS input data sets are a priori geometrically registered. Many research papers have reported the limitations of existing fusion techniques. Standard image fusion methods do not allow control of the spatial and spectral quality of the fused image, and color distortion is the most significant problem of most standard Pan-sharpening methods. To reduce the color distortion and improve the fusion quality, a wide variety of strategies have been developed, each specific to a particular fusion technique or image dataset. No satisfactory solution has been achieved that can consistently produce high-quality fusion for different data sets while also reducing color distortion.

Various Pan-sharpening methods have been developed earlier; a comprehensive review of most published image fusion techniques is given by Pohl and Van Genderen [2]. The most successful Pan-sharpening methods generally fall into three categories: (1) projection and substitution methods, such as IHS (Intensity Hue Saturation) fusion and PCA (Principal Component Analysis) fusion [2][3]; (2) band ratio and arithmetic combination, such as the Brovey transform (BT) and SVR (Synthetic Variable Ratio); and (3) the recently popular wavelet transform and contourlet transform based fusion, which injects spatial features from panchromatic images into multispectral images [4][6]. The IHS, PCA and BT based methods are the most popular and standard algorithms in the remote sensing community due to their advantages and practical applications. However, the color distortion problem appears significantly in these techniques, which leads to poorer spectral fidelity in all three methods compared to the recently proposed multiscale transform approaches with multiresolution decomposition. The wavelet transform is good at isolating discontinuities at object edges but cannot capture the geometry of image edges, which are typically located along smooth curves [3]. Natural images contain many intrinsic geometrical structures; if we apply the 2D wavelet transform to such images, it isolates edges as discontinuities at edge points, and the smoothness of object boundaries is lost.

To overcome these limitations, this paper proposes a novel hybrid multispectral image fusion algorithm based on the contourlet transform, which combines a new IHS method with the contourlet transform. The paper is organized as follows: the proposed method is described in Section 2. The evaluation parameters of Pan-sharpening methods are described in Section 3. The simulation results of the proposed algorithm and the comparison with other recently proposed methods are also

978-1-4244-4547-9/09/$26.00 © 2009 IEEE, TENCON 2009

Authorized licensed use limited to: NIRMA INSTITUTE OF TECHNOLOGY. Downloaded on April 15,2010 at 11:11:06 UTC from IEEE Xplore. Restrictions apply.


Fig. 1. Block Diagram of Proposed Modified IHS method

described in Section 4, followed by the conclusion.

II. PROPOSED METHOD

The proposed method is a novel framework which provides a tradeoff solution for obtaining a Pan-sharpened image of better spectral and spatial quality. The block diagram of the proposed method is divided into two parts, as shown in Fig. 1 and Fig. 2. Both the MS and Pan images are considered as input source images. The IHS color space is used in the proposed algorithm because it is easy to implement and has low computational complexity. The simple IHS method described in [2] is a standard Pan-sharpening method that produces color distortion: only the Pan image is used as the modified intensity image, which may produce a Pan-sharpened image of good spatial quality but preserves little spectral information. In the proposed method, to increase the spectral component while preserving spatial details, the intensity images of both the Pan and MS images are used to produce the modified Pan intensity image. The steps to generate the modified intensity image Inew are described below, and the block diagram is shown in Fig. 1.

1) Consider the Pan and MS images as source images and perform the RGB to HSI conversion as described in [7] to extract the intensity components of both images, Ip and Im.

2) Perform the Match Measure operation [12] between Ip and Im to obtain the Imatch image. A match measure value below a certain threshold is used to retain high contrast in the fused image; above the threshold, equal weights are applied to add information from both images.

3) Apply the Histogram Equalization (HE) method [13] to the Ip image, which results in the modified image Ip'. HE is used to improve the features of the intensity image.

4) Take the average (Avg) of Ip' and Im to obtain Iave.

5) Take the average of Iave and Imatch, resulting in Iave2.

6) Perform Histogram Matching (HM) [13] between Iave2 and Ip' to obtain the output modified intensity component Inew. HM is used so that the features have a smoother transition of levels, as specified by Iave2.
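The six steps above can be sketched in code. The sketch below is a minimal NumPy version and makes two explicit assumptions: the exact Match Measure of [12] is not reproduced, so `match_measure` uses a hypothetical normalized-similarity weighting, and Iave2 is taken as the target distribution for the histogram matching in step 6.

```python
import numpy as np

def hist_equalize(img, bins=256):
    """Histogram equalization via the empirical CDF (step 3)."""
    flat = img.ravel()
    hist, edges = np.histogram(flat, bins=bins)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]
    return np.interp(flat, edges[:-1], cdf).reshape(img.shape)

def hist_match(src, ref):
    """Histogram matching: reshape src's value distribution to ref's."""
    order = np.argsort(src.ravel())
    matched = np.empty_like(src.ravel())
    matched[order] = np.sort(ref.ravel())
    return matched.reshape(src.shape)

def match_measure(a, b, thresh=0.75):
    """Hypothetical per-pixel match measure 2ab/(a^2+b^2): below `thresh`
    keep the higher-contrast pixel, above it weight both equally (step 2)."""
    sim = 2 * a * b / (a * a + b * b + 1e-12)
    return np.where(sim < thresh, np.maximum(a, b), 0.5 * (a + b))

def modified_intensity(I_p, I_m):
    """Steps 1-6: build the modified intensity image Inew from the Pan and
    MS intensity components I_p and I_m (both assumed scaled to [0, 1])."""
    I_match = match_measure(I_p, I_m)
    I_p_eq = hist_equalize(I_p)            # Ip'
    I_ave = 0.5 * (I_p_eq + I_m)           # step 4
    I_ave2 = 0.5 * (I_ave + I_match)       # step 5
    # Step 6 (assumption): match Ip' toward the level distribution of Iave2.
    return hist_match(I_p_eq, I_ave2)
```

The sort-based histogram matching used here is a standard approximation for continuous-valued images; a rank/CDF lookup would behave equivalently.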

The second part of the block diagram of the proposed method is shown in Fig. 2. Any image fusion method is broadly divided into two categories: pixel based and region based. A pixel based method is computationally simple and deals with the original information directly, but it produces many undesirable side effects in the resultant image; a region based method is complex but is less affected by misregistration, less sensitive to noise and gives better contrast. In the proposed method, both pixel based and region based approaches are used in one algorithm. This novel framework allows us to design hybrid fusion rules, so the method is called a hybrid image fusion method: both pixel based and region based fusion rules are used to produce the final Pan-sharpened image. It preserves more details in the final Pan-sharpened image and carries the advantages of both types of fusion methods. The Pan-sharpened image is generated by the following steps.

Step 1: I_new is the same as Inew shown in Fig. 1. I_new and Im are considered as input source images. The contourlet transform (CT) [11] is applied to the I_new and Im images, which separates the approximation and detail components of each image, represented as I_ACT,j and I_DCT,j respectively, where j is the decomposition level of the CT.

Step 2: The hybrid fusion rule is divided into three categories: pixel based, block processing based and region based. The hybrid fusion rule is applied to the CT-decomposed detail images I^d_nDCT,j and I^d_mDCT,j of the source images I_new and Im respectively. Four fusion rules are designed, based on four important feature parameters: contrast, energy, standard deviation and average gradient. Among these, one fusion rule is pixel based, one is region based, and the other two are block processing based. All four fusion rules are applied to I^d_nDCT,j and I^d_mDCT,j. First, a contrast-based pixel level fusion rule is proposed. The energy and standard deviation are standard block processing based feature extraction parameters, proposed as fusion rules 2 and 3 respectively. One region based image fusion rule (rule 4) is designed with the average gradient parameter. The four fusion rules are described in the following equations. Fusion rule 1 is a pixel based fusion rule using contrast. Contrast is a very important spatial feature which gives information about the differences of objects in an image. In the proposed method a contrast-max fusion rule is applied, so high directive contrast coefficients are preserved in the final fused image. Contrast is defined as

C^d_{n,j} = I^d_{nDCT,j} / I^d_{nACT,j}    (1)

where d represents the direction of the CT-decomposed detail image, j represents the decomposition level, and the subscript n indicates a coefficient computed for image I_new (m, likewise, for image Im). These symbols and their meanings are uniform for all the equations. Using this contrast parameter, the first fusion rule is defined as

I^d_{FC,j} = { I^d_{nDCT,j}  if C^d_{n,j} ≥ C^d_{m,j}
             { I^d_{mDCT,j}  if C^d_{n,j} < C^d_{m,j}    (2)
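As a concrete illustration, fusion rule 1 can be sketched as follows. This is a minimal NumPy sketch; taking absolute values before forming the contrast ratio is an assumption, since detail coefficients are signed.

```python
import numpy as np

def contrast_fuse(detail_n, approx_n, detail_m, approx_m, eps=1e-12):
    """Fusion rule 1 (pixel based): keep, per pixel, the detail coefficient
    whose directive contrast C = detail/approx is larger, per Eqs. (1)-(2)."""
    C_n = np.abs(detail_n) / (np.abs(approx_n) + eps)
    C_m = np.abs(detail_m) / (np.abs(approx_m) + eps)
    return np.where(C_n >= C_m, detail_n, detail_m)
```

The selection of Eq. (2) maps directly onto a single `np.where` over the two contrast maps.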

Energy is a parameter used to measure the activity level and texture uniformity in an image; it is defined in equation (3) and is computed over a window of size M x N.

E^d_{n,j} = sum_{x=1}^{M} sum_{y=1}^{N} [I^d_{nDCT,j}(x, y)]^2    (3)


Fig. 2. Block Diagram of Proposed method

Fusion rule 2 is a block processing based energy-max rule for a window of size M x N; it can also be called the directive energy fusion rule. The block of pixels whose directive energy is maximum is taken as the block of pixels of the fused image.

I^d_{FE,j} = { I^d_{nDCT,j}  if E^d_{n,j} ≥ E^d_{m,j}
             { I^d_{mDCT,j}  if E^d_{n,j} < E^d_{m,j}    (4)
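The block based energy-max rule of Eqs. (3)-(4) can be sketched as below, assuming non-overlapping 3 x 3 blocks as used later in the experiments. Substituting the block standard deviation of Eq. (5) for the energy gives fusion rule 3.

```python
import numpy as np

def block_energy_fuse(detail_n, detail_m, block=3):
    """Fusion rule 2 (block based): for each M x N block, keep the block
    whose directive energy (Eq. 3, sum of squared coefficients) is larger."""
    fused = detail_m.copy()
    H, W = detail_n.shape
    for x in range(0, H, block):
        for y in range(0, W, block):
            sl = (slice(x, min(x + block, H)), slice(y, min(y + block, W)))
            if np.sum(detail_n[sl] ** 2) >= np.sum(detail_m[sl] ** 2):
                fused[sl] = detail_n[sl]
    return fused
```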

The standard deviation (SD) indicates the distribution of information in a block, so it measures the activity level while considering the effect of neighborhood pixels. The directive standard deviation is defined as

S^d_{n,j} = (1/9) sum_{x=1}^{M} sum_{y=1}^{N} [I^d_{nDCT,j}(x, y) - IM^d_{nDCT,j}]^2    (5)

Here IM^d_{nDCT,j} is the mean value of the M x N block.

Fusion rule 3 is a block processing based SD-max rule for a window of size M x N. The block of pixels whose directive SD is maximum is taken as the block of pixels of the fused image. The directive SD based image fusion rule 3 is defined as

I^d_{FS,j} = { I^d_{nDCT,j}  if S^d_{n,j} ≥ S^d_{m,j}
             { I^d_{mDCT,j}  if S^d_{n,j} < S^d_{m,j}    (6)

Fusion rule 4 is a region based image fusion rule based on the average gradient, which is used to compare the spatial resolution, or clarity of information, of images or regions. The average gradient is computed as described in equation (7).

G^d_{nc,j} = (1/((M-1)(N-1))) sum_{x=1}^{M-1} sum_{y=1}^{N-1} sqrt( [ (∂I_ncut(x, y)/∂x)^2 + (∂I_ncut(x, y)/∂y)^2 ] / 2 )    (7)

I_ncut is generated by taking the average of the I_new and Im images. The normalized cut (Ncut) segmentation algorithm [14] is applied to I_ncut to extract n regions, as shown in Fig. 2(b). Fusion rule 4 is defined as

I^d_{g,i,j} = { R_ni  if G^d_{n,i,j} ≥ G^d_{m,i,j}
              { R_mi  if G^d_{n,i,j} < G^d_{m,i,j}    (8)
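Fusion rule 4 can be sketched as below. Here `labels` stands for the region map produced by an Ncut-style segmentation [14], and computing the average gradient of Eq. (7) on zero-padded masked regions is a simplifying assumption of this sketch.

```python
import numpy as np

def avg_gradient(img):
    """Average gradient of Eq. (7): mean magnitude of forward differences
    over the (M-1) x (N-1) interior grid."""
    gx = np.diff(img, axis=1)[:-1, :]   # delta I / delta x
    gy = np.diff(img, axis=0)[:, :-1]   # delta I / delta y
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def region_fuse(detail_n, detail_m, labels):
    """Fusion rule 4 (region based): for each segmented region, keep the
    source region with the higher average gradient, per Eq. (8)."""
    fused = np.empty_like(detail_n)
    for r in np.unique(labels):
        mask = labels == r
        # Assumption: per-region gradient approximated on masked coefficients.
        g_n = avg_gradient(np.where(mask, detail_n, 0.0))
        g_m = avg_gradient(np.where(mask, detail_m, 0.0))
        fused[mask] = detail_n[mask] if g_n >= g_m else detail_m[mask]
    return fused
```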

Here i is the region index, i = 1, 2, ..., n. The segmented regions extracted from I_new and Im, using the segmentation result of I_new, are represented as R_ni and R_mi respectively. After applying fusion rule 4 to all n regions, the regions are merged to generate the resultant fused image I^d_{g,i,j}.

Step 3: Repeat the first two steps for all the CT-decomposed detail images at level j.

Step 4: Four detail images are produced after applying the four fusion rules; for all four, I^d_{nACT,j} is taken as the common approximation image for the inverse CT, as shown in Fig. 2(a).

Step 5: After applying the inverse CT, the resulting four fused images are averaged to produce the final fused intensity image, called I^d_{F,j}.

Step 6: Finally, the H and S components of the MS image are combined with the I^d_{F,j} intensity image to obtain the final fused RGB image.
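Steps 5 and 6 can be sketched as follows. The contourlet transform itself is not in standard Python libraries, so this fragment starts from the four already-reconstructed fused intensity images; the standard-library HSV conversion stands in for the paper's IHS color space, which is an approximation of this sketch.

```python
import colorsys
import numpy as np

def recombine(fused_results, H, S):
    """Step 5: average the four fused intensity images (one per fusion rule).
    Step 6: recombine with the MS image's hue and saturation to obtain RGB.
    colorsys HSV (all channels in [0, 1]) stands in for the IHS space."""
    I_F = np.clip(np.mean(fused_results, axis=0), 0.0, 1.0)
    rgb = np.empty(H.shape + (3,))
    for idx in np.ndindex(H.shape):
        rgb[idx] = colorsys.hsv_to_rgb(H[idx], S[idx], I_F[idx])
    return rgb
```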

All four important activity-level feature parameters are used to compare the details from both source images. The hybrid fusion rule is proposed in this paper to take advantage of both pixel based and region based image fusion rules. The next section briefly describes the evaluation criteria used to compare the results of different Pan-sharpening methods.


TABLE I
IMAGE FUSION QUALITY ASSESSMENT PARAMETERS FOR UNB IMAGES

              |                  Spectral                  |    Spatial     |        Common
UNB           | SNR    CC    DE     ERGAS  RASE    SAM     | SNR    CC      | Avg.CC  AG       SD
I.H.S. [2]    | 73.447 0.504 47.247 8.607  85.978  0.509   | 64.455 0.946   | 0.725   12.7090  60.851
MI-I.H.S. [7] | 73.743 0.499 47.433 8.648  86.367  0.511   | 64.706 0.946   | 0.723   12.7970  60.876
PCA [12]      | 66.351 0.522 47.651 7.366  82.559  0.464   | 73.296 0.985   | 0.754   11.1810  55.744
Sub. WT [8]   | 76.578 0.884 21.722 4.284  42.724  0.249   | 64.435 0.694   | 0.789   10.7550  63.866
CT [9]        | 72.569 0.886 22.143 4.373  43.696  0.253   | 63.710 0.669   | 0.777   11.5670  66.029
Brovey [2]    | 72.546 0.627 39.598 7.089  70.621  0.422   | 64.942 0.949   | 0.788   10.3030  53.807
Proposed      | 85.678 0.909 19.443 3.629  36.107  0.213   | 65.980 0.748   | 0.829    6.8589  58.516

Fig. 3. Fusion Results of UNB image: (a) 1m Panchromatic image (b) 4m Multispectral image (c) IHS Method (d) Modified IHS Method (e) PCA based method (f) WT based method (g) CT based Method (h) Brovey Transform Method (i) Proposed Method

TABLE II
IMAGE FUSION PARAMETERS FOR REFERENCE BASED REPORT42 IMAGES

              |                  Spectral                   |    Spatial     |        Common
Report42      | SNR    CC    DE     ERGAS   RASE    SAM     | SNR    CC      | Avg.CC  AG      SD
I.H.S. [2]    | 93.006 0.807 10.956 2.717   27.368  0.176   | 48.594 0.954   | 0.881   22.795  26.536
MI-I.H.S. [7] | 52.182 0.784 24.812 7.322   51.858  0.211   | 57.362 0.981   | 0.883   18.903  22.205
PCA [12]      | 48.623 0.778 32.378 10.851  64.514  0.243   | 58.523 0.969   | 0.873   17.663  21.238
Sub. WT [8]   | 88.517 0.975  3.654 1.027   10.302  0.066   | 48.575 0.807   | 0.891   18.470  26.956
CT [9]        | 89.106 0.981  3.010 0.895    8.946  0.058   | 48.582 0.792   | 0.887   17.094  27.052
Brovey [2]    | 46.073 0.777 37.543 13.545  73.694  0.250   | 58.869 0.979   | 0.878   16.280  19.965
Proposed      | 90.480 0.934  6.667 1.606   16.195  0.104   | 48.538 0.904   | 0.919   15.067  27.147

III. EVALUATION CRITERIA

Many different performance evaluation indices are available in the literature [2][3][4][12] to analyze Pan-sharpened images. These indices fall into three main categories: spatial quality indices, spectral quality indices, and average indices that analyze both effects simultaneously. Many parameters are available to judge the spatial quality of a Pan-sharpened image, such as cross correlation (CC), distortion extent (DE), root mean square error (RMSE) and the universal image quality index (UQI), explained in [2][12].

By analyzing the visual quality of an image it is easy to judge the sharpness of edges, i.e. the spatial quality, but it is much more difficult to match the colors of the final result to the original multispectral images. Many indices [2][12] analyze the spectral quality of the final fused image, such as the relative global error in synthesis (ERGAS), spectral angle mapper (SAM) and relative average spectral error (RASE). Cross correlation (CC), root cross entropy (RCE) or mean cross entropy (MCE), and signal-to-noise ratio (SNR) [10] can be used to analyze both quality factors. The average gradient (AG) and standard deviation (SD) parameters do not use any reference image for evaluation. All these parameters are explained in detail in the literature [2][12].
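Two of the spectral indices can be sketched directly from their standard definitions. This is a sketch under the usual formulations in [2][12]; the parameter names are illustrative, and `ratio` is the resolution ratio h/l between the Pan and MS images (1m/4m = 0.25 for the data used here).

```python
import numpy as np

def ergas(fused, ref, ratio=0.25):
    """ERGAS = 100 * (h/l) * sqrt( mean over bands of (RMSE_k / mu_k)^2 ),
    where mu_k is the mean of reference band k. Lower is better."""
    terms = []
    for k in range(ref.shape[-1]):
        rmse = np.sqrt(np.mean((fused[..., k] - ref[..., k]) ** 2))
        terms.append((rmse / np.mean(ref[..., k])) ** 2)
    return 100.0 * ratio * np.sqrt(np.mean(terms))

def sam(fused, ref, eps=1e-12):
    """Spectral Angle Mapper: mean angle (radians) between the per-pixel
    spectral vectors of the fused and reference images. Lower is better."""
    dot = np.sum(fused * ref, axis=-1)
    denom = np.linalg.norm(fused, axis=-1) * np.linalg.norm(ref, axis=-1) + eps
    return float(np.mean(np.arccos(np.clip(dot / denom, -1.0, 1.0))))
```

Both indices are zero for a fused image identical to the reference and grow with spectral distortion.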

IV. SIMULATION RESULTS AND ASSESSMENT

The test dataset images were downloaded from [15]. The 1m Panchromatic image and the 4m multispectral image (UNB) of


Fig. 4. Fusion Results of Quickbird image: (a) 1m Panchromatic image (b) 4m Multispectral image (c) IHS Method (d) Modified IHS Method (e) PCA based method (f) WT based method (g) CT based Method (h) Brovey Transform Method (i) Proposed Method

the city of Fredericton, Canada, are shown in Fig. 3(a) and (b) respectively. These images were acquired by the commercial IKONOS satellite. The raw multispectral image taken from the site has been resampled to the same size as the panchromatic image in order to perform registration. A Quickbird image pair is also considered as input source images; the Pan and MS images are shown in Fig. 4(a) and (b) respectively. The proposed algorithm has been implemented in Matlab 7. Our experiments show that CT decomposition level 3 provides better visual quality. To apply the region based image fusion rule 4, the number of segmentation regions (n) used is nine; this value was chosen after analyzing the results of different segmentation levels. In our experiments the window size for the block processing methods is 3 x 3. The three most widely used standard Pan-sharpening methods in remote sensing, the IHS method [2], the Brovey method [2] and the PCA based method [12], and two recently proposed multiresolution based methods, the wavelet transform (WT) [8] and CT based [9] substitution methods, are used for comparison with the proposed method.

The average value of each quality assessment parameter over the three bands (R, G and B) of the source images is given in Tables I and II, which list these standard quality evaluation parameters for all six comparison methods and the proposed method. All the spectral fusion parameters are better for the proposed method, while the spatial quality parameters are better for the PCA based method. The average correlation coefficient and average SNR are significantly better, which shows that the proposed method preserves both spatial and spectral quality better than the other reported methods and provides the best tradeoff solution. The multiresolution based WT and CT methods have less color distortion, but their spatial resolution is affected. Color distortion is also very low in the proposed method, while it is highest in the IHS and PCA based methods. The resultant Pan-sharpened images of all seven methods are shown in Fig. 3(c) to (i) and Fig. 4(c) to (i). Due to the page length constraint, more simulation results are not depicted here.

V. CONCLUSION

A number of applications in remote sensing require a single fused Pan-sharpened image which preserves both spectral and spatial information with little color distortion. The fusion of multispectral and panchromatic images provides a solution by combining the clear geometric features of the panchromatic image with the color information of the multispectral image. The proposed algorithm provides a novel tradeoff solution using a contourlet transform and hybrid fusion rule based framework. The quality assessment parameters of the resultant fused image are significantly better than those of earlier reported methods; the SNR and average CC are remarkably higher than those of the other compared standard and recent methods, and the visual quality of the proposed algorithm is better as well. The algorithm can be extended by applying more complex fusion rules based on artificial intelligence methods for more robust fusion. The computational time is the only limitation of the algorithm.

REFERENCES

[1] G. Piella, "A general framework for multiresolution image fusion: from pixels to regions," Journal of Information Fusion, Vol. 4, pp. 259-280, 2003.

[2] C. Pohl, J. L. Van Genderen, "Multisensor image fusion in remote sensing: concepts, methods and applications," International Journal of Remote Sensing, Vol. 19(5), pp. 823-854, 1998.

[3] R. A. Schowengerdt, Remote Sensing: Models and Methods for Image Processing, 2nd ed., Orlando, FL, Academic, 1997.

[4] Z. Wang, D. Ziou, C. Armenakis, Q. Li, "A Comparative Analysis of Image Fusion Methods," IEEE Trans. Geoscience and Remote Sensing, Vol. 43, no. 6, pp. 1391-1402, 2005.

[5] Yun Zhang, "Understanding Image Fusion," PCI Geomatics, Vol. 24, 2008.

[6] J. Zhou, D. L. Civco, J. A. Silander, "A wavelet transform method to merge Landsat TM and SPOT panchromatic data," International Journal of Remote Sensing, Vol. 19(4), pp. 743-757, 1998.

[7] T. Tu, S. Su, H. Shyu, P. Huang, "A new look at IHS-like image fusion methods," Information Fusion, Vol. 2, pp. 177-186, 2001.


[8] M. González-Audícana, J. L. Saleta, R. G. Catalán, R. García, "Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition," IEEE Transactions on Geoscience and Remote Sensing, Vol. 42, pp. 1291-1299, 2004.

[9] Aboubaker M. ALEjaily, Ibrahim A. El Rube, Mohab A. Mangoud, "Fusion of Remote Sensing Images Using Contourlet Transform," in Innovations and Advanced Techniques in Systems, Computing Sciences and Software Engineering, Springer, pp. 213-218, 2008.

[10] Pouran Behnia, "Comparison Between Four Methods for Data Fusion of ETM+ Multispectral and Pan Images," Journal of Geo-spatial Information, Vol. 8, Issue 2, 2005.

[11] Miao Qiguang, Wang Baoshu, "A novel image fusion method using contourlet transform," International Conference on Communications, Circuits and Systems Proceedings, Vol. 1, pp. 548-552, 2006.

[12] Tania Stathaki, Image Fusion: Algorithms and Applications, Elsevier, 1st ed., 2008.

[13] Rafael C. Gonzalez, Richard E. Woods, Digital Image Processing, Pearson Education, 2nd ed., 2006.

[14] Shutao Li, Bin Yang, "Multifocus image fusion using region segmentation and spatial frequency," Image and Vision Computing, Elsevier, Vol. 26, pp. 971-979, 2008.

[15] http://studio.gge.unb.ca/UNB/zoomview/examples.html
