Article

Hyperspectral Pansharpening Based on Intrinsic Image Decomposition and Weighted Least Squares Filter

State Key Laboratory of Integrated Service Networks, Xidian University, Xi’an 710071, China
* Authors to whom correspondence should be addressed.
Submission received: 19 December 2017 / Revised: 2 March 2018 / Accepted: 9 March 2018 / Published: 12 March 2018
(This article belongs to the Special Issue Hyperspectral Imaging and Applications)

Abstract
Component substitution (CS) and multiresolution analysis (MRA) based methods have been adopted in hyperspectral pansharpening. The major contribution of this paper is a novel CS-MRA hybrid framework based on intrinsic image decomposition and the weighted least squares filter. First, the panchromatic (P) image is sharpened by the Laplacian of Gaussian (LOG) enhancement algorithm to enhance the spatial details, and the weighted least squares (WLS) filter is applied to the enhanced P image to extract its high-frequency information. Then, the MTF-based deblurring method is applied to the interpolated hyperspectral (HS) image, and intrinsic image decomposition (IID) is adopted to decompose the deblurred interpolated HS image into illumination and reflectance components. Finally, the detail map is generated by making a proper compromise between the high-frequency information of the P image and the spatial information preserved in the illumination component of the HS image. The detail map is further refined by the information ratio of the different bands of the HS image and injected into the deblurred interpolated HS image. Experimental results indicate that the proposed method achieves better fusion results than several state-of-the-art hyperspectral pansharpening methods. This demonstrates that the combination of an IID technique and a WLS filter is an effective approach to hyperspectral pansharpening.


1. Introduction

Hyperspectral pansharpening aims to combine the complementary information of the hyperspectral (HS) and panchromatic (P) images for image analysis and various applications [1]. Spatial and spectral information both play an important role in remote sensing image analysis. Unfortunately, due to sensor limitations and theoretical constraints, most satellites cannot provide a remote sensing image with both high spatial and high spectral resolution [2,3]. However, hyperspectral images with high spectral and spatial resolution are in demand. Therefore, it is important to develop hyperspectral pansharpening techniques to improve the spatial resolution of hyperspectral images.
Many methods dedicated to hyperspectral pansharpening have been proposed in the last two decades [4,5]. These methods can be roughly divided into four groups: component substitution (CS), multiresolution analysis (MRA), matrix factorization, and Bayesian. In recent years, there has been increasing interest in Bayesian and matrix factorization methods. Bayesian methods usually model the HS and P images as degraded versions of the high-resolution HS image and then restore that image by solving optimization problems; examples include Sparse Representation [6,7], Bayesian HySure [8], and the Bayesian naive Gaussian prior (Bayesian Naive) [9]. Matrix factorization methods build the fusion optimization model on the linear mixture model; the coupled non-negative matrix factorization (CNMF) [10] method is representative of this group. Bayesian and matrix factorization methods have shown considerable potential for improving the quality of the fused images. However, to estimate a good solution, they must solve an ill-posed inverse problem, which is time consuming and computationally expensive [11,12], a serious drawback for practical applications.
The CS and MRA methods are easy and fast to implement [13,14]. The component substitution (CS) methods include algorithms such as intensity-hue-saturation (IHS) [15,16], Gram–Schmidt (GS) [17], and principal component analysis (PCA) [18,19,20]. The primary concept of CS methods is that the HS image can be separated into spectral and spatial components, and that the P image is a good substitute for the separated spatial component. The final fused image is obtained by the inverse spectral transformation [21]. The fusion step of the CS methods is summarized as
$HF_k = H_k + \alpha_k (P - S)$ (1)
where $k = 1, 2, \ldots, \lambda$, $\lambda$ is the number of HS image bands, $\alpha_k$ is the $k$th injection gain, matrix $P$ is the panchromatic image, matrix $S$ is the spatial component of the HS image, $P - S$ is generally called the detail map, matrix $H$ is the interpolated HS image, and $HF_k$ and $H_k$ are the $k$th bands of the fused image and the interpolated HS image, respectively. The injection gain controls how the detail map is merged with the interpolated HS image to form the fused HS image. The CS methods are simple and fast to implement [22]. However, the spectral distortion is serious due to the spectral mismatch between the P image and the replaced component [23].
The multiresolution analysis (MRA) methods include algorithms such as the modulation transfer function (MTF) generalized Laplacian pyramid (MTF-GLP) [24], smoothing filter-based intensity modulation (SFIM) [25], and MTF-GLP with high-pass modulation (MGH) [26]. Spatial filtering is performed on the P image to extract the high-frequency spatial details, and the fused HS image is obtained by injecting the extracted spatial details into each band of the interpolated HS image. A general formulation of MRA methods is [4]
$HF_k = H_k + \beta_k (P - P_L)$ (2)
where $k = 1, 2, \ldots, \lambda$, $\lambda$ denotes the number of HS image bands, $\beta_k$ denotes the $k$th injection gain, and $P_L$ denotes the low-frequency component of the P image. The advantages of the MRA methods are good performance, temporal coherence, spectral consistency, and acceptable computational complexity. In addition, the MRA methods can easily be adapted when the source of the high spatial frequencies is another multispectral/hyperspectral image [27]. However, blurry fused images may result when the adopted low-pass filters are poorly shaped [24].
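To make the two baseline fusion steps concrete, the following Python sketch implements Equations (1) and (2) per band. It is an illustration rather than a reference implementation: the array shapes and the Gaussian low-pass filter standing in for an MTF-matched filter are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cs_fuse(H, P, S, alpha):
    """Eq. (1): inject the shared detail map (P - S) into every band.

    H: (bands, rows, cols) interpolated HS cube; P, S: (rows, cols);
    alpha: (bands,) injection gains.
    """
    return H + alpha[:, None, None] * (P - S)

def mra_fuse(H, P, beta, sigma=2.0):
    """Eq. (2): inject the high-pass residual (P - P_L) into every band.

    A Gaussian blur stands in here for an MTF-matched low-pass filter.
    """
    P_L = gaussian_filter(P, sigma)
    return H + beta[:, None, None] * (P - P_L)
```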
To overcome the problems of the CS and MRA methods, CS-MRA hybrid frameworks have been proposed [20,28,29]. These methods fuse the P image and the spatial component of the HS image by multiscale transforms, and the final fused image is obtained by the inverse spectral transformation. The performance of the CS-MRA hybrid methods improves on that of the pure CS or MRA methods. However, the fused images obtained by these hybrid methods still suffer from spectral distortion of different degrees, since the structure of the HS image is not fully considered. To overcome this drawback, we propose a new CS-MRA hybrid framework based on intrinsic image decomposition and the weighted least squares filter. Specifically, we first filter the sharpened P image with the weighted least squares (WLS) filter to obtain the high-frequency component of the P image. Subsequently, MTF-based deblurring is performed on the interpolated HS image, and intrinsic image decomposition (IID) is applied to the deblurred interpolated HS image to extract its illumination component. The detail map is generated by merging the high-frequency information of the P image with the illumination component of the HS image. Finally, the detail map is injected into the deblurred interpolated HS image to obtain the fused HS image.
The following are the major contributions of the proposed method using IID and WLS:
(a)
It uses the IID technique, which separates the deblurred HS image into reflectance and illumination components, to extract the spatial information of the HS image.
(b)
Unlike the traditional CS and MRA methods, where the spatial details are extracted from the P image alone, the detail map in the proposed method depends on both the HS image and the P image. This reduces the spectral distortion caused by the spectral mismatch problem.
(c)
The WLS filter preserves the spatial details at edges better than traditional low-pass filters, since it makes a good compromise between sharpening and blurring. Therefore, the WLS filter is adopted to extract the high-frequency component of the P image in the proposed method.
(d)
Most CS and MRA methods are based on the assumption that each band of the HS image shares the same detail map. We assume instead that different bands of the HS image require different detail maps. The detail map is generated according to the ratio of information between different bands of the HS image.
This paper is organized as follows. Section 2 describes the related work, and Section 3 presents the proposed method. Section 4 presents the experimental results and discussion, and Section 5 concludes the paper.

2. Related Work

2.1. Intrinsic Image Decomposition

Based on the principle of human visual perception, intrinsic image decomposition (IID) aims to decompose an image into reflectance and illumination components [30]. The reflectance component depends on the material of the objects in an image. An image contains abundant edge and structure information, but this information is not directly related to the reflectance component; it is mainly preserved in the illumination component. The intrinsic image decomposition process can be expressed as
$I = S \times R$ (3)
where matrix $I$ represents an input image, and matrices $S$ and $R$ represent the illumination and reflectance components, respectively. From this equation, it is obvious that estimating $S$ and $R$ from $I$ alone is a difficult, ill-posed problem. Many solutions have been presented in recent years [31,32,33,34]; among them, user intervention [34], extra constraints [30], and heuristic cues have shown good results in estimating the intrinsic image from the input image. The retinex algorithm is a widely used image enhancement method based on scientific experiments and analysis [35]. Compared with traditional methods that can enhance only a single characteristic, the retinex algorithm balances dynamic range compression, contrast improvement, and color constancy. Importantly, detail information immersed in bright regions or shadows can be effectively revealed by the retinex algorithm. However, the single-scale retinex algorithm has difficulty balancing the detail contrast of the image against color preservation. Hyperspectral pansharpening aims to enhance the spatial information while preserving the spectral information, and the multiscale retinex algorithm shows good performance in achieving this objective [36]. Motivated by this, the multiscale retinex based IID algorithm is adopted here as a suitable technique for hyperspectral pansharpening.

2.2. Weighted Least Squares Filter

The WLS filter, an edge-preserving filter, has become a highly active research topic in image processing. Since the WLS filter does not blur strong edges during image decomposition, ringing artifacts can be avoided. Compared with other edge-preserving filters, the WLS filter achieves a good compromise between sharpening and blurring [37]. The WLS filter can be used to estimate the low-frequency component of an input image. Specifically, it assumes that the filtered image f should be as close as possible to the input image g and as smooth as possible everywhere except across edges. Farbman et al. [37] proposed that the filtered image f can be obtained by seeking the minimum of the following equation
$\sum_p \left( \left( f_p - g_p \right)^2 + \gamma \left( \omega_{x,p}(g) \left( \frac{\partial f}{\partial x} \right)_p^2 + \omega_{y,p}(g) \left( \frac{\partial f}{\partial y} \right)_p^2 \right) \right)$ (4)
where $p$ refers to the $p$th pixel. The first term $(f_p - g_p)^2$ keeps the filtered image $f$ close to the input image $g$; $\omega_{x,p}(g)$ and $\omega_{y,p}(g)$, which depend on $g$, are smoothness weights; and $\gamma$ is a regularization parameter that balances the two terms.
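As a minimal sketch of this objective (the linear solver it leads to is covered in Section 3.1), the function below evaluates the WLS energy of a candidate filtered image f against an input g; the forward-difference approximation of the derivatives and the edge replication are implementation assumptions.

```python
import numpy as np

def wls_energy(f, g, wx, wy, gamma):
    """Data term sum((f - g)^2) plus the gamma-weighted smoothness term
    on the x/y forward differences of f; wx, wy are the per-pixel
    smoothness weights computed from g."""
    data = np.sum((f - g) ** 2)
    fx = np.diff(f, axis=1, append=f[:, -1:])  # df/dx, replicated right edge
    fy = np.diff(f, axis=0, append=f[-1:, :])  # df/dy, replicated bottom edge
    smooth = np.sum(wx * fx ** 2 + wy * fy ** 2)
    return data + gamma * smooth
```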

3. Proposed Method

The schematic diagram of the proposed method is shown in Figure 1. It consists of four major parts: (1) extracting the spatial details of the P image; (2) extracting the spatial details of the HS image; (3) generating the detail map; and (4) obtaining the fused HS image.

3.1. Extracting Spatial Details of the P Image with Weighted Least Squares Filter

The P image contains plentiful spatial information. To enhance the spatial details of the P image and reduce noise, we use the Laplacian of Gaussian (LOG) enhancement algorithm to sharpen the spatial information of the P image. The LOG algorithm can reduce image noise and sharpen structural details. First, Gaussian convolution filtering is performed on the P image to remove noise. The Laplace operator is then used to enhance the spatial details of the denoised P image. The enhanced P image is finally obtained by combining the P image with the Laplacian-filtered image. This process can be expressed as
$P_S = P + a \left[ P * f_L(x, y) \right]$ (5)
where $P_S$ is the enhanced P image, $f_L(x, y)$ is the kernel function of the LOG operator, and $a$ is a constant related to the central coefficient of the kernel $f_L(x, y)$. In this paper, the central coefficient of the kernel is negative, so we set $a$ to -1. $f_L(x, y)$ is defined as
$f_L(x, y) = \frac{\partial^2}{\partial x^2} f_g(x, y) + \frac{\partial^2}{\partial y^2} f_g(x, y)$ (6)
where $f_g(x, y) = \frac{1}{2\pi\sigma^2} \exp\left( -\frac{x^2 + y^2}{2\sigma^2} \right)$ is the Gaussian convolution function and $\sigma$ is its standard deviation.
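A minimal sketch of the LOG sharpening of Equations (5) and (6), assuming scipy's gaussian_laplace realizes the convolution P * f_L(x, y); the default sigma is illustrative, and a = -1 follows the sign convention stated above.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_enhance(P, sigma=1.0, a=-1.0):
    """Eq. (5): P_S = P + a * [P * f_L(x, y)], with f_L the LOG kernel."""
    P = P.astype(np.float64)
    log_response = gaussian_laplace(P, sigma)  # P convolved with f_L
    return P + a * log_response
```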
To extract the spatial details of the enhanced P image, the WLS filter is applied to the $P_S$ image. Following the principle of the WLS filter, the low-frequency image $P_L$, which should be as close as possible to the $P_S$ image and as smooth as possible except at the edges, is obtained from the following optimization problem.
$\arg\min_{P_L} \left( \left\| P_L - P_S \right\|^2 + \gamma \left( \omega_x \left( \frac{\partial P_L}{\partial x} \right)^2 + \omega_y \left( \frac{\partial P_L}{\partial y} \right)^2 \right) \right)$ (7)
The first term $\| P_L - P_S \|^2$ keeps $P_L$ close to $P_S$, and $\gamma$ is a regularization parameter balancing the two terms. The second term $\omega_x (\partial P_L / \partial x)^2 + \omega_y (\partial P_L / \partial y)^2$ smooths $P_L$ by minimizing its partial derivatives with respect to $x$ and $y$, where $\omega_x$ and $\omega_y$ are smoothness weights. Rewriting Equation (7) in matrix notation gives
$\left( P_L - P_S \right)^T \left( P_L - P_S \right) + \gamma \left( P_L^T Z_x^T W_x Z_x P_L + P_L^T Z_y^T W_y Z_y P_L \right)$ (8)
where $W_x$ and $W_y$ are diagonal matrices containing the smoothness weights $\omega_x$ and $\omega_y$, respectively, and $Z_x$ and $Z_y$ are the discrete differentiation operators. The following linear equation is obtained by setting the derivative of Equation (8) to zero
$\left( I + \gamma \left( Z_x^T W_x Z_x + Z_y^T W_y Z_y \right) \right) P_L = P_S$ (9)
where $Z_x^T W_x Z_x + Z_y^T W_y Z_y$ is a five-point spatially inhomogeneous Laplacian matrix. We define $\omega_x$ and $\omega_y$ in the same manner as in [38]:
$\omega_{x,p}(P_S) = \left( \left| l_x(p) \right|^{\alpha} + \varepsilon \right)^{-1}, \quad \omega_{y,p}(P_S) = \left( \left| l_y(p) \right|^{\alpha} + \varepsilon \right)^{-1}$ (10)
where $l$ is the log-luminance channel of the $P_S$ image, $\varepsilon$ is a small constant close to zero, and the exponent $\alpha$ controls the sensitivity to the gradients of $P_S$; it is set to 2.0 in this paper. Through the above analysis, the low-frequency image $P_L$ can be estimated. Considering the MRA formulation (Equation (2)), the high-frequency image $P_H$ is obtained by subtracting the low-frequency image $P_L$ from the enhanced P image $P_S$:
$P_H = P_S - P_L$ (11)
where $P_L$ and $P_H$ denote the low-frequency and high-frequency images of the $P_S$ image, respectively. We consider that most of the spatial details of the enhanced P image are contained in the high-frequency image.
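The whole extraction step (Equations (7)-(11)) reduces to one sparse linear solve. The sketch below is one plausible implementation in the spirit of Farbman et al. [37], not the authors' code; the forward-difference construction of Z_x and Z_y and the handling of the log-luminance channel are assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def wls_highpass(Ps, gamma=1.0, alpha=2.0, eps=1e-4):
    """Solve Eq. (9) for P_L, then return P_H = P_S - P_L (Eq. (11))."""
    Ps = Ps.astype(np.float64)
    r, c = Ps.shape
    n = r * c
    log_l = np.log(Ps + eps)  # log-luminance channel of P_S
    # Per-pixel smoothness weights of Eq. (10)
    dx = np.diff(log_l, axis=1, append=log_l[:, -1:])
    dy = np.diff(log_l, axis=0, append=log_l[-1:, :])
    wx = 1.0 / (np.abs(dx) ** alpha + eps)
    wy = 1.0 / (np.abs(dy) ** alpha + eps)
    # Forward-difference operators Z_x, Z_y on the flattened image
    ex = np.ones(n); ex[c - 1::c] = 0     # no x-difference at the right edge
    ey = np.ones(n); ey[n - c:] = 0       # no y-difference at the bottom edge
    Zx = sp.diags([-ex, ex[:-1]], [0, 1], shape=(n, n))
    Zy = sp.diags([-ey, ey[:-c]], [0, c], shape=(n, n))
    Wx, Wy = sp.diags(wx.ravel()), sp.diags(wy.ravel())
    # Eq. (9): (I + gamma (Zx^T Wx Zx + Zy^T Wy Zy)) P_L = P_S
    A = sp.eye(n) + gamma * (Zx.T @ Wx @ Zx + Zy.T @ Wy @ Zy)
    P_L = spsolve(A.tocsc(), Ps.ravel()).reshape(r, c)
    return Ps - P_L                       # high-frequency image P_H
```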
Figure 2 shows an example of the spatial detail processing of the P image. In this experiment, the respective parameters are adjusted to their optimal values. From Figure 2b, it can be observed that the Laplacian of Gaussian (LOG) enhancement algorithm indeed plays an important role in spatial detail enhancement. Figure 2c shows the high-frequency image of the enhanced P image obtained by the proposed method. To illustrate the effectiveness of the WLS filter in spatial detail extraction, Figure 2c is compared with Figure 2d, which is obtained by Gaussian filtering of the enhanced image. Similarly, to illustrate the effectiveness of the LOG algorithm in spatial enhancement, Figure 2c is compared with Figure 2e, which is obtained by WLS filtering of the original P image. A comparison of Figure 2c with Figure 2d,e shows fine distinctions among the three filtered images: in Figure 2d,e, some low-frequency components are mixed into the high-frequency image, especially in spherical regions. Therefore, the LOG enhancement algorithm indeed performs well in spatial detail enhancement, and the WLS filter is well suited to extracting the high-frequency component of the P image.

3.2. Extracting Spatial Detail of the HS Image with Intrinsic Image Decomposition

The original hyperspectral image (OH) is first upsampled to the size of the P image:

$UH_k = \left( OH_k \right)\uparrow$ (12)

for $k = 1, 2, \ldots, \lambda$, where $\lambda$ is the number of HS image bands, $\uparrow$ denotes the upsampling operation, $UH$ is the interpolated HS image, and $OH_k$ and $UH_k$ are the $k$th bands of the original HS image and the interpolated HS image, respectively.
A preprocessing step then applies MTF-based deblurring [39] to the interpolated HS image. The deblurring is performed in the frequency domain as follows
$\hat{H}_k = \frac{\hat{B}_k^{*}}{|\hat{B}_k|^2 + \frac{1}{SNR}}\, \hat{U}_k$ (13)
where $\hat{H}$ is the Fourier transform of the deblurred interpolated HS image, $\hat{H}_k$ is the $k$th band of $\hat{H}$, $\hat{U}_k$ and $\hat{B}_k$ are the Fourier transforms of $UH_k$ and of the blurring kernel for the $k$th band, respectively, $\hat{B}_k^{*}$ is the complex conjugate of $\hat{B}_k$, and $1/SNR$ is the noise-to-signal power ratio, a constant close to zero. The deblurred interpolated HS image $H$ is obtained by the inverse Fourier transform of $\hat{H}$. Subsequently, following [39], we adopt the de-ringing technique to decrease the ringing artifacts caused by non-periodic boundaries.
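A per-band sketch of this Wiener-style deblurring, assuming the blurring kernel B_k is supplied at full image size and centered; the noise-to-signal constant is illustrative, and the de-ringing step of [39] is omitted.

```python
import numpy as np

def wiener_deblur_band(UH_k, B_k, nsr=1e-3):
    """Eq. (13): H_hat_k = conj(B_hat_k) / (|B_hat_k|^2 + 1/SNR) * U_hat_k."""
    U_hat = np.fft.fft2(UH_k)
    B_hat = np.fft.fft2(np.fft.ifftshift(B_k))  # move kernel center to origin
    H_hat = np.conj(B_hat) / (np.abs(B_hat) ** 2 + nsr) * U_hat
    return np.real(np.fft.ifft2(H_hat))
```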
Following the CS-based methods, the deblurred HS image $H$ can be separated into spectral and spatial information, and IID is introduced to perform this separation. Based on the principle of IID, each band of the HS image is composed of two components, i.e., the illumination component and the reflectance component. For the HS image, the reflectance component is identified as the spectral information, while the illumination component carries the spatial information. Each band of the deblurred HS image can be represented as follows
$H_k(x, y) = HR_k(x, y) \times HI_k(x, y)$ (14)
where $(x, y)$ is the spatial coordinate. $HR_k(x, y)$, which depends on the intrinsic nature of the objects, represents the reflectance component of the $k$th band of the HS image. $HI_k(x, y)$, which is related to the structure information of the objects, represents the illumination component of the $k$th band. Based on the single-scale retinex algorithm, the output can be estimated as the difference between the input and the average of its neighborhood, described by the following equation
$hR_k(x, y) = \log\left( H_k(x, y) \right) - \log\left( H_k(x, y) * F(x, y) \right)$ (15)
where $hR_k(x, y) = \log(HR_k(x, y))$, $H_k$ is the $k$th band of the input image, $hR_k$ is the $k$th band of the output image, $F$ is the Gaussian surround function, and $*$ denotes convolution. The illumination is estimated by the convolution $H_k(x, y) * F(x, y)$. The Gaussian surround function $F(x, y)$ is given as follows
$F(x, y) = C \exp\left[ -\left( x^2 + y^2 \right) / 2\sigma^2 \right]$ (16)

$\iint F(x, y)\, dx\, dy = 1$ (17)
where $\sigma$, the scale factor of the Gaussian kernel, controls the trade-off between the color information and the spatial resolution of the output image; it cannot be determined theoretically. Generally, the larger the scale parameter $\sigma$, the better the color fidelity and the lower the spatial resolution of the output image. To strike a compromise between the extraction of spatial details and the preservation of spectral information, multiscale retinex is used to separate the reflectance component from the illumination component of the deblurred HS image. This process is given by the following formula
$hMR_k(x, y) = \sum_{n=1}^{N} \omega_n\, hR_k = \sum_{n=1}^{N} \omega_n \left[ \log\left( H_k(x, y) \right) - \log\left( H_k(x, y) * F_n(x, y) \right) \right]$ (18)
$F_n(x, y) = C_n \exp\left[ -\left( x^2 + y^2 \right) / 2\sigma_n^2 \right]$ (19)
where $N$ represents the number of scales and $\omega_n$ represents the weighting factor. Based on experimental experience, $N$ is set to 3, $\omega_n$ is set to 1/3, and $\sigma_1$, $\sigma_2$, and $\sigma_3$ are set to 20, 40, and 80, respectively. Given the scale factor $\sigma_n$, the normalization factor $C_n$ can be determined from Equations (17) and (19). The reflectance component of the deblurred HS image is then easily obtained
$HR_k(x, y) = \exp\left( hMR_k(x, y) \right)$ (20)
According to Equation (14), the illumination component of the deblurred HS image is calculated by
$HI_k = H_k / HR_k$ (21)
We consider that most of the spatial details of each band of the deblurred HS image are contained in $HI_k$.
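The decomposition of Equations (15)-(21) can be sketched per band as follows; the scales (20, 40, 80) and equal weights 1/3 follow the text, while realizing the Gaussian surround with scipy and guarding log(0) with a small offset are implementation assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msr_iid_band(H_k, sigmas=(20, 40, 80), eps=1e-6):
    """Split one deblurred HS band into reflectance HR_k and illumination HI_k."""
    H_k = H_k.astype(np.float64) + eps
    h_mr = np.zeros_like(H_k)
    for sigma in sigmas:                        # Eq. (18): weighted SSR sum
        surround = gaussian_filter(H_k, sigma)  # H_k * F_n (Eq. (19))
        h_mr += (np.log(H_k) - np.log(surround)) / len(sigmas)
    HR_k = np.exp(h_mr)                         # Eq. (20): reflectance
    HI_k = H_k / HR_k                           # Eq. (21): illumination
    return HR_k, HI_k
```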
In [29], the spatial details of each band of the HS image are extracted by average filtering. The right image of Figure 3b shows the detail map obtained by average filtering, and the right image of Figure 3c shows the detail map obtained by the adopted IID technique. The detail map in Figure 3b contains only part of the spatial information of the HS image, whereas the detail map obtained by the IID technique contains most of it. The IID is thus an effective technique for extracting the spatial information of the HS image.

3.3. Generating the Detail Map

The CS methods suffer from spectral distortion, since the detail map depends only on the P image. To reduce this distortion, the detail map in this paper depends on both the P image and the HS image: the detail map of each band of the HS image is determined by the illumination component of the HS image and the high-frequency component of the P image. Specifically, after obtaining the high-frequency component of the P image and the illumination component of the $k$th band of the HS image, the initial detail map is generated by
$D_k = \zeta P_H + (1 - \zeta) HI_k$ (22)
for $k = 1, 2, \ldots, \lambda$, where $\zeta$ is a weight coefficient and $D_k$ is the initial detail map. Since the P image contains much more spatial information than the HS image, $\zeta$ is set to 0.9. Many traditional CS and MRA based methods assume that all bands of the HS image require the same detail map, a hypothesis that often produces spectral and spatial distortion. We consider instead that each band of the HS image requires a different detail map, and we assume that the amount of spatial detail required by a band is proportional to the relative amount of information in that band. Keeping this ratio unchanged helps reduce the spectral distortion. Thus, we define the following formula
$DF_k = \frac{UH_k}{(1/\lambda) \sum_{k=1}^{\lambda} UH_k}\, D_k$ (23)
for $k = 1, 2, \ldots, \lambda$, where $\lambda$ is the number of bands of the HS image and $DF_k$ denotes the final detail map required by the $k$th band of the HS image.
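A compact sketch of Equations (22) and (23), assuming the high-frequency image P_H and the per-band illumination cube HI have already been computed as numpy arrays; the small denominator guard is an implementation assumption.

```python
import numpy as np

def detail_maps(P_H, HI, UH, zeta=0.9):
    """P_H: (rows, cols); HI, UH: (bands, rows, cols). Returns DF per band."""
    D = zeta * P_H[None] + (1.0 - zeta) * HI     # Eq. (22): initial detail maps
    mean_band = UH.mean(axis=0)                  # (1/lambda) sum_k UH_k
    return UH / (mean_band[None] + 1e-12) * D    # Eq. (23): band-ratio scaling
```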

3.4. Obtaining the Fused HS Image

Following Equation (1), we define a constraint parameter $\alpha$ to control the final detail map. The fused HS image is obtained by injecting the final detail map into each band of the deblurred interpolated HS image.
$HF_k = H_k + \alpha\, DF_k$ (24)
for $k = 1, 2, \ldots, \lambda$, where $HF$ is the fused HS image and $HF_k$ is its $k$th band. Since the amount of injected detail is regulated by the parameter $\alpha$, the spectral and spatial distortion can be restrained.
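The injection itself is then a single broadcasted operation, sketched below with alpha = 0.1 anticipating the value selected in Section 4.3.

```python
def inject_details(H, DF, alpha=0.1):
    """Eq. (24): HF_k = H_k + alpha * DF_k for every band of the cube."""
    return H + alpha * DF
```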

4. Results

In the experiments, the test P and HS images are cropped from four hyperspectral remote sensing datasets: the Salinas dataset [12], the Pavia University dataset [12], the Washington DC dataset, and the Hyperion dataset [40]. Several widely used evaluation indexes are adopted to assess the effectiveness of the proposed hyperspectral pansharpening method. Six representative hyperspectral pansharpening methods are used for comparison: principal component analysis (PCA) [18], guided filter PCA (GFPCA) [41], HySure [8], coupled nonnegative matrix factorization (CNMF) [10], MTF-GLP with high-pass modulation (MGH) [26], and Sparse Representation [7]. The PCA method is the most representative of the CS-based methods. The MGH method is a successful MRA-based hyperspectral pansharpening method. The CNMF method is one of the matrix factorization algorithms. The Sparse Representation and HySure methods, presented recently, belong to the Bayesian category. The GFPCA method won the "Best Paper Challenge" of the 2014 IEEE data fusion contest. Therefore, these six methods are compared with the proposed method in the experiments.

4.1. Dataset Description

(1) The Salinas dataset: This dataset is composed of urban and rural scenes. The HS image was collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over Salinas Valley, California [12]. It is characterized by 224 bands in the spectral range of 0.4–2.5 μm. The water absorption and noise-corrupted bands are removed, and 204 bands are used for experimentation. In the experiments, the P image covers the visible spectral range with a spatial resolution of 20 m. The dimensions of the HS and P images are 40 × 40 and 200 × 200 pixels, respectively.
(2) The Pavia University dataset: This dataset was acquired by the Reflective Optics System Imaging Spectrometer (ROSIS) over Pavia, Italy [12]. It covers the spectral range of 0.4–0.9 μm in 115 bands. After the water absorption and noise-corrupted bands are removed, 103 bands are used for experimentation. The P image has a spatial resolution of 1.3 m in the visible spectral range. The sizes of the HS and P images in the experiment are 40 × 40 and 200 × 200 pixels, respectively.
(3) The Washington DC dataset: This dataset was collected by the Spectral Information Technology Application Center of Virginia over the Washington DC Mall. It consists of 210 bands in the spectral range of 0.4–2.4 μm. Some bands in which the atmosphere is opaque have been removed, and 191 bands are used for the experiment. In the experiments, the P image has a spatial resolution of 0.8 m in the visible spectral range. The dimensions of the HS and P images are 40 × 40 and 200 × 200 pixels, respectively.
(4) The Hyperion dataset: The EO-1 spacecraft operated by NASA carries two instruments: Hyperion and the Advanced Land Imager (ALI) [12]. Hyperion provides the HS image, which is characterized by 242 bands in the spectral range of 0.4–2.5 μm with a spatial resolution of 30 m. The ALI instrument provides the P image, which covers the spectral range of 0.48–0.69 μm with a spatial resolution of 10 m.

4.2. Quality Measures

Generally, the performance of a hyperspectral pansharpening method can be assessed by subjective inspection and objective indexes. The similarity of the colors between the reference HS image and the fused HS image can be judged by subjective evaluation, while objective indexes compare the fusion quality quantitatively. This paper uses the five most widely used indexes: cross correlation (CC) [42], spectral angle mapper (SAM) [43], root mean squared error (RMSE), erreur relative globale adimensionnelle de synthèse (ERGAS) [44], and the universal image quality index (UIQI) [45]. The CC is a spatial measure, and the SAM is a spectral measure. RMSE, ERGAS, and UIQI are global spectral and spatial measures. The formal definitions of these indexes are provided below. In the definitions, the matrix $FH = [h_1, \ldots, h_m] \in \mathbb{R}^{\lambda \times m}$ denotes the fused HS image with $\lambda$ bands and $m$ pixels, and $RH \in \mathbb{R}^{\lambda \times m}$ represents the reference HS image. $RH_l$ and $FH_l$ represent the $l$th columns of $RH$ and $FH$, respectively, and $RH^j$ and $FH^j$ represent the $j$th rows. $X, Y \in \mathbb{R}^{1 \times m}$ denote two single-band images, and $X_i$ denotes the $i$th element of $X$.
(1) Cross correlation: The CC measures the degree of the geometric distortion. It is defined as follows:
$CC(RH, FH) = \frac{1}{\lambda} \sum_{j=1}^{\lambda} CCS\left( RH^j, FH^j \right)$ (25)
The CCS characterizes the geometric distortion of a single-band image as follows:
$CCS(X, Y) = \frac{\sum_{i=1}^{m} (X_i - \mu_X)(Y_i - \mu_Y)}{\sqrt{\sum_{i=1}^{m} (X_i - \mu_X)^2\, \sum_{i=1}^{m} (Y_i - \mu_Y)^2}}$ (26)
where $\mu_X$ and $\mu_Y$ are the means of $X$ and $Y$, respectively. The optimal value of CC is 1.
(2) Spectral angle mapper: The SAM measures the spectral distortion between the fused image FH and the reference image RH, which is defined as:
$SAM(RH, FH) = \frac{1}{m} \sum_{l=1}^{m} \arccos\left( \frac{\langle RH_l, FH_l \rangle}{\left\| RH_l \right\| \left\| FH_l \right\|} \right)$ (27)
The SAM is a spectral measure: the smaller the SAM value, the better the fusion performance.
(3) Root mean squared error: The RMSE which measures the standard difference between the two matrices RH and FH, is defined as
$RMSE(RH, FH) = \sqrt{\frac{\mathrm{trace}\left[ (RH - FH)^T (RH - FH) \right]}{m \lambda}}$ (28)
The optimal value of RMSE is 0.
(4) Erreur relative globale adimensionnelle de synthèse: The ERGAS, which is a global measure, is defined as
$ERGAS(RH, FH) = 100\, c \sqrt{\frac{1}{\lambda} \sum_{j=1}^{\lambda} \left( \frac{RMSE_j}{\mu_j} \right)^2}$ (29)
where $RMSE_j = \sqrt{\mathrm{trace}\left[ (RH^j - FH^j)^T (RH^j - FH^j) \right] / m}$, $c$ represents the ratio of the linear resolutions of the P and HS images, and $\mu_j$ is the mean value of the $j$th band of the reference image. The optimal value of ERGAS is 0.
(5) Universal image quality index: The UIQI, which evaluates the similarity of the reference image RH and the fused image FH, is defined as
$UIQI(RH, FH) = \frac{4 \sigma_{RF}^2\, \mu_R \mu_F}{\left( \sigma_R^2 + \sigma_F^2 \right)\left( \mu_R^2 + \mu_F^2 \right)}$ (30)
where $\mu_R$ and $\mu_F$ are the sample means and $\sigma_R^2$ and $\sigma_F^2$ the sample variances of the reference image $RH$ and the fused image $FH$, and $\sigma_{RF}^2$ is the covariance of the two images. The ideal value of the UIQI is 1.
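For reference, the following compact numpy sketches transcribe the five indexes for a reference cube RH and a fused cube FH of shape (bands, pixels); they follow the definitions above directly and are not a validated evaluation toolbox.

```python
import numpy as np

def cc(RH, FH):
    """Mean per-band correlation, Eqs. (25)-(26)."""
    Rm = RH - RH.mean(axis=1, keepdims=True)
    Fm = FH - FH.mean(axis=1, keepdims=True)
    num = (Rm * Fm).sum(axis=1)
    den = np.sqrt((Rm ** 2).sum(axis=1) * (Fm ** 2).sum(axis=1))
    return np.mean(num / den)

def sam(RH, FH):
    """Mean per-pixel spectral angle in radians, Eq. (27)."""
    num = (RH * FH).sum(axis=0)
    den = np.linalg.norm(RH, axis=0) * np.linalg.norm(FH, axis=0)
    return np.mean(np.arccos(np.clip(num / den, -1.0, 1.0)))

def rmse(RH, FH):
    """Eq. (28)."""
    return np.sqrt(np.mean((RH - FH) ** 2))

def ergas(RH, FH, c):
    """Eq. (29); c is the P-to-HS linear resolution ratio."""
    rmse_j = np.sqrt(np.mean((RH - FH) ** 2, axis=1))
    return 100.0 * c * np.sqrt(np.mean((rmse_j / RH.mean(axis=1)) ** 2))

def uiqi(X, Y):
    """Single-band UIQI, Eq. (30); average over bands for a cube."""
    mx, my = X.mean(), Y.mean()
    vx, vy = X.var(), Y.var()
    cov = ((X - mx) * (Y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```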

4.3. Analysis of the Influence of Parameter α

In the experiments, $\alpha$ is the parameter that determines the quantity of the finally injected spatial details and directly influences the fusion performance. To select an optimal value, the proposed method is run on the Salinas dataset with different $\alpha$ settings, and five quality measures are applied to investigate the effect of $\alpha$ on the fusion performance. For clarity, the five quality measures are normalized to [0, 1] by min-max normalization and displayed in one figure. Figure 4 shows the performance of the proposed method with different $\alpha$ settings. It can be observed that the CC and UIQI values increase as $\alpha$ grows from 0 to 0.1, with the CC value reaching its maximum at $\alpha = 0.1$. In addition, the SAM, RMSE, and ERGAS values all decrease as $\alpha$ grows from 0 to 0.1 and increase again beyond $\alpha = 0.1$. We therefore conclude that the proposed method performs best when $\alpha = 0.1$. We have also evaluated the proposed method on various other hyperspectral remote sensing images and found that $\alpha = 0.1$ gives the best performance there as well. Therefore, the parameter $\alpha$ is set to 0.1 in this paper.
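A sketch of this sweep, reusing the metric sketches given after the quality measures; H, DF, and the reference RH are assumed to be numpy cubes of shape (bands, rows, cols), and the alpha grid and min-max normalization mirror the procedure described above.

```python
import numpy as np

def sweep_alpha(H, DF, RH, alphas=np.arange(0.0, 0.31, 0.05)):
    """Fuse at each alpha (Eq. (24)), score, and min-max normalize per metric."""
    R = RH.reshape(RH.shape[0], -1)
    scores = {"CC": [], "SAM": [], "RMSE": []}
    for a in alphas:
        F = (H + a * DF).reshape(H.shape[0], -1)
        scores["CC"].append(cc(R, F))
        scores["SAM"].append(sam(R, F))
        scores["RMSE"].append(rmse(R, F))
    return {k: (np.array(v) - min(v)) / (max(v) - min(v) + 1e-12)
            for k, v in scores.items()}
```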

4.4. Experiments on Simulated Hyperspectral Remote Sensing Datasets

The Salinas, Pavia University, and Washington DC datasets are all simulated datasets. For a simulated dataset, a reference high spatial resolution HS image is given, and the simulated P image and the simulated low spatial resolution HS image are obtained by Wald's protocol [44]. The reference high spatial resolution HS image is then used to evaluate the quality of the fused image.

4.4.1. Salinas Dataset

The color displays of the fused HS images obtained by the different methods are shown in Figure 5b–h. As reported in the literature, the PCA method generates serious spectral distortion. The fused image obtained by the GFPCA method looks blurry, since the injected spatial information appears insufficient, although it shows less spectral distortion than the PCA method. The edges in the fused images obtained by the HySure and MGH methods appear too sharp due to artifacts around the edges. The CNMF and Sparse Representation methods preserve the spectral information of the original HS image well; however, the edges in the vegetation and roof areas of their fused images are not clear. The halo artifacts and blurring problems are eliminated by the proposed method, which performs well in both spectral and spatial aspects.
To further compare the visual quality of the fused images, Figure 6 shows the difference images between the fused HS images and the reference HS image. Here, each difference image is generated by subtracting the reference image from the corresponding fused image pixel by pixel. The difference image of the proposed method shows a light blue color over almost the entire image; in other words, the proposed method yields the smallest difference between the reference HS image and the fused image among all methods, which further proves its outstanding fusion performance. The quality metrics of the different methods for the Salinas dataset are shown in Table 1. Considering the five quality metrics together, the proposed method gives the optimal values of all of them for the Salinas dataset. This means that the proposed method performs well in both spectral and spatial aspects.

4.4.2. Pavia University Dataset

Figure 7a shows the reference HS image of the Pavia University dataset, and Figure 7b–h show the fused images obtained by the different pansharpening methods. Visually comparing these fused images with the reference one leads to a conclusion similar to that of the previous experiment. The spatial and spectral quality of the fused image obtained by the PCA method is unsatisfactory; the spectral distortion caused by the PCA method is the most visible, especially in the vegetation areas. The GFPCA method improves the spectral aspect, but the spatial quality of its fused image needs further improvement. The HySure and MGH methods produce halo artifacts around edges, although such artifacts make the edges appear sharper. The CNMF method introduces spectral distortion, since the color of its fused image does not match that of the reference image in the roof area. By contrast, the fused images produced by the Sparse Representation method and the proposed method are the closest to the reference.
The visual quality of the fused images can also be assessed from the difference images between the fused HS images and the reference HS image, shown in Figure 8, where defects stand out against a flat background. The difference image of the proposed method is almost entirely light blue with little yellow mixed in. Based on this comparison, the proposed method indeed displays the best visual quality. Table 2 shows the objective quality assessment of the different methods for the Pavia University dataset. The proposed method shows the best objective performance in most measures, including CC, SAM, RMSE, and ERGAS, and its UIQI value is the second largest. This further demonstrates that the proposed method achieves state-of-the-art fusion performance.

4.4.3. Washington DC Dataset

Figure 9 shows the visual comparison of the fused images obtained by the different fusion methods for the Washington DC dataset. The reference HS image is displayed in Figure 9a, and Figure 9b–h show the fused images. It is apparent that the fused image obtained by the PCA method suffers from spectral and spatial distortion. The fused image obtained by the GFPCA method shows improvement, but its spatial quality does not improve markedly. A visual comparison shows that the MGH method performs well in the spectral aspect, but its fused image looks blurry in some areas because the injected spatial details are insufficient. The CNMF and Sparse Representation methods are close to the reference image in the spectral aspect, but their spatial quality in the edge regions is unsatisfactory. The fused results obtained by the HySure method and the proposed method show superior performance in both spectral and spatial aspects.
Figure 10 shows the difference images between the fused HS images and the reference HS image for the Washington DC dataset. The proposed method indeed comes closest to the goal that the fused HS image should be as near as possible to an HS image acquired by a high-resolution sensor. The quality metrics of the different methods for the Washington DC dataset are shown in Table 3. The proposed method gives the smallest RMSE and ERGAS values and the optimal CC and UIQI values. Although the objective assessment of the proposed method is not always the best, it achieves very stable performance across the five widely used quality metrics. This means that the proposed method performs well in providing spatial details while preserving the spectral information.

4.5. Experiments on Real Hyperspectral Remote Sensing Datasets

The Hyperion dataset, a real hyperspectral dataset, is used to evaluate the performance of the proposed method in real applications. For the real HS image, fusion is performed at full scale for the subjective evaluation. The dimensions of the test P image are 210 × 150 pixels, and the size of the experimental HS image is 70 × 50 pixels. Figure 11a,b show the interpolated HS image and the P image, respectively, and Figure 11c–i display the results of the different pansharpening methods. Visually comparing these fused HS images with the original HS image, it is clear that blocking artifacts exist in the fused image obtained by the PCA method. The result of the GFPCA method also looks blurry in this experiment. The HySure, MGH, and CNMF methods preserve the spectral information effectively, but their spatial quality is poor. The spectral distortion of the Sparse Representation method is visible in some areas. The proposed method preserves the spectral information well and greatly improves the spatial quality of the original HS image.
The high spatial resolution HS image of a real dataset is generally unavailable, so fusion is performed at a degraded scale for the objective evaluation. Specifically, following [46], we degrade the original HS and P images, fuse the degraded images, and use the original HS image as the reference for the objective comparison. Table 4 shows the quality metrics of the different methods for the Hyperion dataset. The proposed method shows the best objective performance in all measures, including CC, SAM, RMSE, ERGAS, and UIQI.
To verify the validity of the proposed method on real HS images, another experiment is performed on a second Hyperion image. The test P image is 300 × 300 pixels, and the test HS image is 100 × 100 pixels. Figure 12a,b show the interpolated HS image and the P image, respectively, and the fused images obtained by the different methods are displayed in Figure 12c–i. The color of the fused image obtained by the PCA method does not match that of the original HS image in some areas. The GFPCA method produces serious spatial distortion, although it performs better in the spectral aspect than the PCA method. The fused images obtained by the HySure and Sparse Representation methods appear too sharp due to artifacts around the edges. The color of the fused images obtained by the MGH, CNMF, and proposed methods is close to that of the original HS image, indicating the superiority of these methods in spectral preservation; however, the spatial quality of the CNMF method at some edges is unsatisfactory. By contrast, the fused images produced by the proposed method and the MGH method achieve outstanding fusion performance in both spectral and spatial aspects. Table 5 shows the objective quality evaluation of each method for this Hyperion dataset. The proposed method performs best in most of the indexes, while the MGH method obtains the best ERGAS value. Although the objective performance of the proposed method is not always the best, it is stable. Based on the visual comparison and objective evaluation, we conclude that the proposed method achieves excellent performance on the real hyperspectral dataset in both objective and subjective terms.

4.6. Discussion

Experimental results demonstrate that the proposed method outperforms the other six hyperspectral pansharpening methods. The proposed method has good spectral fidelity, since it always obtains an optimal SAM index. The HySure method performs excellently on the simulated images but badly on the real HS images, whereas the proposed method performs well on both simulated and real hyperspectral images.
The superiority of the proposed method is owing to the employment of the weighted least squares filter and the intrinsic image decomposition. The WLS filter can make a proper compromise between sharpening and blurring, which improves the spatial quality of the fused image. The IID is an effective technique to separate the HS image into the reflectance and illumination components, which plays an important role in reducing the spectral distortion.
A simple yet effective fusion rule is introduced in this paper. The parameter $\alpha$ determines the quantity of the finally injected spatial details and directly influences the fusion performance. We have tested the proposed method on various remote sensing images and many real satellite images with different $\alpha$ settings and found that $\alpha = 0.1$ always gives the best performance. In future work, we plan to improve the performance of the proposed method by selecting this parameter adaptively.

5. Conclusions

Hyperspectral pansharpening is an important subdivision of remote sensing image processing. A novel hyperspectral pansharpening method based on IID and the WLS filter has been presented in this paper. The proposed method first obtains the spatial information of the P image with a weighted least squares filter, where the LOG enhancement algorithm is used for spatial enhancement. Then, the illumination component, which is considered the spatial information of the HS image, is estimated with the intrinsic image decomposition technique. The fused image is obtained by injecting the detail map into each band of the deblurred interpolated HS image. The finally injected spatial information takes full account of both the P and the HS images, so the impact of data independence can be eliminated. The existing problems of the CS- and MRA-based fusion methods can be well addressed by the combination of an IID technique and a WLS filter. Experiments conducted on simulated and real hyperspectral datasets demonstrated that the proposed method performs better than state-of-the-art fusion methods, including the CS- and MRA-based fusion methods, in terms of both visual inspection and objective analysis.

Acknowledgments

This work was supported by NSFC (No. 61372069), National Defense Pre-research Foundation, SRF for ROCS, SEM (JY0600090102), “111” project (No. B08038) and the Fundamental Research Funds for the Central Universities.

Author Contributions

Wenqian Dong drafted the manuscript; Wenqian Dong, Song Xiao, and Yunsong Li conceived and designed the experiments; Wenqian Dong and Jiahui Qu performed the experiments; Wenqian Dong and Song Xiao analyzed the data; Yunsong Li and Jiahui Qu modified the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

P: Panchromatic
HS: Hyperspectral
WLS: Weighted least squares
IID: Intrinsic image decomposition
OH: Original hyperspectral image
UH: Interpolated hyperspectral image
H: Deblurred interpolated hyperspectral image
D: Detail map
CC: Cross correlation
SAM: Spectral angle mapper
RMSE: Root mean squared error
ERGAS: Erreur relative globale adimensionnelle de synthèse
UIQI: Universal image quality index
PCA: Principal component analysis
GFPCA: Guided filter PCA
CNMF: Coupled nonnegative matrix factorization
MTF: Modulation transfer function
MGH: MTF-generalized Laplacian pyramid with high-pass modulation

References

1. Kempeneers, P.; Sedano, F.; Seebach, L.; Strobl, P.; San-Miguel-Ayanz, J. Data fusion of different spatial resolution remote sensing images applied to forest-type mapping. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4977–4986.
2. Yokoya, N.; Grohnfeldt, C.; Chanussot, J. Hyperspectral and multispectral data fusion: A comparative review of the recent literature. IEEE Geosci. Remote Sens. Mag. 2017, 5, 29–56.
3. Zhang, H.; Huang, B. A new look at image fusion methods from a Bayesian perspective. Remote Sens. 2015, 7, 6828–6861.
4. Vivone, G.; Alparone, L.; Chanussot, J.; Mura, M.D.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2565–2586.
5. Loncan, L.; Almeida, L.B.; Dias, J.M.; Briottet, X.; Chanussot, J.; Dobigeon, N.; Fabre, S.; Liao, W.; Licciardi, G.A.; Simões, M.; et al. Hyperspectral pansharpening: A review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 27–46.
6. Wei, Q.; Dobigeon, N.; Tourneret, J.Y. Bayesian fusion of multiband images. IEEE J. Sel. Top. Signal Process. 2015, 9, 1117–1127.
7. Wei, Q.; Bioucas-Dias, J.; Dobigeon, N.; Tourneret, J. Hyperspectral and multispectral image fusion based on a sparse representation. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3658–3668.
8. Simões, M.; Dias, J.B.; Almeida, L.; Chanussot, J. A convex formulation for hyperspectral image superresolution via subspace-based regularization. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3373–3388.
9. Wei, Q.; Dobigeon, N.; Tourneret, J.Y. Fast fusion of multiband images based on solving a Sylvester equation. IEEE Trans. Image Process. 2015, 24, 4109–4121.
10. Yokoya, N.; Yairi, T.; Iwasaki, A. Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion. IEEE Trans. Geosci. Remote Sens. 2012, 50, 528–537.
11. Ghassemian, H. A review of remote sensing image fusion methods. Inf. Fusion 2016, 32, 75–89.
12. Mookambiga, A.; Gomathi, V. Comprehensive review on fusion techniques for spatial information enhancement in hyperspectral imagery. Multidimens. Syst. Signal Process. 2016, 27, 863–889.
13. Tu, T.M.; Su, S.C.; Shyu, H.C.; Huang, P.S. A new look at IHS-like image fusion methods. Inf. Fusion 2001, 2, 177–186.
14. Hallabia, H.; Kallel, A.; Hamida, A.; Hegarat-Mascle, S. High spectral quality pansharpening approach based on MTF-matched filter banks. Multidimens. Syst. Signal Process. 2016, 4, 1–31.
15. Carper, W.; Lillesand, T.M.; Kiefer, P.W. The use of Intensity-Hue-Saturation transformations for merging SPOT panchromatic and multispectral image data. Photogramm. Eng. Remote Sens. 1990, 56, 459–467.
16. Anderson, J.A.; Sides, S.C.; Chavez, P.S. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic. Photogramm. Eng. Remote Sens. 1991, 57, 295–303.
17. Laben, C.; Brower, B. Process for Enhancing the Spatial Resolution of Multispectral Imagery Using Pan-Sharpening. U.S. Patent 6,011,875, 1 April 2000.
18. Chavez, P.S.; Kwarteng, A.Y.A. Extracting spectral contrast in Landsat thematic mapper image data using selective principal component analysis. Photogramm. Eng. Remote Sens. 1989, 55, 339–348.
19. Shettigara, V.A. Generalized component substitution technique for spatial enhancement of multispectral images using a higher resolution data set. Photogramm. Eng. Remote Sens. 1992, 58, 561–567.
20. Shah, V.P.; Younan, N.; King, R.L. An efficient pan-sharpening method via a combined adaptive PCA approach and contourlets. IEEE Trans. Geosci. Remote Sens. 2008, 56, 1323–1335.
21. Liu, J.M.; Liang, S.L. Pan-sharpening using a guided filter. Int. J. Remote Sens. 2016, 37, 1777–1800.
22. Aiazzi, B.; Baronti, S.; Selva, M. Improving component substitution pansharpening through multivariate regression of MS + Pan data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239.
23. Choi, J.; Yu, K.; Kim, Y. A new adaptive component-substitution based satellite image fusion by using partial replacement. IEEE Trans. Geosci. Remote Sens. 2011, 49, 295–309.
24. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. MTF-tailored multiscale fusion of high-resolution MS and pan imagery. Photogramm. Eng. Remote Sens. 2006, 72, 591–596.
25. Liu, J.G. Smoothing filter based intensity modulation: A spectral preserve image fusion technique for improving spatial details. Int. J. Remote Sens. 2000, 21, 3461–3472.
26. Vivone, G.; Restaino, R.; Mura, M.D.; Licciardi, G.; Chanussot, J. Contrast and error-based fusion schemes for multispectral image pansharpening. IEEE Geosci. Remote Sens. Lett. 2014, 11, 930–934.
27. Selva, M.; Aiazzi, B.; Butera, F.; Chiarantini, L.; Baronti, S. Hyper-sharpening: A first approach on SIM-GA data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3008–3024.
28. Ghahremani, M.; Ghassemian, H. Remote-sensing image fusion based on Curvelets and ICA. Int. J. Remote Sens. 2015, 36, 4131–4143.
29. Qu, J.H.; Li, Y.S.; Dong, W.Q. Hyperspectral pansharpening with guided filter. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2152–2156.
30. Tappen, M.F.; Freeman, W.T.; Adelson, E.H. Recovering intrinsic images from a single image. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1459–1472.
31. Shen, J.; Yang, X.; Li, X.; Jia, Y. Intrinsic image decomposition using optimization and user scribbles. IEEE Trans. Cybern. 2013, 43, 425–436.
32. Jiang, X.; Schofield, A.J.; Wyatt, J.L. Correlation-based intrinsic image extraction from a single image. In Proceedings of the European Conference on Computer Vision, Heraklion, Greece, 5–11 September 2010; pp. 58–71.
33. Powell, M.W.; Sarkar, S.; Goldgof, D.B.; Ivanov, K. A methodology for extracting objective color from images. IEEE Trans. Syst. Man Cybern. B Cybern. 2004, 34, 1964–1978.
34. Bousseau, A.; Paris, S.; Durand, F. User-assisted intrinsic images. ACM Trans. Graph. 2009, 28, 1.
35. Jung, C.; Sun, T.; Jiao, L. Eye detection under varying illumination using the Retinex theory. Neurocomputing 2013, 113, 130–137.
36. Jobson, D.J.; Rahman, Z.; Woodell, G.A. A multi-scale Retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
37. Farbman, Z.; Fattal, R.; Lischinski, D.; Szeliski, R. Edge-preserving decompositions for multi-scale tone and detail manipulation. ACM Trans. Graph. 2008, 27, 1–10.
38. Lischinski, D.; Farbman, Z.; Uyttendaele, M.; Szeliski, R. Interactive local adjustment of tonal values. ACM Trans. Graph. 2006, 25, 646–653.
39. Palsson, F.; Sveinsson, J.R.; Ulfarsson, M.O.; Benediktsson, J.A. MTF-based deblurring using a Wiener filter for CS and MRA pansharpening methods. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2255–2269.
40. Chakrabarti, A.; Zickler, T. Statistics of real-world hyperspectral images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 193–200.
41. Liao, W.; Huang, X.; Coillie, F.; Gautama, S.; Pizurica, A.; Philips, W.; Liu, H.; Zhu, T.; Shimoni, M.; Moser, G.; et al. Processing of multiresolution thermal hyperspectral and digital color data: Outcome of the 2014 IEEE GRSS data fusion contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2984–2996.
42. Zhou, J.; Civco, D.L.; Silander, J.A. A wavelet transform method to merge Landsat TM and SPOT panchromatic data. Int. J. Remote Sens. 1998, 19, 743–757.
43. Yang, Y.; Tong, S.; Huang, S.; Lin, P. Multifocus image fusion based on NSCT and focused area detection. IEEE Sens. J. 2015, 15, 2824–2838.
44. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699.
45. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84.
46. Selva, M.; Santurri, L.; Baronti, S. On the use of the expanded image in quality assessment of pansharpened images. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1–5.
Figure 1. The schematic diagram of the proposed method.
Figure 2. Example of spatial detail processing of the P image. (a) Original P image; (b) Enhanced P image. High-frequency images: (c) Sharpening + WLS filtering; (d) Sharpening + Gaussian filtering; (e) WLS filtering.
Figure 3. Spatial details extracted by average filtering and intrinsic image decomposition methods. (a) HS image; (b) Average filtering method (left: Spectral component of HS image; right: Spatial component of HS image); (c) Intrinsic image decomposition method (left: Reflectance component of HS image; right: Illumination component of HS image).
Figure 4. Performance of the proposed method with different α settings.
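Figure 4 sweeps the weight α that balances the two detail sources. The exact combination rule is defined in the main text and is not reproduced in this excerpt; a natural convex-combination reading of the experiment, stated here purely as an assumption, is:

```latex
% Assumed convex combination of the P-image high frequencies D_P and
% the HS illumination-component details D_{HS}; alpha as in Figure 4.
D = \alpha \, D_P + (1 - \alpha) \, D_{HS}, \qquad 0 \le \alpha \le 1
```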
Figure 5. Visual comparison of different hyperspectral pansharpening methods for Salinas dataset. (a) Reference; (b) PCA; (c) GFPCA; (d) HySure; (e) MGH; (f) CNMF; (g) Sparse Representation; (h) Proposed.
Figure 6. Visual comparison of difference images (light blue means small differences) between each fused HS image and the reference HS image (Salinas dataset). (a) PCA; (b) GFPCA; (c) HySure; (d) MGH; (e) CNMF; (f) Sparse Representation; (g) Proposed.
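The difference maps in Figures 6, 8 and 10 can be reproduced along the following lines; the use of a per-band mean absolute difference and the particular colormap are our assumptions, not details given in the captions.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_difference_map(fused, reference):
    """Render a per-pixel difference map between a fused and a
    reference HS cube of shape (H, W, B): average the absolute error
    over the spectral axis and display it with a colormap in which
    small errors come out light blue."""
    diff = np.abs(fused - reference).mean(axis=2)
    plt.imshow(diff, cmap='cool')   # colormap choice is illustrative
    plt.colorbar(label='mean absolute difference')
    plt.axis('off')
    plt.show()
```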
Figure 7. Visual comparison of different hyperspectral pansharpening methods for Pavia University dataset. (a) Reference; (b) PCA; (c) GFPCA; (d) HySure; (e) MGH; (f) CNMF; (g) Sparse Representation; (h) Proposed.
Figure 8. Visual comparison of difference images (light blue means small differences) between each fused HS image and the reference HS image (Pavia University dataset). (a) PCA; (b) GFPCA; (c) HySure; (d) MGH; (e) CNMF; (f) Sparse Representation; (g) Proposed.
Figure 9. Visual comparison of different hyperspectral pansharpening methods for Washington DC dataset. (a) Reference; (b) PCA; (c) GFPCA; (d) HySure; (e) MGH; (f) CNMF; (g) Sparse Representation; (h) Proposed.
Figure 10. Visual comparison of difference images (light blue means small differences) between each fused HS image and the reference HS image (Washington DC dataset). (a) PCA; (b) GFPCA; (c) HySure; (d) MGH; (e) CNMF; (f) Sparse Representation; (g) Proposed.
Figure 11. Visual comparison of different hyperspectral pansharpening methods for Hyperion dataset. (a) Interpolated HS image; (b) P image; (c) PCA; (d) GFPCA; (e) HySure; (f) MGH; (g) CNMF; (h) Sparse Representation; (i) Proposed.
Figure 12. Visual comparison of different hyperspectral pansharpening methods for Hyperion dataset. (a) Interpolated HS image; (b) P image; (c) PCA; (d) GFPCA; (e) HySure; (f) MGH; (g) CNMF; (h) Sparse Representation; (i) Proposed.
Table 1. Quality metrics of different methods for Salinas dataset.

Index    PCA       GFPCA     HySure    MGH       CNMF      Sparse    Proposed
CC       0.5341    0.9335    0.9485    0.9569    0.9385    0.9443    0.9586
SAM      10.5815   3.5553    2.1523    2.2694    2.3919    2.7381    1.9475
RMSE     0.0661    0.0262    0.0224    0.0231    0.0220    0.0209    0.0188
ERGAS    6.6333    2.8157    2.2176    2.3604    2.0145    2.0087    1.1626
UIQI     0.9008    0.9776    0.9877    0.9838    0.9867    0.9877    0.9884
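For completeness, the five indices reported in Tables 1-5 can be computed as in the sketch below (higher CC and UIQI, and lower SAM, RMSE and ERGAS, indicate better fusion). The formulas follow the standard definitions of these metrics; the default resolution ratio and the global, rather than sliding-window, UIQI are simplifying assumptions on our part.

```python
import numpy as np

def quality_metrics(fused, reference, ratio=4):
    """Reference-based quality indices for two H x W x B cubes scaled
    to [0, 1]. 'ratio' is the assumed P-to-HS resolution ratio used by
    ERGAS; SAM is returned in degrees."""
    b = reference.shape[2]
    X = fused.reshape(-1, b).astype(np.float64)
    Y = reference.reshape(-1, b).astype(np.float64)

    # CC: correlation coefficient, averaged over bands.
    cc = np.mean([np.corrcoef(X[:, k], Y[:, k])[0, 1] for k in range(b)])

    # SAM: spectral angle between pixel vectors, averaged over pixels.
    dot = np.sum(X * Y, axis=1)
    norms = np.linalg.norm(X, axis=1) * np.linalg.norm(Y, axis=1) + 1e-12
    sam = np.degrees(np.mean(np.arccos(np.clip(dot / norms, -1.0, 1.0))))

    # RMSE over the whole cube.
    rmse = np.sqrt(np.mean((X - Y) ** 2))

    # ERGAS: relative dimensionless global error in synthesis.
    band_rmse = np.sqrt(np.mean((X - Y) ** 2, axis=0))
    band_mean = np.mean(Y, axis=0) + 1e-12
    ergas = (100.0 / ratio) * np.sqrt(np.mean((band_rmse / band_mean) ** 2))

    # UIQI: universal image quality index, computed globally per band
    # for brevity (the standard index uses sliding windows).
    def uiqi(x, y):
        mx, my = x.mean(), y.mean()
        cov = np.mean((x - mx) * (y - my))
        return 4 * cov * mx * my / ((x.var() + y.var()) * (mx**2 + my**2) + 1e-12)
    q = np.mean([uiqi(X[:, k], Y[:, k]) for k in range(b)])

    return dict(CC=cc, SAM=sam, RMSE=rmse, ERGAS=ergas, UIQI=q)
```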
Table 2. Quality metrics of different methods for Pavia University dataset.

Index    PCA       GFPCA     HySure    MGH       CNMF      Sparse    Proposed
CC       0.8967    0.8203    0.9404    0.9308    0.8723    0.9012    0.9440
SAM      6.2287    9.2413    6.5623    6.2832    7.2820    8.4505    6.1135
RMSE     0.0489    0.0664    0.0385    0.0475    0.0548    0.0484    0.0330
ERGAS    6.8652    6.9343    3.4842    4.6084    5.8609    4.6880    3.4226
UIQI     0.7824    0.7356    0.8381    0.8133    0.7908    0.7739    0.8310
Table 3. Quality metrics of different methods for Washington DC dataset.

Index    PCA       GFPCA     HySure    MGH       CNMF      Sparse    Proposed
CC       0.7724    0.7829    0.8782    0.8771    0.7720    0.8179    0.8940
SAM      8.8123    10.7545   7.3913    7.9453    8.6588    10.0145   7.7436
RMSE     0.0123    0.0139    0.0084    0.0132    0.0120    0.0119    0.0081
ERGAS    33.4070   39.5999   27.1998   35.3178   31.2027   30.8796   25.2085
UIQI     0.9227    0.8971    0.9541    0.9350    0.9391    0.9517    0.9584
Table 4. Quality metrics of different methods for Hyperion dataset.

Index    PCA       GFPCA     HySure    MGH       CNMF      Sparse    Proposed
CC       0.7574    0.7430    0.5984    0.9087    0.8754    0.8273    0.9123
SAM      3.7563    4.6248    12.1445   2.7090    3.0467    4.5656    2.6637
RMSE     0.0353    0.0385    0.1012    0.0225    0.0249    0.0362    0.0220
ERGAS    8.5544    9.9020    22.0946   5.6159    6.6400    9.0590    5.4775
UIQI     0.9835    0.9797    0.8738    0.9922    0.9910    0.9775    0.9927
Table 5. Quality metrics of different methods for Hyperion dataset.

Index    PCA       GFPCA     HySure    MGH       CNMF      Sparse    Proposed
CC       0.7077    0.8234    0.7543    0.9341    0.9219    0.8781    0.9344
SAM      4.7224    5.7334    6.8596    3.0803    4.3548    5.4651    3.0230
RMSE     0.0399    0.0253    0.0396    0.0185    0.0189    0.0206    0.0174
ERGAS    12.5797   10.0371   15.8318   6.2645    6.7987    8.6556    6.5841
UIQI     0.7886    0.7324    0.7837    0.9418    0.9025    0.8537    0.9446
