Article

A Method of Panchromatic Image Modification for Satellite Imagery Data Fusion

Department of Remote Sensing, Photogrammetry and Imagery Intelligence, Institute of Geodesy, Faculty of Civil Engineering and Geodesy, Military University of Technology, 01-476 Warszawa, Poland
* Author to whom correspondence should be addressed.
Submission received: 10 May 2017 / Revised: 9 June 2017 / Accepted: 16 June 2017 / Published: 21 June 2017

Abstract

The standard ratio of spatial resolution between bands for high resolution satellites is 1:4, which is typical when combining images obtained from the same sensor. However, the cost of simultaneously purchasing a set of panchromatic and multispectral images is still relatively high. There is therefore a need to develop methods of fusing very high resolution panchromatic imagery with low-cost multispectral data (e.g., Landsat). Combining high resolution images with low resolution images broadens the scope of use of satellite data; however, it is also accompanied by the problem of a large ratio between spatial resolutions, which results in large spectral distortions in the merged images. The authors propose a modification of the panchromatic image such that it includes the spectral and spatial information from both the panchromatic and multispectral images, to improve the quality of spectral data integration. This fusion is based on a weighted average, with the weights determined using a coefficient that describes the ratio of the amount of information contained in the corresponding pixels of the integrated images. The effectiveness of the authors' algorithm has been tested for six of the most popular fusion methods. The proposed methodology is suited mainly to statistical and numerical methods, especially Principal Component Analysis and Gram-Schmidt. The authors' algorithm makes it possible to lower the root mean square error by up to 20% for Principal Component Analysis. The spectral quality was also increased, especially for the spectral bands extending beyond the panchromatic image, where the correlation rose by 18% for the Gram-Schmidt orthogonalization.


1. Introduction

In recent years, there have been significant developments in satellite techniques. There is now easy access to data from very high resolution satellites (VHRS) [1] that record imagery with resolutions reaching around 30 cm, for example WorldView-3 in its panchromatic channel. Multispectral images are usually characterized by a spatial resolution about four times lower than that of the panchromatic imagery from the same sensor. Therefore, to integrate high spatial resolution data with high spectral resolution data, pansharpening algorithms are used [2]. They are applied both to satellite and aerial data [3,4,5,6] and to data obtained from low altitudes [7,8]. The results of such image fusion are often used in urban and environmental analyses [9,10].
Currently, there are many pansharpening methods and modifications of them [3,11,12,13]. All, however, are adapted to merging data obtained from sensors with a small spatial resolution ratio. The cost of simultaneously purchasing a set of panchromatic and multispectral images is still relatively high, so there is a need to develop methods for fusing very high spatial resolution panchromatic (PAN) data with multispectral (MS) data that is available free of charge. It should, however, be taken into account that these data have a low spatial resolution; e.g., the Landsat series of satellites has a resolution of 30 m in its multispectral channels. The possibility of combining such low resolution data with images obtained by VHRS, which have a GSD (Ground Sampling Distance) from tens of centimeters to several meters, would greatly broaden the scope of use of satellite data. However, such a process is also accompanied by the problem of a large ratio between spatial resolutions, which results in large spectral distortions in the merged images. These distortions depend on the method used. Pansharpening methods can be divided into those which provide higher spectral quality and those which provide higher spatial quality. Depending on the further use of the fused images, in one instance it will be important to enhance the spectral properties, and in another the spatial. Ideally, one would generate an image of both high spatial and spectral quality, but this is difficult to achieve. For this reason, many modifications of the existing methods have been proposed. However, they are usually designed to process data with a standard ratio of spatial resolutions. The greater the difference in resolutions, the more difficult it is to maintain both high spectral and high spatial quality. Methods focused on the transfer of spatial detail will give adequate results essentially independently of the GSD ratio. Greater problems arise in the case of the spectral resolution, as spectral information from one pixel is transferred to tens of pixels in the fused image.
The first discussions about combining satellite imagery with different spatial resolutions appeared as early as 1991 [4]. That paper compared the results of integrating multispectral satellite images from Landsat TM (GSD = 30 m) with a panchromatic SPOT image (GSD = 10 m). Integration was performed using three methods: Intensity–Hue–Saturation (IHS), Principal Component Analysis (PCA), and High Pass Filter (HPF). The greatest spectral distortions were recorded for the IHS method, and the smallest for HPF. One should note, however, that the spatial resolution ratio did not differ much from the standard, as it was 1:3. The quality of pansharpened imagery obtained using the PAN from EROS B and eight multispectral channels from Landsat was also investigated. The worst results were obtained using the IHS, PCA and Brovey transform methods, while the HPF method gave the best results, with the correlation coefficient between the fused imagery and the original multispectral image ranging from 0.25 to 0.75 depending on the method and area [14]. These results indicate a much lower spectral quality of the sharpened images when the resolution ratio is high; therefore, in this article, we try to solve this problem by proposing a new algorithm.
Improving the quality of integration for data with high spatial resolution ratios would allow low-resolution color imagery to be used for a variety of analyses, such as crop identification, defining the boundaries of areas with different levels of urbanization, detection of diverse tree species, or even determining pollution levels in a selected area. Each of these analyses requires the preservation of the high spectral quality provided by the original multispectral imagery. However, the spatial resolution will affect the accuracy with which the boundaries of the analyzed areas are defined. Of course, there are images of sufficiently high spatial and spectral resolution, but as discussed above, considering the economic aspect, it is worth trying to integrate small-pixel panchromatic images with universally available low-resolution multispectral images. In addition, this approach may be an opportunity to expand the spectral information of archival imagery data collected only in the panchromatic range. In this article, the authors describe their research into improving the quality of integrating satellite data with high spatial resolution ratios, but it should be borne in mind that this approach may also apply to aerial data or even to close-range photogrammetry products.

1.1. Review of Pansharpening Methods

There are methods of integrating satellite images based on various assumptions and mathematical rules. The multitude of possibilities for data integration stems from the difficulty of obtaining satisfactory results with respect to two key issues: maintaining the spectral characteristics of the input multispectral image while at the same time retaining maximum detail [15]. These methods can be classified into several categories: component-substitution-based methods, multiresolution-analysis-based methods, reconstruction-based methods and model-based methods [5,16]. At the most general level, there are two basic groups of data fusion methods: those based on color (color space transformations) and statistical or numerical methods (statistical analysis and image algebra) [17,18].
The most commonly used approaches are based on component substitution. These methods show spectral distortion, but give results that are well suited for visual interpretation [5]. One of the most popular methods of this group is the IHS method. It is based on the transformation of the color space. The low spatial resolution multispectral image is transformed from the RGB space to IHS. The next step is to substitute the intensity component with the high spatial resolution panchromatic image. The coefficients of the transformation result from the geometries describing the transition from the Cartesian RGB space to the cone describing the IHS space [19].
The PCA method is based on a similar concept: the first component of a Principal Component Analysis of the MS image is replaced with the panchromatic image. Since PCA is a statistical technique, the method belongs to the group of statistical or numerical methods. The covariance matrix, eigenvalues and eigenvectors are calculated. Elements of this approach are used in a variety of data fusion techniques for different types of data [20,21]. For both the IHS and PCA methods, the contrast of the PAN image should be stretched so that its mean and variance approximately match those of the intensity image or the first component, respectively [4]. Thus, component substitution pansharpening methods can be improved by histogram matching [22].
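As the component-substitution idea behind PCA recurs throughout this paper, a minimal sketch may help; it assumes the MS cube has already been resampled to the PAN grid, and all names are illustrative rather than taken from any particular library.

```python
import numpy as np

def pca_pansharpen(ms, pan):
    """Sketch of PCA component substitution.

    ms  : array (bands, H, W), MS image resampled to the PAN grid
    pan : array (H, W), co-registered panchromatic image
    """
    bands, h, w = ms.shape
    x = ms.reshape(bands, -1).astype(float)          # one row per band
    mean = x.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(x))     # band-to-band covariance
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]  # principal components first
    pcs = eigvecs.T @ (x - mean)                     # forward PCA transform

    # Stretch PAN so its mean and variance match PC1 (the contrast matching
    # noted above), then substitute it for the first component.
    p = pan.reshape(-1).astype(float)
    pcs[0] = (p - p.mean()) / p.std() * pcs[0].std() + pcs[0].mean()

    fused = eigvecs @ pcs + mean                     # inverse transform
    return fused.reshape(bands, h, w)
```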
One of the simplest methods, the multiplicative method, is based on image algebra and consists of a simple multiplication of the PAN image with the image in each of the MS channels. A similar method is the Brovey transformation, which is the multiplicative method modified by a simple normalization of the results [23].
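As a sketch, both methods reduce to a few lines of array algebra. The eps guard against division by zero and the absence of any rescaling back to the input data range are simplifying assumptions, not part of the original formulations.

```python
import numpy as np

def multiplicative(ms, pan):
    """Multiplicative fusion: per-band product with the PAN image.

    ms: array (bands, H, W) resampled to the PAN grid; pan: array (H, W).
    """
    return ms.astype(float) * pan                  # broadcasts over bands

def brovey(ms, pan, eps=1e-6):
    """Brovey transformation: the multiplicative result normalized by the band sum."""
    ms = ms.astype(float)
    return ms * pan / (ms.sum(axis=0) + eps)       # eps avoids division by zero
```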
The HPF method is also a popular method of integrating imaging data. In this method, matrix algebra is used for high-pass filtering of the higher spatial resolution image. The post-filtration PAN image is combined with the MS image as a weighted summation. The quality of the HPF integration method is mainly influenced by the size of the kernel [24,25]. It is suggested to adopt a filter kernel at least twice as large as the resolution ratio between the integrated imagery [15].
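A minimal sketch of the HPF scheme under two assumptions: a simple box filter stands in for the low-pass kernel, and the kernel size follows the twice-the-ratio rule quoted above; the injection weight is illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def hpf_pansharpen(ms, pan, ratio=60, weight=0.5):
    """High-pass-filter fusion: inject PAN detail into the resampled MS bands."""
    pan = pan.astype(float)
    size = 2 * ratio + 1                           # odd kernel, at least 2x the ratio
    detail = pan - uniform_filter(pan, size=size)  # high-frequency residue of PAN
    return ms.astype(float) + weight * detail      # weighted summation, per band
```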
The Gram-Schmidt method is another numerical method that guarantees a sharp image with low spectral distortions. The essence of this method is the use of Gram-Schmidt orthogonalization, which transforms a set of linearly independent vectors of a unitary space into a set of orthogonal vectors. An n-dimensional vector space is considered, in which each band is a high dimensional vector, with the first vector being a simulated panchromatic image computed as the weighted sum of the intensities of successive MS image bands. After the orthogonalization step, the simulated panchromatic image is replaced by the high resolution panchromatic image. Performing the reverse transformation gives a color image of higher spatial resolution [26].
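The full transform orthogonalizes every band, but the effect of Gram-Schmidt pansharpening can be sketched in the equivalent detail-injection form below; the equal band weights used for the simulated panchromatic image are an assumption (implementations usually expose them as parameters).

```python
import numpy as np

def gram_schmidt_pansharpen(ms, pan):
    """Detail-injection sketch of Gram-Schmidt pansharpening.

    ms: array (bands, H, W) resampled to the PAN grid; pan: array (H, W).
    """
    ms, pan = ms.astype(float), pan.astype(float)
    simulated = ms.mean(axis=0)                    # simulated low-resolution PAN

    # Stretch PAN statistics to the simulated band before substitution.
    pan_m = (pan - pan.mean()) / pan.std() * simulated.std() + simulated.mean()

    fused = np.empty_like(ms)
    for k in range(ms.shape[0]):
        c = np.cov(ms[k].ravel(), simulated.ravel())
        gain = c[0, 1] / c[1, 1]                   # regression of band k on simulated PAN
        fused[k] = ms[k] + gain * (pan_m - simulated)
    return fused
```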
The above-mentioned methods usually guarantee either high spectral quality or high spatial quality. Wavelet transformation is capable of analyzing spectral information and spatial information simultaneously. This method is based on the use of low pass filters and high pass filters in the image frequency domain [27,28,29]. However, these methods are more suited to standard spatial resolution ratios. With such high ratios as those investigated in this paper (1:60), spectral distortions occur in the image after fusion and it is necessary to develop an approach that will improve the spectral quality of the integration of such spatially different data.

1.2. Methods of Spectral Quality Assessment of Fused Images

Assessing the integration quality can be carried out in two ways, i.e., quantitatively or qualitatively, after previously performing a spectral reduction by the same factor [9,30]. The qualitative method is a subjective method, depending on the experience of the observer. Image quality is assessed primarily based on visual inspection. Sharpness, contrast, texture, etc. are taken into account. This method cannot be represented by rigorous mathematical models [10].
The quantitative analysis is described by mathematical equations based on statistical data. There are many proposed indicators for assessing the spectral quality based on different statistical parameters. They can be divided into two basic groups: indicators based on the calculation of RMSE and indicators based on correlation analysis. Among the first group, the most popular are: the Relative Dimensionless Global Error (ERGAS) [31], its simplified form nQ% [32] and the Relative Average Spectral Error (RASE) [33]. The key parameter in each of these indicators is the Root Mean Square Error (RMSE), calculated as the square root of the sum of the squares of the residuals divided by the number of pixels, as shown in Equation (1):
$$RMSE = \sqrt{\frac{\sum \left( MS_{OR} - MS_{FUS} \right)^2}{N}} \tag{1}$$
where $MS_{OR}$ is the original MS image, $MS_{FUS}$ the post-fusion MS image and $N$ the number of pixels.
The ideal value of the RMSE is 0. When analyzing the equations for RASE (Equation (2)) or nQ% (Equation (3)), it can be deduced that they should also achieve values close to 0. The RASE indicator is an average relative error for any number of channels and is independent of the number of channels and the ratio of spatial resolutions of the integrated images [33,34,35].
$$RASE = \frac{100}{M} \sqrt{\frac{1}{K} \sum_{k=1}^{K} RMSE_k^2} \tag{2}$$
where $RMSE_k$ is the root mean square error in the k-th channel, $K$ the number of channels and $M$ the average pixel value over the K channels of the original MS image.
Another parameter is nQ% (Equation (3)) [23,32], which is a simplified form of the popular ERGAS index. In the formula describing nQ%, there is no GSD quotient of the combined images (as is the case with ERGAS). Therefore, nQ%, like RASE, is independent of the spatial resolution of the integrated images.
$$nQ\% = 100 \sqrt{\frac{1}{K} \sum_{k=1}^{K} \frac{RMSE_k^2}{M_k^2}} \tag{3}$$
where $M_k$ is the average pixel value of the k-th channel of the original MS image.
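Expressed in code, Equations (1)–(3) are straightforward. A minimal sketch, assuming both images are arrays of shape (bands, H, W) brought to a common grid:

```python
import numpy as np

def rmse_per_band(ms_or, ms_fus):
    """RMSE of Equation (1), computed for each band separately."""
    diff = ms_or.astype(float) - ms_fus.astype(float)
    return np.sqrt((diff ** 2).mean(axis=(1, 2)))

def rase(ms_or, ms_fus):
    """RASE of Equation (2); M is the global mean of the original MS image."""
    rmse = rmse_per_band(ms_or, ms_fus)
    return 100.0 / ms_or.mean() * np.sqrt((rmse ** 2).mean())

def nq_percent(ms_or, ms_fus):
    """nQ% of Equation (3); per-band means replace the global mean."""
    rmse = rmse_per_band(ms_or, ms_fus)
    mk = ms_or.astype(float).mean(axis=(1, 2))     # average value of each band
    return 100.0 * np.sqrt(((rmse / mk) ** 2).mean())
```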
Both RASE and nQ% are indicators for performing a global evaluation of the spectral quality of the data fusion process. One of the most common indicators from the second group (based on correlation analyses) is the correlation coefficient (Equation (4)), which is calculated separately for each spectral channel.
$$CC\left( MS_{FUS}, MS_{OR} \right) = \frac{COV\left( MS_{FUS}, MS_{OR} \right)}{SD\left( MS_{FUS} \right) \cdot SD\left( MS_{OR} \right)} \tag{4}$$
where $MS_{OR}$ is the original MS image, $MS_{FUS}$ the post-fusion MS image, CC the correlation coefficient, COV the covariance and SD the standard deviation.
The standard deviation SD describes the distribution of the data around the average pixel value in the image. Covariance, however, is an un-normalized measure of the relationship between the analyzed data. Normalization is achieved by dividing the covariance by the product of the standard deviations of the processed data; this quotient is the correlation coefficient. It is a normalized measure of the relationship between the original MS image and the post-fusion image. The correlation coefficient values lie within the range [−1, 1]. The closer the absolute value of CC is to 1, the more correlated the data are, and thus the greater the similarity between the original image and the sharpened multispectral image [36,37].
The correlation coefficient is also used to calculate other indicators. One of the most popular is the Q indicator (Universal Image Quality Index), given by Equation (5) [38].
$$Q = \frac{COV\left( MS_{FUS}, MS_{OR} \right)}{SD\left( MS_{FUS} \right) \cdot SD\left( MS_{OR} \right)} \cdot \frac{2 \cdot M\left( MS_{FUS} \right) \cdot M\left( MS_{OR} \right)}{M\left( MS_{FUS} \right)^2 + M\left( MS_{OR} \right)^2} \cdot \frac{2 \cdot SD\left( MS_{FUS} \right) \cdot SD\left( MS_{OR} \right)}{SD\left( MS_{FUS} \right)^2 + SD\left( MS_{OR} \right)^2} \tag{5}$$
where $M(MS_{FUS})$ and $M(MS_{OR})$ are the average pixel values of the fused and original MS images, respectively, for the analyzed spectral channel.
The above equation consists of three components, the first of which is the previously described correlation coefficient. The second component is a measure of the proximity of the average brightness of the pixels of the compared images, while the third component describes the similarity of contrasts. Similarity values fall within the range [0, 1], meaning that the ideal value of Q is 1 [38]. The average value of this indicator over successive channels can be considered a global assessment of the spectral quality of the data integration process [39].
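A per-band sketch of Equations (4) and (5); np.corrcoef supplies the normalized correlation directly, and the three factors of Q are kept separate so each can be inspected on its own:

```python
import numpy as np

def cc(ms_fus, ms_or):
    """Correlation coefficient of Equation (4) for a single band."""
    return np.corrcoef(ms_fus.ravel().astype(float),
                       ms_or.ravel().astype(float))[0, 1]

def q_index(ms_fus, ms_or):
    """Universal Image Quality Index of Equation (5) for a single band."""
    a, b = ms_fus.ravel().astype(float), ms_or.ravel().astype(float)
    corr = np.corrcoef(a, b)[0, 1]                                    # CC term
    lum = 2 * a.mean() * b.mean() / (a.mean() ** 2 + b.mean() ** 2)   # mean proximity
    con = 2 * a.std() * b.std() / (a.std() ** 2 + b.std() ** 2)       # contrast similarity
    return corr * lum * con
```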

2. Materials and Methods

2.1. Experimental Data

The research work was based on satellite data in the form of a panchromatic WorldView-2 satellite image and Landsat 8 multispectral imagery. The panchromatic image was acquired in the 0.45–0.90 micron spectral range with a spatial resolution of 0.5 m, while the Landsat 8 imagery consists of 11 channels. The eighth channel contains the panchromatic image; the other channels hold multispectral information. For each channel, the information is registered in a narrow spectral range corresponding to a specific color: deep blues and violets (band 1); the visible range (bands 2, 3, and 4); near infrared (band 5); and shortwave infrared (bands 6 and 7). Channel 9 holds information from a very narrow spectral range dedicated to cloud detection. The final channels provide thermal infrared data. The data from the visible, near infrared and shortwave infrared ranges are registered with a spatial resolution of 30 m, while the thermal infrared data are registered with a spatial resolution of 100 m [40].
The first seven MS channels were used in the data fusion process. This made it possible to analyze the proposed algorithm both in the visible and near-infrared range, which overlapped with the spectral range of the PAN image, and in relation to data that extended beyond the spectral range of the PAN image (channels 6 and 7). The test case thus had a multispectral range that exceeded the range of the PAN image. On the one hand, this may introduce some spectral distortions in the bands within the range of the PAN image. On the other hand, it can increase the amount of information within the modified PAN image and improve its spectral quality in the channels exceeding the spectral range of the PAN image. The multispectral channels are characterized by a spatial resolution of 30 m, thus the ratio of the spatial resolutions of the data used in the research is 1:60.
The images used in the research depicted a fragment of the city of Warsaw (Poland). It is a highly urbanized area, consisting of many structures (such as single-family housing, tall buildings and green areas) which are typical of urban areas. For such a highly diverse area, it is therefore highly desirable for the data fusion to guarantee a high spectral quality of the sharpened image. However, with such an unfavorable ratio of pixel dimensions, the information recorded for one MS pixel will be the average resulting from the reflectivity of all objects within a 30 m × 30 m area, and the high spatial resolution PAN image will further contribute to an increase in color distortions in the final image.

2.2. N-Dimensional Intensity Component

In the IHS space, color is described by three components: intensity, hue and saturation, with the panchromatic image recorded only on the intensity component. Hence, this component of multispectral images is often identified with the storage of spatial information. The transformation of the RGB color space to IHS is conducted as a transition from the RGB cube to the IHS cone. The method of determining the intensity, hue and saturation components results from the geometrical relationships between the two systems (Figure 1).
The point P with coordinates (r, g, b) represents the pixel's color. The vector P connects the origin with point P, while its projection onto the gray line (the diagonal of the RGB cube) is a graphical representation of the intensity. When determining the intensity equation, we take into consideration vector P and vector W = (a, a, a) shown in Figure 1, where a = min(r, g, b) [19].
$$I(r, g, b) = \frac{\vec{P} \cdot \vec{W}}{\left| \vec{W} \right|} = \frac{ra + ga + ba}{\sqrt{3a^2}} = \frac{r + g + b}{\sqrt{3}} \tag{6}$$
where $I$ is the intensity for the R, G and B channels; $\vec{P}$ the vector from the origin of the RGB space to point $P(r, g, b)$; $\vec{W}$ the vector from the origin of the RGB space to point $(a, a, a)$; and $r, g, b$ the coordinates of point P in the RGB space, which are the pixel values in the R, G and B channels.
Equation (6) is typical for the three-dimensional RGB color space. However, in the case of satellite images, we usually need to consider a wider spectral range, represented by more than three channels. It can be interpreted as a hypercube, constructed analogously to the RGB cube, with the intensity component determined from Equation (7), which is a modification of Equation (6).
$$I_n = \frac{b_1 + b_2 + \cdots + b_n}{\sqrt{n}} \tag{7}$$
where $n$ is the number of channels, $I_n$ the intensity for n channels and $b_1, b_2, \ldots, b_n$ the pixel values in each of the n channels.
This approach allows the global intensity to be defined as the sum of the intensity components of all of the channels of the processed image. This is important when integrating imagery data, as thanks to this we are no longer limited to only three channels. It is therefore possible to increase the similarity of the modified PAN image (presented by the authors) to its original counterpart.
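In code, the n-dimensional intensity of Equation (7) is a one-line reduction over the band axis; the √n normalization follows the reconstruction of Equations (6) and (7) above:

```python
import numpy as np

def intensity_n(ms):
    """Global intensity of Equation (7) for an MS cube of shape (n, H, W)."""
    return ms.astype(float).sum(axis=0) / np.sqrt(ms.shape[0])
```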

2.3. Modified PAN Image

2.3.1. Algorithm of PAN Image Modification

Popular methods of pansharpening make it possible to maintain either high spatial or spectral quality of the output image. There is a noticeable relationship in that the spectral quality decreases together with an improvement in spatial quality [41]. The method proposed by the authors provides high spectral quality of the sharpened image, while simultaneously maintaining a high spatial resolution.
First, the spatial information of the panchromatic image is used to generate a new, simulated panchromatic image, which is then used in the data fusion process, performed by various methods. It must, however, be taken into account that when combining imagery with high spatial resolution ratios, there will always be color distortions associated with the details sharpened using the panchromatic image. In addition, the differing characteristics of the data acquisition sensors will negatively affect the correlation between the MS data and the post-fusion image. This article focuses on improving the spectral quality of the fusion of images with a high ratio between GSDs. The authors propose a modification of the panchromatic image to match it to the intensity of the MS image, in which the spatial information is stored. For this purpose, a factor describing the ratio of the spatial information contained in the MS image to that of the PAN image is calculated. This information is isolated from the MS image by performing a Principal Component Analysis, under the assumption that the first component (PC1) consists essentially of spatial information: it concentrates the maximum amount of information common to all channels of the MS image, which can be identified with topographical features [19]. The authors used this to determine the coefficient by which to modify the PAN image, by dividing the first PCA component by the panchromatic image. The image resulting from this operation is further referred to as the RATIO image. By determining the PC1/PAN ratio, it is possible to describe the variability of information in the integrated data, taking into account that the higher the value of a pixel, the more information is registered in it. If the information is identical in both images, the ratio will be 1. Where the PAN image contains more information than the MS image, the coefficient will take a value of less than 1. If, however, the MS image contains more information than the PAN, this factor will be greater than 1. By multiplying this factor by the intensity computed for a selected number of channels of the MS image (calculated according to Equation (7)), it is possible to generate a new intensity image. As a result of this operation, in areas where the information has not changed, the intensity of individual pixels will also remain unchanged. If the PAN image contains more information, the intensity of the MS image will be multiplied by a factor of less than 1, so its value will decrease, weakening the information from the MS image. If the PAN image contains less information, the intensity of the MS image will be multiplied by a factor greater than 1, so its value will increase, amplifying the information from the MS image.
The image obtained as a result of this operation is a component of the total intensity of both the MS image and the PAN image. Adding the PAN image to the newly calculated intensity is necessary to preserve the correct spectral information of scene details which occur only in the high-resolution image, especially details in shadows or details characterized by low pixel values. The modified panchromatic image is a weighted sum of the MS and PAN image intensities. This summation of images is conducted in accordance with Equation (8):
$$PAN_{MOD} = w_1 \cdot I_{MS} + w_2 \cdot PAN \tag{8}$$
where $PAN_{MOD}$ is the modified PAN image, $PAN$ the original PAN image, $I_{MS}$ the intensity of the MS image and $w_1, w_2$ the weights.
The correct assignment of weights is extremely important in this process. They are calculated individually for each pixel and depend on the determined factor describing the ratio of the information contained in both images. The sum of the weights should be 1, which implies that:
$$w_2 = 1 - w_1 \tag{9}$$
The weight $w_1$ is calculated as the product of the coefficient k and the predetermined factor describing the ratio of the information contained in both images:
$$w_1 = k \cdot \frac{PC1}{PAN} \tag{10}$$
where $\frac{PC1}{PAN}$ is the previously described ratio of the first component of the Principal Component Analysis of the MS image to the PAN image.
Ultimately, the relationship between the original and modified panchromatic images is described by Equation (11).
$$PAN_{MOD} = k \cdot \frac{PC1}{PAN} \cdot I_{MS} + \left( 1 - k \cdot \frac{PC1}{PAN} \right) \cdot PAN \tag{11}$$
Adjusting the weights for each individual pixel is particularly important in the integration of imagery with a high spatial resolution ratio. For example, when integrating Landsat 8 imagery with WorldView-2 data, the information in one pixel of the MS image is averaged over 60 pixels of the PAN image. It is of course possible to perform pansharpening by obtaining this information from just the PAN image; however, one must bear in mind that most of the new pixels would then be characterized by distorted spectral information, especially when the integrated sensors have different properties. By modifying the PAN image according to the authors' method, so that it already integrates the properties of both sensors, it is possible to reduce spectral distortions. The proposed solution guarantees the adjustment of the weights depending on the amount of information contained in the integrated images. The magnitude of the weights is determined by the factor which describes the ratio of spatial information in both images: a greater weight is given to the intensity of the MS image when the indicator shows greater information content in the MS data, and a smaller weight is assigned to the intensity of the MS data when the PAN image contains more information.
The proposed algorithm for integrating remote sensing data with highly differentiated spatial resolutions is shown in Figure 2.
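The whole modification of Equation (11) can be sketched as follows. The resampling of the MS image to the PAN grid, the sign convention for PC1 and the eps guard against zero-valued PAN pixels are assumptions needed to make the sketch self-contained; the text does not specify them.

```python
import numpy as np

def first_principal_component(ms):
    """PC1 of an MS cube (bands, H, W): the spatial-information image."""
    bands, h, w = ms.shape
    x = ms.reshape(bands, -1).astype(float)
    _, eigvecs = np.linalg.eigh(np.cov(x))         # eigenvalues in ascending order
    pc1 = eigvecs[:, -1] @ (x - x.mean(axis=1, keepdims=True))
    if np.corrcoef(pc1, x.mean(axis=0))[0, 1] < 0:
        pc1 = -pc1                                 # fix the eigenvector's arbitrary sign
    return pc1.reshape(h, w)

def modified_pan(pan, ms, k=0.1, eps=1e-6):
    """Modified PAN image of Equation (11).

    pan : array (H, W), high-resolution panchromatic image
    ms  : array (bands, H, W), MS image resampled to the PAN grid
    k   : empirical weighting factor (0.1 for the 1:60 case studied here)
    """
    pan = pan.astype(float)
    intensity = ms.astype(float).sum(axis=0) / np.sqrt(ms.shape[0])  # Equation (7)
    ratio = first_principal_component(ms) / (pan + eps)              # RATIO image: PC1/PAN
    w1 = k * ratio                                                   # Equation (10)
    return w1 * intensity + (1.0 - w1) * pan                         # Equation (11)
```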

2.3.2. k Factor

The weighting coefficient k aims to ensure the optimum summation of the images, making it possible to maintain a sufficiently high spatial quality while improving the spectral quality. The higher its value, the higher the similarity between the fused MS image and the original MS image; at the same time, however, the spatial quality of the sharpened image deteriorates. The value of the coefficient should therefore be chosen to ensure an improvement in the spectral quality without significant deterioration of the spatial quality. This coefficient was determined empirically, based on the research carried out by the authors. It takes values in the 0.05–0.5 range depending on the ratio of the spatial resolutions of the integrated data, the number of MS image channels (which translates into the intensity values of the MS image), and the radiometric resolution of the two images. For the test data with a ratio of 1:60, the weighting factor was 0.1. This value makes it possible to improve the spectral quality while preserving the highest spatial quality. It was determined empirically (30 samples), by examining the effect of the k coefficient on the spatial quality of the modified panchromatic images. Too low a value of this coefficient would generate an image very similar to the original PAN image (Figure 3a), unable to improve the spectral quality. On the other hand, too high a value would give an intensity similar to that of the low resolution image (Figure 3b), which would result in distortions of the spatial information.
Images modified using different k-values ranging from 0.05 to 0.5 were examined. Figure 4 shows selected images illustrating how the visual quality of the image varies with the coefficient value.
A value of k below 0.05 would not cause significant changes in the pixel DNs, so it would not be possible to introduce new spectral information into the panchromatic image. On the other hand, a value of k greater than 0.5 would not transfer the spatial information of the high resolution image. The key task was to find a value that would improve the spectral quality while not causing much loss of spatial information. For this purpose, the spatial quality of the modified panchromatic images was examined by comparing them with the original high resolution image. The RMSE value for the modified images was calculated (Figure 5a), as was the correlation coefficient between the high frequency data of the original PAN image and the images modified using the authors' method (Figure 5b).
Both functions are very sensitive to the value of the k factor. In the range k = 0.05 to k = 0.5, the RMSE changes by 40 and the correlation coefficient by 0.10. Analyzing both relationships, one can see a common characteristic: both curves (the RMSE and the correlation coefficient) have an inflection point at k = 0.1. Below this value the RMSE decrease was smaller, and the correlation coefficient was approximately constant. The correlation coefficient was 0.93 for k = 0.2, 0.94 for k = 0.1 and 0.94 for k = 0.05. This means that between consecutive images modified with k in the range of 0.0–0.1 there is less difference in spatial information than for images produced using k greater than 0.1. As can be seen from the analysis of the RMSE and the correlation coefficient, 0.1 is the limit value above which the spatial quality of the modified images is dramatically reduced.
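The k study can be reproduced with a short sweep reusing modified_pan from the sketch above; the box filter used here to isolate the high-frequency content, and its size, are assumptions, since the text does not state which filter was applied.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def high_freq(img, size=9):
    """High-frequency residue of an image; the kernel size is illustrative."""
    img = img.astype(float)
    return img - uniform_filter(img, size=size)

def sweep_k(pan, ms, k_values=(0.05, 0.1, 0.2, 0.3, 0.4, 0.5)):
    """Spatial quality of the modified PAN versus k: RMSE and HF correlation."""
    for k in k_values:
        mod = modified_pan(pan, ms, k=k)           # from the sketch in Section 2.3.1
        rmse = np.sqrt(((mod - pan.astype(float)) ** 2).mean())
        corr = np.corrcoef(high_freq(mod).ravel(),
                           high_freq(pan).ravel())[0, 1]
        print(f"k={k:.2f}  RMSE={rmse:7.2f}  HF correlation={corr:.3f}")
```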

3. Results

The ratio of the spatial resolutions was 1:60, a value which significantly reduces the correlation between the original MS image and the post-fusion image. When integrating data from the same satellite, the correlation coefficient is usually in the range 0.80–1.00 [42,43,44], whereas with data from different sensors the value drops to the range of 0.60–0.85 [14], depending on the method used. This is closely related to the fact that the spectral quality deteriorates as the GSD ratio of the integrated images increases. The authors' aim was to modify the panchromatic image in such a way as to increase the spectral quality after data fusion. Studies were conducted for six basic pansharpening methods; both color-based and statistical or numerical methods were selected. The results of classical satellite data integration (using the original PAN image) and of the approach proposed by the authors (using the PAN image modified by the authors' algorithm) were compared. In the remainder of this article, the results of the classical approach are denoted by the subscript CLASS, while the results for the approach proposed by the authors are denoted by the subscript MOD, owing to the use of a modified high resolution image.

3.1. Visual Assessment

First, the created images were evaluated visually. The figures show the source data used in this research (Figure 6), as well as the fused images obtained by implementing selected pansharpening methods using the original panchromatic image and the image modified using the proprietary algorithm (Figures 7 and 8).
Figure 7 and Figure 8 show the fragments of the images obtained using PCA, Gram-Schmidt and multiplicative data integration. For these methods, the improvement of spectral quality was the highest, as shown by the values of the indicators, described in detail in Section 3.2. Two examples are shown: when the intensity of the MS image was higher than the original PAN image, and when the intensity of the MS image was lower than that of the original PAN image.
In the case where the intensity of the MS image was higher than that of the original PAN image, the pixel values of the modified images increased. For this reason, these images appear brighter in some areas (Figure 7). The highest increase in brightness (for the presented part of the image) occurred for the multiplicative method. The reverse situation occurs when the intensity of the MS image is lower than that of the original PAN image. The pixel values decreased, which can be noticed mainly in the forested areas (especially for the Gram-Schmidt orthogonalization). These changes are the result of an increase in the similarity of the PAN image to the intensity of the low resolution image. They are negligible, and it is difficult to notice them during a visual comparison of the images. That is why a quantitative analysis, based on mathematical formulas, was carried out further on in this paper.
Visual analysis is not sufficient to identify subtle spectral differences; however, it allows us to conclude that the use of the new PAN image makes it possible to maintain a sufficiently high level of detail. For the images with the highest spectral quality, one would expect the greatest spatial distortions. As can be seen in Figures 7 and 8, it is possible to identify the same terrain details both before and after using the modified high resolution image.

3.2. Spectral Quality

3.2.1. Analysis of Correlation Coefficient

When integrating data obtained from various sensors characterized by a large difference in spatial resolutions, the greatest distortions relate to the spectral quality. The basic parameter for describing the spectral quality is the correlation coefficient. It was calculated and analyzed for all of the tested methods, listed in Table 1 and shown in Figure 9.
An evaluation of the correlation coefficients showed that the best results were obtained for the HPF method, where the values are highest (mean 0.86). However, based on previous studies [14], it is known that the high spectral quality of the HPF method also comes with low spatial quality. Much better spatial results can be obtained using the PCA method, with only slightly lower correlation values. This is where the research presented in this article, on improving the spectral quality while maintaining sufficient spatial quality, plays its important role.
By comparing the values of the correlation coefficient for methods based on the original PAN image with those based on the modified image, it can be seen that the proposed approach generally gives better results. A detailed analysis of the spectral quality was carried out, focusing on two aspects: determining which integration method carried out using the authors' algorithm gives the best results, and determining in which channels the spectral quality improved most.
In the case of the Brovey transformation, PCA, multiplicative and Gram-Schmidt methods, the correlation coefficient increased on average by 5%, 4%, 7% and 12%, respectively, in relation to the values for the classical approach. For the HPF method, the correlation coefficient differences were within the measurement's margin of error. Deterioration was observed only for the IHS method, where the quality index fell by 5%. This means that the authors' proposed modified PAN image method is well suited to statistical and numerical methods, and not to those based on the color space transformation from RGB to IHS. Modification of the intensity component causes a color change, which results in a deterioration of the correlation between the pixel values of the MS image and the fused image.
An analysis of the average value of the correlation coefficient suggests that the best results were obtained after applying the modified PAN image to the Gram-Schmidt pansharpening process. Similar conclusions can be reached by taking into account the correlation coefficient in each successive spectral band. This has a direct connection with the second aspect of the spectral quality analysis described above, which focuses on determining in which channels the spectral quality improved most. For the PCA method, the lowest increase (3%) was seen in channels 1–5, and the largest (5%) in channels 6 and 7. For the multiplicative method, the change is roughly constant across all channels (about 7%). The greatest differences were observed for the Gram-Schmidt transformation, where in channels 1–5 the correlation coefficient increased on average by 0.05 (a change of 7–12% depending on the channel). For this method, the correlation coefficient in the remaining channels increased by 0.07 (channel 6) and 0.08 (channel 7), improving the correlation by 15% and 18%, respectively.

3.2.2. Analysis of Other Spectral Quality Indicators

Another basic parameter characterizing image quality is the root mean square error (RMSE), which is based on differences in the values of the corresponding pixels in the sharpened image and in the original MS image. The RMSE values for all merged images, both in the classical approach and in the modified approach, were calculated in this research. A comparison was then made between the error values for the corresponding integration results with the selected methods, as shown in Figure 10.
The chart shows the decrease in the RMSE value after applying the modified PAN image for the selected integration methods for data of different resolutions. An increase in the RMSE occurred only for the RGB to IHS color space transformation, which demonstrates a deterioration in spectral quality in this case; hence the negative values on the graph. The quality of the HPF method did not change significantly, but the greatest improvement was obtained for the PCA fusion, where the RMSE value decreased by about 20% compared to the value of this error after data integration according to the classical approach. Improvements in spectral quality were also recorded after pansharpening using the multiplicative and Gram-Schmidt methods (on average about 5% for both methods) and the Brovey transformation (6%). The results of the RMSE change analysis agree with the conclusions drawn from the correlation study between the images obtained after the fusion process and the original MS image. The reason the best results were obtained with the PCA method is the use of the same matrix when modifying the PAN image and during the pansharpening process.
Both the correlation coefficient and RMSE make it possible to evaluate the spectral quality of the individual image channels. This assessment was also made globally using the RASE, nQ% and Q indicators, where for Q the average value over all channels was used. Figure 11 shows the differences between the values of the indicators calculated for the merged images using the original PAN image (classic approach) and the values of the same indicators calculated for the images generated after applying the PAN image modified by the authors' algorithm (modified approach).
For all methods except IHS, the RASE and nQ% values decreased (hence the positive difference), which demonstrates an improved spectral quality after applying the PAN image modified by the authors' algorithm. The proposed approach is not appropriate for color space transformations from RGB to IHS, so in this case the original values of the indicators were lower than those obtained after modifying the approach. In the global assessment, the highest level of spectral quality improvement is noticeable for the PCA method: the RASE indicator decreased by 7.7% and nQ% by 3.1%, which is 16% and 21% of the original value, respectively. The analysis of the RASE and nQ% indicators also shows that a subtle improvement in spectral quality takes place in the HPF method, which was difficult to deduce from the previous analyses: RASE and nQ% values decreased by 0.1%. In both cases, this represents 0.4% of the original value, which is small and within the measurement error range. In addition, the Universal Image Quality Index (Q) supports the approach proposed by the authors (Table 2). After using the modified panchromatic image, the index values were close to 1, except of course for the IHS method. The original values (Q_CLASS) were lower, resulting in negative differences. Values increased on average by 0.05 for the statistical methods studied, with the best results for the Gram-Schmidt, Brovey and PCA methods, where the Q index increased by 12%, 14% and 10%, respectively, in relation to the values for the classical approach (Figure 12). An analysis of the changes in Q values for the individual spectral bands showed the same trend as for the correlation coefficient. The highest improvement was recorded in channels 6 and 7, especially for PCA (18% in channel 6 and 16% in channel 7) and the Gram-Schmidt orthogonalization (15% in channel 6 and 18% in channel 7).
All of the performed spectral quality analyses show that the proposed approach is well suited to statistical and numerical methods, especially the PCA and Gram-Schmidt methods, for which the greatest improvement in the indicators was achieved. Every indicator obtained lower values for the IHS method, which confirms that the proposed approach is not appropriate for color-based methods. The indicators that characterize spectral quality in the individual image channels showed that the highest changes occurred in channels 6 and 7, i.e., those beyond the spectral range of the panchromatic image. This demonstrates the validity of the authors' algorithm, especially for integrating data from different spectral ranges.

4. Discussion

The use of a PAN image modified by the authors' algorithm for image data fusion results in an improved spectral quality in all channels, but to varying degrees depending on the method. The Gram-Schmidt and PCA methods showed significantly higher spectral improvement, mainly in channels 6 and 7. The PCA method is based on the analysis of the information contained in the individual channels. The PAN image used has a spectral range of 0.45–0.90 microns, which coincides with the range of the visible and near infrared spectral channels (bands 2, 3, 4, and 5). When modifying the PAN image, information from channels 1 to 7 of the MS image was used, and thus not only from the visible and near infrared ranges, but also from the shortwave infrared range. Therefore, the information content of the original PAN image was extended to include information recorded in the seventh low-resolution image channel. This was new information introduced into the modified PAN image, isolated by Principal Component Analysis. The same eigenvalues were used during pansharpening, which allowed for a faithful reproduction of the information contained in the individual channels. It can be concluded that the proposed approach is most appropriate for the PCA method, particularly in cases where it is essential to preserve the spectral quality of the entire spectral range of the MS image, as supported by the analyses carried out in this research with the use of the selected methods.
The highest correlation was maintained for the HPF method, as in earlier similar studies [4,14]. The application of the approach proposed by the authors gives, in this case, a spectral improvement within the measurement error, visible only in a global assessment. One could therefore question the need to use a modified panchromatic image if the original PAN image already allows for a high spectral quality. One cannot, however, forget about the spatial quality of the image, which varies depending on the method. Data integration based on high-pass filtering provides high spectral quality but low spatial quality. Application of the Principal Component Analysis gives slightly higher spectral distortions, but with better spatial results. Applying the PAN image modified using the authors' algorithm for data fusion makes it possible to maintain a high level of detail while also improving the spectral quality; in this case, the spectral quality for PCA is almost as good as for HPF. It is therefore reasonable to assume that the proposed approach is most appropriate for the PCA method if the aim is to achieve the highest possible correlation while maintaining a high spatial quality. The Principal Component Analysis gives results comparable to the multiplicative method only in channels 6 and 7, where for both methods the indicators have similar values, especially the correlation coefficient and Q.
The choice of the data integration method can be limited by various factors, such as the availability of adequate software; it is not always possible to access the algorithms that guarantee the best spectral and spatial results. It is therefore worth knowing what improvements in accuracy can be obtained when integrating data using commonly adopted methods according to the modified approach proposed by the authors. If the purpose of the performed analysis is to determine which method's results can be improved the most, then the authors' approach should be used primarily for the Gram-Schmidt and PCA methods, where the spectral quality improvement was the greatest. The Gram-Schmidt orthogonalization improved significantly, especially in terms of the correlation with the original multispectral image, and in the case of PCA, the root mean square error was reduced by as much as 20%. The spectral quality increased with these methods in every spectral channel, but most evidently in channels 6 and 7. It is therefore worth considering the approach proposed by the authors especially when integrating satellite data with high spatial resolution ratios, where the spectral range of the MS image goes beyond the spectral range of the PAN image. Classical approaches typically result in reduced spectral quality in the spectral channels outside the spectral range of the PAN image. The algorithm proposed by the authors enables the spectral quality of these channels to approach the quality of the visible and near infrared channels (coinciding with the PAN). This is in line with the authors' expectations: the equation describing the modified PAN image includes an intensity calculated not from three, but from all channels of the MS image, which introduces new information into the modified PAN image and extends the spectral range of the panchromatic image to that of the MS image. The authors' purpose was to pre-integrate the spatial and spectral information in a modified panchromatic image, in order to improve the spectral similarity between the original MS image and the merged image. The analyses carried out proved that this assumption was correct, especially in the shortwave infrared channels.
Relatively large improvements were also observed for the multiplicative method and the Brovey transformation, where the spectral quality improved by 5–7% across all studied channels. The restriction of the Brovey method is that it applies to only three image channels, but for applications that require, for example, a natural color image, the approach outlined in this article would still improve the quality of the integration process. The lower degree of improvement is related to the data fusion algorithm: both methods are based on simple image multiplication, in contrast to the PCA and Gram-Schmidt methods, which are based on component replacement. Replacing an appropriate component with a high resolution image is more sensitive to the introduction of new information and modifications than multiplying that same panchromatic image by a multispectral image. This is confirmed by the better results for the PCA and Gram-Schmidt methods.

5. Conclusions

This paper describes research on the integration of satellite data obtained from different satellite sensors. The ratio between the spatial resolutions of the merged Landsat 8 and WorldView-2 imagery was 1:60. The main problem when integrating data with a large GSD ratio using traditional methods was a much lower spectral correlation between the original color image and the sharpened image than in the case of integrating data obtained by the same sensor, where the ratio is 1:4. The authors' aim was to improve the spectral quality of the post-fusion image. Therefore, the authors proposed a modification of the PAN image that merges the spatial information from the panchromatic and multispectral images. First, a modified PAN image was generated in accordance with the authors' method, and then it was examined how the pansharpening process would proceed using selected traditional methods. The results were compared with images obtained after the integration of the same MS image with the original PAN image. As a result of this research, it was found that the proposed approach makes it possible to improve the spectral quality while still maintaining a high spatial quality. The PAN image modified using the proprietary algorithm is particularly well suited to statistical and numerical methods, especially the PCA and Gram-Schmidt methods, which gave the best results.
The effectiveness of the authors' algorithm has been tested for six of the most popular fusion methods. The proposed methodology is suited mainly to statistical and numerical methods, especially Principal Component Analysis and Gram-Schmidt. The authors' algorithm makes it possible to lower the root mean square error by up to 20% for Principal Component Analysis. The spectral quality was also increased, especially for the spectral bands extending beyond the panchromatic image, where the correlation rose by 18% for the Gram-Schmidt orthogonalization. The only pansharpening method for which the approach proposed by the authors did not give better results is the IHS method, which is based on the transformation of the color space. In the case of the method based on high frequency filtering (HPF), the changes were within the margin of error of the measurements.
The research work conducted by the authors opens wide horizons for further work on this subject. The approach presented in this paper is the beginning of research into improving the quality of integration of data with high GSD ratios. First, in further work, it is worth attempting to determine the appropriate factors and the mathematical rules governing the weighting factor in the equation describing the modified panchromatic image. In addition, upcoming experiments will aim to further improve the quality of data integration, including for color-based methods. The high spatial resolution ratio data fusion process can also be extended to aerial data, imagery from unmanned aerial vehicles and close-range photogrammetry products. The issue of integrating data acquired from different altitudes remains open, and will certainly require further modification of the approach presented by the authors in this article.

Acknowledgments

This paper has been supported by the Military University of Technology, the Faculty of Civil Engineering and Geodesy, Geodesy Institute.

Author Contributions

The work presented in this paper was carried out in collaboration between both authors. Aleksandra Grochala and Michal Kedzierski designed the method. Aleksandra Grochala carried out the laboratory experiments, interpreted the results and wrote the paper. Both authors have contributed to, seen, and approved the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mikrut, S. The Influence of JPEG Compression on the Automatic Extraction of Cropland Boundaries with Subpixel Accuracy Using Multispectral Images. In Geodesy and Environmental Engineering Commission; Polish Academy of Science–Cracow Branch: Krakow, Poland, 2006; pp. 97–111. [Google Scholar]
  2. Aiazzi, B.; Baronti, S.; Selva, M. Improving component substitution pansharpening through multivariate regression of MS + Pan data. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3230–3239. [Google Scholar] [CrossRef]
  3. Zhang, Y.; Hong, G. An IHS and wavelet integrated approach to improve pansharpening visual quality of natural colour IKONOS and QuickBird images. Inf. Fusion 2005, 6, 225–234. [Google Scholar] [CrossRef]
  4. Chavez, P.S.; Sides, S.C.; Anderson, A.J. Comparison of Three Different Methods to Merge Multiresolution and Multispectral Data: Landsat TM and SPOT Panchromatic. Photogramm. Eng. Remote Sens. 1991, 57, 265–303. [Google Scholar]
  5. Li, X.; Li, L.; He, M. A Novel Pansharpening Algorithm for WorldView-2 Satellite Images. Available online: http://www.ipcsit.com/vol31/004-ICIII2012-C0010.pdf (accessed on 20 June 2017).
  6. Bobkowska, K.; Przyborski, M.; Szulwic, J. A Method of Selecting Light Sources from Night Satellite Scenes. In Proceedings of the SGEM 2015 GeoConference Ecology and Environmental Protection, Albena, Bulgaria, 18–24 June 2015; pp. 1314–2704. [Google Scholar]
  7. Kedzierski, M.; Wilinska, M.; Wierzbicki, D.; Fryskowska, A.; Delis, P. Image Data Fusion for Flood Plain Mapping. In Proceedings of the 9th International Conference on Environmental Engineering, Vilnius, Lithuania, 22–23 May 2014. [Google Scholar]
  8. Jenerowicz, A.; Woroszkiewicz, M. The Pan-Sharpening of Satellite and UAV Imagery for Agricultural Applications. In SPIE Remote Sensing; International Society for Optics and Photonics: Bellingham, WA, USA, 2016. [Google Scholar] [CrossRef]
  9. Fonseca, L.; Namikawa, L.; Castejon, E.; Carvalho, L.; Pinho, C.; Pagamisse, A. Image Fusion for Remote Sensing Applications. In Image Fusion and Its Applications; Zheng, Y., Ed.; InTech: Rijeka, Croatia, 2011. [Google Scholar] [CrossRef]
  10. Shi, W.; Zhu, C.; Tian, Y.; Nichol, J. Wavelet-based image fusion and quality assessment. Int. J. Appl. Earth Obs. Geoinf. 2005, 6, 241–251. [Google Scholar] [CrossRef]
  11. Zhang, H.K.; Huang, B. A new look at image fusion methods from a Bayesian perspective. Remote Sens. 2015, 7, 6828–6861. [Google Scholar] [CrossRef]
  12. Helmy, A.K.; El-Tawel, G.S. An integrated scheme to improve pan-sharpening visual quality of satellite images. Egypt. Inf. J. 2015, 16, 121–131. [Google Scholar] [CrossRef]
  13. Jelének, J.; Kopačková, V.; Koucká, L.; Mišurec, J. Testing a modified PCA-based sharpening approach for image fusion. Remote Sens. 2016, 8. [Google Scholar] [CrossRef]
  14. Fryskowska, A.; Wojtkowska, M.; Delis, P.; Grochala, A. Some Aspects of Satellite Imagery Integration from EROS B and LANDSAT 8. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2016; pp. 647–652. [Google Scholar]
  15. Borkowski, A.; Głowienka, E.; Hejmanowska, B.; Kwiatkowska-Malina, J.; Kwolek, M.; Michałowska, K.; Mikrut, S.; Pekala, A.; Pirowski, T.; Zabrzeska-Gasiorek, B. GIS and Remote Sensing in Environmental Monitoring; Głowienka, E., Ed.; Rzeszow School of Engineering and Economics, Neiko Print & Publishing: Tarnobrzeg, Poland, 2015; pp. 7–48. [Google Scholar]
  16. Amro, I.; Mateos, J. Multispectral Image Pansharpening based on the Contourlet Transform. In Information Optics and Photonics; Springer: New York, NY, USA, 2010; pp. 247–261. [Google Scholar]
  17. Pohl, C. Tools and Methods Used in Data Fusion. In Future Trends in Remote Sensing; Springer: Rotterdam, The Netherlands, 1998; Volume 32, pp. 391–399. [Google Scholar]
  18. Pohl, C. Tools and Methods for Fusion of Images of different Spatial Resolution. In Proceedings of the International Archives of Photogrammetry and Remote Sensing, Valladolid, Spain, 3–4 June 1999. [Google Scholar]
  19. Liu, J.G.; Mason, P.J. Essential Image Processing and GIS for Remote Sensing; Wiley-Blackwell: London, UK, 2009; pp. 57–85. [Google Scholar]
  20. Jolliffe, I. Principal Component Analysis; John Wiley & Sons Ltd.: Hoboken, NJ, USA, 2002. [Google Scholar]
  21. Kazimierski, W.; Stateczny, A. Fusion of Data from AIS and Tracking Radar for the Needs of ECDIS. In Proceedings of the 2013 Signal Processing Symposium (SPS), Piscataway, NJ, USA, 5–7 June 2013; pp. 1–6. [Google Scholar]
  22. Xie, B.; Zhang, H.K.; Huang, B. Revealing Implicit Assumptions of the Component Substitution Pansharpening Methods. Remote Sens. 2017, 9. [Google Scholar] [CrossRef]
  23. Du, Q.; Younan, N.H.; King, R.; Shah, V.P. On the performance evaluation of pan-sharpening techniques. IEEE Geosci. Remote Sens. Lett. 2007, 4, 518–522. [Google Scholar] [CrossRef]
  24. Stathaki, T. Image Fusion: Algorithms and Applications; Elsevier: Amsterdam, The Netherlands, 2011; pp. 36–38. [Google Scholar]
  25. Pirowski, T. Rank of fusion methods of remotely sensed images of various resolution—Formal assessment of merging Landsat TM and IRS-PAN data. Arch. Photogramm. Remote Sens. 2009, 20, 343–358. [Google Scholar]
  26. Maurer, T. How to pan-sharpen images using the Gram-Schmidt pan-sharpen method-a recipe. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Hannover, Germany, 21–24 May 2013. [Google Scholar]
  27. You, X.; Du, L.; Cheung, Y.M.; Chen, Q. A Blind Watermarking Scheme Using New Nontensor Product Wavelet Filter Banks. IEEE Trans. Image Process. 2010, 19, 3271–3284. [Google Scholar] [CrossRef] [PubMed]
  28. Dong, L.; Yang, Q.; Wu, H.; Xiao, H.; Xu, M. High quality multi-spectral and panchromatic image fusion technologies based on Curvelet transform. Neurocomputing 2015, 159, 268–274. [Google Scholar] [CrossRef]
  29. Li, S.; Yang, B. Hybrid multiresolution method for multisensor multimodal image fusion. IEEE Sens. J. 2010, 10, 1519–1526. [Google Scholar]
  30. Helmy, A.K.; Nasr, A.H.; El-Taweel, G.S. Assessment and Evaluation of Different Data Fusion Techniques. Int. J. Comput. 2010, 4, 107–115. [Google Scholar]
  31. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of Satellite Images of Different Spatial Resolutions: Assessing in quality of resulting images. Photogramm. Eng. Remote Sens. 1997, 63, 691–699. [Google Scholar]
  32. Wald, L. Quality of High Resolution Synthesised Images: Is There a Simple Criterion? Available online: https://hal.archives-ouvertes.fr/hal-00395027/document (accessed on 20 June 2017).
  33. Ranchin, T.; Wald, L. Fusion of High Spatial and Spectral Resolution Images: The ARSIS Concept and its Implementation. Photogramm. Eng. Remote Sens. 2000, 66, 49–61. [Google Scholar]
  34. Pirowski, T. The integration of remote sensing data acquired with various sensors—A proposal of merged image assessment. Geoinform. Pol. 2006, 8, 59–75. [Google Scholar]
  35. Hnatushenko, V.V.; Vasyliev, V.V. Remote Sensing Image Fusion Using ICA and Optimized Wavelet Transform. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2016; pp. 653–659. [Google Scholar]
  36. Yakhdani, M.F.; Azizi, A. Quality Assessment of Image Fusion Techniques for Multisensor High Resolution Satellite Images—Case Study: IRS-P5 and IRS-P6 Satellite Images. In ISPRS TC VII Symposium—100 Years ISPRS; Wagner, W., Székely, B., Eds.; IAPRS: Vienna, Austria, 2010; Volume 37, pp. 204–209. [Google Scholar]
  37. Han, S.S.; Li, H.T.; Gu, H.Y. The Study on Image Fusion for High Spatial Resolution Remote Sensing Images. In Proceedings of the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 3–11 July 2008; pp. 1159–1163. [Google Scholar]
  38. Wang, Z.; Bovik, A.C.; Lu, L. Why is image quality assessment so difficult? In Proceedings of the 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Orlando, FL, USA, 13–17 May 2002. [Google Scholar]
  39. Osińska-Skotak, K. Assessment of different image fusion methods on example WorldView-2 images. Arch. Photogramm. Remote Sens. 2012, 24, 231–244. [Google Scholar]
  40. NASA. Available online: http://landsat.gsfc.nasa.gov (accessed on 10 January 2017).
  41. Lillo-Saavedra, M.; Gonzalo, C. Spectral or spatial quality for fused satellite imagery? A trade-off solution using the wavelet à trous algorithm. Int. J. Remote Sens. 2006, 27, 1453–1464. [Google Scholar] [CrossRef]
  42. Tu, T.M.; Lee, Y.C.; Chang, C.P.; Huang, P.S. Adjustable intensity-hue-saturation and Brovey transform fusion technique for IKONOS/QuickBird imagery. Opt. Eng. 2005, 44, 116201. [Google Scholar] [CrossRef]
  43. Yang, J.; Zhang, J.; Huang, G. A parallel computing paradigm for pan-sharpening algorithms of remotely sensed images on a multi-core computer. Remote Sens. 2014, 6, 6039–6063. [Google Scholar] [CrossRef]
  44. Alparone, L.; Baronti, S.; Garzelli, A.; Nencini, F. A global quality measurement of pan-sharpened multispectral imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 313–317. [Google Scholar] [CrossRef]
Figure 1. Transformation from Red-Green-Blue (RGB) space to Intensity-Hue-Saturation (IHS) space [19].
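For readers who wish to reproduce the transformation sketched in Figure 1, a minimal Python example of one common linear RGB-to-IHS variant is given below. It is an illustrative sketch only: the function name and the NumPy formulation are assumptions, and the exact variant used in [19] may differ.

```python
import numpy as np

def rgb_to_ihs(rgb):
    """Convert an RGB image (H x W x 3) to IHS space.

    Sketch of a common linear IHS transform from the pansharpening
    literature; the exact variant in [19] may differ.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    i = (r + g + b) / 3.0                   # intensity
    v1 = (2.0 * b - r - g) / np.sqrt(6.0)   # first chromatic coordinate
    v2 = (r - g) / np.sqrt(2.0)             # second chromatic coordinate
    h = np.arctan2(v2, v1)                  # hue angle
    s = np.sqrt(v1 ** 2 + v2 ** 2)          # saturation
    return np.stack([i, h, s], axis=-1)
```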
Figure 2. Diagram of the proposed pansharpening approach.
Figure 3. (a) Panchromatic (PAN) image and (b) intensity of multispectral (MS) image.
Figure 4. (a) Modified PAN image with k = 0.05; (b) modified PAN image with k = 0.1; (c) modified PAN image with k = 0.2; and (d) modified PAN image with k = 0.5 (k is the weighting factor).
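To make the role of the weighting factor k in Figure 4 concrete, the following hedged sketch blends the PAN image with the resampled MS intensity as a normalised weighted average. The exact formula of the authors’ modification is defined in the body of the paper and may differ from the form assumed here.

```python
import numpy as np

def modify_pan(pan, ms_intensity, k):
    """Weighted-average modification of the PAN image (sketch).

    `ms_intensity` must already be resampled to the PAN grid.
    Assumed form: PAN keeps weight 1, the MS intensity gets weight k,
    and the sum is normalised to preserve the radiometric range.
    """
    pan = pan.astype(float)
    ms_intensity = ms_intensity.astype(float)
    return (pan + k * ms_intensity) / (1.0 + k)

# Figure 4 varies k over 0.05, 0.1, 0.2 and 0.5:
# modified = [modify_pan(pan, intensity, k) for k in (0.05, 0.1, 0.2, 0.5)]
```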
Figure 5. The influence of the coefficient k on: (a) the RMSE (Root Mean Square Error); and (b) the correlation coefficient between the high-frequency information of the original PAN image and the modified PAN images.
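The two measures plotted in Figure 5 can be computed as sketched below. The simple box-filter high-pass used here to isolate the high-frequency information is an assumption; the caption does not specify the filter.

```python
import numpy as np
from scipy import ndimage

def rmse(a, b):
    """Root Mean Square Error between two equally sized images."""
    return np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2))

def highpass_correlation(a, b, size=3):
    """Correlation coefficient of the high-frequency parts of two images.

    A box-filter high-pass (image minus its local mean) is assumed here.
    """
    hp_a = a.astype(float) - ndimage.uniform_filter(a.astype(float), size)
    hp_b = b.astype(float) - ndimage.uniform_filter(b.astype(float), size)
    return np.corrcoef(hp_a.ravel(), hp_b.ravel())[0, 1]
```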
Figure 6. (a) Original panchromatic (PAN) image; and (b) original multispectral (MS) image.
Figure 7. (a) PCA (Principal Component Analysis) fused image for classical approach; (b) PCA fused image for author’s approach; (c) Gram-Schmidt fused image for classical approach; (d) Gram-Schmidt fused image for author’s approach; (e) multiplicative fused image for classical approach; and (f) multiplicative fused image for author’s approach (the case when the intensity of the MS image was higher than the original PAN image).
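Figures 7 and 8 compare component-substitution methods under the classical and the proposed approach. As background, a textbook sketch of PCA-based pansharpening is given below; it is the standard substitution scheme (replace the first principal component with a mean- and variance-matched PAN image) and not necessarily the exact variant evaluated in the paper.

```python
import numpy as np

def pca_pansharpen(ms, pan):
    """Classical PCA component-substitution pansharpening (sketch).

    `ms`: upsampled multispectral cube (H x W x B);
    `pan`: PAN image (H x W) on the same grid.
    """
    h, w, b = ms.shape
    flat = ms.reshape(-1, b).astype(float)
    mean = flat.mean(axis=0)
    centred = flat - mean
    # eigendecomposition of the band covariance matrix
    cov = np.cov(centred, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]          # descending eigenvalues
    vecs = vecs[:, order]
    pcs = centred @ vecs                    # principal components
    # match the PAN image's mean and std to the first PC, then substitute
    p = pan.astype(float).ravel()
    pcs[:, 0] = (p - p.mean()) / p.std() * pcs[:, 0].std() + pcs[:, 0].mean()
    fused = pcs @ vecs.T + mean             # inverse PCA transform
    return fused.reshape(h, w, b)
```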
Figure 8. (a) PCA fused image for classical approach; (b) PCA fused image for author’s approach; (c) Gram-Schmidt fused image for classical approach; (d) Gram-Schmidt fused image for author’s approach; (e) multiplicative fused image for classical approach; and (f) multiplicative fused image for author’s approach (the case when the intensity of the MS image was lower than the original PAN image).
Figure 9. (a) Change in the average value of the correlation coefficient after applying the PAN image modified according to the authors’ algorithm; and (b) the same change expressed in relation to the value for the classical approach.
Figure 10. Decrease in the RMSE (Root Mean Square Error) value after applying the PAN image modified according to the authors’ algorithm, for selected data fusion methods.
Figure 11. (a) Change in the RASE (Relative Average Spectral Error) value after applying the PAN image modified according to the authors’ algorithm; and (b) change in the nQ% value after applying the PAN image modified according to the authors’ algorithm.
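For Figure 11a, RASE can be computed from the per-band errors as sketched below, using its standard definition; nQ% in Figure 11b is the normalised quality measure of [34] and is not reproduced here.

```python
import numpy as np

def rase(ms_bands, fused_bands):
    """Relative Average Spectral Error (RASE), in percent (sketch).

    Standard definition: 100 divided by the mean radiance of the
    original MS bands, times the root of the mean per-band MSE.
    """
    ms = [b.astype(float) for b in ms_bands]
    fused = [b.astype(float) for b in fused_bands]
    mean_radiance = np.mean([b.mean() for b in ms])
    mse_per_band = [np.mean((m - f) ** 2) for m, f in zip(ms, fused)]
    return 100.0 / mean_radiance * np.sqrt(np.mean(mse_per_band))
```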
Figure 12. (a) Change in the average value of the Q index (Universal Image Quality Index) after applying the PAN image modified according to the authors’ algorithm; and (b) the same change expressed in relation to the value for the classical approach.
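The Q index reported in Figure 12 and Table 2 is the Wang-Bovik Universal Image Quality Index. A single-window sketch of its global form follows; in practice Q is commonly averaged over sliding windows, a detail the captions do not restate.

```python
import numpy as np

def q_index(x, y):
    """Universal Image Quality Index (Wang-Bovik), global form (sketch)."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    # combines correlation, luminance and contrast distortion in one score
    return 4.0 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))
```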
Table 1. Correlation coefficient for original and modified fused images.
| Landsat Band         | B1   | B2   | B3   | B4   | B5   | B6   | B7   | Average |
|----------------------|------|------|------|------|------|------|------|---------|
| IHS CLASS            | -    | 0.69 | 0.63 | 0.62 | -    | -    | -    | 0.64    |
| IHS MOD              | -    | 0.66 | 0.60 | 0.58 | -    | -    | -    | 0.61    |
| Brovey CLASS         | -    | 0.67 | 0.64 | 0.62 | -    | -    | -    | 0.65    |
| Brovey MOD           | -    | 0.70 | 0.67 | 0.65 | -    | -    | -    | 0.68    |
| HPF CLASS            | 0.86 | 0.86 | 0.86 | 0.86 | 0.86 | 0.87 | 0.86 | 0.86    |
| HPF MOD              | 0.86 | 0.86 | 0.86 | 0.86 | 0.86 | 0.87 | 0.86 | 0.86    |
| PCA CLASS            | 0.78 | 0.77 | 0.74 | 0.73 | 0.70 | 0.62 | 0.63 | 0.71    |
| PCA MOD              | 0.81 | 0.79 | 0.76 | 0.75 | 0.72 | 0.65 | 0.66 | 0.74    |
| multiplicative CLASS | 0.57 | 0.60 | 0.62 | 0.65 | 0.52 | 0.58 | 0.61 | 0.59    |
| multiplicative MOD   | 0.61 | 0.64 | 0.67 | 0.69 | 0.55 | 0.62 | 0.66 | 0.64    |
| Gram-Schmidt CLASS   | 0.60 | 0.58 | 0.54 | 0.53 | 0.47 | 0.46 | 0.43 | 0.52    |
| Gram-Schmidt MOD     | 0.64 | 0.63 | 0.60 | 0.59 | 0.52 | 0.53 | 0.51 | 0.57    |
Table 2. Q index (Universal Image Quality Index) for original and modified fused images.
| Landsat Band         | B1   | B2   | B3   | B4   | B5   | B6   | B7   | Average |
|----------------------|------|------|------|------|------|------|------|---------|
| IHS CLASS            | -    | 0.55 | 0.41 | 0.30 | -    | -    | -    | 0.42    |
| IHS MOD              | -    | 0.52 | 0.38 | 0.27 | -    | -    | -    | 0.39    |
| Brovey CLASS         | -    | 0.50 | 0.47 | 0.43 | -    | -    | -    | 0.47    |
| Brovey MOD           | -    | 0.55 | 0.53 | 0.51 | -    | -    | -    | 0.53    |
| HPF CLASS            | 0.77 | 0.79 | 0.79 | 0.80 | 0.81 | 0.83 | 0.81 | 0.80    |
| HPF MOD              | 0.77 | 0.79 | 0.79 | 0.80 | 0.81 | 0.83 | 0.81 | 0.80    |
| PCA CLASS            | 0.69 | 0.67 | 0.62 | 0.60 | 0.51 | 0.41 | 0.44 | 0.56    |
| PCA MOD              | 0.72 | 0.71 | 0.66 | 0.64 | 0.56 | 0.48 | 0.51 | 0.61    |
| multiplicative CLASS | 0.52 | 0.56 | 0.59 | 0.63 | 0.54 | 0.60 | 0.61 | 0.58    |
| multiplicative MOD   | 0.56 | 0.61 | 0.64 | 0.67 | 0.57 | 0.64 | 0.65 | 0.62    |
| Gram-Schmidt CLASS   | 0.55 | 0.55 | 0.51 | 0.51 | 0.49 | 0.48 | 0.43 | 0.50    |
| Gram-Schmidt MOD     | 0.59 | 0.59 | 0.57 | 0.57 | 0.54 | 0.55 | 0.51 | 0.56    |
