Open Access
Algorithms for Moderate Resolution Imaging Spectroradiometer cloud-free image compositing
5 November 2013
Hai-bing Xiang, Jin-song Liu, Chun-xiang Cao, Min Xu
Abstract
The presence of clouds is the biggest obstacle in the investigation of land cover, and many techniques have been developed to detect clouds. However, few indicators have been proposed for the detection of cloud-free conditions. To address this, we propose two indicators for use in compositing 8-day cloud-free images: the B1/7 ratio, which is the ratio of the band 1 reflectance to the band 7 reflectance of Terra surface reflectance images (MOD09GA), and saturation, which refers to the color saturation of these images. Here, we describe the principles underlying these two indicators and analyze their characteristics for vegetation, water, urban, and nonvegetation pixels under cloud-free, cloud shadow, and cloudy conditions using MOD09GA from October 16 to 23, 2007, in North China (sample A) and using data published by the U.S. Geological Survey (USGS). We found that the B1/7 ratio and saturation are suitable for extracting cloud-free pixels over land and water, respectively; therefore, we combined these two indicators into a single unified model. In particular, our results demonstrate that the pixel exhibiting the lowest B1/7 ratio should be adopted as the cloud-free pixel over land when the value of B1/7 for the land surface is between 0 and 1 and the surface reflectance of Moderate Resolution Imaging Spectroradiometer band 1 is less than 0.3; otherwise, the pixel exhibiting the greatest saturation value should be adopted. We used our model to composite cloud-free images for two additional regions in China: the Tarim basin (sample B) from October 2 to 9, 2012, and the coastal areas of southeastern China (sample C) from April 15 to 23, 2013. We compared the cloud-free images of these regions with the 8-day surface reflectance product (MOD09A1) with respect to clouds, cloud shadow, and cirrus clouds, and we found that our proposed cloud-free image compositing approach can accurately eliminate both clouds and cirrus clouds. Specifically, the percentage of residual cloud pixels in sample C was found to be less than that in MOD09A1. Moreover, in the cloud-free images obtained using our newly developed method, cloud-free pixels are typically associated with greater sensor zenith angles and smaller scattering angles than those in MOD09A1. However, our method retains some limitations. In particular, 9.68, 33.22, and 33.00% of cloud-shadow pixels remain in the cloud-free images for samples A, B, and C, respectively.

1. Introduction

Clouds play an important role in determining weather, climate, and other short-term environmental changes, and on any given day more than 50% of the Earth’s surface is covered by clouds.1,2 Moreover, these clouds hamper observation of the land surface by the Moderate Resolution Imaging Spectroradiometer (MODIS); therefore, accurate cloud detection is considered a crucial component in eliminating the impact of clouds on remote-sensing products.3,4

Three types of information can be used to detect clouds: spectral, texture, and multitemporal information. Spectral information was first utilized to distinguish clouds from the Earth’s surface using classification methods,5–8 in which thresholds in the value of one band or the difference between two bands were considered.3,4,9 Subsequently, texture information was used to distinguish clouds from the surface based on thresholds in local variance (heteroscedasticity). Such identification is possible because the reflectance of cloud is greater than that of the land surface, which results in greater variance along cloud edges, particularly over the oceans.10–12 With increases in the availability of remote-sensing data, researchers have started to use multitemporal information to distinguish clouds from the Earth’s surface based on the differences between appropriate benchmarks and series of daily images; such methods are based on the assumption that the reflectance of the Earth’s surface is more stable than that of cloud-affected pixels, which exhibit frequent variations.13–16 Many satellite-based instruments, ranging from the Advanced Very High Resolution Radiometer (AVHRR) to MODIS, employ these methods. For instance, the International Satellite Cloud Climatology Project (ISCCP)17,18 and the AVHRR Processing Scheme Over Cloud Land and Ocean (APOLLO) cloud detection algorithm19,20 employ only spectral information, and the Cloud Advanced Very High Resolution Radiometer (CLAVR) employs a series of spectral and spatial variability tests to detect clouds.1,21–23 Some researchers have also considered the use of temporal information for cloud detection.4,9,24–26 However, the computational power required for such methods increases with the complexity of the cloud detection algorithms adopted. Ultimately, obtaining composite cloud-free images based on the results of cloud detection has become a time-consuming process. This is evidenced particularly by the case of MOD09A1, published by the United States Geological Survey (USGS), in which only the pixels with the highest quality are considered; in this instance, cloud-cover pixels, cloud-shadow pixels, high-angle pixels, and high-aerosol pixels have been removed.27

Obtaining data relating to land surface characteristics is one of the primary challenges in land cover research. To address this, some algorithms have been proposed, in which cloud-free pixels are selected based on quantitative physical indexes such as the maximum normalized difference vegetation index (NDVI)28 and the lowest MODIS band 3 surface reflectance obtained during a specific time period.29 Maximum NDVI has been used previously as an indicator in gridded vegetation indices (MOD13) for vegetation monitoring. However, the use of this index typically leads to the selection of the pixel with the greatest sensor angle. Conversely, the minimum reflectance value for MODIS band 3, which represents the blue band, tends to consider shadow pixels as cloud-free pixels. Ma et al. used min[tan⁻¹(B1/(B2 − B1))] (where B1 and B2 are the band 1 and band 2 reflectances of MODIS, respectively) to select cloud-free pixels.4 However, this model was shown to be ineffective in distinguishing between clouds, nonvegetated land, and shadows on vegetated land. The Canada Centre for Remote Sensing (CCRS) developed a new compositing technology that employs six criteria to obtain pixels of six different classes, in addition to mixed pixels. The CCRS researchers also compared max(NDVI), min(B1), and max[max(B2,B6)/B3] with their own multicriteria scheme and reported some details regarding the effects of the bidirectional reflectance distribution function (BRDF);9 however, many criteria and classification steps are required for the CCRS compositing technology, and this method is associated with considerable uncertainty.

The present study aims to obtain highly accurate composite cloud-free images by employing the fewest criteria possible. According to Kaufman et al.30 and Levy et al.,31 the reflectance ratio B1/7 between reflectances at 0.66 and 2.1 μm (corresponding to bands 1 and 7 of MODIS, respectively) is a good indicator of cloud cover for land areas. Moreover, as clouds are typically white or gray in color, their saturation is lower than that of water pixels in a hue–saturation–intensity (HSI) model, making saturation a good indicator of cloud-free regions for water areas. Therefore, in the present study, we propose the B1/7 ratio and saturation as appropriate indicators for the selection of cloud-free pixels on land and water, respectively, and present a set of algorithms for compositing cloud-free images for China.

2. Methods

2.1. Underlying Principles of the B1/7 Ratio

Previous studies have shown that 2.1-μm-wavelength images are not affected considerably by atmospheric scattering;30 thus, this wavelength can pass through all types of aerosol except dust. Moreover, for vegetated areas, the apparent reflectance at 2.1 μm is twice that at 0.66 μm when a given region is cloud free. Accordingly, the apparent reflectance at 2.1 μm is often used to estimate the surface reflectance at 0.66 μm.30 These wavelengths (i.e., 0.66 and 2.1 μm) are the central wavelengths of MODIS bands 1 and 7, respectively, and are typically expressed as B1 and B7. The reflectance ratio of these bands, B1/7, for a given land surface has been shown to be positively correlated with scattering angle and the vegetation index derived from the MODIS near- and short-wave infrared bands.31

Eq. (1)

$B_{1/7} = B_{1/7}^{\mathrm{NDVI}_{\mathrm{SWIR}}} + 0.002\,\Theta - 0.27,$

where $\Theta$ is the scattering angle, defined in Eq. (2); $B_{1/7}^{\mathrm{NDVI}_{\mathrm{SWIR}}}$ is a piecewise function of the shortwave infrared vegetation index [Eq. (3)]; and $\mathrm{NDVI}_{\mathrm{SWIR}}$ is the vegetation index derived from the MODIS near- and short-wave infrared bands [Eq. (4)].

Eq. (2)

$\Theta = \cos^{-1}[\cos\theta_0\cos\theta + \sin\theta_0\sin\theta\cos(\varphi - \varphi_0)].$

In Eq. (2), θ0, θ, φ0, and φ are the solar zenith angle, the sensor zenith angle, the azimuth of the sun, and the azimuth of the sensor, respectively.
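As a concreteness check, the scattering angle of Eq. (2) is straightforward to compute. The following is a minimal Python sketch, assuming the four angles are supplied in degrees (as in the MOD09GA angle layers after applying their scale factor); the function name is illustrative.

```python
import numpy as np

def scattering_angle(theta0, theta, phi0, phi):
    """Scattering angle (deg) of Eq. (2) from solar/sensor zenith and azimuth angles (deg)."""
    t0, t = np.radians(theta0), np.radians(theta)
    dphi = np.radians(phi - phi0)
    cos_big_theta = np.cos(t0) * np.cos(t) + np.sin(t0) * np.sin(t) * np.cos(dphi)
    # Clip guards against rounding slightly outside [-1, 1] before arccos.
    return np.degrees(np.arccos(np.clip(cos_big_theta, -1.0, 1.0)))
```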

Eq. (3)

$B_{1/7}^{\mathrm{NDVI}_{\mathrm{SWIR}}} = \begin{cases} 0.48, & \mathrm{NDVI}_{\mathrm{SWIR}} < 0.25 \\ 0.48 + 0.2\,(\mathrm{NDVI}_{\mathrm{SWIR}} - 0.25), & 0.25 \le \mathrm{NDVI}_{\mathrm{SWIR}} \le 0.75 \\ 0.58, & \mathrm{NDVI}_{\mathrm{SWIR}} > 0.75 \end{cases}$

Eq. (4)

$\mathrm{NDVI}_{\mathrm{SWIR}} = (\rho_5 - \rho_7)/(\rho_5 + \rho_7).$

In Eq. (4), ρ5 and ρ7 are the apparent reflectances of MODIS bands 5 and 7, respectively.

It is clear from Eq. (3) that $B_{1/7}^{\mathrm{NDVI}_{\mathrm{SWIR}}}$ ranges from 0.48 to 0.58 and is positively correlated with $\mathrm{NDVI}_{\mathrm{SWIR}}$. Moreover, according to a previously described geometric optics model,32 $\mathrm{NDVI}_{\mathrm{SWIR}}$ is proportional to the sensor zenith angle (i.e., $\mathrm{NDVI}_{\mathrm{SWIR}}$ increases with increasing sensor zenith angle for a given vegetation pixel), and B1/7 therefore increases with $\mathrm{NDVI}_{\mathrm{SWIR}}$.

As the Terra satellite is sun synchronous, θ0 and φ0 can be considered constants, such that θ and φ are the primary factors determining Θ. Observations collected by the Terra satellite can be separated into two groups based on (φ − φ0): the backward hemisphere, with (φ − φ0) from 0 to 90 deg or from 270 to 360 deg, and the forward hemisphere, with (φ − φ0) from 90 to 270 deg. Values of both Θ and B1/7 are typically smaller for forward hemisphere observations than for backward hemisphere observations; therefore, the pixels of the forward hemisphere observations are typically selected first. To sum up, selecting the pixel with the smallest B1/7 for a given period to represent a cloud-free pixel is equivalent to choosing the pixel for which the sum of NDVI_SWIR and Θ is the smallest.
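To make the dependence of the predicted B1/7 on NDVI_SWIR and Θ explicit, Eqs. (1), (3), and (4) can be combined as in the following Python sketch. This is our reading of the published parameterization, not the authors' code; array names are illustrative.

```python
import numpy as np

def predicted_b17(rho5, rho7, big_theta):
    """Predicted band-1/band-7 reflectance ratio from Eqs. (1), (3), and (4)."""
    ndvi_swir = (rho5 - rho7) / (rho5 + rho7)              # Eq. (4)
    base = 0.48 + 0.2 * (ndvi_swir - 0.25)                 # Eq. (3), middle branch
    base = np.where(ndvi_swir < 0.25, 0.48, base)          # Eq. (3), lower branch
    base = np.where(ndvi_swir > 0.75, 0.58, base)          # Eq. (3), upper branch
    return base + 0.002 * big_theta - 0.27                 # Eq. (1), Theta in degrees
```

Both day-to-day terms, NDVI_SWIR and Θ, enter with positive coefficients, which is consistent with the statement above that minimizing B1/7 amounts to minimizing their sum.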

2.2. Underlying Principles of Saturation

The HSI model scales color in hue, saturation, and intensity spaces. Hue characterizes the main color perceived by the observer; saturation represents the degree to which a pure color is diluted by white; and intensity is a measure of the brightness of the color. Thus, saturation decreases as the amount of white in the mix increases33 and can be defined as follows:

Eq. (5)

$S = 1 - \frac{3}{R + G + B}\min(R, G, B),$

where S represents saturation, and R, G, and B represent the red, green, and blue components, respectively; all variables lie between 0 and 1. The min(R, G, B) function returns the smallest of the three components.
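A minimal Python (NumPy) sketch of Eq. (5), with a small guard against all-zero pixels (the guard value is our own assumption):

```python
import numpy as np

def saturation(r, g, b):
    """HSI saturation of Eq. (5); r, g, b are band reflectances in [0, 1]."""
    total = np.maximum(r + g + b, 1e-6)   # avoid division by zero for dark pixels
    return 1.0 - 3.0 * np.minimum(np.minimum(r, g), b) / total
```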

Pixels can be classified as cloud-free, cloud shadow, or cloud pixels depending on the locations of the sensor, sun, and clouds (Fig. 1). The saturation of cloud-free, cloud shadow, and cloud pixels can be expressed as shown in Eqs. (6), (7), and (8), respectively.

Eq. (6)

$S_a = 1 - \frac{3}{r_R + r_G + r_B}\min(r_R, r_G, r_B),$

where Sa is the saturation of a cloud-free pixel, and rR, rG, and rB are the reflectances of MODIS bands 1, 4, and 3, respectively.

Eq. (7)

$S_b = 1 - \frac{3}{t_R r_R + t_G r_G + t_B r_B}\min(t_R r_R,\, t_G r_G,\, t_B r_B),$

where Sb represents the saturation of a cloud-shadow pixel, t is the transmissivity of the cloud, and tR, tG, and tB are the cloud transmissivities for MODIS bands 1, 4, and 3, respectively.

Eq. (8)

$S_c = 1 - \frac{3\min(t_R r_R + \rho_R,\ t_G r_G + \rho_G,\ t_B r_B + \rho_B)}{(t_R r_R + t_G r_G + t_B r_B) + \rho_R + \rho_G + \rho_B}$

Eq. (9)

$\rho = 1 - t,$

Fig. 1

A model map of cloud-free, cloud shadow, and cloud pixels.


In Eq. (8), Sc is the saturation of a cloud pixel, ρ is the reflectance of the cloud, and ρR, ρG, and ρB are the reflectances of the cloud for MODIS bands 1, 4, and 3, respectively.

The diameters of droplets and dust particles in clouds and fog are typically in the range 5 to 100 μm, which is much greater than the wavelengths of visible light. Therefore, nonselective scattering occurs when visible light passes through these particles,34 and the scattering coefficients are the same for the red, green, and blue bands; consequently, Eqs. (7) and (8) can be simplified to produce Eqs. (10) and (11), respectively.

Eq. (10)

$S_b = 1 - \frac{3\,t\min(r_R, r_G, r_B)}{t\,(r_R + r_G + r_B)}$

Eq. (11)

$S_c = 1 - \frac{3\,[t\min(r_R, r_G, r_B) + \rho]}{t\,(r_R + r_G + r_B) + 3\rho}.$

When the cloud is so thick that t approaches 0, Sb also approaches 0 (the measured value of Sb is effectively 0), and Sc tends to 0; this accords with our knowledge that clouds and fog are typically white or gray. Moreover, when thinner clouds or fewer aerosols are present in the atmosphere, more visible light reaches the Earth’s surface; in such cases, Sb equals Sa, and Sc remains smaller than Sa according to Eqs. (10) and (11).
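This behavior can be checked numerically. The sketch below evaluates Eqs. (6), (10), and (11) for a hypothetical water pixel under a semi-transparent cloud; the reflectance values and t = 0.5 are illustrative assumptions, not measurements from the paper.

```python
r_r, r_g, r_b = 0.02, 0.05, 0.08   # assumed clear-water reflectances (bands 1, 4, 3)
t = 0.5                            # assumed cloud transmissivity
rho = 1.0 - t                      # Eq. (9)

s_a = 1 - 3 * min(r_r, r_g, r_b) / (r_r + r_g + r_b)            # Eq. (6): 0.60
s_b = 1 - 3 * t * min(r_r, r_g, r_b) / (t * (r_r + r_g + r_b))  # Eq. (10): equals s_a
s_c = 1 - 3 * (t * min(r_r, r_g, r_b) + rho) \
        / (t * (r_r + r_g + r_b) + 3 * rho)                      # Eq. (11)

print(s_a, s_b, s_c)  # s_c is approx. 0.03, far below s_a: the cloud pixel looks white
```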

2.3. Algorithm for Integrated Cloud-Free Image Synthesis

We developed a combined algorithm for cloud-free image compositing by combining B1/7 with saturation. Our model adopts the land surface reflectance images for a specific period (MOD09GA) as the candidate dataset and calculates B1/7 from the surface reflectance data of bands 1 and 7 and the pixel saturation S from the surface reflectance data of bands 1, 4, and 3. If a pixel has a land surface mask (LSM) value of 1 (i.e., it is a land pixel), a B1/7 value in (0, 1], and B1 < 0.3, then the pixel exhibiting the lowest value of B1/7 over the period is selected as the cloud-free pixel; otherwise, the pixel exhibiting the greatest saturation is selected as the cloud-free pixel. The LSM is a land/water flag and can be extracted from the 1-km State QA Descriptions (16-bit) layer in MOD09GA. Our newly developed model can be used to extract an index image, HC_time, in which the value of each pixel corresponds to the date of the image least contaminated by cloud, selected from a set of multitemporal remote-sensing images obtained within a certain period. A cloud-free surface reflectance image (LSREF_clear) can then be built based on this index image. The compositing workflow is presented in Fig. 2.
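In code form, the selection rule reduces to a per-pixel argmin/argmax over the day axis. The following is a minimal sketch, assuming the eight daily MOD09GA grids have been stacked into arrays of shape (days, rows, cols) and that the LSM has already been decoded from the state QA layer; saturation() is the Eq. (5) sketch above, and all array names are illustrative.

```python
import numpy as np

def composite_index(b1, b3, b4, b7, lsm):
    """Return HC_time: for each pixel, the index of the day judged cloud-free."""
    b17 = b1 / b7
    sat = saturation(b1, b4, b3)            # Eq. (5) with R, G, B = bands 1, 4, 3
    # Land pixels with 0 < B1/7 <= 1 and B1 < 0.3 qualify for the B1/7 rule.
    use_b17 = (lsm == 1) & (b17 > 0) & (b17 <= 1) & (b1 < 0.3)
    # Days failing the test are masked with +inf so that argmin ignores them.
    idx_b17 = np.argmin(np.where(use_b17, b17, np.inf), axis=0)
    idx_sat = np.argmax(sat, axis=0)        # fallback: maximum saturation
    return np.where(use_b17.any(axis=0), idx_b17, idx_sat)
```

Given HC_time, LSREF_clear can be assembled per band with, e.g., np.take_along_axis(b1, hc_time[None], axis=0)[0].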

Fig. 2

A flow chart illustrating the application of the integrated compositing algorithm.


3. Data Processing and Analysis

3.1. Data Processing

We employed three samples in the present study, hereafter referred to as samples A, B, and C (Fig. 3). We used sample A to assess the feasibility of distinguishing cloud-free pixels for different land cover types, considering a particular region in northern China (34°53′ to 43°26′ N, 113°49′ to 122°41′ E) that includes Hebei Province, Beijing, Tianjin, Shandong Province, and Bohai Bay. Sample A includes nine land cover classes, divided according to the Land Cover Type Yearly L3 Global 500-m product (MCD12Q1) (Fig. 4).

Fig. 3

A map of the study area. The BaseMap is from ArcGIS Online.


Fig. 4

The land-cover types for sample A.


We used samples B and C to validate the algorithm for deserts, snowy regions, and tropical areas often covered by clouds. The region considered for sample B is located to the south of the Tarim basin and the north of the Tianshan Mountains; 74% and 10% of this sample can be considered desert and snowy areas, respectively. The study region for sample C, by contrast, is located in southern China, adjacent to the South China Sea, in an area often covered by cloud; thus, it is difficult to obtain cloud-free images for this region.

We employed three separate MODIS products, all of which can be downloaded freely using the Warehouse Inventory Search Tool (WIST): the Surface Reflectance 8-Day L3 Global 500 m (MOD09A1), the Surface Reflectance Daily L2G Global 1 km and 500 m (MOD09GA), and the MODIS Land Cover Type Yearly L3 Global 1 km SIN Grid product (MCD12Q1). MOD09A1 provides bands 1 to 7 at a resolution of 500 m in an 8-day gridded level-3 product in sinusoidal projection. Each MOD09A1 pixel contains the best possible level-2 gridded (L2G) observation for a given 8-day period, selected on the basis of high observational coverage, low view angle, the absence of clouds or cloud shadow, and aerosol loading. MOD09GA provides bands 1 to 7 in a daily gridded L2G product in sinusoidal projection, including 500-m reflectance values and 1-km observation and geolocation statistics. MCD12Q1 incorporates five different land cover classification schemes derived using a supervised decision-tree classification method; we used land cover type 4 (the MODIS-derived net primary production (NPP) scheme) and type 5. Further details of the data used are presented in Table 1.

Table 1

MODIS data list.

Study area          Product name  Date (YYYY-MM-DD)  Count
A (H26–27, V04–05)  MOD09A1       2007-10-16 to 23   1×4
                    MOD09GA       2007-10-16 to 23   8×4
                    MCD12Q1       2007               1×4
B (H24V05)          MOD09A1       2012-12-02 to 09   1×1
                    MOD09GA       2012-12-02 to 09   8×1
                    MCD12Q1       2012               1×1
C (H28V06)          MOD09A1       2013-04-15 to 22   1×1
                    MOD09GA       2013-04-15 to 22   8×1
                    MCD12Q1       2012               1×1
H and V are horizontal and vertical, respectively, in sinusoidal projection. Counts are given as n×m, where n and m represent the numbers of phases and images, respectively.

All preprocessing of these data was performed using the MODIS Reprojection Tool (MRT). For sample A, geographic transformations were conducted and the four tiles were mosaicked; finally, spatial subsets of the images were obtained using a 500 × 500 m mask of the study area. Samples B and C were converted from the hierarchical data format (HDF) to the tagged image file format (TIFF) in sinusoidal projection at a resolution of 463.31 × 463.31 m.

3.2. Analysis of B1/7 Ratio

To estimate the ability of B1/7 to identify cloud-free pixels according to the MODIS land cover type 4 for NPP, we reclassified the land cover into four groups: water, vegetation (including all vegetation cover classes), nonvegetation, and urban. We selected the samples according to the 1-km State QA Descriptions in MOD09GA ( https://lpdaac.usgs.gov/products/modis_products_table/mod09ga), which includes several features including cloud state and a cloud shadow mask. The scatter maps of B1 and B7 for these samples for different land cover types under cloud-free, cloud, and cloud-shadow conditions are illustrated in Fig. 5.
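For reference, this sample selection relies on unpacking bit fields from the state QA word. The sketch below shows one way to do this in Python; the bit positions follow our reading of the MOD09GA Collection 5 user guide and should be verified against the product version in use.

```python
def decode_state_qa(qa):
    """Unpack cloud state, cloud shadow, and land/water flag from state_1km."""
    cloud_state = qa & 0b11            # bits 0-1: 0 clear, 1 cloudy, 2 mixed
    cloud_shadow = (qa >> 2) & 0b1     # bit 2: 1 = cloud shadow present
    land_water = (qa >> 3) & 0b111     # bits 3-5: 1 = land
    return cloud_state, cloud_shadow, land_water
```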

Fig. 5

A scatter map for B1 and B7: (a) cloud-free pixels used for cloud comparison, (b) cloud pixels, (c) cloud-free pixels used for shadow comparison, and (d) cloud-shadow pixels. C, S, CFC, and CFS represent cloud, shadow, cloud free for the cloud-comparison pixel, and cloud free for the shadow-comparison pixel, respectively. The samples for (a) and (b) and for (c) and (d) were obtained from different daily images of the same pixels.


Some fundamental conclusions can be deduced from the comparison of Figs. 5(a) and 5(b). B1/7 was found to be greater than 1 for cloud pixels and very high (i.e., greater than that under cloud-free conditions) for water areas under cloudy conditions. For urban and vegetation areas under cloud-free conditions, B1/7 was found to be less than and greater than 1, respectively. Conversely, under cloudy conditions, we found B1/7 to be around 1 and greater than the value under cloud-free conditions. Comparison of Figs. 5(c) and 5(d) demonstrates that B1/7 of shadow pixels for land areas is around 1 and is higher than B1/7 under cloud-free conditions. However, for water areas, we found B1/7 to be less than 0 (primarily because B1 is less than 0), except under cloud-free conditions. In summary, our results demonstrate that B1/7 is typically higher for cloud pixels than for cloud-free pixels (Table 2).

Table 2

Slopes of the regression equation (intercept=0) for different land cover types under different cloud cover conditions.

Land cover     Cloud-free_shadow  Shadow  Cloud-free_cloud  Cloud
Vegetation     0.574              0.731   0.574             1.029
Water          0.735              —       2.565             4.135
Urban          0.604              0.708   0.719             1.679
Nonvegetation  0.599              0.742   0.642             0.701
Cloud          —                  —       0.549             1.611
Note: For water pixels under cloud-shadow conditions, we found B1 to be −0.01; therefore, we did not calculate the slope for this relationship. We also omitted the slope for cloud pixels under cloud-shadow conditions.

To investigate our results further, we plotted B1/7 values for cloud-free, cloud shadow, and cloud conditions for the different land cover types (Fig. 6) using the data presented in Table 2. We found B1/7 to be in the range 0 to 1 for land areas under cloud-free conditions; moreover, our results suggest that B1/7 is always lower under cloud-free conditions than under cloudy conditions. Comparison of shadow and cloud-free pixels demonstrates the existence of several pixels, in which B1/7 is lower for shadow pixels than for cloud-free pixels. However, no such pixels were observed over water areas. Therefore, in cases where a given pixel could be considered to be cloud-free, cloud shadow, or cloud depending on the observation time, we selected the period with the minimum value of B1/7 to represent the period with clear conditions.

Fig. 6

A comparison of the B1/7 values for cloud-free, cloud shadow, and cloud conditions for the following land cover types: (a) urban, (b) water, (c) vegetation (Veg), (d) nonvegetation (NoV). C, S, CFC, and CFS represent cloud, shadow, cloud free for the cloud-comparison pixel, and cloud free for the shadow-comparison pixel, respectively. The samples for (a) and (b) and for (c) and (d) were obtained from different daily images of the same pixels.


3.3. Analysis of Saturation

To investigate whether the pixel exhibiting the maximum saturation can be considered a cloud-free pixel, we calculated saturation from MODIS land surface reflectance using Eq. (5) (Fig. 7); in particular, we calculated R, G, and B from bands 1, 4, and 3, respectively. In general, we found the saturation of cloud pixels to be lower than that of cloud-free pixels for all types of land cover, and the saturation of cloud-shadow pixels over water and nonvegetation areas was also lower than that of cloud-free pixels. Moreover, our results suggest that the saturation of shadow pixels increases with increasing percentage of vegetation cover [Figs. 7(a), 7(c), and 7(d)]. For land cover classes including vegetation, urban, and nonvegetation, the saturation of cloud-free pixels is typically greater than that of cloud and shadow pixels. Based on the above analysis, we suggest that the maximum saturation can act as a good cloud-free pixel index for water and nonvegetation areas.

Fig. 7

A comparison of the saturation values for cloud-free, shadow, and cloud conditions for the following land cover types: (a) urban, (b) water, (c) vegetation (Veg), and (d) nonvegetation (NoV). C, S, CFC, and CFS represent cloud, shadow, cloud free for the cloud-comparison pixel, and cloud free for the shadow-comparison pixel, respectively. The samples are the same as those shown in Fig. 6.


4. Results and Discussion

The algorithm proposed here for compositing cloud-free images is similar to that of MOD09A1 in that both select cloud-free pixels from MODIS daily images and combine these pixels. We evaluated samples A, B, and C using both the MOD09A1 product and our compositing technique, qualitatively through visual evaluation and quantitatively through analysis of indicators.

4.1. Visual Evaluation

Based on visual evaluation, our proposed algorithm performs better in the selection of cloud-free pixels over water (including marine, coastal, lake, reservoir, and river environments) than the MOD09A1 product (Fig. 8). For example, MOD09A1 erroneously treated a cloud pixel as clear water [red circles in Figs. 8(a1) and 8(a2)]. Thus, our algorithm performs better than MOD09A1 in such cloud regions, although our method is less spatially consistent than MOD09A1 [e.g., the rectangle for sample B in Figs. 8(b1) and 8(b2)]. Moreover, for sample C [Figs. 8(c1) and 8(c2)], our algorithm removed the cloud in the lower left of the image, whereas MOD09A1 did not. Finally, the ocean color in the cloud-free images obtained using our method is more spatially homogeneous than that in the MOD09A1 product.

Fig. 8

MOD09A1 and cloud-free images composited by the integrated method for samples A, B, and C: (a1) MOD09A1 for sample A; (a2) cloud-free image composited by the integrated method for sample A; (b1) MOD09A1 for sample B; (b2) cloud-free image composited by the integrated method for sample B; (c1) MOD09A1 for sample C; and (c2) cloud-free image composited by the integrated method for sample C.


4.2. Indicator Evaluation

Residual cloud, cloud shadow, and mixed pixels can be found in the composite image, and quantitative evaluation of the relative proportions of these residual pixels is essential. The 1-km State QA Descriptions (16-bit) dataset of the MOD09GA product includes 11 indicators, including five quantitative indicators that are useful for the estimation of the quality of image composition: cloud state, internal cloud, cloud shadow, cirrus detected, and pixels adjacent to cloud.

In the original dataset of daily images, each pixel falls into one of three classes: cloudy on every day; cloud-free on every day; or cloudy on at least one and at most seven days of the 8-day period. For this third class, the accuracy of the composited cloud-free image was evaluated according to Eq. (12).

Eq. (12)

$P_i = M_i / M_{\mathrm{clear\_cloud}\_i},$

where i denotes a state (cloud-free, cloud, cloud shadow, cirrus, and so on), Mi represents the number of pixels of state i remaining in the cloud-free image, and M_clear_cloud_i represents the number of pixels exhibiting state i on at least one and at most seven days of the 8-day period. Based on Eq. (12), we conducted a comparative evaluation of the quality of the images produced using our compositing technique (Table 3).
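A minimal sketch of Eq. (12), assuming boolean flags of shape (days, rows, cols) that mark state i (e.g., cloud shadow) on each day, and the HC_time index from the compositing step; names are illustrative.

```python
import numpy as np

def residual_percentage(flags, hc_time):
    """P_i of Eq. (12), as a percentage, over the evaluable ('third class') pixels."""
    days = flags.shape[0]
    n_state = flags.sum(axis=0)
    evaluable = (n_state >= 1) & (n_state <= days - 1)   # state on 1..7 of 8 days
    chosen = np.take_along_axis(flags, hc_time[None], axis=0)[0]  # state on chosen day?
    m_i = np.count_nonzero(chosen & evaluable)           # M_i
    m_clear_cloud_i = np.count_nonzero(evaluable)        # M_clear_cloud_i
    return 100.0 * m_i / m_clear_cloud_i
```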

Table 3

A comparative evaluation of the quality of our composited cloud-free images (units: %).

Parameter name              Description  MOD_A  COM_A  MOD_B  COM_B  MOD_C  COM_C
Cloud state                 Clear        95.29  78.77  79.54  76.01  92.79  86.54
                            Cloudy        1.64   6.22  11.49  10.73   9.20  12.78
                            Mixed         7.98  27.74  30.60  35.50  19.33  29.37
Cloud shadow                              3.02   9.68  13.92  33.22  12.54  32.00
Cirrus detected             None         99.17  98.52  98.05  97.91  98.28  97.86
                            Small         0.87   3.28   0.22   0.66   3.45   7.15
                            Average       0.65   2.49   1.95   4.17   3.04   4.14
                            High          0.79   0.92   2.06   1.87   2.07   1.36
Internal cloud                            0.87   2.88   2.95   8.69   5.55   4.16
Pixel is adjacent to cloud                2.76   8.93   8.70  24.59  12.17  21.17
Note: MOD represents the MOD09A1 product; COM represents cloud-free images obtained using our method; and A, B, and C represent the three samples.

It is clear that our method performs well in removing cloud and cirrus pixels, but less well for cloud-shadow pixels. In the composite image for the coastal areas of southeastern China (sample C), the internal cloud percentage is lower than that in the MOD09A1 product, indicating that fewer residual cloud pixels remain in the composite image than in the MOD09A1 product. Moreover, slightly less high cirrus is found in the composite image than in the MOD09A1 product, although there is slightly more medium-height and small cirrus in the composite image. Nevertheless, residual small cirrus is less than 5% in samples A and B and 7.15% in sample C. More residual cloud-shadow pixels are present in the composite image than in the MOD09A1 product, with residual cloud shadow greater than 30% in northwestern China (sample B) and the coastal areas of southeastern China (sample C).

Table 4 presents average values of sensor zenith angle, scattering angle, and relative azimuth angle for samples A, B, and C for both the composite image and the MOD09A1 product. It is clear that, regardless of whether the B1/7 ratio or the maximum saturation method is adopted, the average sensor zenith angle is greater for the composite image than for the MOD09A1 product. This occurs because our method exhibits a preference for nonnadir pixels, whereas the MOD09A1 product exhibits a preference for nadir pixels. Furthermore, the average scattering angle for the composite image is less than that for the MOD09A1 product, and the average scattering angle obtained with the maximum saturation method is less than that obtained with the B1/7 ratio method. This may have occurred because our method mistakes cloud-shadow pixels for cloud-free pixels, particularly when the maximum saturation method is adopted. The average relative azimuth angle for the composite image is approximately 90 to 270 deg, whereas that of the MOD09A1 product is less than 45 deg or greater than 315 deg. This likely occurs because our method preferentially selects forward hemisphere pixels as cloud-free pixels, whereas the MOD09A1 product preferentially selects backward hemisphere pixels as cloud-free pixels. Thus, it is clear that our method differs markedly from the MOD09A1 algorithm.

Table 4

A comparative analysis of scattering angle, sensor zenith angle, and relative azimuth angle (units: deg).

Parameter name          Description  MOD_A   COM_A   MOD_B   COM_B   MOD_C   COM_C
Scatter angle           B1/7         128.33  122.79  117.36  119.68  119.11  116.75
                        Saturation   126.09  118.18  109.75  108.19  143.99  132.66
Sensor zenith angle     B1/7          11.29   34.65   15.12   44.57   35.61   38.29
                        Saturation    21.22   38.80   34.33   41.52   23.77   34.59
Relative azimuth angle  B1/7          34.72  177.59   16.76  216.05  342.86   34.69
                        Saturation    31.56  162.56   12.16  140.09   57.81   97.73

The selection of the maximum saturation pixel as a cloud-free pixel is likely the key factor causing the differences apparent in Table 4. The atmospheric extinction coefficient differs among the red, green, and blue bands; the longer the transmission path of light through the atmosphere, the greater the energy differences among the red, green, and blue components, and hence the greater the saturation. Therefore, pixels exhibiting large zenith angles are likely to be selected as cloud-free pixels, and the maximum saturation method is more likely to preferentially select such pixels than the B1/7 method. Moreover, for backward (i.e., back to the sun) observations, the transmission distance for reflected sunlight is short, and the pixel saturation is low; conversely, for forward (i.e., toward the sun) observations, the transmission distance for scattered sunlight is long, and the pixel saturation is high. Therefore, our method preferentially selects forward observation pixels as cloud-free pixels. Finally, the incident light of cloud-free pixels is primarily direct sunlight, whereas that of shadow pixels is primarily light scattered by surrounding objects; consequently, the saturation of a cloud-shadow pixel can exceed that of a cloud-free pixel, and shadow pixels are easily mistaken for cloud-free pixels.

To verify whether our results were affected by land-use type, we selected two indicators, internal cloud and cloud shadow, and used Eq. (12) to calculate Pi for residual cloud and shadow in the composited cloud-free images for the water, vegetation, urban, snow, and nonvegetation land cover categories (Table 5). For this, we used land cover data from the MCD12Q1 type 5 dataset. Water bodies, nonvegetated areas, urban and built-up areas, and snow and ice were extracted directly from the dataset; the vegetation category was obtained by combining the evergreen needleleaf trees, evergreen broadleaf trees, deciduous needleleaf trees, and deciduous broadleaf trees classes.

Table 5

Pi analysis for residual cloud and shadow in the composited cloud-free images for different land cover types.

Sample  Land cover type  Percentage of   Internal cloud (%)  Cloud shadow (%)
                         land cover (%)  MOD     COM         MOD     COM
A       Water            69.33            0.38    4.04        3.09   14.64
        Vegetation       20.13            0.61    1.77        3.09   10.78
        Built-up          8.49            0.79    3.05        1.60    8.13
        Nonvegetation     1.90            0.65    2.34        0.75   10.95
B       Water             0.54            2.14   12.02       13.21   46.07
        Vegetation        6.27            0.30    1.10        7.41   22.40
        Built-up          0.63            0.00    3.56        0.01   30.68
        Snow              8.18           27.23   43.29       53.50   48.26
        Nonvegetation    84.38            1.96    7.81       11.94   31.58
C       Water            51.11            4.68    7.18       11.55   35.32
        Vegetation       46.95            5.37    1.87       12.93   27.31
        Built-up          1.76            4.77    2.49       16.11   31.48
        Nonvegetation     0.16            6.43    5.08       13.99   30.11
Note: MOD represents the MOD09A1 product; and COM represents the cloud-free images obtained by our method.

It is clear that, in the cloud-free images for North China (sample A) and northwestern China (sample B), the percentage of residual cloud for the vegetation, built-up, bare ground, and other land use types is small but still greater than the corresponding percentages for the MOD09A1 product. In the cloud-free image for northwestern China (sample B), the percentage of residual cloud pixels in snow areas was 43.29% using our method, compared with 27.23% for the MOD09A1 product; thus, the MOD09A1 product performs significantly better than our method in this respect. In the cloud-free image for the coastal areas of southeastern China (sample C), considerably more residual cloud pixels remain over water than in the MOD09A1 product. Moreover, in northwestern China (sample B) and the coastal areas of southeastern China (sample C), the number of residual cloud-shadow pixels is relatively high, with percentages greater than 20%. In northwestern China (sample B), the composited image was produced primarily using the B1/7 method; here, the Pi values for vegetation were significantly lower than those for water, indicating that the B1/7 method performs better than the saturation method in removing cloud shadow.

5. Conclusions and Future Work

Our newly developed integrated cloud-free image compositing algorithm requires only bands 1, 3, 4, and 7 of MODIS images, all of which are readily available, and its complexity is lower than that of the MOD09A1 algorithm. The B1/7 method is suitable for extracting cloud-free pixels for land areas, whereas the saturation method is suitable for extracting cloud-free pixels for marine environments, coastal areas, and other water bodies. Our proposed method can eliminate cloud effects, although it produces slightly higher percentages of residual cloud pixels and pixels adjacent to cloud than the MOD09A1 product. Moreover, our proposed method performs well in terms of removing cirrus, particularly high cirrus. The accuracy of our method depends on land cover type: the algorithm performs best in the removal of cloud and shadow for vegetation, followed by urban areas, bare land, and water, but performs relatively poorly for snow and ice; conversely, the MOD09A1 product exhibits no such variation in performance with land cover type. Furthermore, our algorithm preferentially selects cloud-free pixels with larger sensor zenith angles and smaller scattering angles and prefers forward observations to backward observations, in stark contrast to the MOD09A1 product. However, our method has some limitations in compositing cloud-free images: it can only be used to process multiday images obtained in daytime, and it produces more residual cloud pixels than the MOD09A1 product, although it performs similarly to the MOD09A1 product in terms of removing cloud and shadow in ice and snow areas. These limitations require further improvement.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 41240004, 40871073, and 40971118), the National High Technology Research and Development Program (2009AA12Z225), the Key Basic Research Projects of Hebei Province (No. 08966712D), the Natural Science Foundation of Hebei Province (No. D2009000299), and the Doctor Fund projects of Hebei Normal University (No. 2011B03). MODIS data were downloaded from the Warehouse Inventory Search Tool (WIST), and the BaseMap of Fig. 3 was obtained free of charge from ArcGIS Online. The authors wish to thank the USGS and ESRI for the free data, and Professor Chen Hui and the staff of the Institute of Remote Sensing and Digital Earth of the Chinese Academy of Sciences for fruitful discussions. The authors also wish to thank the anonymous reviewers for their assistance in refining the manuscript.

References

1. L. L. Stowe, S. K. Vemury, and A. V. Rao, “AVHRR clear-sky radiation data sets at NOAA/NESDIS,” Adv. Space Res. 14(1), 113–116 (1994). http://dx.doi.org/10.1016/0273-1177(94)90358-1

2. C. Yang et al., “Study on cloud detection and cloud phase retrieval in Changchun area,” Remote Sens. Technol. Appl. 25(2), 245–250 (2010).

3. Q.-J. He, “A daytime cloud detection algorithm for FY-3A/VIRR data,” Int. J. Remote Sens. 32(21), 6811–6822 (2011). http://dx.doi.org/10.1080/01431161.2010.523730

4. Z.-H. Ma et al., “Cloud removing from MODIS based on spectrum analysis,” Remote Sens. Inf. (4), 3–8 (2009).

5. G. Poli et al., “Dynamic threshold cloud detection algorithm improvement for AVHRR and SEVIRI images,” in 2010 IEEE Int. Geoscience and Remote Sensing Symposium, 4146–4149 (2010).

6. A. Anzalone, F. Isgro, and D. Tegolo, “Combining fuzzy c-mean and normalized convolution for cloud detection in IR images,” in Fuzzy Logic and Applications, 140–147, Springer-Verlag, Berlin, Heidelberg (2009).

7. F. Aires et al., “A land and ocean microwave cloud classification algorithm derived from AMSU-A and -B, trained using MSG-SEVIRI infrared and visible observations,” Mon. Weather Rev. 139(8), 2347–2366 (2011). http://dx.doi.org/10.1175/MWR-D-10-05012.1

8. D. Qiu et al., “The study on removal of satellite image shadow based on SAM and MASK,” Remote Sens. Inf. (5), 12 (2010).

9. Y. Luo, A. P. Trishchenko, and K. V. Khlopenkov, “Developing clear-sky, cloud and cloud shadow mask for producing clear-sky composites at 250-meter spatial resolution for the seven MODIS land bands over Canada and North America,” Remote Sens. Environ. 112(12), 4167–4185 (2008). http://dx.doi.org/10.1016/j.rse.2008.06.010

10. I. V. Z. Marais, J. A. Du Preez, and W. H. Steyn, “An optimal image transform for threshold-based cloud detection using heteroscedastic discriminant analysis,” Int. J. Remote Sens. 32(6), 1713–1729 (2011). http://dx.doi.org/10.1080/01431161003621619

11. S. Mackie et al., “Generalized Bayesian cloud detection for satellite imagery. Part 1: technique and validation for night-time imagery over land and sea,” Int. J. Remote Sens. 31(10), 2573–2594 (2010). http://dx.doi.org/10.1080/01431160903051703

12. S. Mackie et al., “Generalized Bayesian cloud detection for satellite imagery. Part 2: technique and validation for daytime imagery,” Int. J. Remote Sens. 31(10), 2595–2621 (2010). http://dx.doi.org/10.1080/01431160903051711

13. A. Lyapustin, Y. Wang, and R. Frey, “An automatic cloud mask algorithm based on time series of MODIS measurements,” J. Geophys. Res. Atmos. 113, D16207 (2008). http://dx.doi.org/10.1029/2007JD009641

14. H. Tang, “MODIS09 image time series anomaly detection methods based on statistical characteristics of spectrum time-varying,” in Remote Sensing Quantitative Retrieve Algorithm Symposium (2010).

15. O. Hagolle et al., “A multi-temporal method for cloud detection, applied to FORMOSAT-2, VENμS, LANDSAT and SENTINEL-2 images,” Remote Sens. Environ. 114(8), 1747–1755 (2010). http://dx.doi.org/10.1016/j.rse.2010.03.002

16. A. D. Fraser, R. A. Massom, and K. J. Michael, “A method for compositing polar MODIS satellite images to remove cloud cover for landfast sea-ice detection,” IEEE Trans. Geosci. Remote Sens. 47(9), 3272–3282 (2009). http://dx.doi.org/10.1109/TGRS.2009.2019726

17. W. B. Rossow, L. C. Garder, and A. A. Lacis, “Global, seasonal cloud variations from satellite radiance measurements. Part I: sensitivity of analysis,” J. Clim. 2(5), 419–458 (1989). http://dx.doi.org/10.1175/1520-0442(1989)002<0419:GSCVFS>2.0.CO;2

18. W. B. Rossow and A. A. Lacis, “Global, seasonal cloud variations from satellite radiance measurements. Part II: cloud properties and radiative effects,” J. Clim. 3(11), 1204–1253 (1990). http://dx.doi.org/10.1175/1520-0442(1990)003<1204:GSCVFS>2.0.CO;2

19. K. T. Kriebel, R. W. Saunders, and G. Gesell, “Optical properties of clouds derived from fully cloudy AVHRR pixels,” Beitr. Phys. Atmos. 62(3), 165–171 (1989).

20. G. Gesell, “An algorithm for snow and ice detection using AVHRR data: an extension to the APOLLO software package,” Int. J. Remote Sens. 10(4–5), 897–905 (1989). http://dx.doi.org/10.1080/01431168908903929

21. L. L. Stowe et al., “Global distribution of cloud cover derived from NOAA/AVHRR operational satellite data,” Adv. Space Res. 11(3), 51–54 (1991). http://dx.doi.org/10.1016/0273-1177(91)90402-6

22. S. A. Ackerman et al., “Cloud detection with MODIS. Part II: validation,” J. Atmos. Oceanic Technol. 25(7), 1073–1086 (2008). http://dx.doi.org/10.1175/2007JTECHA1053.1

23. R. A. Frey et al., “Cloud detection with MODIS. Part I: improvements in the MODIS cloud mask for collection 5,” J. Atmos. Oceanic Technol. 25(7), 1057–1072 (2008). http://dx.doi.org/10.1175/2008JTECHA1052.1

24. A. Gafurov and A. Bárdossy, “Cloud removal methodology from MODIS snow cover product,” Hydrol. Earth Syst. Sci. 13(7), 1361–1373 (2009). http://dx.doi.org/10.5194/hess-13-1361-2009

25. J. Dozier et al., “Time–space continuity of daily maps of fractional snow cover and albedo from MODIS,” Adv. Water Resour. 31(11), 1515–1526 (2008). http://dx.doi.org/10.1016/j.advwatres.2008.08.011

26. Q. Xia et al., “Estimation of daily cloud-free, snow-covered areas from MODIS based on variational interpolation,” Water Resour. Res. 48(9), W09523 (2012). http://dx.doi.org/10.1029/2011WR011072

27. E. F. Vermote, S. Y. Kotchenova, and J. P. Ray, MODIS Surface Reflectance User’s Guide (April 2011). http://modis-sr.ltdri.org/products/MOD09_UserGuide_v1_3.pdf

28. Y. J. Kaufman and D. Tanre, “Atmospherically resistant vegetation index (ARVI) for EOS-MODIS,” IEEE Trans. Geosci. Remote Sens. 30(2), 261–270 (1992). http://dx.doi.org/10.1109/36.134076

29. E. F. Vermote and A. Vermeulen, “Atmospheric Correction Algorithm: Spectral Reflectances (MOD 09).” http://modis.gsfc.nasa.gov/data/atbd/atbd_mod08.pdf

30. Y. J. Kaufman et al., “The MODIS 2.1 μm channel: correlation with visible reflectance for use in remote sensing of aerosol,” IEEE Trans. Geosci. Remote Sens. 35(5), 1286–1300 (1997). http://dx.doi.org/10.1109/36.628795

31. R. C. Levy et al., “Second-generation operational algorithm: retrieval of aerosol properties over land from inversion of Moderate Resolution Imaging Spectroradiometer spectral reflectance,” J. Geophys. Res. 112, D13211 (2007). http://dx.doi.org/10.1029/2006JD007811

32. X. Xu, Remote Sensing Physics, Peking University Press, Beijing (2006).

33. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Publishing House of Electronics Industry, Beijing (2007).

34. Y. Zhao, The Principle and Application of Remote Sensing Application Analysis, Science Press, Beijing (2009).

Biography


Hai-bing Xiang is pursuing his PhD at the Institute of Remote Sensing and Digital Earth of the Chinese Academy of Sciences, Beijing, China. He received his bachelor’s degree in 2008 and his master’s degree in 2011 from the Department of Cartography and Geographic Information Systems at Hebei Normal University, Shijiazhuang, China. He has participated in a number of national and provincial Natural Science Foundation research projects, covering multiscale methods for calculating population density, detection and removal of stripes in MODIS images, algorithms for MODIS clear-sky image compositing, and ground temperature retrieval. Currently, he is engaged in research on the retrieval of forest structural parameters from remote sensing.


Jin-song Liu received his PhD degree in ecology from the College of Life Science, Hebei Normal University, Shijiazhuang, China, in 2011. Since 2011, he has been a professor of GIS in the College of Mathematics and Information Science, Hebei Normal University. From 1995 to 2006, he taught GIS in the Department of Geography, Hebei Normal University. Since 2007, he has worked in the Hebei Key Laboratory of Mathematic Calculation and Application, focusing on remote sensing and population geography, especially the applications of MODIS images and the population geographic information system of Hebei province. Currently, he is managing director and secretary of the Hebei Population Society, and he is also a reviewer for the Journal of Remote Sensing. His major research interests include quantitative remote sensing inversion, population geography, and GIS.


Chun-xiang Cao received her MSc degree from Kyoto Prefectural University, Japan, in 1996, and her DSc degree from Hiroshima University, Japan, in 2000. She is a director at the State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing and Digital Earth of the Chinese Academy of Sciences, Beijing, China. Her research fields include environmental health assessment and remote sensing of forest health, among others. She has managed many major research projects, including National Natural Science Foundation of China, National “973,” and National “863” projects. In the past five years, she has published 31 SCI articles and an academic monograph, “Diagnosis of environmental health by remote sensing.”


Min Xu is an assistant research fellow at the State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing and Digital Earth of the Chinese Academy of Sciences, Beijing, China. He received his doctoral degree from the Graduate School of the Chinese Academy of Sciences and his master’s degree from Wuhan University, China. He was named a “Triple-A” outstanding student at the Graduate School of the Chinese Academy of Sciences in 2009. His current research involves the application of RS and GIS technologies in public health; he has published 36 research papers, including 16 SCI-indexed papers.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Hai-bing Xiang, Jin-song Liu, Chun-xiang Cao, and Min Xu "Algorithms for Moderate Resolution Imaging Spectroradiometer cloud-free image compositing," Journal of Applied Remote Sensing 7(1), 073486 (5 November 2013). https://doi.org/10.1117/1.JRS.7.073486
Published: 5 November 2013
Keywords: Clouds; MODIS; Reflectivity; Vegetation; Composites; Short wave infrared radiation; Sensors
