1. Introduction

Clouds play an important role in determining weather, climate, and other short-term environmental change, and more than 50% of the Earth's surface is covered by cloud on any given day.1,2 Moreover, these clouds hamper observation of the land surface by the Moderate Resolution Imaging Spectroradiometer (MODIS); therefore, accurate cloud detection is a crucial component in eliminating the impact of clouds on remote-sensing products.3,4 Three types of information can be used to detect clouds: spectral, texture, and multitemporal information. Spectral information was first utilized to distinguish clouds from the Earth's surface using classification methods,5–8 in which thresholds in the value of one band or in the difference between two bands were considered.3,4,9 Subsequently, texture information was used to distinguish clouds from the surface based on thresholds in heteroscedasticity; such identification is possible because the reflectance of cloud is greater than that of the land surface, which results in greater variance along cloud edges, particularly over the oceans.10–12 With increases in the availability of remote-sensing data, researchers have started to use multitemporal information to distinguish clouds from the Earth's surface based on the differences between appropriate benchmarks and series of daily images; such methods assume that the reflectance of the Earth's surface is more stable than that of cloud-affected pixels, which vary frequently.13–16 Many satellite-based instruments, ranging from the Advanced Very High Resolution Radiometer (AVHRR) to MODIS, employ these methods.
For instance, the International Satellite Cloud Climatology Project (ISCCP)17,18 and the AVHRR Processing Scheme Over Cloud Land and Ocean (APOLLO)19,20 cloud detection algorithms employ only spectral information, whereas the Clouds from AVHRR (CLAVR) algorithm employs a series of spectral and spatial variability tests to detect clouds.1,21–23 Some researchers have also considered the use of temporal information for cloud detection.4,9,24–26 However, the computational power required for such methods increases with the complexity of the cloud detection algorithms adopted. Ultimately, obtaining composite cloud-free images based on the results of cloud detection has become a time-consuming process. This is evidenced particularly by the case of MOD09A1, published by the United States Geological Survey (USGS), in which only the pixels with the highest quality are retained; cloud-cover pixels, cloud-shadow pixels, high-angle pixels, and high-aerosol pixels are removed.27 Obtaining data on land surface characteristics is one of the primary challenges in land cover research. To address this, algorithms have been proposed in which cloud-free pixels are selected based on quantitative physical indexes, such as the maximum normalized difference vegetation index (NDVI)28 or the lowest MODIS band 3 surface reflectance obtained during a specific time period.29 Maximum NDVI has been used previously as an indicator in the gridded vegetation indices product (MOD13) for vegetation monitoring. However, the use of this index typically leads to the selection of the pixel with the greatest sensor angle. Conversely, the minimum reflectance value for MODIS band 3, which represents the blue band, tends to select shadow pixels as cloud-free pixels. Ma et al.
used a ratio of the MODIS band 1 and band 2 reflectances (ρ1 and ρ2, respectively) to select cloud-free pixels.4 However, this model was shown to be ineffective in distinguishing between clouds, nonvegetated land, and shadows on vegetated land. The Canada Centre for Remote Sensing (CCRS) developed a new compositing technique that employs six criteria to obtain pixels of six different classes, in addition to mixed pixels. The CCRS researchers also compared max(NDVI) and other single-index criteria with their own multicriteria scheme and reported some details regarding the effects of the bidirectional reflectance distribution function (BRDF);9 however, the CCRS compositing technique requires many criteria and classification steps and is associated with considerable uncertainty. The present study aims to obtain highly accurate composite cloud-free images by employing the fewest criteria possible. According to Kaufman et al.30 and Levy et al.,31 the ratio between the reflectances at 0.66 and 2.1 μm (corresponding to bands 1 and 7 of MODIS, respectively; hereafter ρ1/ρ7) is a good indicator of cloud cover for land areas. Moreover, as clouds are typically white or gray, their saturation is lower than that of water pixels in a hue–saturation–intensity (HSI) model, making saturation a good indicator of cloud-free pixels for water areas. Therefore, in the present study, we propose the ρ1/ρ7 ratio and saturation as indicators for the selection of cloud-free pixels on land and water, respectively, and present a set of algorithms for compositing cloud-free images for China.

2. Methods

2.1. Underlying Principles of the ρ1/ρ7 Ratio

Previous studies have shown that images at a wavelength of 2.1 μm are not affected considerably by atmospheric scattering;30 this wavelength can pass through all types of aerosol except dust. Moreover, for vegetated areas, the apparent reflectance at 2.1 μm is approximately twice that at 0.66 μm when a given region is cloud free.
Accordingly, the apparent reflectance at 2.1 μm is often used to estimate the surface reflectance at 0.66 μm.30 These wavelengths (i.e., 0.66 and 2.1 μm) are the central wavelengths of MODIS bands 1 and 7, respectively, and their apparent reflectances are typically expressed as ρ1 and ρ7. The reflectance ratio of these bands, ρ1/ρ7, for a given land surface has been shown to be positively correlated with the scattering angle and with the vegetation index derived from the MODIS near- and short-wave infrared bands:31

$$\frac{\rho_1}{\rho_7} = \mathrm{slope}(\mathrm{NDVI}_{\mathrm{SWIR}}) + 0.002\,\Theta - 0.27, \tag{1}$$

where Θ is the scattering angle, defined in Eq. (2); slope(NDVI_SWIR) is a piecewise function of the vegetation index for shortwave infrared radiation [Eq. (3)]; and NDVI_SWIR is the vegetation index derived from the MODIS near- and short-wave infrared bands [Eq. (4)]:

$$\Theta = \cos^{-1}\left[-\cos\theta_s\cos\theta_v + \sin\theta_s\sin\theta_v\cos(\varphi_s - \varphi_v)\right], \tag{2}$$

$$\mathrm{slope}(\mathrm{NDVI}_{\mathrm{SWIR}}) = \begin{cases} 0.48, & \mathrm{NDVI}_{\mathrm{SWIR}} < 0.25 \\ 0.48 + 0.2\,(\mathrm{NDVI}_{\mathrm{SWIR}} - 0.25), & 0.25 \le \mathrm{NDVI}_{\mathrm{SWIR}} \le 0.75 \\ 0.58, & \mathrm{NDVI}_{\mathrm{SWIR}} > 0.75, \end{cases} \tag{3}$$

$$\mathrm{NDVI}_{\mathrm{SWIR}} = \frac{\rho_5 - \rho_7}{\rho_5 + \rho_7}. \tag{4}$$

In Eq. (2), θs, θv, φs, and φv are the solar zenith angle, the sensor zenith angle, the azimuth of the sun, and the azimuth of the sensor, respectively. In Eq. (4), ρ5 and ρ7 are the apparent reflectances of MODIS bands 5 and 7, respectively. It is clear from Eq. (3) that slope(NDVI_SWIR) ranges from 0.48 to 0.58 and is correlated positively with NDVI_SWIR. Moreover, according to a previously described geometric optics model,32 NDVI_SWIR is proportional to the sensor zenith angle (i.e., NDVI_SWIR increases with increasing sensor zenith angle for a given vegetation pixel), and slope(NDVI_SWIR) therefore increases with θv. As the Terra satellite is sun synchronous, θs and φs can be considered constants, such that θv and φv are the primary factors determining ρ1/ρ7. Observations collected by the Terra satellite can be separated into two groups based on the relative azimuth (φs − φv): the backward hemisphere, with (φs − φv) from 0 to 90 deg or 270 to 360 deg, and the forward hemisphere, with (φs − φv) from 90 to 270 deg. Values of both Θ and ρ1/ρ7 are typically smaller for the forward hemisphere observations than for the backward hemisphere; therefore, pixels from forward hemisphere observations are typically selected first. To sum up, selecting the pixel with the smallest ρ1/ρ7 over a given period to represent a cloud-free pixel is equivalent to choosing the pixel for which the sum of slope(NDVI_SWIR) and 0.002Θ is smallest.
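The ratio model of Eqs. (1)–(4) can be sketched in Python. The numerical constants (the 0.48 to 0.58 piecewise slope and the 0.002Θ − 0.27 term) follow the parameterization of Levy et al.31 as reconstructed here; function names are illustrative, and the sketch should be treated as a reference implementation of the equations rather than operational code.

```python
import numpy as np

def scattering_angle(theta_s, theta_v, phi_s, phi_v):
    """Scattering angle Theta of Eq. (2); all angles in degrees."""
    ts, tv = np.radians(theta_s), np.radians(theta_v)
    dphi = np.radians(phi_s - phi_v)
    cos_theta = -np.cos(ts) * np.cos(tv) + np.sin(ts) * np.sin(tv) * np.cos(dphi)
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def ndvi_swir(rho5, rho7):
    """Eq. (4): vegetation index from MODIS bands 5 (1.24 um) and 7 (2.1 um)."""
    return (rho5 - rho7) / (rho5 + rho7)

def slope_ndvi_swir(v):
    """Eq. (3): piecewise slope, clipped to the range 0.48 to 0.58."""
    return np.clip(0.48 + 0.2 * (v - 0.25), 0.48, 0.58)

def ratio_b(theta_s, theta_v, phi_s, phi_v, rho5, rho7):
    """Eq. (1): predicted cloud-free rho1/rho7 ratio."""
    theta = scattering_angle(theta_s, theta_v, phi_s, phi_v)
    return slope_ndvi_swir(ndvi_swir(rho5, rho7)) + 0.002 * theta - 0.27
```

Because the sun–view geometry enters only through the additive 0.002Θ term, choosing the observation that minimizes ρ1/ρ7 jointly favors low scattering angles and low vegetation-index slopes, as discussed above.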
2.2. Underlying Principles of Saturation

The HSI model describes color in terms of hue, saturation, and intensity. Hue characterizes the dominant color received by the observer, saturation represents the degree to which a pure color is diluted by white, and intensity is a measure of the brightness of the color. Thus, saturation decreases as the amount of white in the mix increases33 and can be defined as

$$S = 1 - \frac{3\min(R, G, B)}{R + G + B}, \tag{5}$$

where S represents saturation; R, G, and B represent the red, green, and blue components, respectively, all lying between 0 and 1; and min() selects the smallest of its arguments.

Pixels can be classified as cloud-free, cloud-shadow, or cloud pixels depending on the locations of the sensor, sun, and clouds (Fig. 1). The saturation of cloud-free, cloud-shadow, and cloud pixels can be expressed as shown in Eqs. (6), (7), and (8), respectively:

$$S_g = 1 - \frac{3\min(\rho_1, \rho_4, \rho_3)}{\rho_1 + \rho_4 + \rho_3}, \tag{6}$$

where S_g is the saturation of a cloud-free pixel, and ρ1, ρ4, and ρ3 are the reflectances of MODIS bands 1, 4, and 3, respectively;

$$S_s = 1 - \frac{3\min(t_1\rho_1, t_4\rho_4, t_3\rho_3)}{t_1\rho_1 + t_4\rho_4 + t_3\rho_3}, \tag{7}$$

where S_s represents the saturation of a cloud-shadow pixel, t is the transmissivity of the cloud, and t1, t4, and t3 are the transmissivity values of the cloud for MODIS bands 1, 4, and 3, respectively; and

$$S_c = 1 - \frac{3\min(t_1\rho_1 + \rho_{c1},\ t_4\rho_4 + \rho_{c4},\ t_3\rho_3 + \rho_{c3})}{t_1\rho_1 + t_4\rho_4 + t_3\rho_3 + \rho_{c1} + \rho_{c4} + \rho_{c3}}. \tag{8}$$

In Eq. (8), S_c is the saturation of a cloud pixel, ρc is the reflectance of cloud, and ρc1, ρc4, and ρc3 are the reflectances of the cloud for MODIS bands 1, 4, and 3, respectively. The diameters of droplets and dust in clouds and fog are typically in the range 5 to 100 μm, which is much greater than the wavelengths of visible light; therefore, nonselective scattering occurs when visible light passes through these particles.34 The scattering coefficients are thus the same for the red, green, and blue bands,

$$t_1 = t_4 = t_3 = t, \qquad \rho_{c1} = \rho_{c4} = \rho_{c3} = \rho_c, \tag{9}$$

so Eqs. (7) and (8) can be simplified to produce Eqs. (10) and (11), respectively:

$$S_s = 1 - \frac{3\min(\rho_1, \rho_4, \rho_3)}{\rho_1 + \rho_4 + \rho_3} = S_g, \tag{10}$$

$$S_c = 1 - \frac{3\left[t\min(\rho_1, \rho_4, \rho_3) + \rho_c\right]}{t(\rho_1 + \rho_4 + \rho_3) + 3\rho_c}. \tag{11}$$
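As a quick check of Eq. (5), saturation can be computed directly from three band reflectances. The following minimal sketch uses illustrative reflectance values, not values from the study data:

```python
def saturation(r, g, b):
    """HSI saturation of Eq. (5): S = 1 - 3*min(R, G, B)/(R + G + B)."""
    total = r + g + b
    if total == 0:
        return 0.0  # pure black: saturation is undefined, treated here as 0
    return 1.0 - 3.0 * min(r, g, b) / total

# For MODIS, R, G, and B are the band 1, 4, and 3 reflectances [Eq. (6)].
s_clear = saturation(0.30, 0.20, 0.10)  # illustrative cloud-free land pixel
s_cloud = saturation(0.80, 0.78, 0.79)  # bright, nearly white cloud pixel
# s_cloud is far smaller than s_clear, consistent with the low saturation
# of white or gray clouds discussed above.
```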
When the cloud is so thick that t approaches 0, the transmitted surface reflectance tρ also approaches 0; in such cases, the observed reflectance reduces to ρc in each band, and S_c tends to 0 according to Eq. (11), in accordance with our knowledge that clouds and fog are typically white or gray. Moreover, when thinner clouds or fewer aerosols are present in the atmosphere, more visible light reaches the Earth's surface; in such cases, S_s will be equal to S_g, and S_c will be smaller than S_g according to Eqs. (10) and (11).

2.3. Algorithm for Integrated Cloud-Free Image Synthesis

We developed a combined algorithm for cloud-free image compositing by combining the ρ1/ρ7 ratio with saturation. Our model adopts the land surface reflectance images for a specific period (MOD09GA) as the candidate dataset, calculating ρ1/ρ7 from the surface reflectance of bands 1 and 7 and pixel saturation from the surface reflectance of bands 1, 4, and 3. For a pixel whose land surface mask (LSM) value is 1 (i.e., a land pixel) and whose ρ1/ρ7 lies in (0, 1], the observation exhibiting the lowest value of ρ1/ρ7 is selected as the cloud-free pixel; otherwise, the observation exhibiting the greatest saturation is selected. The LSM is a land/water flag and can be extracted from the 1-km State QA Descriptions (16-bit) layer in MOD09GA. Our model extracts an index image, HC_time, in which the value of each pixel corresponds to the date of the image least contaminated by cloud, selected from a set of multitemporal remote-sensing images obtained within a certain period. A cloud-free surface reflectance image (LSREF_clear) can then be built based on this index image. The compositing workflow is presented in Fig. 2.

3. Data Processing and Analysis

3.1. Data Processing

We employed three samples in the present study, hereafter referred to as samples A, B, and C (Fig. 3).
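The per-pixel selection rule of the compositing algorithm (Sec. 2.3) can be sketched as follows, assuming the daily band reflectances are stacked into NumPy arrays of shape (days, rows, cols); the array and function names are illustrative and not part of the MOD09GA product:

```python
import numpy as np

def composite_index(rho1, rho3, rho4, rho7, lsm):
    """Return HC_time: per-pixel index of the day least contaminated by cloud.

    rho1, rho3, rho4, rho7: float arrays of shape (n_days, rows, cols)
    lsm: bool array of shape (rows, cols), True for land pixels
    """
    ratio = rho1 / rho7                 # the rho1/rho7 ratio of Sec. 2.1
    rgb = np.stack([rho1, rho4, rho3])  # R, G, B = bands 1, 4, 3
    sat = 1.0 - 3.0 * rgb.min(axis=0) / rgb.sum(axis=0)  # Eq. (5)

    # Land: smallest ratio wins; observations with ratio outside (0, 1]
    # are excluded from the minimum.
    ratio = np.where((ratio > 0) & (ratio <= 1), ratio, np.inf)
    best_land = ratio.argmin(axis=0)
    # Water: greatest saturation wins.
    best_water = sat.argmax(axis=0)
    return np.where(lsm, best_land, best_water)  # HC_time index image
```

Given the resulting HC_time index `hc`, a cloud-free band (LSREF_clear) can be assembled with `np.take_along_axis(band, hc[None], axis=0)[0]`.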
We used sample A to assess the feasibility of distinguishing cloud-free pixels for different land cover types, considering a region in northern China (34°53′ to 43°26′ N, 113°49′ to 122°41′ E) that includes Hebei Province, Beijing, Tianjin, Shandong Province, and Bohai Bay. Sample A includes nine land cover classes, divided according to the Land Cover Type Yearly L3 Global 500-m product (MCD12Q1) (Fig. 4). We used samples B and C to validate the algorithm for deserts, snowy regions, and tropical areas often covered by clouds. The region considered for sample B is located to the south of the Tarim basin and the north of the Tianshan Mountains; 74% and 10% of this sample can be considered desert and snowy areas, respectively. The study region for sample C is located in southern China, adjacent to the South China Sea, in an area often covered by cloud; it is therefore difficult to obtain cloud-free images for this region. We employed three separate MODIS products, all of which can be downloaded freely using the Warehouse Inventory Search Tool (WIST): the Surface Reflectance 8-Day L3 Global 500 m (MOD09A1), the Surface Reflectance Daily L2G Global 1 km and 500 m (MOD09GA), and the Land Cover Type Yearly L3 Global 1 km SIN Grid product (MCD12Q1). MOD09A1 provides bands 1 to 7 at a resolution of 500 m in an 8-day gridded level-3 product in sinusoidal projection. Each MOD09A1 pixel contains the best possible level-2 gridded (L2G) observation for a given 8-day period, selected on the basis of high observation coverage, low view angle, the absence of clouds or cloud shadow, and aerosol loading. MOD09GA provides bands 1 to 7 in a daily gridded L2G product in sinusoidal projection, including 500-m reflectance values and 1-km observation and geolocation statistics. MCD12Q1 incorporates five different land cover classification schemes, derived using a supervised decision-tree classification method.
We used land cover type 4, the MODIS-derived net primary production (NPP) scheme, and type 5 of MCD12Q1. Further details of the data used are presented in Table 1.

Table 1. MODIS data list.
Note: H and V are the horizontal and vertical tile indices, respectively, in sinusoidal projection. Counts are given as n×m, where n and m represent the numbers of phases and images, respectively.

All preprocessing of these data was performed using the MODIS Reprojection Tool (MRT). For sample A, geographic transformations were conducted and the four tiles of data were mosaicked; finally, spatial subsets of the images were obtained using a mask for the study area. Samples B and C were converted from the hierarchical data format (HDF) to the tagged image file format (TIFF) in sinusoidal projection at the original spatial resolution.

3.2. Analysis of the ρ1/ρ7 Ratio

To estimate the ability of ρ1/ρ7 to identify cloud-free pixels according to MODIS land cover type 4 for NPP, we reclassified the land cover into four groups: water, vegetation (including all vegetation cover classes), nonvegetation, and urban. We selected the samples according to the 1-km State QA Descriptions in MOD09GA ( https://lpdaac.usgs.gov/products/modis_products_table/mod09ga), which include several features such as cloud state and a cloud shadow mask. Scatter maps of ρ1 against ρ7 for these samples for different land cover types under cloud-free, cloud, and cloud-shadow conditions are illustrated in Fig. 5. Some fundamental conclusions can be deduced from the comparison of Figs. 5(a) and 5(b). ρ1/ρ7 was found to be greater than 1 for cloud pixels and very high (i.e., greater than under cloud-free conditions) for water areas under cloudy conditions. For urban and vegetation areas under cloud-free conditions, ρ1/ρ7 was found to be less than and greater than 1, respectively; under cloudy conditions, by contrast, we found ρ1/ρ7 to be around 1 and greater than the corresponding cloud-free values. Comparison of Figs. 5(c) and 5(d) demonstrates that ρ1/ρ7 for shadow pixels over land is around 1 and higher than under cloud-free conditions.
However, for water areas, we found ρ1/ρ7 to be less than 0 (primarily because ρ7 is less than 0), except under cloud-free conditions. In summary, our results demonstrate that ρ1/ρ7 is typically higher for cloud pixels than for cloud-free pixels (Table 2).

Table 2. Slopes of the regression equation (intercept = 0) for different land cover types under different cloud cover conditions.
Note: For water pixels under cloud-shadow conditions, we found ρ7 to be −0.01; therefore, we did not calculate the slope for this relationship. We also omitted the calculation of the slope for cloud pixels in cloud shadow.

To investigate our results further, we plotted ρ1/ρ7 values for cloud-free, cloud-shadow, and cloud conditions for the different land cover types (Fig. 6) using the data presented in Table 2. We found ρ1/ρ7 to be in the range 0 to 1 for land areas under cloud-free conditions; moreover, our results suggest that ρ1/ρ7 is always lower under cloud-free conditions than under cloudy conditions. Comparison of shadow and cloud-free pixels demonstrates the existence of several pixels for which ρ1/ρ7 is lower for shadow pixels than for cloud-free pixels; however, no such pixels were observed over water areas. Therefore, in cases where a given pixel could be cloud-free, cloud shadow, or cloud depending on the observation time, we selected the period with the minimum value of ρ1/ρ7 to represent the period with clear conditions.

3.3. Analysis of Saturation

To investigate whether the pixel exhibiting the maximum saturation can be considered a cloud-free pixel, we calculated saturation from MODIS land surface reflectance using Eq. (5) (Fig. 7); in particular, R, G, and B were taken from bands 1, 4, and 3, respectively. In general, we found the saturation of cloud pixels to be lower than that of cloud-free pixels for all types of land cover, although the saturation of cloud-shadow pixels over water and nonvegetation areas was also found to be lower than that of cloud-free pixels. Moreover, our results suggest that the saturation of shadow pixels increases with increasing percentage of vegetation cover [Figs. 7(a), 7(c), and 7(d)]. For land cover classes including vegetation, urban, and nonvegetation, the saturation of cloud-free pixels is typically greater than that of cloud and shadow pixels.
Based on the above analysis, we suggest that the maximum saturation can act as a good cloud-free pixel index for water and nonvegetation areas.

4. Results and Discussion

The algorithm proposed here for compositing cloud-free images is similar to that used for MOD09A1, in that both select cloud-free pixels from MODIS daily images and combine these pixels. We evaluated samples A, B, and C using both the MOD09A1 method and our compositing technique, qualitatively through visual evaluation and quantitatively through analysis of indicators.

4.1. Visual Evaluation

Based on visual evaluation, our proposed algorithm performs better than the MOD09A1 product in selecting cloud-free pixels over water, including marine, coastal, lake, reservoir, and river environments (Fig. 8). For example, MOD09A1 erroneously retained a cloud pixel over water under clear-sky conditions [red circles in Figs. 8(a1) and 8(a2)]. Thus, our algorithm performs better than MOD09A1 in such cloud regions, although our method is less spatially consistent than MOD09A1 [e.g., the rectangle for sample B in Figs. 8(b1) and 8(b2)]. Moreover, for sample C [Figs. 8(c1) and 8(c2)], our algorithm removed the cloud in the lower left of the image, whereas MOD09A1 did not. Finally, the ocean color in the cloud-free images obtained using our method is more spatially homogeneous than that in the MOD09A1 product.

4.2. Indicator Evaluation

Residual cloud, cloud-shadow, and mixed pixels can be found in the composite image, and quantitative evaluation of the relative proportions of these residual pixels is essential. The 1-km State QA Descriptions (16-bit) dataset of the MOD09GA product includes 11 indicators, of which five are useful for estimating the quality of image composition: cloud state, internal cloud, cloud shadow, cirrus detected, and pixels adjacent to cloud.
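These indicators are packed as bit fields in the 16-bit state_1km value. The following sketch unpacks them; the bit positions are assumed from the MOD09GA user guide and should be verified against the current product documentation before use:

```python
def decode_state_qa(qa):
    """Split a 16-bit MOD09GA state_1km value into the indicators used here.

    Bit positions assumed from the MOD09GA user guide; verify against the
    current product documentation before relying on them.
    """
    return {
        "cloud_state":       qa & 0b11,          # bits 0-1: 0 clear, 1 cloudy, 2 mixed, 3 not set
        "cloud_shadow":      (qa >> 2) & 0b1,    # bit 2
        "land_water":        (qa >> 3) & 0b111,  # bits 3-5: land/water flag (the LSM)
        "cirrus":            (qa >> 8) & 0b11,   # bits 8-9: 0 none, 1 small, 2 average, 3 high
        "internal_cloud":    (qa >> 10) & 0b1,   # bit 10
        "adjacent_to_cloud": (qa >> 13) & 0b1,   # bit 13
    }
```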
In the original dataset of daily images, each pixel falls into one of three classes: cloudy on every day; cloud-free on every day; or cloudy on at least one and at most seven days of the 8-day period. For this third class, the accuracy of the composited cloud-free image can be evaluated according to

$$P_i = \frac{n_i}{N_i} \times 100\%, \tag{12}$$

where i denotes a state (cloud-free, cloud, cloud shadow, cirrus, and so on), n_i represents the number of pixels of state i remaining in the composited cloud-free image, and N_i represents the number of pixels exhibiting state i on at least one and at most seven days of the 8-day period. Based on Eq. (12), we conducted a comparative evaluation of the quality of the images produced using our compositing technique (Table 3).

Table 3. A comparative evaluation of the quality of our composited cloud-free images (units: %).
Note: MOD represents the MOD09A1 product; COM represents the cloud-free images obtained using our method; and A, B, and C represent the three samples.

It is clear that our method performs well in removing cloud and cirrus pixels but less well for cloud-shadow pixels. In the composite image, the internal-cloud percentage for the coastal areas of southeastern China (sample C) is lower than that in the MOD09A1 product, indicating that fewer residual cloud pixels remain in the composite image than in the MOD09A1 product. Moreover, slightly less high cirrus can be found in the composite image than in the MOD09A1 product, although there is slightly more medium and small cirrus in the composited image; nevertheless, residual small cirrus is less than 5% in samples A and B and 7.15% in sample C. More residual cloud-shadow pixels are present in the composite image than in the MOD09A1 product, with residual cloud shadow greater than 30% in northwestern China (sample B) and the coastal areas of southeastern China (sample C). Table 4 presents average values of the sensor zenith angle, scattering angle, and relative azimuth angle for samples A, B, and C for both the composite image and the MOD09A1 product. It is clear that, regardless of whether the ρ1/ρ7 ratio or the maximum saturation method is adopted, the average sensor zenith angle is greater for the composite image than for the MOD09A1 product. This occurs because our method prefers nonnadir pixels, whereas the MOD09A1 product prefers nadir pixels. Furthermore, the average scattering angle for the composite image is less than that for the MOD09A1 product, and the average scattering angle obtained with the maximum saturation method is less than that obtained with the ρ1/ρ7 method. This may occur because our method mistakes cloud-shadow pixels for cloud-free pixels, particularly when the maximum saturation method is adopted.
The average relative azimuth angle for the composite image is approximately 90 to 270 deg, whereas that of the MOD09A1 product is less than 45 deg or greater than 315 deg. This likely occurs because our method preferentially selects forward hemisphere pixels as cloud-free pixels, whereas the MOD09A1 product preferentially selects backward hemisphere observations. Thus, it is clear that our method exhibits obvious differences from the MOD09A1 algorithm.

Table 4. A comparative analysis of scattering angle, sensor zenith angle, and relative azimuth angle (units: deg).
The selection of the maximum-saturation pixel as a cloud-free pixel is likely the key factor causing the differences apparent in Table 4. The atmospheric extinction coefficients differ for the red, green, and blue bands; the longer the transmission path of light through the atmosphere, the greater the energy differences among the red, green, and blue bands, and hence the greater the saturation. Therefore, pixels exhibiting large zenith angles are likely to be selected as cloud-free pixels, and the maximum saturation method is more likely than the ρ1/ρ7 method to preferentially select such pixels. When backward (i.e., back to the sun) observations are used, the transmission distance for reflected sunlight is short, and the pixel saturation is low; conversely, when forward (i.e., toward the sun) observations are used, the transmission distance for scattered sunlight is long, and the pixel saturation is high. Therefore, our method preferentially selects forward observation pixels as cloud-free pixels. The incident light of cloud-free pixels is primarily reflected sunlight, whereas that of shadow pixels is primarily light scattered by the surrounding objects; therefore, the saturation of cloud-shadow pixels is greater than that of cloud-free pixels, and shadow pixels can easily be mistaken for cloud-free pixels. To verify whether our results are affected by land cover type, we selected two indicators, internal cloud and cloud shadow, and used Eq. (12) to calculate P_i for residual cloud and shadow in the composited cloud-free images for the water, vegetation, urban, snow, and nonvegetation land cover categories (Table 5). For this, we used land cover data from the MCD12Q1 type 5 dataset.
Water bodies, nonvegetated areas, urban and built-up areas, and snow and ice were extracted directly from the dataset; the vegetation category was obtained by combining the evergreen needleleaf trees, evergreen broadleaf trees, deciduous needleleaf trees, and deciduous broadleaf trees classes.

Table 5. P_i analysis for residual cloud and shadow in the composited cloud-free images for different land cover types.
Note: MOD represents the MOD09A1 product; COM represents the cloud-free images obtained using our method.

It is clear that, in the cloud-free images for northern China (sample A) and northwestern China (sample B), the percentage of residual cloud for the vegetation, built-up, bare-ground, and other land use types is small but still greater than the corresponding percentages for the MOD09A1 product. In the cloud-free image for northwestern China (sample B), the percentage of residual cloud pixels in snow areas was found to be 43.29% using our method, compared with 27.3% for the MOD09A1 product; in this respect, the MOD09A1 product clearly performs significantly better than our method. In the cloud-free image of the coastal areas of southeastern China (sample C), considerably more residual cloud pixels remain over water than in the MOD09A1 product. Moreover, in northwestern China (sample B) and the coastal areas of southeastern China (sample C), the number of residual cloud-shadow pixels is relatively high, with percentages greater than 20%. In northwestern China (sample B), the composited image was produced primarily using the ρ1/ρ7 method; here, P_i values for vegetation were found to be significantly lower than those for water, indicating that the ρ1/ρ7 method performs better than the saturation method in removing cloud shadow.

5. Conclusions and Future Work

Our newly developed integrated cloud-free image compositing algorithm requires only bands 1, 3, 4, and 7 of MODIS images, all of which are readily available, and its complexity is lower than that of the MOD09A1 algorithm. The ρ1/ρ7 method is suitable for extracting cloud-free pixels for land areas, whereas the saturation method is suitable for extracting cloud-free pixels for marine environments, coastal areas, and other water bodies.
Our proposed method can eliminate cloud effects, although it produces slightly higher percentages of residual cloud pixels and pixels adjacent to cloud in the cloud-free images than the MOD09A1 product. Moreover, our proposed method performs well in removing cirrus, particularly high cirrus. The accuracy of our method depends on land cover type: the algorithm performs best in the removal of cloud and shadow for vegetation, followed by urban areas, bare land, and water, but performs relatively poorly for snow and ice. The MOD09A1 product exhibits no such variation in performance with land cover type. Moreover, our algorithm preferentially selects cloud-free pixels with larger sensor zenith angles and smaller scattering angles and prefers forward observations to backward observations, in stark contrast to the MOD09A1 product. However, our method has some limitations in compositing cloud-free images: it can only be used to process multiday images obtained in daytime, and it produces more residual cloud and shadow pixels than the MOD09A1 product in ice and snow areas. These limitations require further work.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (Nos. 41240004, 40871073, and 40971118), the National High Technology Research and Development Program (2009AA12Z225), the Key Basic Research Projects of Hebei Province (No. 08966712D), the Natural Science Foundation of Hebei Province (No. D2009000299), and the Doctor Fund Projects of Hebei Normal University (No. 2011B03). MODIS data were downloaded from the Warehouse Inventory Search Tool (WIST), and the basemap of Fig. 1 was obtained free of charge from ArcGIS Online.
The authors wish to express their thanks to USGS and ESRI company for the free data, Chen Hui (professor) and the staff of the Institute of Remote Sensing and Digital Earth of the Chinese Academy of Sciences for fruitful discussions. The authors also wish to express their thanks to the anonymous reviewers for their assistance in refining the manuscript. ReferencesL. L. StoweS. K. VemuryA. V. Rao,
“AVHRR clear-sky radiation data sets at NOAA/NESDIS,”
Adv. Space Res., 14
(1), 113
–116
(1994). http://dx.doi.org/10.1016/0273-1177(94)90358-1 ASRSDW 0273-1177 Google Scholar
C. Yanget al.,
“Study on cloud detection and cloud phase retrieval in Changchun area,”
Remote Sens. Technol. Appl., 25
(2), 245
–250
(2010). Google Scholar
Q.-J. He,
“A daytime cloud detection algorithm for FY-3A/VIRR data,”
Int. J. Remote Sens., 32
(21), 6811
–6822
(2011). http://dx.doi.org/10.1080/01431161.2010.523730 IJSEDK 0143-1161 Google Scholar
Z.-H. Maet al.,
“Cloud removing from MODIS based on spectrum analysis,”
Remote Sens. Inf.,
(4), 3
–8
(2009). Google Scholar
G. Poliet al.,
“Dynamic threshold cloud detection algorithm improvement for AVHRR and SEVIRI images,”
in 2010 IEEE Int. Geoscience and Remote Sensing Symposium,
4146
–4149
(2010). Google Scholar
A. AnzaloneF. IsgroD. Tegolo,
“Combining fuzzy c-mean and normalized convolution for cloud detection in IR images,”
Fuzzy Logic and Applications, 140
–147 Springer-Verlag, Berlin, Heidelberger
(2009). Google Scholar
F. Aireset al.,
“A land and ocean microwave cloud classification algorithm derived from AMSU-A and -B, trained using MSG-SEVIRI infrared and visible observations,”
Mon. Weather Rev., 139
(8), 2347
–2366
(2011). http://dx.doi.org/10.1175/MWR-D-10-05012.1 MWREAB 0027-0644 Google Scholar
D. Qiuet al.,
“The study on removal of satellite image shadow based on SAM and MASK,”
Remote Sens. Inf.,
(5), 12
(2010). Google Scholar
Y. LuoA. P. TrishchenkoK. V. Khlopenkov,
“Developing clear-sky, cloud and cloud shadow mask for producing clear-sky composites at 250-meter spatial resolution for the seven MODIS land bands over Canada and North America,”
Remote Sens. Environ., 112
(12), 4167
–4185
(2008). http://dx.doi.org/10.1016/j.rse.2008.06.010 RSEEA7 0034-4257 Google Scholar
I. V. Z. MaraisJ. A. Du PreezW. H. Steyn,
“An optimal image transform for threshold-based cloud detection using heteroscedastic discriminant analysis,”
Int. J. Remote Sens., 32
(6), 1713
–1729
(2011). http://dx.doi.org/10.1080/01431161003621619 IJSEDK 0143-1161 Google Scholar
S. Mackieet al.,
“Generalized Bayesian cloud detection for satellite imagery. Part 1: technique and validation for night-time imagery over land and sea,”
Int. J. Remote Sens., 31
(10), 2573
–2594
(2010). http://dx.doi.org/10.1080/01431160903051703 IJSEDK 0143-1161 Google Scholar
S. Mackieet al.,
“Generalized Bayesian cloud detection for satellite imagery. Part 2: technique and validation for daytime imagery,”
Int. J. Remote Sens., 31
(10), 2595
–2621
(2010). http://dx.doi.org/10.1080/01431160903051711 IJSEDK 0143-1161 Google Scholar
A. LyapustinY. WangR. Frey,
“An automatic cloud mask algorithm based on time series of MODIS measurements,”
J. Geophys. Res. Atmos., 113 D16207
(2008). http://dx.doi.org/10.1029/2007JD009641 JGRDE3 0148-0227 Google Scholar
H. Tang,
“MODIS09 image time series anomaly detection methods based on statistical characteristics of spectrum time-varying,”
in Remote Sensing Quantitative Retrieve Algorithm Symposium,
(2010). Google Scholar
O. Hagolleet al.,
“A multi-temporal method for cloud detection, applied to FORMOSAT-2, VEN mu S, LANDSAT and SENTINEL-2 images,”
Remote Sens. Environ., 114
(8), 1747
–1755
(2010). http://dx.doi.org/10.1016/j.rse.2010.03.002 RSEEA7 0034-4257 Google Scholar
A. D. Fraser, R. A. Massom, and K. J. Michael, "A method for compositing polar MODIS satellite images to remove cloud cover for landfast sea-ice detection," IEEE Trans. Geosci. Remote Sens., 47(9), 3272–3282 (2009). http://dx.doi.org/10.1109/TGRS.2009.2019726
W. B. Rossow, L. C. Garder, and A. A. Lacis, "Global, seasonal cloud variations from satellite radiance measurements. Part I: sensitivity of analysis," J. Clim., 2(5), 419–458 (1989). http://dx.doi.org/10.1175/1520-0442(1989)002<0419:GSCVFS>2.0.CO;2
W. B. Rossow and A. A. Lacis, "Global, seasonal cloud variations from satellite radiance measurements. Part II: cloud properties and radiative effects," J. Clim., 3(11), 1204–1253 (1990). http://dx.doi.org/10.1175/1520-0442(1990)003<1204:GSCVFS>2.0.CO;2
K. T. Kriebel, R. W. Saunders, and G. Gesell, "Optical properties of clouds derived from fully cloudy AVHRR pixels," Beitr. Phys. Atmos., 62(3), 165–171 (1989).
G. Gesell, "An algorithm for snow and ice detection using AVHRR data: an extension to the APOLLO software package," Int. J. Remote Sens., 10(4–5), 897–905 (1989). http://dx.doi.org/10.1080/01431168908903929
L. L. Stowe et al., "Global distribution of cloud cover derived from NOAA/AVHRR operational satellite data," Adv. Space Res., 11(3), 51–54 (1991). http://dx.doi.org/10.1016/0273-1177(91)90402-6
S. A. Ackerman et al., "Cloud detection with MODIS. Part II: validation," J. Atmos. Oceanic Technol., 25(7), 1073–1086 (2008). http://dx.doi.org/10.1175/2007JTECHA1053.1
R. A. Frey et al., "Cloud detection with MODIS. Part I: improvements in the MODIS cloud mask for collection 5," J. Atmos. Oceanic Technol., 25(7), 1057–1072 (2008). http://dx.doi.org/10.1175/2008JTECHA1052.1
A. Gafurov and A. Bárdossy, "Cloud removal methodology from MODIS snow cover product," Hydrol. Earth Syst. Sci., 13(7), 1361–1373 (2009). http://dx.doi.org/10.5194/hess-13-1361-2009
J. Dozier et al., "Time–space continuity of daily maps of fractional snow cover and albedo from MODIS," Adv. Water Resour., 31(11), 1515–1526 (2008). http://dx.doi.org/10.1016/j.advwatres.2008.08.011
Q. Xia et al., "Estimation of daily cloud-free, snow-covered areas from MODIS based on variational interpolation," Water Resour. Res., 48(9), W09523 (2012). http://dx.doi.org/10.1029/2011WR011072
E. F. Vermote, S. Y. Kotchenova, and J. P. Ray, MODIS Surface Reflectance User's Guide (April 2011). http://modis-sr.ltdri.org/products/MOD09_UserGuide_v1_3.pdf
Y. J. Kaufman and D. Tanre, "Atmospherically resistant vegetation index (ARVI) for EOS-MODIS," IEEE Trans. Geosci. Remote Sens., 30(2), 261–270 (1992). http://dx.doi.org/10.1109/36.134076
E. F. Vermote and A. Vermeulen, "Atmospheric Correction Algorithm: Spectral Reflectances (MOD 09)," 26. http://modis.gsfc.nasa.gov/data/atbd/atbd_mod08.pdf
Y. J. Kaufman et al., "The MODIS 2.1 μm channel: correlation with visible reflectance for use in remote sensing of aerosol," IEEE Trans. Geosci. Remote Sens., 35(5), 1286–1300 (1997). http://dx.doi.org/10.1109/36.628795
R. C. Levy et al., "Second-generation operational algorithm: retrieval of aerosol properties over land from inversion of Moderate Resolution Imaging Spectroradiometer spectral reflectance," J. Geophys. Res., 112, D13211 (2007). http://dx.doi.org/10.1029/2006JD007811
X. Xu, Remote Sensing Physics, Peking University Press, Beijing (2006).
R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Publishing House of Electronics Industry, Beijing (2007).
Y. Zhao, The Principle and Application of Remote Sensing Application Analysis, Science Press, Beijing (2009).
Biography

Hai-bing Xiang is pursuing his PhD at the Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing, China. He received his bachelor's degree in 2008 and his master's degree in 2011 from the Department of Cartography and Geographic Information Systems, Hebei Normal University, Shijiazhuang, China. He has participated in a number of national and provincial Natural Science Foundation research projects, on topics including multiscale methods for calculating population density, detection and removal of stripes in MODIS images, algorithms for MODIS clear-sky image compositing, and ground temperature retrieval. Currently, he is engaged in research on the retrieval of forest structural parameters from remote sensing.

Jin-song Liu received his PhD in ecology from the College of Life Science, Hebei Normal University, Shijiazhuang, China, in 2011. Since 2011, he has been a professor of GIS at the College of Mathematics and Information Science, Hebei Normal University. From 1995 to 2006, he taught GIS in the Department of Geography, Hebei Normal University. Since 2007, he has worked in the Hebei Key Laboratory of Mathematic Calculation and Application, focusing on remote sensing and population geography, especially applications of MODIS images and the population geographic information system of Hebei province. Currently, he is managing director and secretary of the Hebei Population Society, and he is also a reviewer for the Journal of Remote Sensing. His major research interests include quantitative remote sensing inversion, population geography, and GIS.

Chun-xiang Cao received her MSc degree from Kyoto Prefectural University, Japan, in 1996, and her DSc degree from Hiroshima University, Japan, in 2000. She is director of the State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing, China.
Her research fields include environmental health assessment, remote sensing of forest health, and related topics. She has managed many major research projects, including National Natural Science Foundation of China projects and National "973" and "863" projects. In the past five years, she has published 31 SCI articles and an academic monograph, "Diagnosis of environmental health by remote sensing."

Min Xu is an assistant research fellow at the State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing, China. He received his doctoral degree from the Graduate School of the Chinese Academy of Sciences and his master's degree from Wuhan University, China. He was named a "Triple-A" outstanding student at the Graduate School of the Chinese Academy of Sciences in 2009. His current research fields involve applications of RS and GIS technologies in public health, and he has published 36 research papers, including 16 SCI-indexed papers.