Article

The First Wetland Inventory Map of Newfoundland at a Spatial Resolution of 10 m Using Sentinel-1 and Sentinel-2 Data on the Google Earth Engine Cloud Computing Platform

1 C-CORE, 1 Morrissey Rd, St. John’s, NL A1B 3X5, Canada
2 Department of Electrical and Computer Engineering, Memorial University of Newfoundland, St. John’s, NL A1C 5S7, Canada
3 Environmental Resources Engineering, College of Environmental Science and Forestry, State University of New York, Syracuse, NY 13210, USA
4 Department of Geography, Environment, and Geomatics, University of Ottawa, Ottawa, ON K1N 6N5, Canada
* Author to whom correspondence should be addressed.
Submission received: 22 October 2018 / Revised: 19 December 2018 / Accepted: 20 December 2018 / Published: 28 December 2018
(This article belongs to the Collection Google Earth Engine Applications)

Abstract
Wetlands are one of the most important ecosystems, providing desirable habitat for a great variety of flora and fauna. Wetland mapping and modeling using Earth Observation (EO) data are essential for natural resource management at both regional and national levels. However, accurate wetland mapping is challenging, especially on a large scale, given the heterogeneous and fragmented landscape of wetlands, as well as the spectral similarity of differing wetland classes. Currently, precise, consistent, and comprehensive wetland inventories at the national or provincial scale are lacking globally, with most studies focused on the generation of local-scale maps from limited remote sensing data. Leveraging the computational power of Google Earth Engine (GEE) and the availability of high spatial resolution remote sensing data collected by the Copernicus Sentinels, this study introduces the first detailed, provincial-scale wetland inventory map of one of the richest Canadian provinces in terms of wetland extent. In particular, multi-year summer Synthetic Aperture Radar (SAR) Sentinel-1 and optical Sentinel-2 data composites were used to identify the spatial distribution of five wetland and three non-wetland classes on the Island of Newfoundland, covering an approximate area of 106,000 km2. The classification results were evaluated using both pixel-based and object-based random forest (RF) classifications implemented on the GEE platform. The results revealed the superiority of the object-based approach over pixel-based classification for wetland mapping. Although classification using multi-year optical data was more accurate than that using SAR, the inclusion of both types of data significantly improved the classification accuracies of wetland classes. In particular, an overall accuracy of 88.37% and a Kappa coefficient of 0.85 were achieved with the multi-year summer SAR/optical composite using object-based RF classification, wherein all wetland and non-wetland classes were correctly identified with accuracies beyond 70% and 90%, respectively. The results suggest a paradigm shift from standard static products and approaches toward generating more dynamic, on-demand, large-scale wetland coverage maps through advanced cloud computing resources that simplify access to and processing of “Geo Big Data.” In addition, the resulting, much-needed inventory map of Newfoundland is of great interest to many stakeholders, including federal and provincial governments, municipalities, NGOs, and environmental consultants, to name a few.


1. Introduction

Wetlands cover between 3% and 8% of the Earth’s land surface [1]. They are one of the most important contributors to global greenhouse gas reduction and climate change mitigation, and they greatly affect biodiversity and hydrological connectivity [2]. Wetland ecosystem services include flood- and storm-damage protection, water-quality improvement and renovation, aquatic and plant-biomass productivity, shoreline stabilization, plant collection, and contamination retention [3]. However, wetlands are being drastically converted to non-wetland habitats due to both anthropogenic activities, such as intensive agricultural and industrial development, urbanization, reservoir construction, and water diversion, and natural processes, such as rising sea levels, thawing permafrost, changes in precipitation patterns, and drought [1].
Despite the vast expanse and benefits of wetlands, there is a lack of comprehensive wetland inventories in most countries due to the expense of conducting nation-wide mapping and the highly dynamic, remote nature of wetland ecosystems [4]. These issues result in fragmented, partial, or outdated wetland inventories in most countries worldwide, and some have no inventory available at all [5]. Although North America and some parts of Western Europe have some of the most comprehensive wetland inventories, these are also incomplete and have considerable limitations related to the resolution and type of data, as well as to developed methods [6]. These differences make these existing inventories incomparable [1] and highlight the significance of long-term comprehensive wetland monitoring systems to identify conservation priorities and sustainable management strategies for these valuable ecosystems.
Over the past two decades, wetland mapping has gained recognition thanks to the availability of remote sensing tools and data [6,7]. However, accurate wetland mapping using remote sensing data, especially on a large scale, has long proven challenging. For example, input data should be unaffected, or only minimally affected, by clouds, haze, and other disturbances to obtain an acceptable classification result [4]. Such input data can be generated by compositing a large volume of satellite images collected during a specific time period. This is of particular concern when distinguishing backscattering/spectrally similar classes (e.g., wetlands), whose discrimination is challenging using a single image. Historically, the cost of acquiring multi-temporal remote sensing data precluded such large-scale land cover (e.g., wetland) mapping [8]. Although Landsat sensors have been collecting Earth Observation (EO) data at frequent intervals since the mid-1980s [9], the entire archive has been openly accessible only since 2008 [8]. This is of great benefit for land cover mapping on a large scale. However, much of this archived data has been underutilized to date, because collecting, storing, processing, and manipulating multi-temporal remote sensing data that cover a large geographic area over three decades is infeasible using conventional image processing software on workstation PC-based systems [10]. This is known as the “Geo Big Data” problem, and it demands new technologies and resources capable of handling such a large volume of satellite imagery from the data science perspective [11].
Most recently, the growing availability of large-volume open-access remote sensing data and the development of advanced machine learning tools have been integrated with powerful cloud computing resources. This offers new opportunities for broader sets of applications at new spatial and temporal scales in the geospatial sciences and addresses the limitations of existing methods and products [12]. Specifically, the advent of powerful cloud computing resources, such as NASA Earth Exchange, Amazon Web Services, Microsoft Azure, and Google Cloud Platform, has addressed these Geo Big Data problems. For example, Google Earth Engine (GEE) is an open-access, cloud-based platform for parallel processing of petabyte-scale data [13]. It hosts a vast pool of satellite imagery and geospatial datasets, and allows web-based algorithm development and results visualization in a reasonable processing time [14,15,16]. In addition to its computing and storage capacity, a number of well-known machine learning algorithms have been implemented, allowing batch processing using JavaScript through a dedicated application programming interface (API) [17].
Notably, the development of advanced machine learning tools further contributes to handling large multi-temporal remote sensing data [18]. This is because traditional classifiers, such as maximum likelihood, cannot adequately handle complicated, high-dimensional remote sensing data. Furthermore, they assume that input data are normally distributed, which may not be the case [19]. In contrast, advanced machine learning tools, such as Decision Trees (DT), Support Vector Machines (SVM), and Random Forests (RF), are independent of the input data distribution and can handle large volumes of remote sensing data. Previous studies have demonstrated that both RF [20] and SVM [21] outperform DT for classifying remote sensing data. RF and SVM also offer relatively equal strength in terms of classification accuracy [22]. However, RF is much easier to execute than SVM, given that the latter requires the adjustment of a large number of parameters [23]. RF is also insensitive to noise and overtraining [24] and has shown high classification accuracies in various wetland studies [19,25].
Over the past three years, several studies have investigated the potential of cloud-computing resources and advanced machine learning tools for processing and classifying Geo Big Data in a variety of applications. These include global surface water mapping [26], global forest-cover change mapping [27], and cropland mapping [28], as well as studies focusing on land- and vegetation-cover changes on a smaller scale [29,30]. These studies demonstrated the feasibility of characterizing elements of the Earth’s surface at national and global scales through advanced cloud computing platforms.
Newfoundland and Labrador (NL), home to a great variety of flora and fauna, is one of the richest provinces in Canada in terms of wetlands and biodiversity. Most recently, the significant value of these ecosystems has been recognized by the Wetland Mapping and Monitoring System (WMMS) project, launched in 2015. Accordingly, a few local wetland maps, each covering approximately 700 km2 of the province, were produced. For example, Mahdianpari et al. (2017) introduced a hierarchical object-based classification scheme for discriminating wetland classes in the most easterly part of NL, the Avalon Peninsula, using Synthetic Aperture Radar (SAR) observations obtained from ALOS-2, RADARSAT-2, and TerraSAR-X imagery [19]. Later, Mahdianpari et al. (2018) proposed a modified coherency matrix obtained from quad-pol RADARSAT-2 imagery to improve wetland classification accuracy, evaluating the efficiency of the proposed method in three pilot sites across NL, each covering 700 km2 [31]. Most recently, Mohammadimanesh et al. (2018) investigated the potential of interferometric coherence for wetland classification, as well as the synergy of coherence with SAR polarimetry and intensity features, for wetland mapping in a relatively small area of NL (the Avalon Peninsula) [32]. These local-scale wetland maps exhibit the spatial distribution patterns and the characteristics of wetland species (e.g., dominant wetland type). However, such small-scale maps have been produced by incorporating different data sources, standards, and methods, making them of limited use for rigorous wetland monitoring at the provincial, national, and global scales.
Importantly, precise, comprehensive, provincial-level wetland inventories that map small to large wetland classes can significantly aid conservation strategies, support sustainable management, and facilitate progress toward national- and global-scale wetland inventories [33]. Fortunately, new opportunities for large-scale wetland mapping have been opened by the Copernicus programme of the European Space Agency (ESA) [34]. In particular, the concurrent availability of the 12-day repeat SAR Sentinel-1 and 10-day repeat optical Sentinel-2 (Multi-Spectral Instrument, MSI) sensors provides an unprecedented opportunity to collect high spatial resolution data for global wetland mapping. The main purpose of these Sentinel missions is to provide full, free, and open access data to facilitate global monitoring of the environment and to offer new opportunities to the scientific community [35]. This highlights the substantial role of Sentinel observations in large-scale land surface mapping. Accordingly, the synergistic use of Sentinel-1 and Sentinel-2 EO data offers new avenues to be explored in different applications, especially for mapping phenomena with highly dynamic natures (e.g., wetlands).
Notably, the inclusion of SAR data for land and wetland mapping is of great significance for monitoring areas with nearly permanent cloud-cover. This is because SAR signals are independent of solar radiation and the day/night condition, making them superior for monitoring geographic regions with dominant cloudy and rainy weather, such as Newfoundland, Canada. Nevertheless, multi-source satellite data are advantageous in terms of classification accuracy relative to the accuracy achieved by a single source of data [36]. This is because optical sensors are sensitive to the reflective and spectral characteristics of ground targets [37,38], whereas SAR sensors are sensitive to their structural, textural, and dielectric characteristics [39,40]. Thus, a synergistic use of two types of data offers complementary information, which may be lacking when utilizing one source of data [41,42]. Several studies have also highlighted the great potential of fusing optical and SAR data for wetland classification [25,36,41].
This study aims to develop a multi-temporal classification approach based on open-access remote sensing data and tools to map wetland classes, as well as other land cover types, with high accuracy, piloting this approach for wetland mapping in Canada. Specifically, the main objectives were to: (1) leverage open access SAR and optical images obtained from the Sentinel-1 and Sentinel-2 sensors for the classification of wetland complexes; (2) assess the capability of the Google Earth Engine cloud computing platform to generate custom land cover maps that discriminate wetland classes in addition to the standard land cover classes; (3) compare the efficiency of pixel-based and object-based random forest classification; and (4) produce the first provincial-scale, fine resolution (i.e., 10 m) wetland inventory map in Canada. The results of this study demonstrate a paradigm shift from standard static products and approaches toward generating more dynamic, on-demand, large-scale wetland coverage maps through advanced cloud computing resources that simplify access to and processing of large volumes of satellite imagery. Given the similarity of wetland classes across the country, the developed methodology can be scaled up to map wetlands at the national scale.

2. Materials and Methods

2.1. Study Area

The study area is the Island of Newfoundland, covering an approximate area of 106,000 km2 and located within the Atlantic sub-region of Canada (Figure 1). According to the Ecological Stratification Working Group of Canada, “each part of the province is characterized by distinctive regional ecological factors, including climatic, physiography, vegetation, soil, water, fauna, and land use” [43].
In general, the Island of Newfoundland has cool summers and a humid continental climate, greatly affected by the Atlantic Ocean [43]. Black spruce forests dominate the central area, while balsam fir forests dominate the western, northern, and eastern areas [44]. Geographically, the Island of Newfoundland can be divided into three zones, namely the southern, middle, and northern boreal regions, each characterized by various ecoregions [45]. For example, the southern boreal zone contains the Avalon forest, Southwestern Newfoundland, Maritime Barrens, and South Avalon-Burin Oceanic Barrens ecoregions. St. John’s, the capital city, is located at the extreme eastern portion of the island in the Maritime Barrens ecoregion and is the foggiest, windiest, and cloudiest Canadian city.
All wetland classes characterized by the Canadian Wetland Classification System (CWCS), namely bog, fen, marsh, swamp, and shallow-water [1], are found throughout the island. However, bog and fen are the most dominant classes relative to the occurrence of swamp, marsh, and shallow-water. This is attributed to the island climate, which facilitates peatland formation (i.e., extensive agglomeration of partially-decomposed organic peat under the surface). Other land cover classes are upland, deep-water, and urban/bare land. The urban and bare land classes, both having either an impervious surface or exposed soil [46], include bare land, roads, and building facilities and, thus, are merged into one single class in the final classification map.
Four pilot sites, representative of regional variation in terms of both landscape and vegetation, were selected across the island for in-situ data collection (see Figure 1). The first pilot site is the Avalon area, located in the south-east of the island in the Maritime Barrens ecoregion, which experiences an oceanic climate of foggy, cool summers and relatively mild winters [47]. The second and third pilot sites are Grand Falls-Windsor, located in the north-central area of the island, and Deer Lake, located in the northern portion of the island. Both fall within the Central Newfoundland ecoregion and experience a continental climate of cool summers and cold winters [47]. The final pilot site is Gros Morne, located on the extreme west coast of the island in the Northern Peninsula ecoregion, which experiences a maritime-type climate with cool summers and mild winters [47].

2.2. Reference Data

In-situ data were collected via extensive field surveys of the sites mentioned above in the summers and falls of 2015, 2016 and 2017. Using visual interpretation of high resolution Google Earth imagery, as well as the CWCS definition of wetlands, potential and accessible wetland sites were flagged across the island. Accessibility via public roads, the public or private ownership of lands, and prior knowledge of the area were also taken into account for site visitation. In-situ data were collected to cover a wide range of wetland and non-wetland classes with a broad spatial distribution across NL. One or more Global Positioning System (GPS) points, depending on the size of each wetland, were recorded along with the location’s name and date. Several digital photographs and ancillary notes (e.g., dominant vegetation and hydrology) were also recorded to aid in preparing the training samples. During the first year of data collection (i.e., 2015), no limitation was set on wetland size, which resulted in several small classified polygons. To focus on larger wetlands, sites larger than 1 ha were selected (where possible) during 2016 and 2017. Notably, a total of 1200 wetland and non-wetland sites were visited during in-situ data collection at the Avalon, Grand Falls-Windsor, Deer Lake, and Gros Morne pilot sites over the three years. Such in-situ data collection over a wide range of wetland classes across NL captured the variability of wetlands and aided in developing robust wetland training samples. Figure 1 depicts the distribution of the training and testing polygons across the island.
Recorded GPS points were then imported into ArcMap 10.3.1, and polygons delineating classified wetlands were generated through visual analysis of 50 cm resolution orthophotographs and 5 m resolution RapidEye imagery. Next, the polygons were sorted by size and alternately assigned to either the training or the testing group. Thus, the training and testing polygons were obtained from independent samples to ensure a robust accuracy assessment. This alternating assignment also ensured that the training (~50%) and testing (~50%) groups had equal numbers of small and large polygons, giving similar pixel counts and taking into account the large variation in intra-wetland size. Table 1 presents the number of training and testing polygons for each class.

2.3. Satellite Data, Pre-Processing, and Feature Extraction

2.3.1. SAR Imagery

A total of 247 and 525 C-band Level-1 Ground Range Detected (GRD) Sentinel-1 SAR images in ascending and descending orbits, respectively, were used in this study. This imagery was acquired between June and August of 2016, 2017 and 2018 in the Interferometric Wide (IW) swath mode, with a pixel spacing of 10 m, a swath of 250 km, and average incidence angles varying between 30° and 45°. As a general rule, Sentinel-1 collects dual- (HH/HV) or single- (HH) polarized data over polar regions (i.e., sea ice zones) and dual- (VV/VH) or single- (VV) polarized data over all other zones [48]. However, in this study, we took advantage of being close to the polar regions, and thus both HH/HV and VV/VH data were available over our study region. Accordingly, of the 247 ascending SAR observations (VV/VH), 12, 120 and 115 images were collected in 2016, 2017 and 2018, respectively. Additionally, of the 525 descending observations (HH/HV), 111, 260, and 154 images were acquired in 2016, 2017 and 2018, respectively. Figure 2 illustrates the number of SAR observations over the summers of these years.
Sentinel-1 GRD data were accessed through GEE. The applied pre-processing steps included updating orbit metadata, GRD border noise removal, thermal noise removal, radiometric calibration (i.e., backscatter intensity), and terrain correction (i.e., orthorectification) [49], yielding geo-coded backscatter intensity images. Notably, these are the same pre-processing steps implemented in ESA’s SNAP Sentinel-1 toolbox. The unitless backscatter intensity images were then converted into normalized backscattering coefficient (σ0) values in dB (i.e., the standard unit for representing SAR backscatter). Further pre-processing steps, including incidence angle correction [50] and speckle reduction (a 7 × 7 adaptive sigma Lee filter in this study) [51,52], were also carried out on the GEE platform.
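As a minimal sketch of this workflow (with our own variable names and an assumed rectangular area of interest), the summer Sentinel-1 stack could be assembled in the GEE code editor as follows. The COPERNICUS/S1_GRD collection already incorporates the border-noise removal, thermal-noise removal, calibration, and terrain-correction steps listed above and is delivered in dB; because the adaptive sigma Lee filter is not a built-in GEE function, a simple focal median stands in for it here.

```javascript
// Sketch: assemble a despeckled summer Sentinel-1 VV/VH stack (GEE JavaScript API).
// The HH/HV stack is built analogously by swapping the polarization filters.
var aoi = ee.Geometry.Rectangle([-59.5, 46.5, -52.5, 52.0]);  // rough island bounds (assumed)

var s1 = ee.ImageCollection('COPERNICUS/S1_GRD')
  .filterBounds(aoi)
  .filter(ee.Filter.eq('instrumentMode', 'IW'))
  .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
  .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
  .filter(ee.Filter.calendarRange(2016, 2018, 'year'))
  .filter(ee.Filter.calendarRange(6, 8, 'month'));  // June-August

// Simple speckle reduction; the study used a 7 x 7 adaptive sigma Lee filter instead.
var despeckle = function(img) {
  return ee.Image(img.focal_median(50, 'square', 'meters')
                     .copyProperties(img, img.propertyNames()));
};
var s1Filtered = s1.map(despeckle);
```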
Following the procedure described above, the backscatter coefficient images σ⁰VV, σ⁰VH, σ⁰HH, and σ⁰HV were extracted. Notably, σ⁰VV observations are sensitive to soil moisture and are able to distinguish flooded from non-flooded vegetation [53], as well as various types of herbaceous wetland classes (low, sparsely vegetated areas) [54]. This is particularly true during the early stages of growth, when plants have gained height but have not yet developed a canopy [53]. σ⁰VH observations can also be useful for monitoring herbaceous wetland vegetation, because cross-polarized returns are produced by volume scattering within the vegetation canopy and are highly sensitive to vegetation structure [55]. σ⁰HH is an ideal SAR observation for wetland mapping due to its sensitivity to double-bounce scattering over flooded vegetation [41,56]. Furthermore, σ⁰HH is less sensitive to surface roughness than σ⁰VV, making the former advantageous for discriminating water from non-water classes. In addition to the backscatter coefficient images, a number of other polarimetric features were extracted and used in this study. Table 2 lists the polarimetric features extracted from the dual-pol VV/VH and HH/HV Sentinel-1 images. Figure 3a illustrates the span feature, extracted from HH/HV data, for the Island of Newfoundland.
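Continuing the sketch above, the dual-pol features of Table 2 can be derived per image; the conversion from dB to linear power before computing span and difference is our assumption about how these features are formed:

```javascript
// Sketch: derive span, ratio, and difference features from a dB-scaled
// Sentinel-1 image (bands 'VV' and 'VH' in the GEE S1_GRD collection).
var addDualPolFeatures = function(img) {
  var vvLin = ee.Image(10).pow(img.select('VV').divide(10));  // dB -> linear power
  var vhLin = ee.Image(10).pow(img.select('VH').divide(10));
  var span = vvLin.add(vhLin).rename('span');                 // total power
  var ratio = img.select('VV').subtract(img.select('VH'))
                 .rename('ratio');                            // co/cross ratio (dB difference)
  var diff = vvLin.subtract(vhLin).rename('diff');            // co minus cross, linear
  return img.addBands(span).addBands(ratio).addBands(diff);
};
var s1WithFeatures = s1Filtered.map(addDualPolFeatures);
```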

2.3.2. Optical Imagery

Creating a 10 m cloud-free Sentinel-2 composite for the Island of Newfoundland over a short period of time (e.g., one month) is a challenging task due to chronic cloud cover. Accordingly, the Sentinel-2 composite was created for the three months between June and August, during the leaf-on seasons of 2016, 2017 and 2018. This time period was selected because it provided the most cloud-free data and allowed for maximum wall-to-wall data coverage. Furthermore, explicit wetland phenological information could be preserved by compositing data acquired during this period. Accordingly, both monthly composites and a multi-year summer composite were used to obtain cloud-free or near-cloud-free wall-to-wall coverage.
Both Sentinel-2A and Sentinel-2B Level-1C data were used in this study: a total of 343, 563 and 1345 images from the summers of 2016, 2017 and 2018, respectively. The spatial distribution of all Sentinel-2 observations during these summers is illustrated in Figure 4a. Notably, a number of these observations were affected by cloud coverage; Figure 4b depicts the distribution of cloud-cover percentages during these time periods. To mitigate this limitation, we applied a selection criterion on cloud percentage (<20%) when producing our cloud-free composite. Next, the QA60 bitmask band (a quality flag band) provided in the metadata was used to detect and mask out clouds and cirrus. Sentinel-2 has 13 spectral bands at various spatial resolutions: four bands at 10 m, six at 20 m, and three at 60 m. For this study, only the blue (0.490 µm), green (0.560 µm), red (0.665 µm), and near-infrared (NIR, 0.842 µm) bands were used, because the optical indices selected in this study are based on these bands (see Table 2) and, furthermore, all of them have a spatial resolution of 10 m.
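For illustration, the cloud filtering and masking described above might look as follows; bit 10 (opaque clouds) and bit 11 (cirrus) of the QA60 band are part of the Sentinel-2 Level-1C product specification, while the variable names are ours:

```javascript
// Sketch: filter Sentinel-2 Level-1C scenes by cloud percentage and mask
// clouds/cirrus using the QA60 bitmask band.
var maskS2Clouds = function(img) {
  var qa = img.select('QA60');
  var cloudBit = 1 << 10;   // opaque clouds
  var cirrusBit = 1 << 11;  // cirrus
  var mask = qa.bitwiseAnd(cloudBit).eq(0)
               .and(qa.bitwiseAnd(cirrusBit).eq(0));
  return img.updateMask(mask);
};

var s2 = ee.ImageCollection('COPERNICUS/S2')  // Level-1C (TOA) collection
  .filterBounds(aoi)
  .filter(ee.Filter.calendarRange(2016, 2018, 'year'))
  .filter(ee.Filter.calendarRange(6, 8, 'month'))
  .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 20))
  .map(maskS2Clouds)
  .select(['B2', 'B3', 'B4', 'B8']);  // blue, green, red, NIR at 10 m
```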
In addition to the optical bands (2, 3, 4 and 8), the NDVI, NDWI and MSAVI2 indices were also extracted (see Table 2). NDVI is one of the best-known and most commonly used vegetation indices for characterizing vegetation phenology (seasonal and inter-annual changes). Through its ratioing operation (see Table 2), NDVI reduces several multiplicative noise effects, such as sun illumination differences, cloud shadows, and some atmospheric attenuation and topographic variations, within the various bands of multispectral satellite images [57]. NDVI is sensitive to photosynthetically active biomass and can discriminate vegetation from non-vegetation, as well as wetland from non-wetland classes. NDWI is also useful, since it is sensitive to open water and can discriminate water from land. Notably, NDWI can be extracted using different bands of multispectral data [58], such as green and shortwave infrared (SWIR) [59], red and SWIR [60], and green and NIR [61]. Although some studies reported the superiority of SWIR for extracting the water index due to its lower sensitivity to sub-pixel non-water components [58], we used the original NDWI index proposed by [61], because it should provide accurate results at our target resolution and, moreover, it uses the green and NIR bands of Sentinel-2 data, both of which have a 10 m spatial resolution. Finally, MSAVI2 was used because it addresses the limitations of NDVI in areas with a high degree of exposed soil surface. Figure 3b,c demonstrates the multi-year summer composites of the NDVI and NDWI features extracted from Sentinel-2 optical imagery.
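A sketch of the per-image index computation follows; the NDWI here is the green/NIR variant of [61], and the MSAVI2 expression is the standard published formulation:

```javascript
// Sketch: compute NDVI, NDWI (green/NIR), and MSAVI2 for each Sentinel-2 image.
var addIndices = function(img) {
  var ndvi = img.normalizedDifference(['B8', 'B4']).rename('NDVI');  // (NIR-Red)/(NIR+Red)
  var ndwi = img.normalizedDifference(['B3', 'B8']).rename('NDWI');  // (Green-NIR)/(Green+NIR)
  var msavi2 = img.expression(
    '(2 * NIR + 1 - sqrt((2 * NIR + 1) ** 2 - 8 * (NIR - RED))) / 2', {
      NIR: img.select('B8'),
      RED: img.select('B4')
    }).rename('MSAVI2');
  return img.addBands(ndvi).addBands(ndwi).addBands(msavi2);
};
var s2WithIndices = s2.map(addIndices);
```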

2.4. Multi-Year Monthly and Summer Composite

Although several studies have used the Landsat archive to generate nearly cloud-free Landsat composites of large areas (e.g., [62,63,64]), to the best of our knowledge, such compositing has not yet been thoroughly examined for Sentinel-2 data. This is unfortunate, since the latter data offer improved temporal and spatial resolution relative to Landsat imagery, making them advantageous for producing high resolution land cover maps on a large scale. For example, Roy et al. (2010) produced monthly, seasonal, and yearly composites using maximum NDVI and brightness temperature obtained from Landsat data for the conterminous United States [64]. Recent studies have also used different compositing approaches, such as seasonal [62] and yearly [63] Landsat composites, in their analyses.
In this study, two different types of image composite were generated: multi-year monthly and multi-year summer composites. Due to the prevailing cloudy and rainy weather conditions in the study area, it was impossible to collect sufficient cloud-free optical data to generate a full-coverage monthly composite of Sentinel-2 data for classification purposes. However, we produced the monthly (optical) composites for spectral signature analysis, to identify the month during which the most semantic information on wetland classes could be obtained. A multi-year summer composite was produced to capture explicit phenological information appropriate for wetland mapping. As suggested by recent research [65], a multi-year spring composite is advantageous for wetland mapping in Canada’s boreal regions, because such time-series data capture within-year surface variation. However, in this study, the multi-year summer composite was used, given that the leaf-on season begins in late spring/early summer on the Island of Newfoundland.
Leveraging the GEE compositing functionality, 10 m wall-to-wall, cloud-free composites of Sentinel-2 imagery, comprising the original optical bands (2, 3, 4 and 8) and the NDVI, NDWI, and MSAVI2 indices, were produced across the Island of Newfoundland. SAR features, including σ⁰VV, σ⁰VH, σ⁰HH, σ⁰HV, span, ratio, and the difference between co- and cross-polarized SAR features (see Table 2), were also stacked using GEE’s array-based computational approach. Specifically, each monthly and summer-season group of images was reduced to a single median composite on a per-pixel, per-band basis.
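Continuing the earlier sketches, the per-pixel median reduction and the stacking of optical and SAR features can be expressed as:

```javascript
// Sketch: reduce each stack to a per-pixel, per-band median composite and
// combine optical and SAR features into a single classification stack.
var s2Composite = s2WithIndices.median().clip(aoi);
var s1Composite = s1WithFeatures.median().clip(aoi);
var stack = s2Composite.addBands(s1Composite);
Map.addLayer(stack, {bands: ['B4', 'B3', 'B2'], min: 0, max: 3000}, 'Summer composite');
```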

2.5. Separability Between Wetland Classes

In this study, the separability between wetland classes was determined both qualitatively, using box-and-whisker plots, and quantitatively, using the Jeffries–Matusita (JM) distance. The JM distance indicates the average distance between the density functions of two classes [66]. It uses both first order (mean) and second order (variance) statistics of the samples and has been shown to be an efficient separability measure for remote sensing data [67,68]. Under normal distribution assumptions, the JM distance between two classes is represented as
JM = 2\left(1 - e^{-B}\right)
where B is the Bhattacharyya (BH) distance given by
B = \frac{1}{8}\left(\mu_i - \mu_j\right)^{T}\left(\frac{\Sigma_i + \Sigma_j}{2}\right)^{-1}\left(\mu_i - \mu_j\right) + \frac{1}{2}\ln\left(\frac{\left|\left(\Sigma_i + \Sigma_j\right)/2\right|}{\sqrt{\left|\Sigma_i\right|\left|\Sigma_j\right|}}\right)
where μ_i and Σ_i are the mean and covariance matrix of class i, and μ_j and Σ_j are those of class j. The JM distance varies between 0 and 2, with values approaching 2 demonstrating a greater average distance between two classes. In this study, the separability analysis was limited to features extracted from optical data, because a detailed backscattering analysis of wetland classes using multi-frequency SAR data, including X-, C-, and L-band, was presented in our previous study [19].
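As an illustrative check of the metric (with our own numbers, not values from this study), consider two univariate classes with means 0.3 and 0.5 and equal variances of 0.01. Then

B = \frac{1}{8}(0.5 - 0.3)^2\left(\frac{0.01 + 0.01}{2}\right)^{-1} + \frac{1}{2}\ln(1) = 0.5, \qquad JM = 2\left(1 - e^{-0.5}\right) \approx 0.79

indicating moderate separability: identical classes give JM = 0, while fully separable classes approach 2.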

2.6. Classification Scheme

2.6.1. Random Forest

In this study, the random forest (RF) algorithm was used for both pixel-based and object-based wetland classifications. RF is a non-parametric classifier, comprising a group of tree classifiers, and is able to handle high dimensional remote sensing data [69]. It is also more robust than the DT algorithm and easier to execute than SVM [23]. RF uses bootstrap aggregating (bagging) to produce an ensemble of decision trees from random samples of the given training data, and determines the best splitting of the nodes by minimizing the correlation between trees. Each pixel is labeled according to the majority vote of the trees. RF can be tuned by adjusting two input parameters [70]: the number of trees (Ntree), each of which is grown from a random sample of the training data, and the number of variables (Mtry) used for tree node splitting [71]. In this study, these parameters were selected based on (a) direction from previous studies (e.g., [56,69,72]) and (b) a trial-and-error approach. Specifically, Mtry was assessed for the following values (with Ntree fixed at 500): (a) one third of the total number of input features; (b) the square root of the total number of input features; (c) half of the total number of input features; (d) two thirds of the total number of input features; and (e) the total number of input features. This had marginal or no influence on the classification accuracies. Accordingly, the square root of the total number of variables was selected for Mtry, as suggested by [71]. Next, with Mtry fixed at its optimal value, Ntree was assessed for the following values: (a) 100; (b) 200; (c) 300; (d) 400; (e) 500; and (f) 600. A value of 400 was found to be appropriate, as error rates for all classification models were constant beyond this point. The 601 training polygons of the different categories were used to train the RF classifier on the GEE platform (see Table 1).
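A minimal sketch of this setup follows; the FeatureCollection trainingPolygons and its integer 'class' property are our naming. At the time of the study GEE exposed RF as ee.Classifier.randomForest, whereas current scripts use ee.Classifier.smileRandomForest, shown here; leaving variablesPerSplit unset makes GEE default to the square root of the number of input features, matching the Mtry choice above.

```javascript
// Sketch: pixel-based RF classification on GEE with the tuned parameters (Ntree = 400).
var samples = stack.sampleRegions({
  collection: trainingPolygons,  // assumed FeatureCollection with a 'class' property
  properties: ['class'],
  scale: 10
});

var classifier = ee.Classifier.smileRandomForest({numberOfTrees: 400})
  .train({
    features: samples,
    classProperty: 'class',
    inputProperties: stack.bandNames()
  });

var classified = stack.classify(classifier);
```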

2.6.2. Simple Non-Iterative Clustering (SNIC) Superpixel Segmentation

Conventional pixel-based classification algorithms rely exclusively on the spectral/backscattering value of each pixel in their classification scheme. This results in “salt and pepper” noise in the final classification map, especially when high-resolution images are employed [73]. An object-based algorithm, however, can mitigate this problem by taking into account the contextual information within a given image neighborhood [74]. Image segmentation divides an image into regions or objects based on specific parameters (e.g., geometric features and scaled topological relations). In this study, the simple non-iterative clustering (SNIC) algorithm was selected for superpixel (i.e., object-based) segmentation [75]. The algorithm starts by initializing centroid pixels on a regular grid in the image. Next, the affinity of each pixel to a centroid is determined using its distance in the five-dimensional space of color and spatial coordinates. In particular, the distance integrates normalized spatial and color distances to produce effective, compact, and approximately uniform superpixels. Notably, there is a trade-off between compactness and boundary continuity, wherein larger compactness values result in more compact superpixels and, thus, poorer boundary adherence. SNIC uses a priority queue of candidate pixels, 4- or 8-connected to the currently growing superpixel cluster, to select the next pixel to join the cluster; the candidate with the smallest distance from the centroid is selected. The algorithm takes advantage of both the priority queue and online averaging to evolve the centroid as each new pixel is added to a cluster. Accordingly, SNIC is superior to similar clustering algorithms (e.g., Simple Linear Iterative Clustering) in terms of both memory and processing time. This is attributed to the introduction of connectivity (4- or 8-connected pixels), which results in computing fewer distances during centroid evolution [75].
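GEE exposes SNIC as ee.Algorithms.Image.Segmentation.SNIC. In the sketch below (with illustrative parameter values, not the study's settings), the composite stack is segmented and the per-cluster mean features are classified with an RF trained in the same way as in the pixel-based case:

```javascript
// Sketch: SNIC superpixel segmentation followed by object-based RF classification.
var seeds = ee.Algorithms.Image.Segmentation.seedGrid(10);
var snic = ee.Algorithms.Image.Segmentation.SNIC({
  image: stack,
  compactness: 1,        // trade-off between compactness and boundary adherence
  connectivity: 8,       // 4- or 8-connected candidate pixels
  neighborhoodSize: 64,
  seeds: seeds
});

// SNIC outputs a 'clusters' band plus per-cluster mean bands ('*_mean').
var objectFeatures = snic.select('.*_mean');
var objectSamples = objectFeatures.sampleRegions({
  collection: trainingPolygons,
  properties: ['class'],
  scale: 10
});
var objectClassifier = ee.Classifier.smileRandomForest({numberOfTrees: 400})
  .train({
    features: objectSamples,
    classProperty: 'class',
    inputProperties: objectFeatures.bandNames()
  });
var objectClassified = objectFeatures.classify(objectClassifier);
```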

2.6.3. Evaluation Indices

Four evaluation indices, namely overall accuracy (OA), the Kappa coefficient, producer’s accuracy, and user’s accuracy, were measured using the 599 testing polygons held back for validation purposes (see Table 1). Overall accuracy determines the overall efficiency of the algorithm and is measured by dividing the number of correctly labeled samples by the total number of testing samples. The Kappa coefficient indicates the degree of agreement between the ground truth data and the predicted values. Producer’s accuracy represents the probability that a reference sample is correctly identified in the classification map. User’s accuracy indicates the probability that a classified pixel in the land cover map accurately represents that category on the ground [76].
Additionally, the McNemar test [77] was employed to determine whether differences between the various classification scenarios in this study were statistically significant. In particular, the main goals were to determine: (1) whether a statistically significant difference exists between pixel-based and object-based classifications based on either SAR or optical data; and (2) whether a statistically significant difference exists between object-based classifications using only one type of data (SAR or optical) and the integration of the two (SAR and optical). The McNemar test is non-parametric and is based on the classification confusion matrix. The test statistic follows a chi-square (χ²) distribution with one degree of freedom [78,79] and, under the null hypothesis, the two classifications are assumed to misclassify the same number of pixels in each direction [77],
\chi^2 = \frac{\left(f_{12} - f_{21}\right)^2}{f_{12} + f_{21}}
where f_{12} is the number of pixels correctly identified by one classifier but incorrectly identified by the other, and f_{21} is the reverse.
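As a quick illustrative calculation (with our own numbers): if f_{12} = 120 and f_{21} = 80, then

\chi^2 = \frac{(120 - 80)^2}{120 + 80} = \frac{1600}{200} = 8

which exceeds 3.84, the critical value of the \chi^2 distribution with one degree of freedom at the 5% level, so the two classifications would be judged significantly different.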

2.7. Processing Platform

The GEE cloud computing platform was used for both the pixel-based and superpixel RF classifications in this study. Both the Sentinel-1 and Sentinel-2 data hosted within the GEE platform were used to construct composite images. The zonal boundaries and the reference polygons were imported into GEE using Google Fusion Tables. The JavaScript API in the GEE code editor was used for pre-processing, feature extraction, and classification. Accordingly, we generated 10 m spatial resolution wetland maps of Newfoundland from our multi-year seasonal composites of optical data, SAR data, and the integration of both, using pixel-based and object-based approaches.
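Classification results can then be exported from GEE for offline map production; a minimal example (the description string is ours):

```javascript
// Sketch: export the final 10 m classification to Google Drive.
Export.image.toDrive({
  image: objectClassified,
  description: 'NL_wetland_inventory_10m',
  region: aoi,
  scale: 10,
  maxPixels: 1e13
});
```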

3. Results

3.1. Spectral Analysis of Wetland Classes Using Optical Data

To examine the discrimination capabilities of different spectral bands and vegetation indices, spectral analysis was performed for all wetland classes. Figure 5, Figure 6 and Figure 7 illustrate the statistical distribution of reflectance, NDVI, NDWI, and MSAVI2 values for the multi-year monthly composites of June, July, and August, respectively, using box-and-whisker plots.
As shown, all visible bands poorly distinguish spectrally similar wetland classes, especially the bog, fen, and marsh classes. The shallow-water class, however, can be separated from the other classes using the red band in August (see Figure 7). Among the original bands, NIR shows clear advantages for discriminating shallow-water from the other classes (see Figure 5, Figure 6 and Figure 7), but offers little advantage for classifying herbaceous wetland classes. Overall, the vegetation indices are superior to the original bands for separating wetland classes.
As illustrated in Figure 5, Figure 6 and Figure 7, the shallow-water class is easily distinguishable from other classes using all vegetation indices. The swamp and bog classes are also separable using the NDVI index from all three months. Although both NDVI and MSAVI2 are unable to discriminate herbaceous wetland classes using the June composite, the classes of bog and fen are distinguishable using the NDVI index obtained from the July and August composites.
The mean JM distances obtained from the multi-year summer composite for wetland classes are represented in Table 3.
According to the JM distance, shallow-water is the class most separable from the other wetland classes. In general, all wetland classes, excluding shallow-water, are hardly distinguishable from each other using a single optical feature; in particular, bog and fen are the least separable classes. However, the synergistic use of all features considerably increases the separability between wetland classes, with JM values exceeding 1.4 in most cases, although bog and fen remain hardly discernible even in this case.

3.2. Classification

The overall accuracies (OA) and Kappa coefficients of the different classification scenarios are presented in Table 4. Overall, classification using optical imagery was more accurate than classification using SAR imagery: the optical imagery yielded approximately 4% improvements in both the pixel-based and object-based approaches. Furthermore, object-based classifications were found to be superior to pixel-based classifications for both optical (~6.5% improvement) and SAR (~6% improvement) imagery in comparable cases. It is worth noting that the accuracy assessment in this study was carried out using testing polygons well distributed across the whole study region.
The McNemar test revealed that the difference between the accuracies of pixel-based and object-based classifications was statistically significant for both SAR (p = 0.023) and optical (p = 0.012) data (see Table 5). There were also statistically significant differences between object-based classifications using SAR vs. SAR/optical data (p = 0.0001) and optical vs. SAR/optical data (p = 0.008).
Figure 8 demonstrates the classification maps using SAR and optical multi-year summer composites for Newfoundland obtained from pixel- and object-based RF classifications. They illustrate the distribution of land cover classes, including both wetland and non-wetland classes, identifiable at a 10 m spatial resolution. In general, the classified maps indicate fine separation of all land cover units, including bog and fen, shallow- and deep-water, and swamp and upland, as well as other land cover types.
Figure 9 depicts the confusion matrices obtained from the different methods, wherein the diagonal elements are the producer’s accuracies. The user’s accuracies of the land cover classes under the different classification scenarios are shown in Figure 10. Overall, the wetland classes have lower accuracies than the non-wetland classes. In particular, swamp has the lowest producer’s and user’s accuracies among the wetland (and all) classes in this study. In contrast, the classification accuracies of bog and shallow-water are higher (in terms of both user’s and producer’s accuracies) than those of the other wetland classes.
Notably, all methods successfully classified the non-wetland classes with producer’s accuracies beyond 80%. Among the first four scenarios, the object-based classification using optical imagery (i.e., S4) was the most successful approach for classifying the non-wetland classes, with producer’s and user’s accuracies exceeding 90% and 80%, respectively. The wetland classes were also identified with high accuracies in most cases (e.g., bog, fen, and shallow-water) in S4.
The object-based approach, due to its higher accuracies, was selected for the final classification scheme in this study, wherein the multi-year summer SAR and optical composites were integrated (see Figure 11).
The final land cover map is free of salt-and-pepper noise and accurately represents the distribution of all land cover classes on a large scale. As shown, bog and upland are the most prevalent wetland and non-wetland classes, respectively, in the study area. These observations agree well both with field notes recorded by biologists during the in-situ data collection and with visual analysis of aerial and satellite imagery. Figure 11 also illustrates several insets from the final land cover map. Visual interpretation of the final classified map by ecological experts confirmed that most land cover classes were correctly distinguished across the study area. For example, ecological experts noted that bogs appear reddish in optical imagery (true color composite); as shown in Figure 11, most bog wetlands are accurately identified in all zoomed areas. Furthermore, small water bodies (e.g., small ponds) and the perimeters of deep water bodies are correctly mapped as shallow-water. The upland and urban/bare land classes were also correctly distinguished.
The confusion matrix for the final classification map is illustrated in Figure 12. Despite the presence of confusion among wetland classes, the results obtained from the multi-year SAR/optical composite were extremely positive, taking into account the complexity of distinguishing similar wetland classes. As shown in Figure 12, all non-wetland classes and shallow-water were correctly identified with producer’s accuracies beyond 90%. The most similar wetland classes, namely bog and fen, were classified with producer’s accuracies exceeding 80%. The other two wetland classes were also correctly identified with a producer’s accuracy of 78% for marsh and 70% for swamp.

4. Discussion

In general, the results of the spectral analysis demonstrated the superiority of the NIR band over the visible bands (i.e., blue, green, and red) for distinguishing the various wetland classes. This was particularly true for shallow-water, which was easily separable using NIR. This is logical, given that water and vegetation exhibit strong absorption and reflection, respectively, in this region of the electromagnetic spectrum. NDVI was found to be the most useful vegetation index, a finding potentially explained by its high sensitivity to photosynthetically active biomass [57]. Furthermore, the spectral analysis indicated that class separability using NDVI is maximized in July, which corresponds to the peak growing season in Newfoundland. The box-and-whisker plots and the JM distances revealed the difficulty of distinguishing similar wetland classes using a single optical feature, in agreement with a previous study [80]. However, the inclusion of all optical features significantly increased the separability between wetland classes.
As shown in Figure 9, confusion errors occurred among all classes, especially those of wetlands using the pixel-based classification approach. Notably, the highest confusion was found between the swamp and upland classes in some cases. The upland class is characterized by dry forested land, and swamps are specified as woody (forested) wetland. This results in similarities in both the visual appearance and spectral/backscattering signatures for these classes. With regard to SAR signatures, for example, the dominant scattering mechanism for both classes is volume scattering, especially when the water table is low in swamp [81], which contributes to the misclassification between the two. This is of particular concern when shorter wavelengths (e.g., C-band) are employed, given their shallower penetration depth relative to that of longer wavelengths (e.g., L-band).
Confusion was also common among the herbaceous wetland classes, namely bog, fen, and marsh. This is attributable to the heterogeneity of the landscape in the study area. As field notes suggest, the herbaceous wetland classes were found adjacent to each other without clear cut borders, making them hardly distinguishable. This is particularly severe for bog and fen, since both have very similar ecological and visual characteristics. For example, both are characterized by peatlands, dominated by ecologically similar vegetation types of Sphagnum in bogs and Graminoid in fens.
Another consideration when interpreting the classification accuracies for different wetland classes is the availability of the training samples/polygons for the supervised classification. As shown in Table 1, for example, bogs have a larger number of training polygons compared to the swamp class. This is because NL has a moist and cool climate [43], which contributes to extensive peatland formation. Accordingly, bog and fen were potentially the most visited wetland classes during in-situ data collection. This resulted in the collection of a larger number of training samples/polygons for these classes. On the other hand, the swamp class is usually found in physically smaller areas relative to those of other classes; for example, in transition zones between wetland and other land cover classes. As such, they may have been dispersed and mixed with other land cover classes, making them difficult to distinguish by the classifier.
Comparison of the classification accuracies using optical and SAR images (i.e., S1 vs. S2 and S3 vs. S4) indicated, according to all evaluation indices in this study, the superiority of the former relative to the latter for wetland mapping in most cases. This suggests that the phenological variations in vegetative productivity captured by optical indices (e.g., NDVI), as well as the contrast between water and non-water classes captured by the NDWI index are more efficient for wetland mapping in our study area than the extracted features from dual-polarimetric SAR data. This finding is consistent with the results of a recent study [12] that employed optical, SAR, and topographic data for predicting the probability of wetland occurrence in Alberta, Canada, using the GEE platform. However, it should be acknowledged that the lower success of SAR compared to optical data is, at least, partially related to the fact that the Sentinel-1 sensor does not collect full-polarimetric data at the present time. This hinders the application of advanced polarimetric decomposition methods that demand full-polarimetric data. Several studies highlighted the great potential of polarimetric decomposition methods for identifying similar wetland classes by characterizing their various scattering mechanisms using such advanced approaches [19,56].
Despite the superiority of optical data relative to SAR, the highest classification accuracy was obtained by integrating the multi-year summer composites of SAR and optical imagery using the object-based approach (see Table 4, S5). In particular, this classification scenario demonstrates an improvement of about 9% and 4.5% in overall accuracy compared to the object-based classification using the multi-year summer SAR and optical composites, respectively. This is because optical and SAR data, based on angular and range measurements, collect information about the chemical and physical characteristics of wetland vegetation, respectively [82]; thus, the inclusion of both types of observation enhances the discrimination of backscattering/spectrally similar wetland classes [41,42]. Accordingly, it was concluded that the multi-year summer SAR/optical composite is very useful for improving overall classification accuracy by capturing the chemical, biophysical, structural, and phenological variations of herbaceous and woody wetland classes. This was later reaffirmed by the confusion matrix of the final classification map (see Figure 12), wherein confusion decreased compared to classifications based on either SAR or optical data alone (see Figure 9). Furthermore, the McNemar test indicated a statistically significant difference (p < 0.05) between object-based classifications using SAR vs. optical/SAR (S3 vs. S5) and optical vs. optical/SAR (S4 vs. S5) models (see Table 5).
Notably, the multi-year summer SAR/optical composite improved the producer’s accuracies of marsh and swamp classes. Specifically, the inclusion of SAR and optical data improved the producer’s accuracies of marsh in the final classification map by about 14% and 11% compared to the object-based classification using SAR and optical imagery on their own, respectively. This, too, occurred to a lesser degree for swamp, wherein the producer’s accuracies improved in the final classified map by about 12% and 10% compared to those of object-based classified maps using optical and SAR imagery, respectively. The accuracies for other wetland classes, namely bog and fen, were also improved by about 4% and 5%, respectively, in this case relative to the object-based classification using the multi-year optical composite.
Despite significant improvements in the producer’s accuracies for some wetland classes (e.g., marsh and swamp) using the SAR/optical data composite, marginal to no improvements were obtained in this case for the non-wetland classes compared to classification based only on optical data. In particular, the use of SAR data does not offer substantial gains beyond the use of optical imagery for distinguishing typical land cover classes, such as urban and deep-water, nor does it present any clear disadvantages. Nevertheless, combining both types of observations addresses the limitation that arises due to the inclement weather in geographic regions with near-permanent cloud cover, such as Newfoundland. Therefore, the results reveal the importance of incorporating multi-temporal optical/SAR data for classification of backscattering/spectrally similar land cover classes, such as wetland complexes. Accordingly, given the complementary advantages of SAR and optical imagery, the inclusion of both types of data still offers a potential avenue for further research in land cover mapping on a large scale.
The results demonstrate the superiority of object-based classification compared to the pixel-based approach in this study. This is particularly true when SAR imagery was employed, as the producer’s accuracies for all wetland classes were lower than 70% (see Figure 9a). Despite applying speckle reduction, speckle noise can remain, and this affects the classification accuracy during such processing. In contrast to the pixel-based approach, object-based classification benefits from both backscattering/spectral information, as well as contextual information within a given neighborhood. This further enhances semantic land cover information and is very useful for the classification of SAR imagery [31].
As noted in a previous study [83], the image mosaicking technique over a long time-period may increase classification errors in areas of high inter-annual change, causing a signal of seasonality to be overlooked. Although this image mosaicking technique is essential for addressing the limitation of frequent cloud cover for land cover mapping using optical remote sensing data across a broad spatial scale, this was mitigated in this study to a feasible extent. In particular, to diminish the effects of multi-seasonal observations, the mosaicked image in this study was produced from the multi-year summer composite rather than the multi-year, multi-seasonal composite. The effectiveness of using such multi-year seasonal (e.g., either spring or summer) composites has been previously highlighted, given the potential of such data to capture surface condition variations beneficial for wetland mapping [65]. The overall high accuracy of this technique obtained in this study further corroborates the value of such an approach for mapping wetlands at the provincial-level.
Although the classification accuracies obtained in our previous studies were slightly better in some cases (e.g., [19,31]), those studies required more time and resources than the current one. For example, our previous study [19] incorporated multi-frequency (X-, C-, and L-band), multi-polarization (full-polarimetric RADARSAT-2) SAR data to produce local-scale wetland inventories. However, the production of such inventories demanded significant labor in terms of data preparation, feature extraction, statistical analysis, and classification. Consequently, updating wetland inventories using such methods on a regular basis over a large scale is tedious and expensive. In contrast, the present study relies on open access, regularly updated remotely sensed imagery collected by the Sentinel missions at a 10 m spatial resolution, which is of great value for provincial- and national-scale wetland inventory maps that can be efficiently and regularly updated.
As mentioned earlier, GEE is an ideal platform that hosts Sentinel-1 and Sentinel-2 data and offers advanced processing functionality. This removes the need to download a large number of satellite images, which are already in “analysis ready” formats [34], and, as such, offers significant built-in time savings [84]. Despite these benefits, GEE’s limitations at the time of this research included the lack of atmospherically corrected Sentinel-2 data within its archive and of a parallelized atmospheric-correction method. This may introduce uncertainty due to the bidirectional reflectance effects caused by variations in sun, sensor, and surface geometries during satellite acquisitions [12]. Such atmospheric correction has been carried out in local applications, such as the estimation of forest aboveground biomass [85], using the Sentinel-2 processing toolbox. Notably, Level-2A Sentinel-2 bottom-of-atmosphere (BOA) data, which are atmospherically corrected, are of great value for extracting the most reliable temporal and spatial information, but such data are not yet available within GEE. Recent work, however, reported the potential inclusion of BOA Sentinel-2 data in the GEE archive in the near future [12]. Although the high accuracies of the wetland classifications in this study indicate that the effects of using top-of-atmosphere (TOA) reflectance may be negligible, a comparison between TOA and BOA Sentinel-2 data for wetland mapping is suggested for future research.
In the near future, the addition of more machine learning tools to the GEE API and more EO data to its catalog will further simplify information extraction and data processing. For example, the availability of deep learning approaches through the potential integration of TensorFlow with the GEE platform would offer unprecedented opportunities for several remote sensing tasks [13]. At present, however, employing such state-of-the-art classification algorithms across broad spatial scales requires downloading data for local processing and uploading the results back to GEE. Downloading such large volumes of remote sensing data is time consuming, given bandwidth limitations, and processing them demands a powerful local machine. Furthermore, full exploitation of deep learning methods for mapping wetlands at hierarchical levels requires abundant, high-quality, representative training samples.
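Moving imagery out of GEE for local deep-learning experiments typically goes through an export task; the snippet below is a hedged sketch of such an export, with the asset name, region, and description as placeholders.

```python
import ee

ee.Initialize()

composite = ee.Image('users/example/nl_summer_composite')  # hypothetical asset
region = ee.Geometry.Rectangle([-59.5, 46.6, -52.6, 51.8])

# Export the composite to Google Drive as GeoTIFF for local processing
# (e.g., training a CNN); at provincial extents the export may split into
# many tiles, and the transfer itself can take considerable time.
task = ee.batch.Export.image.toDrive(
    image=composite,
    description='nl_composite_for_local_dl',
    region=region,
    scale=10,
    maxPixels=1e13)
task.start()
```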
The approaches presented in this study may be extended to generate a reliable, hierarchical, national-scale Canadian wetland inventory map and are an essential step toward global-scale wetland mapping. However, more challenges are expected when the study area is extended to the national scale (i.e., Canada), with more cloud cover, more fragmented landscapes, and different dominant wetland classes across the country [86]. Notably, the biggest challenge in producing automated, national-scale wetland inventories is collecting a sufficient amount of high-quality training and testing samples to support dependable coding, rapid product delivery, and accurate wetland mapping at large scales. Although GEE is useful for discriminating wetland from non-wetland samples, it is currently inefficient for identifying hierarchical wetland ground-truth data. There are also challenges related to inconsistent wetland definitions at the global scale, which vary by country (e.g., the Canadian Wetland Classification System, New Zealand, and East Africa) [1]. However, given recent advances in cloud computing and big data, these barriers are eroding, and new opportunities for more comprehensive and dynamic views of the global extent of wetlands are arising. For example, the integration of Landsat and Sentinel data on the GEE platform will address the limitations of cloud cover and lead to the production of more accurate, finer-category wetland classification maps, which are of great benefit for hydrological and ecological monitoring of these valuable ecosystems [87]. The results of this study demonstrate the feasibility of generating provincial-level wetland inventories by leveraging cloud-computing resources such as GEE. The current study will contribute to the production of regular, consistent, provincial-scale wetland inventory maps that can support biodiversity conservation and the sustainable management of Newfoundland and Labrador's wetland resources.

5. Conclusions

Cloud-based computing resources and open-access EO data have caused a remarkable paradigm shift in the field of land cover mapping, replacing standard static maps with products that are more dynamic and application-specific. Leveraging the computational power of Google Earth Engine and the availability of high spatial resolution remote sensing data collected by the Copernicus Sentinels, this study produced the first detailed (category-based), provincial-level wetland inventory map of Newfoundland. In particular, multi-year summer Sentinel-1 and Sentinel-2 data were used to map a complex series of small and large, heterogeneous wetlands on the Island of Newfoundland, Canada, covering an approximate area of 106,000 km².
Multiple classification scenarios, both pixel- and object-based, were considered, and the discrimination capacities of optical and SAR data composites were compared. The results revealed the superiority of object-based classification relative to the pixel-based approach. Although the multi-year summer optical composite yielded higher accuracy than the multi-year summer SAR composite, the inclusion of both data types (i.e., SAR and optical) significantly improved wetland classification accuracies. An overall classification accuracy of 88.37% was achieved using an object-based RF classification with the multi-year (2016–2018) summer optical/SAR composite, wherein wetland and non-wetland classes were distinguished with accuracies beyond 70% and 90%, respectively.
This study further contributes to the development of Canadian wetland inventories, characterizes the spatial distribution of wetland classes over a previously unmapped area at high spatial resolution, and, importantly, augments previous local-scale wetland map products. Given the relatively similar ecological characteristics of wetlands across Canada, future work could extend the presented approach to other Canadian provinces and elsewhere, including areas with a greater diversity of wetland classes. Further extensions could also explore a more diverse range of multi-temporal datasets (e.g., the 30-year Landsat archive) to detect and understand wetland dynamics and trends over time in the province of Newfoundland and Labrador.

Supplementary Materials

The following are available online at https://0-www-mdpi-com.brum.beds.ac.uk/2072-4292/11/1/43/s1: the 10 m wetland extent product, which maps a complex series of small and large wetland classes accurately and precisely.

Author Contributions

M.M. and F.M. designed and performed the experiments, analyzed the data, and wrote the paper. B.S., S.H., and E.G. contributed editorial input and scientific insights to further improve the paper. All authors reviewed and commented on the manuscript.

Funding

This project was undertaken with the financial support of the Research & Development Corporation of the Government of Newfoundland and Labrador (now InnovateNL) under a grant to M. Mahdianpari (RDC 5404-2108-101) and of the Natural Sciences and Engineering Research Council of Canada under a grant to B. Salehi (NSERC RGPIN2015-05027).

Acknowledgments

Field data were collected by various organizations, including Ducks Unlimited Canada, the Government of Newfoundland and Labrador Department of Environment and Conservation, and the Nature Conservancy of Canada. The authors thank these organizations for their generous financial support and for providing such valuable datasets. The authors would also like to thank the Google Earth Engine team for providing cloud-computing resources, the European Space Agency (ESA) for providing open-access data, and the anonymous reviewers for their helpful comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Tiner, R.W.; Lang, M.W.; Klemas, V.V. Remote Sensing of Wetlands: Applications and Advances; CRC Press: Boca Raton, FL, USA, 2015.
2. Mitsch, W.J.; Bernal, B.; Nahlik, A.M.; Mander, Ü.; Zhang, L.; Anderson, C.J.; Jørgensen, S.E.; Brix, H. Wetlands, carbon, and climate change. Landsc. Ecol. 2013, 28, 583–597.
3. Mitsch, W.J.; Gosselink, J.G. The value of wetlands: Importance of scale and landscape setting. Ecol. Econ. 2000, 35, 25–33.
4. Gallant, A.L. The Challenges of Remote Monitoring of Wetlands. Remote Sens. 2015, 7, 10938–10950.
5. Maxa, M.; Bolstad, P. Mapping northern wetlands with high resolution satellite images and LiDAR. Wetlands 2009, 29, 248.
6. Tiner, R.W. Wetlands: An overview. In Remote Sensing of Wetlands; CRC Press: Boca Raton, FL, USA, 2015; pp. 20–35.
7. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Homayouni, S. Unsupervised Wishart Classfication of Wetlands in Newfoundland, Canada Using Polsar Data Based on Fisher Linear Discriminant Analysis. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, 41, 305.
8. Wulder, M.A.; Masek, J.G.; Cohen, W.B.; Loveland, T.R.; Woodcock, C.E. Opening the archive: How free data has enabled the science and monitoring promise of Landsat. Remote Sens. Environ. 2012, 122, 2–10.
9. Xie, Y.; Sha, Z.; Yu, M. Remote sensing imagery in vegetation mapping: A review. J. Plant Ecol. 2008, 1, 9–23.
10. Teluguntla, P.; Thenkabail, P.; Oliphant, A.; Xiong, J.; Gumma, M.K.; Congalton, R.G.; Yadav, K.; Huete, A. A 30-m landsat-derived cropland extent product of Australia and China using random forest machine learning algorithm on Google Earth Engine cloud computing platform. ISPRS J. Photogramm. Remote Sens. 2018, 144, 325–340.
11. Shelestov, A.; Lavreniuk, M.; Kussul, N.; Novikov, A.; Skakun, S. Exploring Google earth engine platform for Big Data Processing: Classification of multi-temporal satellite imagery for crop mapping. Front. Earth Sci. 2017, 5, 17.
12. Hird, J.N.; DeLancey, E.R.; McDermid, G.J.; Kariyeva, J. Google Earth Engine, open-access satellite data, and machine learning in support of large-area probabilistic wetland mapping. Remote Sens. 2017, 9, 1315.
13. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sens. Environ. 2017, 202, 18–27.
14. Sazib, N.; Mladenova, I.; Bolten, J. Leveraging the Google Earth Engine for Drought Assessment Using Global Soil Moisture Data. Remote Sens. 2018, 10, 1265.
15. Aguilar, R.; Zurita-Milla, R.; Izquierdo-Verdiguier, E.; de By, R.A. A Cloud-Based Multi-Temporal Ensemble Classifier to Map Smallholder Farming Systems. Remote Sens. 2018, 10, 729.
16. de Lobo Lobo, F.; Souza-Filho, P.W.M.; de Moraes Novo, E.M.L.; Carlos, F.M.; Barbosa, C.C.F. Mapping Mining Areas in the Brazilian Amazon Using MSI/Sentinel-2 Imagery (2017). Remote Sens. 2018, 10, 1178.
17. Kumar, L.; Mutanga, O. Google Earth Engine Applications since Inception: Usage, Trends, and Potential. Remote Sens. 2018, 10, 1509.
18. Waske, B.; Fauvel, M.; Benediktsson, J.A.; Chanussot, J. Machine learning techniques in remote sensing data analysis. In Kernel Methods for Remote Sensing Data Analysis; Wiley Online Library: Hoboken, NJ, USA, 2009; pp. 3–24.
19. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Motagh, M. Random forest wetland classification using ALOS-2 L-band, RADARSAT-2 C-band, and TerraSAR-X imagery. ISPRS J. Photogramm. Remote Sens. 2017, 130, 13–31.
20. Thanh Noi, P.; Kappas, M. Comparison of random forest, k-nearest neighbor, and support vector machine classifiers for land cover classification using Sentinel-2 imagery. Sensors 2018, 18, 18.
21. Huang, C.; Davis, L.S.; Townshend, J.R.G. An assessment of support vector machines for land cover classification. Int. J. Remote Sens. 2002, 23, 725–749.
22. Pal, M. Random forest classifier for remote sensing classification. Int. J. Remote Sens. 2005, 26, 217–222.
23. Rodriguez-Galiano, V.F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J.P. An assessment of the effectiveness of a random forest classifier for land-cover classification. ISPRS J. Photogramm. Remote Sens. 2012, 67, 93–104.
24. Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random forests for land cover classification. Pattern Recognit. Lett. 2006, 27, 294–300.
25. Whyte, A.; Ferentinos, K.P.; Petropoulos, G.P. A new synergistic approach for monitoring wetlands using Sentinels-1 and 2 data with object-based machine learning algorithms. Environ. Model. Softw. 2018, 104, 40–54.
26. Pekel, J.-F.; Cottam, A.; Gorelick, N.; Belward, A.S. High-resolution mapping of global surface water and its long-term changes. Nature 2016, 540, 418.
27. Hansen, M.C.; Potapov, P.V.; Moore, R.; Hancher, M.; Turubanova, S.A.A.; Tyukavina, A.; Thau, D.; Stehman, S.V.; Goetz, S.J.; Loveland, T.R. High-resolution global maps of 21st-century forest cover change. Science 2013, 342, 850–853.
28. Xiong, J.; Thenkabail, P.S.; Gumma, M.K.; Teluguntla, P.; Poehnelt, J.; Congalton, R.G.; Yadav, K.; Thau, D. Automated cropland mapping of continental Africa using Google Earth Engine cloud computing. ISPRS J. Photogramm. Remote Sens. 2017, 126, 225–244.
29. Tsai, Y.; Stow, D.; Chen, H.; Lewison, R.; An, L.; Shi, L. Mapping Vegetation and Land Use Types in Fanjingshan National Nature Reserve Using Google Earth Engine. Remote Sens. 2018, 10, 927.
30. Huang, H.; Chen, Y.; Clinton, N.; Wang, J.; Wang, X.; Liu, C.; Gong, P.; Yang, J.; Bai, Y.; Zheng, Y. Mapping major land cover dynamics in Beijing using all Landsat images in Google Earth Engine. Remote Sens. Environ. 2017, 202, 166–176.
31. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Brisco, B.; Mahdavi, S.; Amani, M.; Granger, J.E. Fisher Linear Discriminant Analysis of coherency matrix for wetland classification using PolSAR imagery. Remote Sens. Environ. 2018, 206, 300–317.
32. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Motagh, M.; Brisco, B. An efficient feature optimization for wetland mapping by synergistic use of SAR intensity, interferometry, and polarimetry data. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 450–462.
33. Ozesmi, S.L.; Bauer, M.E. Satellite remote sensing of wetlands. Wetlands Ecol. Manag. 2002, 10, 381–402.
34. d’Andrimont, R.; Lemoine, G.; van der Velde, M. Targeted Grassland Monitoring at Parcel Level Using Sentinels, Street-Level Images and Field Observations. Remote Sens. 2018, 10, 1300.
35. Aschbacher, J.; Milagro-Pérez, M.P. The European Earth monitoring (GMES) programme: Status and perspectives. Remote Sens. Environ. 2012, 120, 3–8.
36. Bwangoy, J.-R.B.; Hansen, M.C.; Roy, D.P.; De Grandi, G.; Justice, C.O. Wetland mapping in the Congo Basin using optical and radar remotely sensed data and derived topographical indices. Remote Sens. Environ. 2010, 114, 73–86.
37. Mahdianpari, M.; Salehi, B.; Rezaee, M.; Mohammadimanesh, F.; Zhang, Y. Very deep convolutional neural networks for complex land cover mapping using multispectral remote sensing imagery. Remote Sens. 2018, 10, 1119.
38. Rezaee, M.; Mahdianpari, M.; Zhang, Y.; Salehi, B. Deep convolutional neural network for complex wetland classification using optical remote sensing imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3030–3039.
39. Amarsaikhan, D.; Saandar, M.; Ganzorig, M.; Blotevogel, H.H.; Egshiglen, E.; Gantuyal, R.; Nergui, B.; Enkhjargal, D. Comparison of multisource image fusion methods and land cover classification. Int. J. Remote Sens. 2012, 33, 2532–2550.
40. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Brisco, B. An assessment of simulated compact polarimetric SAR data for wetland classification using random Forest algorithm. Can. J. Remote Sens. 2017, 43, 468–484.
41. van Beijma, S.; Comber, A.; Lamb, A. Random forest classification of salt marsh vegetation habitats using quad-polarimetric airborne SAR, elevation and optical RS data. Remote Sens. Environ. 2014, 149, 118–129.
42. Zhang, J. Multi-source remote sensing data fusion: Status and trends. Int. J. Image Data Fusion 2010, 1, 5–24.
43. Ecological Stratification Working Group. A National Ecological Framework for Canada; Agriculture and Agri-Food Canada, Research Branch, Centre for Land and Biological Resources Research, and Environment Canada, State of the Environment Directorate, Ecozone Analysis Branch: Ottawa/Hull, QC, Canada, 1996.
44. South, R. Biogeography and Ecology of the Island of Newfoundland; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1983; Volume 48, ISBN 9061931010.
45. Meades, S.J. Ecoregions of Newfoundland and Labrador; Parks and Natural Areas Division, Department of Environment and Conservation, Government of Newfoundland and Labrador: Corner Brook, NL, Canada, 1990.
46. Zhang, X.; Wu, B.; Ponce-Campos, G.; Zhang, M.; Chang, S.; Tian, F. Mapping up-to-Date Paddy Rice Extent at 10 M Resolution in China through the Integration of Optical and Synthetic Aperture Radar Images. Remote Sens. 2018, 10, 1200.
47. Marshall, I.B.; Schut, P.; Ballard, M. A National Ecological Framework for Canada: Attribute Data; Environmental Quality Branch, Ecosystems Science Directorate, Environment Canada and Research Branch, Agriculture and Agri-Food Canada: Ottawa, QC, Canada, 1999.
48. Sentinel-1 Observation Scenario—Planned Acquisitions—ESA. Available online: https://sentinel.esa.int/web/sentinel/missions/sentinel-1/observation-scenario (accessed on 13 November 2018).
49. Sentinel-1 Algorithms. Google Earth Engine API. Google Developers. Available online: https://developers.google.com/earth-engine/sentinel1 (accessed on 13 November 2018).
50. Gauthier, Y.; Bernier, M.; Fortin, J.-P. Aspect and incidence angle sensitivity in ERS-1 SAR data. Int. J. Remote Sens. 1998, 19, 2001–2006.
51. Lee, J.-S.; Wen, J.-H.; Ainsworth, T.L.; Chen, K.-S.; Chen, A.J. Improved sigma filter for speckle filtering of SAR imagery. IEEE Trans. Geosci. Remote Sens. 2009, 47, 202–213.
52. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F. The effect of PolSAR image de-speckling on wetland classification: Introducing a new adaptive method. Can. J. Remote Sens. 2017, 43, 485–503.
53. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Brisco, B.; Motagh, M. Multi-temporal, multi-frequency, and multi-polarization coherence and SAR backscatter analysis of wetlands. ISPRS J. Photogramm. Remote Sens. 2018, 142, 78–93.
54. Baghdadi, N.; Bernier, M.; Gauthier, R.; Neeson, I. Evaluation of C-band SAR data for wetlands mapping. Int. J. Remote Sens. 2001, 22, 71–88.
55. Steele-Dunne, S.C.; McNairn, H.; Monsivais-Huertero, A.; Judge, J.; Liu, P.-W.; Papathanassiou, K. Radar remote sensing of agricultural canopies: A review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 2249–2273.
56. de Almeida Furtado, L.F.; Silva, T.S.F.; de Moraes Novo, E.M.L. Dual-season and full-polarimetric C band SAR assessment for vegetation mapping in the Amazon várzea wetlands. Remote Sens. Environ. 2016, 174, 212–222.
57. Jensen, J.R. Remote Sensing of the Environment: An Earth Resource Perspective 2/e; Pearson Education: Delhi, India, 2009.
58. Ji, L.; Zhang, L.; Wylie, B. Analysis of dynamic thresholds for the normalized difference water index. Photogramm. Eng. Remote Sens. 2009, 75, 1307–1317.
59. Xu, H. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033.
60. Rogers, A.S.; Kearney, M.S. Reducing signature variability in unmixing coastal marsh Thematic Mapper scenes using spectral indices. Int. J. Remote Sens. 2004, 25, 2317–2335.
61. McFeeters, S.K. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432.
62. Flood, N. Seasonal composite Landsat TM/ETM+ images using the medoid (a multi-dimensional median). Remote Sens. 2013, 5, 6481–6500.
63. Griffiths, P.; van der Linden, S.; Kuemmerle, T.; Hostert, P. A pixel-based Landsat compositing algorithm for large area land cover mapping. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 6, 2088–2101.
64. Roy, D.P.; Ju, J.; Kline, K.; Scaramuzza, P.L.; Kovalskyy, V.; Hansen, M.; Loveland, T.R.; Vermote, E.; Zhang, C. Web-enabled Landsat Data (WELD): Landsat ETM+ composited mosaics of the conterminous United States. Remote Sens. Environ. 2010, 114, 35–49.
65. Wulder, M.; Li, Z.; Campbell, E.; White, J.; Hobart, G.; Hermosilla, T.; Coops, N. A National Assessment of Wetland Status and Trends for Canada’s Forested Ecosystems Using 33 Years of Earth Observation Satellite Data. Remote Sens. 2018, 10, 1623.
66. Swain, P.H.; Davis, S.M. Remote sensing: The quantitative approach. IEEE Trans. Pattern Anal. Mach. Intell. 1981, 713–714.
67. Padma, S.; Sanjeevi, S. Jeffries Matusita based mixed-measure for improved spectral matching in hyperspectral image analysis. Int. J. Appl. Earth Obs. Geoinf. 2014, 32, 138–151.
68. Schmidt, K.S.; Skidmore, A.K. Spectral discrimination of vegetation types in a coastal wetland. Remote Sens. Environ. 2003, 85, 92–108.
69. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31.
70. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Larsen, G.; Peddle, D.R. Mapping land-based oil spills using high spatial resolution unmanned aerial vehicle imagery and electromagnetic induction survey data. J. Appl. Remote Sens. 2018, 12, 036015.
71. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
72. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; English, J.; Chamberland, J.; Alasset, P.-J. Monitoring surface changes in discontinuous permafrost terrain using small baseline SAR interferometry, object-based classification, and geological features: A case study from Mayo, Yukon Territory, Canada. GIScience Remote Sens. 2018, 1–26.
73. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16.
74. Benz, U.C.; Hofmann, P.; Willhauck, G.; Lingenfelder, I.; Heynen, M. Multi-resolution, object-oriented fuzzy analysis of remote sensing data for GIS-ready information. ISPRS J. Photogramm. Remote Sens. 2004, 58, 239–258.
75. Achanta, R.; Süsstrunk, S. Superpixels and polygons using simple non-iterative clustering. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4895–4904.
76. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46.
77. McNemar, Q. Note on the sampling error of the difference between correlated proportions or percentages. Psychometrika 1947, 12, 153–157.
78. de Leeuw, J.; Jia, H.; Yang, L.; Liu, X.; Schmidt, K.; Skidmore, A.K. Comparing accuracy assessments to infer superiority of image classification methods. Int. J. Remote Sens. 2006, 27, 223–232.
79. Dingle Robertson, L.; King, D.J. Comparison of pixel- and object-based classification in land cover change mapping. Int. J. Remote Sens. 2011, 32, 1505–1529.
80. Adam, E.; Mutanga, O.; Rugege, D. Multispectral and hyperspectral remote sensing for identification and mapping of wetland vegetation: A review. Wetlands Ecol. Manag. 2010, 18, 281–296.
81. Mohammadimanesh, F.; Salehi, B.; Mahdianpari, M.; Brisco, B.; Motagh, M. Wetland Water Level Monitoring Using Interferometric Synthetic Aperture Radar (InSAR): A Review. Can. J. Remote Sens. 2018, 1–16.
82. Chen, B.; Xiao, X.; Li, X.; Pan, L.; Doughty, R.; Ma, J.; Dong, J.; Qin, Y.; Zhao, B.; Wu, Z. A mangrove forest map of China in 2015: Analysis of time series Landsat 7/8 and Sentinel-1A imagery in Google Earth Engine cloud computing platform. ISPRS J. Photogramm. Remote Sens. 2017, 131, 104–120.
83. Kelley, L.; Pitcher, L.; Bacon, C. Using Google Earth Engine to Map Complex Shade-Grown Coffee Landscapes in Northern Nicaragua. Remote Sens. 2018, 10, 952.
84. Jacobson, A.; Dhanota, J.; Godfrey, J.; Jacobson, H.; Rossman, Z.; Stanish, A.; Walker, H.; Riggio, J. A novel approach to mapping land conversion using Google Earth with an application to East Africa. Environ. Model. Softw. 2015, 72, 1–9.
85. Vafaei, S.; Soosani, J.; Adeli, K.; Fadaei, H.; Naghavi, H.; Pham, T.D.; Tien Bui, D. Improving accuracy estimation of forest aboveground biomass based on incorporation of ALOS-2 PALSAR-2 and Sentinel-2A imagery and machine learning: A case study of the Hyrcanian forest area (Iran). Remote Sens. 2018, 10, 172.
86. Dong, J.; Xiao, X.; Menarguez, M.A.; Zhang, G.; Qin, Y.; Thau, D.; Biradar, C.; Moore, B., III. Mapping paddy rice planting area in northeastern Asia with Landsat 8 images, phenology-based algorithm and Google Earth Engine. Remote Sens. Environ. 2016, 185, 142–154.
87. Wulder, M.A.; White, J.C.; Masek, J.G.; Dwyer, J.; Roy, D.P. Continuity of Landsat observations: Short term considerations. Remote Sens. Environ. 2011, 115, 747–751.
Figure 1. The geographic location of the study area with distribution of the training and testing polygons across four pilot sites on the Island of Newfoundland.
Figure 2. The total number of (a) ascending Synthetic Aperture Radar (SAR) observations (VV/VH) and (b) descending SAR observations (HH/HV) during summers of 2016, 2017 and 2018. The color bar represents the number of collected images.
Figure 3. Three examples of extracted features for land cover classification in this study. The multi-year summer composite of (a) span feature extracted from HH/HV Sentinel-1 data, (b) normalized difference vegetation index (NDVI), and (c) normalized difference water index (NDWI) features extracted from Sentinel-2 data.
Figure 4. (a) Spatial distribution of Sentinel-2 observations (total observations) during summers of 2016, 2017 and 2018 and (b) the number of observations affected by varying degrees of cloud cover (%) in the study area for each summer.
Figure 5. Box-and-whisker plot of the multi-year June composite illustrating the distribution of reflectance, NDVI, NDWI, and MSAVI2 for wetland classes obtained using pixel values extracted from training datasets. Note that black, horizontal bars within boxes illustrate median values, boxes demonstrate the lower and upper quartiles, and whiskers extend to minimum and maximum values.
Figure 6. Box-and-whisker plot of the multi-year July composite illustrating the distribution of reflectance, NDVI, NDWI, and MSAVI2 for wetland classes obtained using pixel values extracted from training datasets.
Figure 7. Box-and-whisker plot of the multi-year August composite illustrating the distribution of reflectance, NDVI, NDWI, and MSAVI2 for wetland classes obtained using pixel values extracted from training datasets.
Figure 8. The land cover maps of Newfoundland obtained from different classification scenarios, including (a) S1, (b) S2, (c) S3 and (d) S4 in this study.
Figure 9. The confusion matrices obtained from different classification scenarios, including (a) S1, (b) S2, (c) S3 and (d) S4 in this study.
Figure 10. The user’s accuracies for various land cover classes in different classification scenarios in this study.
Figure 11. The final land cover map of the Island of Newfoundland obtained from the object-based Random Forest (RF) classification using the multi-year summer SAR/optical composite. An overall accuracy of 88.37% and a Kappa coefficient of 0.85 were achieved. Six insets and their corresponding Sentinel-2 optical images are also shown to illustrate classification details. See the Supplementary Materials for details of the final classification map.
Figure 12. The confusion matrix for the final classification map obtained from the object-based RF classification using the multi-year summer SAR/optical composite (OA: 88.37%, K: 0.85).
Table 1. Number of training and testing polygons in this study.

Class             Training Polygons   Testing Polygons
bog               92                  91
fen               93                  92
marsh             75                  75
swamp             78                  79
shallow-water     55                  56
deep-water        17                  16
upland            92                  92
urban/bare land   99                  98
total             601                 599
Table 2. A description of extracted features from SAR and optical imagery.

Data         Feature Description                                                           Formula
Sentinel-1   vertically transmitted, vertically received SAR backscattering coefficient    $\sigma_{VV}^{0}$
             vertically transmitted, horizontally received SAR backscattering coefficient  $\sigma_{VH}^{0}$
             horizontally transmitted, horizontally received SAR backscattering coefficient $\sigma_{HH}^{0}$
             horizontally transmitted, vertically received SAR backscattering coefficient  $\sigma_{HV}^{0}$
             span or total scattering power                                                $|S_{VV}|^{2} + |S_{VH}|^{2}$, $|S_{HH}|^{2} + |S_{HV}|^{2}$
             difference between co- and cross-polarized observations                       $|S_{VV}|^{2} - |S_{VH}|^{2}$, $|S_{HH}|^{2} - |S_{HV}|^{2}$
             ratio of co- to cross-polarized observations                                  $|S_{VV}|^{2}/|S_{VH}|^{2}$, $|S_{HH}|^{2}/|S_{HV}|^{2}$
Sentinel-2   spectral bands 2 (blue), 3 (green), 4 (red), and 8 (NIR)                      $B_{2}$, $B_{3}$, $B_{4}$, $B_{8}$
             the normalized difference vegetation index (NDVI)                             $(B_{8} - B_{4})/(B_{8} + B_{4})$
             the normalized difference water index (NDWI)                                  $(B_{3} - B_{8})/(B_{3} + B_{8})$
             modified soil-adjusted vegetation index 2 (MSAVI2)                            $\bigl(2B_{8} + 1 - \sqrt{(2B_{8} + 1)^{2} - 8(B_{8} - B_{4})}\bigr)/2$
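As a worked translation of the Table 2 index formulas, the helper below appends the three indices to a Sentinel-2 image using the Earth Engine Python API; it is a sketch that assumes bands already rescaled to [0, 1] reflectance (Sentinel-2 digital numbers divided by 10,000), since the additive terms in MSAVI2 presume unit-scaled reflectance.

```python
import ee

ee.Initialize()

def add_indices(image):
    """Append the NDVI, NDWI, and MSAVI2 bands defined in Table 2.
    Assumes reflectance already scaled to [0, 1]."""
    ndvi = image.normalizedDifference(['B8', 'B4']).rename('NDVI')
    ndwi = image.normalizedDifference(['B3', 'B8']).rename('NDWI')
    msavi2 = image.expression(
        '(2 * NIR + 1 - sqrt((2 * NIR + 1) ** 2 - 8 * (NIR - RED))) / 2',
        {'NIR': image.select('B8'), 'RED': image.select('B4')}).rename('MSAVI2')
    return image.addBands(ndvi).addBands(ndwi).addBands(msavi2)
```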
Table 3. Jeffries–Matusita (JM) distances between pairs of wetland classes from the multi-year summer composite for extracted optical features in this study.

Optical Features   d1      d2      d3      d4      d5      d6      d7      d8      d9      d10
blue               0.002   0.204   0.470   1.153   0.232   0.299   1.218   0.520   1.498   0.380
green              0.002   0.331   0.391   0.971   0.372   0.418   1.410   0.412   1.183   0.470
red                0.108   0.567   0.570   1.495   0.546   0.640   1.103   0.634   1.391   0.517
NIR                0.205   0.573   0.515   1.395   0.364   0.612   1.052   0.649   1.175   1.776
NDVI               0.703   0.590   0.820   1.644   0.586   0.438   1.809   0.495   1.783   1.938
NDWI               0.268   0.449   0.511   1.979   0.643   0.519   1.792   0.760   1.814   1.993
MSAVI2             0.358   0.509   0.595   1.763   0.367   0.313   1.745   0.427   1.560   1.931
all                1.098   1.497   1.561   1.999   1.429   1.441   1.999   1.614   1.805   1.999

Note: d1: Bog/Fen, d2: Bog/Marsh, d3: Bog/Swamp, d4: Bog/Shallow-water, d5: Fen/Marsh, d6: Fen/Swamp, d7: Fen/Shallow-water, d8: Marsh/Swamp, d9: Marsh/Shallow-water, and d10: Swamp/Shallow-water.
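For readers who wish to reproduce Table 3-style separability scores offline, the following is a small NumPy sketch of the JM distance under the usual Gaussian assumption, where $JM = 2(1 - e^{-B})$ and $B$ is the Bhattacharyya distance; values approach 2 for fully separable class pairs, consistent with the "all" row above.

```python
import numpy as np

def jeffries_matusita(x1, x2):
    """JM distance between two classes; x1 and x2 are (pixels x features)
    sample arrays with at least two features, and the class-conditional
    distributions are assumed multivariate Gaussian."""
    m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
    c1, c2 = np.cov(x1, rowvar=False), np.cov(x2, rowvar=False)
    c = 0.5 * (c1 + c2)
    d = m1 - m2
    # Bhattacharyya distance for two multivariate normal distributions.
    b = (d @ np.linalg.solve(c, d)) / 8.0 + 0.5 * np.log(
        np.linalg.det(c) / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return 2.0 * (1.0 - np.exp(-b))
```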
Table 4. Overall accuracies and Kappa coefficients obtained from different classification scenarios in this study.

Classification   Data Composite   Scenario   Overall Accuracy (%)   Kappa Coefficient
pixel-based      SAR              S1         73.12                  0.68
pixel-based      optical          S2         77.16                  0.72
object-based     SAR              S3         79.14                  0.74
object-based     optical          S4         83.79                  0.80
object-based     SAR + optical    S5         88.37                  0.85
Table 5. The results of the McNemar test for different classification scenarios in this study.

Scenarios    χ²     p-Value
S1 vs. S3    5.21   0.023
S2 vs. S4    6.27   0.012
S3 vs. S5    9.27   0.0001
S4 vs. S5    7.06   0.008
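The χ² values in Table 5 follow from the McNemar statistic computed over the discordant test samples; a minimal sketch is given below (whether the continuity correction was applied in this study is not stated, so it is left as an option).

```python
from scipy.stats import chi2

def mcnemar(n01, n10, continuity=False):
    """McNemar test from discordant counts: n01 = samples correct only under
    scenario A, n10 = samples correct only under scenario B.
    Returns (chi-square statistic, p-value)."""
    diff = abs(n01 - n10)
    num = (diff - 1) ** 2 if continuity else diff ** 2
    stat = num / (n01 + n10)
    return stat, chi2.sf(stat, df=1)

# A statistic above 3.84 (df = 1, alpha = 0.05) indicates a significant
# difference between two classification scenarios, as in all rows of Table 5.
```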
