University of Twente Student Theses


Deep Learning-based Multimodal Fusion of Sentinel-1 and Sentinel-2 Data for Mapping Deforested Areas in the Amazon Rainforest

Biswas, Biman (2022) Deep Learning-based Multimodal Fusion of Sentinel-1 and Sentinel-2 Data for Mapping Deforested Areas in the Amazon Rainforest.

Abstract: As deforestation continues to increase rapidly, mapping it reliably and efficiently is critical to protecting tropical rainforests and implementing effective containment policies. Reliable maps also support monitoring deforestation and assessing its effects on local and global climate and on biodiversity loss. However, conventional methods that map deforestation using optical satellite imagery suffer from persistent cloud cover and are often impractical for complex, large-scale analysis. Synthetic aperture radar (SAR) images can penetrate cloud cover and provide an alternative data source for monitoring deforestation. This research therefore performs a fully convolutional network (FCN) based multimodal fusion of optical and SAR data to map accumulated deforestation regardless of atmospheric conditions. The experiments were carried out in parts of Pará state in the Brazilian Amazon. 10 m Sentinel-1 (S-1) SAR data and the 10 m bands of Sentinel-2 (S-2) optical data were used as input, and primary forest and non-forest data from the Brazilian Amazon Deforestation Monitoring Program (PRODES) served as reference data. Five image pairs covering five cloud scenarios, from 0% to 100% cloud cover, were used to prepare the training, validation, and testing data. U-Net variants with early fusion, late fusion, and spatial attention mechanisms were applied to the two input data sets under two experimental setups: scenario-1 trained and tested on the same image, while scenario-2 trained and tested on images from different dates. The scenario-1 results suggest that the standalone S-2 model outperforms every other model in the zero percent cloud scenario; the fusion-based models come very close to standalone S-2 performance there but do not improve on it.
As expected, standalone S-2 performance degrades abruptly as cloud cover increases. The scenario-2 results suggest that, by fusing S-1 and S-2 images, the models can produce strong classification results even under extreme cloud cover. Further investigation into improving fusion accuracy under cloud-free conditions in scenario-2 is left for future work.
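The early-fusion setup described in the abstract can be sketched as stacking co-registered S-1 and S-2 patches along the channel axis before they enter a U-Net. This is an illustrative sketch, not the thesis implementation: the function name, patch size, and band counts (two S-1 polarizations, e.g. VV/VH, plus the four 10 m S-2 bands B2/B3/B4/B8) are assumptions.

```python
import numpy as np

def early_fusion(s1: np.ndarray, s2: np.ndarray) -> np.ndarray:
    """Concatenate co-registered S-1 and S-2 patches channel-wise.

    Both inputs are (H, W, C) arrays resampled to the same 10 m grid,
    so fusion reduces to stacking along the last (channel) axis.
    """
    if s1.shape[:2] != s2.shape[:2]:
        raise ValueError("Patches must share the same spatial extent")
    return np.concatenate([s1, s2], axis=-1)

# Hypothetical 256x256 patches: 2 SAR channels + 4 optical bands
s1_patch = np.zeros((256, 256, 2), dtype=np.float32)  # e.g. VV, VH backscatter
s2_patch = np.zeros((256, 256, 4), dtype=np.float32)  # e.g. B2, B3, B4, B8
fused = early_fusion(s1_patch, s2_patch)
print(fused.shape)  # (256, 256, 6)
```

A late-fusion variant would instead run separate encoder branches on `s1_patch` and `s2_patch` and merge their feature maps deeper in the network; the early-fusion form above merges at the raw-band level.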
Item Type: Essay (Master)
Faculty: ITC: Faculty of Geo-information Science and Earth Observation
Subject: 38 earth sciences, 43 environmental science, 54 computer science
Programme: Geoinformation Science and Earth Observation MSc (75014)

