Temporal-Domain Adaptation for Satellite Image Time-Series Land-Cover Mapping With Adversarial Learning and Spatially Aware Self-Training
Abstract
Nowadays, satellite image time series (SITS) are commonly employed to derive land-cover maps (LCMs) to support decision makers in a variety of land management applications. In the most general workflow, the production of LCMs strongly relies on available ground-truth (GT) data to train supervised machine learning models. Unfortunately, these data are not always available due to time-consuming and costly field campaigns. In this scenario, the possibility of transferring a model learnt on a particular year (source domain) to a successive period of time (target domain), over the same study area, can save time and money. This kind of model transfer is challenging due to different acquisition conditions affecting each time period, which can result in distribution shifts between source and target domains. In the general field of machine learning, unsupervised domain adaptation (UDA) approaches are well suited to cope with the learning of models under distribution shifts between source and target domains. While widely explored in computer vision, they are still underinvestigated for SITS-based land-cover mapping, especially for the temporal transfer scenario. With the aim to cope with this scenario in the context of SITS-based land-cover mapping, here we propose spatially aligned domain-adversarial neural network, a framework that combines both adversarial learning and self-training to transfer a classification model from a time period (year) to a successive one on a specific study area. Experimental assessment on a study area located in Burkina Faso, characterized by challenging operational constraints, demonstrates the significance of our proposal. The obtained results show that our proposal outperforms all the competing UDA methods by 7 to 12 points of F1-score across three different transfer tasks.
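To make the adversarial component of the proposed framework more concrete, the sketch below shows a minimal domain-adversarial setup for per-pixel SITS classification, assuming PyTorch. A temporal encoder feeds both a label classifier (trained on source-year labels) and a domain discriminator attached through a gradient reversal layer, so that features become indistinguishable across years. All module names, dimensions, and the choice of a GRU encoder are illustrative assumptions, not the authors' implementation, and the spatially aware self-training stage is omitted.

```python
# Minimal sketch (assuming PyTorch) of a DANN-style adversarial branch for
# temporal-domain adaptation of a SITS classifier. Hypothetical names/sizes.
import torch
import torch.nn as nn
from torch.autograd import Function


class GradientReversal(Function):
    """Identity in the forward pass; reverses and scales gradients backward."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None


class DannSITS(nn.Module):
    """Shared temporal encoder with a label head (source labels only)
    and a domain head (source year vs. target year) trained adversarially."""

    def __init__(self, n_bands, n_classes, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(n_bands, hidden, batch_first=True)
        self.label_head = nn.Linear(hidden, n_classes)
        self.domain_head = nn.Linear(hidden, 2)

    def forward(self, x, lambda_=1.0):
        # x: (batch, time_steps, spectral_bands) per-pixel time series
        _, h = self.encoder(x)
        feat = h.squeeze(0)
        class_logits = self.label_head(feat)
        domain_logits = self.domain_head(GradientReversal.apply(feat, lambda_))
        return class_logits, domain_logits
```

In training, the classification loss is computed on labelled source-year pixels only, while the domain loss uses pixels from both years; the gradient reversal pushes the encoder toward year-invariant features, which the paper then complements with spatially aware self-training on the target year.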