Dynahead-YOLO-Otsu: an efficient DCNN-based landslide semantic segmentation method using remote sensing images
| Published in | Geomatics, Natural Hazards and Risk, Vol. 15, No. 1 |
|---|---|
| Main Authors | , , , , , , |
| Format | Journal Article |
| Language | English |
| Published | Abingdon: Taylor & Francis, 31.12.2024 |
| Subjects | |
| ISSN | 1947-5705, 1947-5713 |
| DOI | 10.1080/19475705.2024.2398103 |
| Summary: | Recent advancements in deep convolutional neural networks (DCNNs) have significantly improved landslide identification using remote sensing images. Pixel-wise semantic segmentation (PSS) and object-oriented detection (OOD) are the two dominant approaches; PSS is the stronger of the two at providing detailed delineation of landslide shapes. However, PSS is limited by the difficulty of labelling training data and by lower segmentation speed compared with OOD. In this paper, we propose an efficient DCNN-based landslide semantic segmentation method, called Dynahead-YOLO-Otsu, which performs PSS on top of OOD results. Potential landslide regions are first located by the OOD-based Dynahead-YOLO model, which enhances the capacity to detect landslides with variable proportions and complex backgrounds in the images. The preliminary detections are then processed with the Otsu binarization algorithm, which clusters landslide pixels within the detected regions to produce the semantic segmentation. To validate the performance, we tested the proposed method on an open-source dataset containing 950 landslide images. We compared the results with three state-of-the-art DCNN- and PSS-based approaches, namely DeepLab v3+, PSPNet, and U-Net. Results demonstrate that the proposed method achieves comparable Recall (71.80%) and F1 score (75.80%), with average improvements of 22% in Precision and 16% in IoU. |
|---|---|
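The abstract describes a two-stage pipeline: an object detector proposes landslide bounding boxes, and Otsu binarization then separates landslide pixels from background inside each box. The sketch below illustrates only the second stage, assuming detections are already available (the paper uses its Dynahead-YOLO model; any detector returning pixel-coordinate boxes would serve for illustration). Function and variable names such as `segment_landslides` are illustrative assumptions, not taken from the paper's code.

```python
import cv2
import numpy as np


def segment_landslides(image_bgr, boxes):
    """Return a binary mask (uint8, 0/255) of landslide pixels.

    image_bgr : H x W x 3 remote sensing image (BGR, as loaded by cv2.imread)
    boxes     : iterable of (x1, y1, x2, y2) detections in pixel coordinates
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    mask = np.zeros(gray.shape, dtype=np.uint8)

    for (x1, y1, x2, y2) in boxes:
        roi = gray[y1:y2, x1:x2]
        if roi.size == 0:
            continue
        # Otsu's method picks the threshold maximizing between-class variance
        # inside the detected region only, so it adapts to local contrast.
        # Whether landslides fall in the bright or dark class depends on the
        # imagery; THRESH_BINARY_INV may be needed for other sensors/scenes.
        _, roi_mask = cv2.threshold(
            roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU
        )
        mask[y1:y2, x1:x2] = np.maximum(mask[y1:y2, x1:x2], roi_mask)

    return mask


if __name__ == "__main__":
    # Hypothetical usage: in the full pipeline, boxes come from the OOD stage.
    img = cv2.imread("landslide_scene.png")
    detections = [(120, 80, 340, 260)]  # placeholder bounding box
    landslide_mask = segment_landslides(img, detections)
    cv2.imwrite("landslide_mask.png", landslide_mask)
```

Restricting Otsu thresholding to the detected regions, rather than the full scene, is what keeps the segmentation fast while avoiding false positives from background terrain outside the proposed boxes.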