Extraction of buildings from high-resolution remote sensing images based on improved U-Net

Bibliographic Details
Published in: Scientific Bulletin. Series C, Electrical Engineering and Computer Science, no. 2, p. 275
Main Authors: Huang, Wenxi; Tao, Liufeng; Li, Xue; Hu, Xiaoyi
Format: Journal Article
Language: English
Published: Bucharest: University Polytechnica of Bucharest, 01.01.2025
ISSN: 2286-3540

More Information
Summary: To address the limitations of traditional classification methods for high-resolution remote sensing images, a modified semantic segmentation model based on U-Net was developed to extract buildings more efficiently and accurately. First, Google imagery was used to create sample data sets for training the improved U-Net network model. ResNet backbones of different depths were employed to extract semantic information from the images, and an attention mechanism module was added to refine the extracted feature maps and improve the classification of surface features. The experimental results showed that, compared with Support Vector Machine (SVM) and SegNet, the improved U-Net model (Attention Res-UNet) achieved better prediction performance and evaluation metrics, with mean accuracy, recall, F1 score, and Intersection over Union (IoU) of 92.4%, 87.9%, 91.5%, and 89.9%, respectively. The predictions of the improved U-Net model are closer to manual annotation, and the model can efficiently recognize and extract information from remote sensing images with high-precision results. The method is of practical significance for the extraction of surface features.
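The record does not include source code; the sketch below is a hypothetical PyTorch illustration of the kind of architecture the summary describes: a U-Net whose encoder is a ResNet (here torchvision's resnet34) with additive attention gates on the skip connections. All layer names, channel widths, and the choice of backbone depth are assumptions made for illustration, not the authors' published implementation.

```python
# Hypothetical sketch of an "Attention Res-UNet": U-Net decoder over a
# torchvision ResNet-34 encoder, with attention gates on the skip connections.
# Channel widths and layer choices are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet34


class AttentionGate(nn.Module):
    """Additive attention gate: reweights an encoder skip feature map
    using the coarser decoder (gating) feature map."""
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, gate, skip):
        g = F.interpolate(self.w_g(gate), size=skip.shape[2:],
                          mode="bilinear", align_corners=False)
        attn = torch.sigmoid(self.psi(F.relu(g + self.w_x(skip))))
        return skip * attn  # suppress irrelevant background responses


class DecoderBlock(nn.Module):
    """Upsample, gate and concatenate the skip connection, then refine."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.gate = AttentionGate(in_ch, skip_ch, out_ch)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        skip = self.gate(x, skip)
        x = F.interpolate(x, size=skip.shape[2:],
                          mode="bilinear", align_corners=False)
        return self.conv(torch.cat([x, skip], dim=1))


class AttentionResUNet(nn.Module):
    """Binary building extraction: ResNet-34 encoder + attention U-Net decoder."""
    def __init__(self, n_classes=1):
        super().__init__()
        enc = resnet34(weights=None)
        self.stem = nn.Sequential(enc.conv1, enc.bn1, enc.relu)   # 64 ch, 1/2
        self.pool = enc.maxpool
        self.enc1, self.enc2 = enc.layer1, enc.layer2             # 64, 128 ch
        self.enc3, self.enc4 = enc.layer3, enc.layer4             # 256, 512 ch
        self.dec3 = DecoderBlock(512, 256, 256)
        self.dec2 = DecoderBlock(256, 128, 128)
        self.dec1 = DecoderBlock(128, 64, 64)
        self.dec0 = DecoderBlock(64, 64, 32)
        self.head = nn.Conv2d(32, n_classes, kernel_size=1)

    def forward(self, x):
        s0 = self.stem(x)                 # 1/2 resolution
        s1 = self.enc1(self.pool(s0))     # 1/4
        s2 = self.enc2(s1)                # 1/8
        s3 = self.enc3(s2)                # 1/16
        s4 = self.enc4(s3)                # 1/32
        d = self.dec3(s4, s3)
        d = self.dec2(d, s2)
        d = self.dec1(d, s1)
        d = self.dec0(d, s0)
        logits = self.head(d)
        # Upsample back to input size; sigmoid of logits gives a building mask.
        return F.interpolate(logits, size=x.shape[2:],
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = AttentionResUNet()
    mask_logits = model(torch.randn(1, 3, 256, 256))
    print(mask_logits.shape)  # torch.Size([1, 1, 256, 256])
```

In a setup like this, the logits would typically be trained with a binary cross-entropy (or Dice) loss against manually annotated building masks, and metrics such as the reported accuracy, recall, F1 score, and IoU would be computed from the thresholded predictions.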
Bibliography: ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2