YOLO-SWD—An Improved Ship Recognition Algorithm for Feature Occlusion Scenarios
| Published in | Applied Sciences Vol. 15; no. 5; p. 2749 |
|---|---|
| Main Authors | , , |
| Format | Journal Article |
| Language | English |
| Published | Basel: MDPI AG, 01.03.2025 |
| ISSN | 2076-3417 |
| DOI | 10.3390/app15052749 |
| Summary: | Ship detection and recognition hold significant application value in both military and civilian domains. With the continuous advancement of deep learning technologies, multi-category ship detection and recognition methods based on deep learning have garnered increasing attention. However, challenges persist, such as feature occlusion caused by interfering objects, feature loss under cloudy and foggy weather, and insufficient accuracy in remote sensing imagery. This study aims to enhance the accuracy and robustness of ship recognition by improving deep learning-based object detection models, enabling the algorithm to perform ship detection and recognition effectively in feature-occluded scenarios. We propose a ship detection and recognition algorithm based on YOLOv11, which offers stronger feature extraction capabilities and whose multi-branch structure effectively captures features of targets at different scales. Three improved modules are introduced: the DLKA module enhances the perception of local details and global context through dynamic deformable convolution and a large-receptive-field attention mechanism; the CKSP module improves the model's ability to extract target boundaries and shapes; and the WTHead enhances the diversity and robustness of feature extraction. Comparative experiments with classical object detection models on visible and SAR datasets, which include a variety of feature occlusion scenarios, show that the proposed model achieved the best results across multiple metrics; specifically, it achieved an mAP of 83.9%, surpassing the second-best result by 2.7%. |
|---|---|
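
The summary above describes the DLKA module as pairing dynamic deformable convolution with a large-receptive-field attention mechanism. Below is a minimal PyTorch sketch of one way such a block could be wired, for illustration only; the offset-prediction layer, kernel sizes, dilation rate, the final gating step, and the class name `DeformableLKA` are assumptions and do not reproduce the authors' implementation.

```python
# Hypothetical sketch of a deformable large-kernel attention block, in the
# spirit of the DLKA module named in the abstract. All layer choices below
# (offset predictor, 5x5 deformable conv, 7x7 dilated depthwise conv, 1x1
# projection, multiplicative gating) are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class DeformableLKA(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 5, dilation: int = 3):
        super().__init__()
        # Predict 2 (x, y) offsets per kernel position for deformable sampling.
        self.offset_conv = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                     kernel_size=3, padding=1)
        # Deformable conv adapts its sampling grid to the target's shape.
        self.deform_conv = DeformConv2d(channels, channels, kernel_size,
                                        padding=kernel_size // 2)
        # Dilated depthwise conv enlarges the effective receptive field.
        self.dilated_conv = nn.Conv2d(channels, channels, kernel_size=7,
                                      padding=3 * dilation, dilation=dilation,
                                      groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_conv(x)
        attn = self.deform_conv(x, offsets)
        attn = self.dilated_conv(attn)
        attn = self.pointwise(attn)
        # Use the aggregated context as a gate on the original features.
        return x * attn


if __name__ == "__main__":
    block = DeformableLKA(channels=64)
    feat = torch.randn(1, 64, 40, 40)
    print(block(feat).shape)  # expected: torch.Size([1, 64, 40, 40])
```

The gating step (`x * attn`) follows the usual large-kernel-attention pattern of modulating input features with spatially aggregated context; a plug-in block like this would typically replace or augment a convolution block in the detector's backbone or neck.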