Study on Visual Detection Algorithm of Sea Surface Targets Based on Improved YOLOv3

Bibliographic Details
Published in: Sensors (Basel, Switzerland), Vol. 20, No. 24, p. 7263
Main Authors: Liu, Tao; Pang, Bo; Ai, Shangmao; Sun, Xiaoqiang
Format: Journal Article
Language: English
Published: Switzerland, MDPI AG, 18.12.2020
ISSN: 1424-8220
DOI: 10.3390/s20247263

Summary: Countries around the world have paid increasing attention to marine security, and sea-surface target detection is a key task in ensuring maritime safety. It is therefore of great significance to propose an efficient and accurate sea-surface target detection algorithm. The anchor-setting method of traditional YOLO v3 uses only the degree of overlap between the anchor and the ground-truth box as its criterion. As a result, the information in some feature maps cannot be used, and the accuracy required for target detection is hard to achieve in a complex sea environment. This paper therefore proposes two new anchor-setting methods for the visual detection of sea targets: the average method and the select-all method. In addition, cross PANet, a feature fusion structure for cross-feature maps, was developed and used to obtain a stronger baseline, cross YOLO v3, in which the different anchor-setting methods were combined with a focal loss for experimental comparison on two datasets: SeaBuoys, a dataset of sea buoys, and the existing SeaShips dataset of sea ships. The results show that the proposed methods significantly improve the accuracy of YOLO v3 in detecting sea-surface targets, with the highest mAP values on the two datasets reaching 98.37% and 90.58%, respectively.
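
The abstract's key technical claim is that traditional YOLO v3 assigns each ground-truth box to an anchor purely by overlap (IoU). As a minimal sketch of that criterion, and of the focal loss the paper combines with its anchor-setting methods, the Python below compares anchors to a box by shape-only IoU (the common YOLO v3 convention of aligning widths and heights at a shared corner). The anchor sizes, function names, and single-best-anchor rule here are illustrative assumptions; they are not the paper's average or select-all methods, which this record does not detail.

```python
import math

# Minimal sketch (not the paper's code): the shape-only IoU criterion that the
# abstract attributes to traditional YOLO v3 anchor assignment. Anchors and
# ground-truth boxes are compared by width/height alone, as if aligned at a
# common top-left corner.

def shape_iou(anchor_wh, box_wh):
    """IoU of two boxes that share the same top-left corner."""
    aw, ah = anchor_wh
    bw, bh = box_wh
    inter = min(aw, bw) * min(ah, bh)
    union = aw * ah + bw * bh - inter
    return inter / union

def best_anchor(anchors_wh, box_wh):
    """Single-best-anchor rule: the ground-truth box is assigned only to the
    anchor with the highest shape IoU, so other scales get no positive sample."""
    return max(range(len(anchors_wh)),
               key=lambda i: shape_iou(anchors_wh[i], box_wh))

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss (Lin et al., 2017), which the abstract pairs with the
    proposed anchor-setting methods; alpha and gamma are the common defaults."""
    pt = p if y == 1 else 1.0 - p
    a = alpha if y == 1 else 1.0 - alpha
    return -a * (1.0 - pt) ** gamma * math.log(max(pt, 1e-12))

# Hypothetical anchor sizes in pixels; real values are usually obtained by
# k-means clustering of the training-set box shapes.
anchors = [(10, 13), (30, 61), (116, 90)]
print(best_anchor(anchors, (28, 55)))  # -> 1 (the mid-scale anchor wins)
print(round(focal_loss(0.9, 1), 4))    # small loss for a confident positive
```

Under this single-best-anchor rule, a ground-truth box whose shape matches anchors on only one scale never contributes positive samples to the other scales' feature maps, which is consistent with the abstract's observation that some feature-map information goes unused.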