A comparative analysis of optical sensor artifacts across neural network-based ATR algorithms


Bibliographic Details
Main Author: Davidson, Quinton T.
Format: Conference Proceeding
Language: English
Published: SPIE, 07.06.2024
ISBN: 1510673962; 9781510673960
ISSN: 0277-786X
DOI: 10.1117/12.3014530


More Information
Summary: With the proliferation of space-based optical systems and the corresponding increase in overhead imagery data, there is a growing need for automatic target recognition (ATR) algorithms that both effectively identify objects of interest and remove the burden of analysis from a finite number of human ground operators. Although recent state-of-the-art (SOTA) deep learning architectures such as Convolutional Neural Networks (CNNs) and Detection Transformers (DETRs) have shown strong performance on these tasks, that performance can degrade substantially when operating on data containing unexpected anomalies outside their original training sets. Likewise, the increased automation of these processes magnifies the harm an ATR algorithm can cause before a human analyst recognizes any problem in the data downstream. Space-based optical systems rely on accurate calibration to produce clean imagery, and ATR algorithms are therefore subject to sensor calibration artifacts. Previous work has characterized common calibration artifacts, such as sensor noise and failed-detector artifacts, as they affect the performance of an Inceptionv1-based ATR algorithm. This paper extends that analysis to multiple CNN- and transformer-based object detection architectures to characterize differences in performance degradation across various SOTA ATR algorithms. Notably, we found the RT-DETR architecture to be more robust to uniformly distributed random scaling factors and random pixel failures than YOLOv8, YOLOv9, and Faster R-CNN, particularly when detecting large objects such as container ships and tankers. These results summarize the expected performance impact of common calibration artifacts on ATR algorithms and can inform algorithm selection when designing systems that leverage overhead imagery.
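As a minimal illustrative sketch of the two artifact types the abstract highlights, uniformly distributed random gain (scaling) errors and random detector pixel failures could be injected into a test image as below. This is not the paper's implementation; the function name, parameter names, and default values are hypothetical assumptions chosen for illustration.

```python
import numpy as np

def apply_calibration_artifacts(image, scale_range=(0.9, 1.1),
                                failure_fraction=0.01, seed=None):
    """Simulate two common calibration artifacts on a 2-D image array:
    per-pixel uniform random gain errors and randomly failed (dead) pixels.
    All names and defaults here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    # Uniformly distributed random scaling factors (gain miscalibration),
    # one independent gain per detector pixel.
    gains = rng.uniform(scale_range[0], scale_range[1], size=image.shape)
    degraded = image.astype(float) * gains
    # Random pixel failures: a chosen fraction of detectors read zero.
    failed = rng.random(image.shape) < failure_fraction
    degraded[failed] = 0.0
    return degraded

# Example: degrade a flat test frame with 10% simulated pixel failures.
frame = np.ones((64, 64))
noisy = apply_calibration_artifacts(frame, failure_fraction=0.1, seed=0)
```

Sweeping `scale_range` and `failure_fraction` over an evaluation set is one straightforward way to reproduce the kind of robustness comparison the abstract describes across detector architectures.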
Bibliography: Conference Date: 2024-04-21 to 2024-04-26
Conference Location: National Harbor, Maryland, United States