Autonomous Structural Visual Inspection Using Region‐Based Deep Learning for Detecting Multiple Damage Types

Bibliographic Details
Published in: Computer-Aided Civil and Infrastructure Engineering, Vol. 33, No. 9, pp. 731-747
Main Authors: Cha, Young‐Jin; Choi, Wooram; Suh, Gahyun; Mahmoudkhani, Sadegh; Büyüköztürk, Oral
Format: Journal Article
Language: English
Published: Hoboken: Wiley Subscription Services, Inc., 01.09.2018
ISSN: 1093-9687, 1467-8667
DOI: 10.1111/mice.12334

Summary: Computer vision‐based techniques were developed to overcome the limitations of visual inspection by trained human inspectors and to detect structural damage in images remotely, but most methods detect only specific types of damage, such as concrete or steel cracks. To provide quasi real‐time simultaneous detection of multiple types of damage, a Faster Region‐based Convolutional Neural Network (Faster R‐CNN)‐based structural visual inspection method is proposed. To realize this, a database of 2,366 images (500 × 375 pixels) labeled for five damage types—concrete crack, steel corrosion at two levels (medium and high), bolt corrosion, and steel delamination—is developed. The architecture of the Faster R‐CNN is then modified, trained, validated, and tested using this database. Results show average precision (AP) ratings of 90.6%, 83.4%, 82.1%, 98.1%, and 84.7% for the five damage types, respectively, with a mean AP of 87.8%. The robustness of the trained Faster R‐CNN is evaluated and demonstrated using 11 new 6,000 × 4,000‐pixel images of different structures, and its performance is compared to that of a traditional CNN‐based method. Because the proposed method provides a remarkably fast test speed (0.03 seconds per image at 500 × 375 resolution), a framework for quasi real‐time damage detection on video using the trained networks is also developed.
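
The abstract describes running a trained Faster R‐CNN frame by frame for quasi real‐time damage detection on video. The sketch below illustrates that kind of inference loop, assuming a PyTorch/torchvision Faster R‐CNN and OpenCV video I/O as stand-ins; it is not the authors' implementation, and the input file name, confidence threshold, and label mapping are hypothetical placeholders.

```python
# Hypothetical sketch of frame-by-frame multi-damage detection on video.
# The paper trains its own modified Faster R-CNN on a 2,366-image database;
# here a torchvision Faster R-CNN is used as an illustrative stand-in and
# would need trained weights for the five damage classes to be meaningful.
import cv2
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

NUM_CLASSES = 6  # five damage types + background

# Build the detector and switch to inference mode.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=NUM_CLASSES)
model.eval()

DAMAGE_LABELS = {1: "concrete crack", 2: "steel corrosion (medium)",
                 3: "steel corrosion (high)", 4: "bolt corrosion",
                 5: "steel delamination"}

cap = cv2.VideoCapture("inspection_video.mp4")  # hypothetical input video
with torch.no_grad():
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV gives BGR uint8; convert to an RGB float tensor in [0, 1].
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        detections = model([to_tensor(rgb)])[0]
        for box, label, score in zip(detections["boxes"],
                                     detections["labels"],
                                     detections["scores"]):
            if score < 0.5:  # confidence threshold (assumed value)
                continue
            x1, y1, x2, y2 = map(int, box.tolist())
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
            cv2.putText(frame, f"{DAMAGE_LABELS.get(int(label), '?')} {score:.2f}",
                        (x1, max(y1 - 5, 0)), cv2.FONT_HERSHEY_SIMPLEX,
                        0.5, (0, 0, 255), 1)
        cv2.imshow("damage detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```

At the reported test speed of roughly 0.03 seconds per 500 × 375 frame, a loop of this shape would process video at quasi real‐time rates; larger frames would typically be resized or tiled before detection.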