Conditional autoregressive-tunicate swarm algorithm based generative adversarial network for violent crowd behavior recognition

Bibliographic Details
Published in: Artificial Intelligence Review, Vol. 56, no. Suppl 2, pp. 2099-2123
Main Authors: Singh, Juginder Pal; Kumar, Manoj
Format: Journal Article
Language: English
Published: Dordrecht: Springer Netherlands (Springer Nature B.V.), 01.11.2023
ISSN: 0269-2821; 1573-7462
DOI: 10.1007/s10462-023-10571-8


More Information
Summary: Violent crowd behavior detection has gained significant attention in computer vision systems. Diverse crowd behavior detection approaches have been introduced to detect violent behavior, but improving the recognition rate remains a complex task due to crowd diversity, mutual occlusion between crowds, and the diversity of monitoring scenes. Therefore, a crowd behavior recognition mechanism is introduced using a Conditional Autoregressive-Tunicate Swarm Algorithm based Generative Adversarial Network (CA-TSA based GAN) to detect violent behavior. The developed CA-TSA is modeled by combining Conditional Autoregressive Value at Risk by Regression Quantiles with the Tunicate Swarm Algorithm. Initially, features such as the Tanimoto-based Violence Flows descriptor, Local Ternary Patterns, and the Gray-Level Co-occurrence Matrix are extracted from the video frames. Then, crowd behavior recognition is performed by the GAN, which distinguishes abnormal from normal crowd behaviors. Here, the GAN is trained by the proposed CA-TSA. Moreover, the performance of the proposed method is analyzed using the ASLAN challenge dataset. The developed model achieves accuracy, sensitivity, and specificity values of 93.688%, 94.261%, and 94.051%, respectively.
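
As a rough illustration of two of the handcrafted texture features named in the abstract (Gray-Level Co-occurrence Matrix statistics and a Local Ternary Pattern histogram), the NumPy sketch below computes them for a single grayscale frame. It is not the authors' implementation: the horizontal-only co-occurrence matrix, the 4-neighborhood ternary code, and the threshold t=5 are simplifying assumptions made here, and the Tanimoto-based Violence Flows descriptor, the GAN classifier, and the CA-TSA training scheme are not reproduced.

```python
import numpy as np

def glcm_features(gray, levels=8):
    """Horizontal-neighbor gray-level co-occurrence matrix with two summary statistics."""
    q = (gray.astype(np.float64) / 256.0 * levels).astype(int)   # quantize to `levels` bins
    glcm = np.zeros((levels, levels), dtype=np.float64)
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):        # count horizontal pixel pairs
        glcm[i, j] += 1
    glcm /= max(glcm.sum(), 1.0)                                  # normalize to probabilities
    ii, jj = np.indices(glcm.shape)
    contrast = float(np.sum(glcm * (ii - jj) ** 2))
    homogeneity = float(np.sum(glcm / (1.0 + np.abs(ii - jj))))
    return np.array([contrast, homogeneity])

def ltp_histogram(gray, t=5):
    """Local ternary pattern codes over the 4-neighborhood, returned as a normalized histogram."""
    g = gray.astype(int)
    c = g[1:-1, 1:-1]                                             # central pixels
    codes = np.zeros_like(c)
    for dy, dx in [(-1, 0), (0, 1), (1, 0), (0, -1)]:
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        tern = np.where(nb > c + t, 2, np.where(nb < c - t, 0, 1))  # ternary code in {0,1,2}
        codes = codes * 3 + tern                                     # base-3 encoding
    hist, _ = np.histogram(codes, bins=3 ** 4, range=(0, 3 ** 4))
    return hist / max(hist.sum(), 1)

# Usage on a synthetic 64x64 "frame"; real frames would come from the surveillance video.
frame = (np.random.rand(64, 64) * 255).astype(np.uint8)
feature_vector = np.concatenate([glcm_features(frame), ltp_histogram(frame)])
print(feature_vector.shape)   # (83,) = 2 GLCM statistics + 81 LTP histogram bins
```

In the paper's pipeline, per-frame feature vectors of this kind would be passed to the GAN-based recognizer; the concatenation above is only meant to show the general shape of such a descriptor.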