Deep Learning for Language and Vision Tasks in Surveillance Applications

Adrián Pastor López-Monroy, Daniel Vallejo Aldana, Alfredo Arturo Elías Miranda, Juan Manuel García Carmona, Humberto Perez Espinosa

Abstract


Keyword spotting and handgun detection have been widely used to operate devices and monitor surveillance systems more efficiently. Although deep learning approaches dominate these tasks, their effectiveness is mostly tested and evaluated on datasets of exceptional quality. The aim of this paper is to study the performance of these tools when the information is captured by common devices, for example, commercial surveillance systems based on standard-resolution cameras or smartphone microphones. To this end, we build an audio dataset consisting of speech commands recorded with mobile devices by different users. For the audio task, we evaluate and compare state-of-the-art keyword spotting techniques against our own model, which outperforms the baselines and reference approaches, reaching an accuracy of 83%. For handgun detection, we fine-tuned YOLOv5 to detect handguns in images and video. This model was tested on a new dataset of labeled images from commercial security cameras.
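To illustrate how a fine-tuned YOLOv5 model can be applied to standard security-camera footage, the sketch below loads a custom checkpoint through the official ultralytics/yolov5 torch.hub entry point and runs it over video frames. This is not the authors' exact pipeline: the checkpoint name (handgun_best.pt), the video source, the confidence threshold, and the class label "handgun" are assumptions for illustration and depend on how the dataset was labeled and trained.

```python
# Minimal sketch, assuming a YOLOv5 checkpoint fine-tuned for handgun detection.
# 'handgun_best.pt', 'camera_feed.mp4', and the class name 'handgun' are hypothetical.
import cv2
import torch

# Load a custom (fine-tuned) YOLOv5 checkpoint via the official torch.hub entry point.
model = torch.hub.load("ultralytics/yolov5", "custom", path="handgun_best.pt")
model.conf = 0.4  # confidence threshold; chosen arbitrarily for this example

cap = cv2.VideoCapture("camera_feed.mp4")  # or an RTSP URL of a surveillance camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # YOLOv5's AutoShape wrapper expects RGB, so reverse OpenCV's BGR channel order.
    results = model(frame[..., ::-1])
    detections = results.pandas().xyxy[0]  # columns: xmin, ymin, xmax, ymax, confidence, class, name
    handguns = detections[detections["name"] == "handgun"]
    if not handguns.empty:
        print(f"handgun detected: {len(handguns)} box(es)")
cap.release()
```

In practice, the alert logic (printing above) would be replaced by whatever notification or recording mechanism the surveillance system uses.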

Keywords


Handgun detection, keyword spotting, object detection, YOLOv5
