Deep Learning for Language and Vision Tasks in Surveillance Applications

Authors

  • Adrián Pastor López-Monroy, Centro de Investigación en Matemáticas
  • Daniel Vallejo Aldana, Centro de Investigación en Matemáticas
  • Alfredo Arturo Elías Miranda, Centro de Investigación en Matemáticas
  • Juan Manuel García Carmona, Instituto Tecnológico Nacional de México
  • Humberto Perez Espinosa, Centro de Investigación Científica y de Educación Superior de Ensenada UT3

DOI:

https://doi.org/10.13053/cys-25-2-3867

Keywords:

Handgun detection, keyword spotting, object detection, YOLOv5

Abstract

Keyword spotting and handgun detection have been widely used to operate devices and to monitor surveillance systems more efficiently. Although deep learning approaches dominate both tasks, their effectiveness is mostly tested and evaluated on datasets of exceptional quality. The aim of this paper is to study how these tools perform when the information is captured by common devices, for example, commercial surveillance systems based on standard-resolution cameras or smartphone microphones. To this end, we build an audio dataset of speech commands recorded by different users on mobile devices. On the audio side, we evaluate and compare several state-of-the-art keyword spotting techniques against our own model, which outperforms the baselines and reference approaches, reaching an accuracy of 83%. For handgun detection, we fine-tune YOLOv5 to adapt the model to detect handguns in images and video. This model was tested on a new dataset of labeled images from commercial security cameras.
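For readers who want to reproduce this kind of adaptation, the sketch below illustrates the standard single-class fine-tuning workflow of the Ultralytics YOLOv5 repository. It is not the authors' code: the config file name (handguns.yaml), dataset paths, and hyperparameters are placeholders chosen only for illustration.

    # Minimal sketch of fine-tuning YOLOv5 for a single "handgun" class.
    # Dataset config (e.g. handguns.yaml; all paths are placeholders):
    #   train: datasets/handguns/images/train
    #   val:   datasets/handguns/images/val
    #   nc: 1
    #   names: ["handgun"]
    #
    # Training starts from COCO-pretrained weights with the repository's
    # standard script (hyperparameters are illustrative only):
    #   python train.py --img 640 --batch 16 --epochs 100 \
    #       --data handguns.yaml --weights yolov5s.pt
    #
    # The fine-tuned weights can then be loaded for inference on camera
    # frames through torch.hub:
    import torch

    model = torch.hub.load("ultralytics/yolov5", "custom",
                           path="runs/train/exp/weights/best.pt")  # placeholder path
    results = model("camera_frame.jpg")  # a frame grabbed from a surveillance camera
    results.print()  # summary of detected boxes, classes, and confidences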

Published

2021-05-01

Section

Articles