Recognition of motion-blurred CCTs based on deep and transfer learning

Collection: Articles

Title: Recognition of motion-blurred CCTs based on deep and transfer learning / Yun Shi, Yanyan Zhu

Author: Shi, Yun

Notes: Summary: This paper applies deep and transfer learning to the recognition of motion-blurred Chinese character coded targets (CCTs), reducing the large sample counts and long training times of conventional methods. First, a set of CCTs is designed, and a motion-blur image generation system provides samples for the recognition network. Then, the OTSU algorithm, dilation, and the Canny operator are applied to the real shot blurred images, and the target area is segmented with a minimum bounding box. Next, the samples are split into training and test sets at a 4:1 ratio. Under the TensorFlow framework, the convolutional layers of AlexNet are frozen and the fully-connected layers are trained for transfer learning. Finally, extensive experiments on simulated and real motion-blurred images show that network training and testing take 30 minutes and two seconds on average, with recognition accuracies of 98.6% and 93.58%, respectively. The method thus achieves high recognition accuracy without requiring a large number of training samples or long training times, and can serve as a reference for the recognition of motion-blurred CCTs.

Subject / place / event: Algorithms
Artificial intelligence
Sampling

Secondary authors: Zhu, Yanyan

Other classifications: 922.134

Rights: In Copyright (InC): http://rightsstatements.org/vocab/InC/1.0/