
Recognition of motion-blurred CCTs based on deep and transfer learning

<?xml version="1.0" encoding="UTF-8"?><modsCollection xmlns="http://www.loc.gov/mods/v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-8.xsd">
<mods version="3.8">
<titleInfo>
<title>Recognition of motion-blurred CCTs based on deep and transfer learning</title>
</titleInfo>
<name type="personal" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="MAPA20200022978">
<namePart>Zhu, Yanyan</namePart>
<nameIdentifier>MAPA20200022978</nameIdentifier>
</name>
<typeOfResource>text</typeOfResource>
<genre authority="marcgt">periodical</genre>
<originInfo>
<place>
<placeTerm type="code" authority="marccountry">esp</placeTerm>
</place>
<dateIssued encoding="marc">2020</dateIssued>
<issuance>serial</issuance>
</originInfo>
<language>
<languageTerm type="code" authority="iso639-2b">eng</languageTerm>
</language>
<physicalDescription>
<form authority="marcform">print</form>
<internetMediaType>application/pdf</internetMediaType>
</physicalDescription>
<abstract displayLabel="Summary">This paper applies deep and transfer learning to the recognition of motion-blurred Chinese character coded targets (CCTs), reducing the large sample counts and long training times that conventional methods require. First, a set of CCTs is designed, and a motion-blur image generation system provides samples for the recognition network. Then, the Otsu algorithm, morphological dilation, and the Canny operator are applied to the real-shot blurred images, and the target area is segmented with a minimum bounding box. Next, the samples are split into training and test sets at a 4:1 ratio. Under the TensorFlow framework, the convolutional layers of AlexNet are fixed and the fully-connected layers are retrained for transfer learning. Finally, extensive experiments on simulated and real-shot motion-blurred images show that network training takes 30 minutes and testing two seconds on average, with recognition accuracy reaching 98.6% on the simulated images and 93.58% on the real-shot ones. The method thus achieves high recognition accuracy without a large number of training samples or long training times, and can serve as a reference for the recognition of motion-blurred CCTs.</abstract>
<note type="statement of responsibility">Yun Shi, Yanyan Zhu</note>
<subject xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="MAPA20080553128">
<topic>Algorithms</topic>
</subject>
<subject xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="MAPA20080611200">
<topic>Artificial intelligence</topic>
</subject>
<subject xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="MAPA20080551797">
<topic>Sampling</topic>
</subject>
<classification authority="">922.134</classification>
<relatedItem type="host">
<titleInfo>
<title>Revista Iberoamericana de Inteligencia Artificial</title>
</titleInfo>
<originInfo>
<publisher>IBERAMIA, Sociedad Iberoamericana de Inteligencia Artificial, 2018-</publisher>
</originInfo>
<identifier type="issn">1988-3064</identifier>
<identifier type="local">MAP20200034445</identifier>
<part>
<text>31/12/2020 Volume 23 Number 66 - December 2020, p. 1-8</text>
</part>
</relatedItem>
<recordInfo>
<recordContentSource authority="marcorg">MAP</recordContentSource>
<recordCreationDate encoding="marc">201123</recordCreationDate>
<recordChangeDate encoding="iso8601">20220911190418.0</recordChangeDate>
<recordIdentifier source="MAP">MAP20200037231</recordIdentifier>
<languageOfCataloging>
<languageTerm type="code" authority="iso639-2b">spa</languageTerm>
</languageOfCataloging>
</recordInfo>
</mods>
</modsCollection>
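
The abstract above outlines a classical segmentation pipeline: Otsu binarization, morphological dilation, Canny edge detection, and a minimum bounding box around the target region. The following Python/OpenCV sketch illustrates such a pipeline under stated assumptions; the dilation kernel size, Canny thresholds, and input path are illustrative choices, not the authors' exact settings.

import cv2
import numpy as np

def segment_blurred_target(image_path):
    """Crop the coded-target region out of a motion-blurred photograph."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

    # Otsu's method selects the binarization threshold automatically.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Dilation reconnects character strokes broken apart by the blur
    # (5x5 kernel is an assumption).
    dilated = cv2.dilate(binary, np.ones((5, 5), np.uint8), iterations=1)

    # Canny edges outline the coded target (thresholds are assumptions).
    edges = cv2.Canny(dilated, 50, 150)

    # Minimum axis-aligned bounding box over all edge contours.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return gray  # nothing detected; return the full frame
    boxes = [cv2.boundingRect(c) for c in contours]
    x0 = min(x for x, _, _, _ in boxes)
    y0 = min(y for _, y, _, _ in boxes)
    x1 = max(x + w for x, _, w, _ in boxes)
    y1 = max(y + h for _, y, _, h in boxes)
    return gray[y0:y1, x0:x1]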
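
For the transfer-learning step, the abstract reports fixing AlexNet's convolutional layers and retraining the fully-connected layers under TensorFlow. Keras ships no built-in AlexNet, so the sketch below hand-builds an AlexNet-style feature extractor; the layer sizes, optimizer, input resolution, and NUM_CLASSES are assumptions, and loading the pretrained convolutional weights (without which freezing would be pointless) is omitted.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 100  # number of distinct coded targets (assumed)

def build_alexnet_style(num_classes):
    # AlexNet-style convolutional stack; in practice its weights would be
    # loaded from a pretrained model before being frozen.
    conv = models.Sequential([
        layers.Conv2D(96, 11, strides=4, activation="relu",
                      input_shape=(227, 227, 3)),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(256, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
    ], name="alexnet_conv")
    conv.trainable = False  # convolutional layers are fixed, per the abstract

    return models.Sequential([
        conv,
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),  # fully-connected layers
        layers.Dropout(0.5),                    # are the trainable part
        layers.Dense(4096, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_alexnet_style(NUM_CLASSES)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Training would then use the 4:1 train/test split of blurred samples, e.g.:
# model.fit(train_ds, validation_data=test_ds, epochs=...)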