<?xml version="1.0" encoding="UTF-8"?><modsCollection xmlns="http://www.loc.gov/mods/v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-8.xsd">
<mods version="3.8">
<titleInfo>
<title>Fast segmentation of point clouds using a convolutional neural network for helping visually impaired people find the closest traversable region</title>
</titleInfo>
<name type="personal" usage="primary" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="MAPA20220009553">
<namePart>Tinizaray, Paúl</namePart>
<nameIdentifier>MAPA20220009553</nameIdentifier>
</name>
<typeOfResource>text</typeOfResource>
<genre authority="marcgt">periodical</genre>
<originInfo>
<place>
<placeTerm type="code" authority="marccountry">esp</placeTerm>
</place>
<dateIssued encoding="marc">2022</dateIssued>
<issuance>serial</issuance>
</originInfo>
<language>
<languageTerm type="code" authority="iso639-2b">eng</languageTerm>
</language>
<physicalDescription>
<form authority="marcform">print</form>
<internetMediaType>application/pdf</internetMediaType>
</physicalDescription>
<abstract displayLabel="Summary">In this paper, we introduce an approach for helping visually impaired people find the closest-to-user traversable region. The aim of our work is to reduce the computational cost of this task. For this purpose, we develop a convolutional neural network that classifies patches to segment floor regions in a point cloud. Segmented regions are then evaluated by their size and position in the point cloud to identify the closest-to-user traversable region. We evaluate our approach on the NYU-v2 dataset and find that by searching only the lower section of the point cloud, the processing time can be reduced while still finding the closest floor regions. Our approach achieves a lower processing time than related works, making it suitable for quickly finding the closest-to-user traversable region in point clouds.
</abstract>
<accessCondition type="use and reproduction">The digital copy is distributed under an "Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)" license</accessCondition>
<note type="statement of responsibility">Paúl Tinizaray, Wilbert Aguilar, José Lucio</note>
<subject xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="MAPA20080611200">
<topic>Inteligencia artificial</topic>
</subject>
<subject xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="MAPA20210014260">
<topic>Discapacidad física</topic>
</subject>
<subject xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="MAPA20080562144">
<topic>Discapacidad</topic>
</subject>
<classification>922.134</classification>
<relatedItem type="host">
<titleInfo>
<title>Revista Iberoamericana de Inteligencia Artificial</title>
</titleInfo>
<originInfo>
<publisher>IBERAMIA, Sociedad Iberoamericana de Inteligencia Artificial</publisher>
<dateIssued>2018-</dateIssued>
</originInfo>
<identifier type="issn">1988-3064</identifier>
<identifier type="local">MAP20200034445</identifier>
<part>
<text>05/12/2022, Volume 25, Number 70, December 2022, p. 50-63</text>
</part>
</relatedItem>
<recordInfo>
<recordContentSource authority="marcorg">MAP</recordContentSource>
<recordCreationDate encoding="marc">221124</recordCreationDate>
<recordChangeDate encoding="iso8601">20230908092129.0</recordChangeDate>
<recordIdentifier source="MAP">MAP20220034760</recordIdentifier>
<languageOfCataloging>
<languageTerm type="code" authority="iso639-2b">spa</languageTerm>
</languageOfCataloging>
</recordInfo>
</mods>
</modsCollection>
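
As a companion to the abstract in the record above, the following is a minimal, hedged sketch of the pipeline it describes: restrict the search to the lower section of an organized point cloud, classify fixed-size patches as floor / not floor with a small CNN, and pick the closest floor patch by depth. The organized H x W x 3 layout, the patch size, the network architecture, and the use of mean patch depth as a proximity score are illustrative assumptions, not the authors' actual configuration; training of the classifier and the region size/position filtering mentioned in the abstract are omitted.

import numpy as np
import torch
import torch.nn as nn

PATCH = 16  # illustrative patch size (pixels); an assumption, not taken from the paper


class PatchClassifier(nn.Module):
    """Tiny CNN labelling a 3 x PATCH x PATCH patch as floor / not floor (illustrative only)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * (PATCH // 4) ** 2, 2)  # logits: floor vs. not floor

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def closest_floor_patch(cloud: np.ndarray, model: PatchClassifier, lower_fraction: float = 0.5):
    """Search only the lower part of an organized H x W x 3 cloud (z = depth)
    and return ((row, col), depth) of the closest floor patch, or None."""
    h = cloud.shape[0]
    top = int(h * (1.0 - lower_fraction))  # skip the upper section of the cloud
    lower = cloud[top:]

    # Cut the lower section into non-overlapping PATCH x PATCH patches.
    rows, cols = lower.shape[0] // PATCH, lower.shape[1] // PATCH
    patches = (lower[:rows * PATCH, :cols * PATCH]
               .reshape(rows, PATCH, cols, PATCH, 3)
               .transpose(0, 2, 4, 1, 3)           # -> rows, cols, channels, PATCH, PATCH
               .reshape(-1, 3, PATCH, PATCH)
               .astype(np.float32))

    # Classify every patch in one batch (the model would need training on labelled patches).
    with torch.no_grad():
        labels = model(torch.from_numpy(patches)).argmax(1).numpy()
    is_floor = labels.reshape(rows, cols).astype(bool)
    if not is_floor.any():
        return None

    # Proximity score: mean depth of each floor patch; the minimum is the
    # closest-to-user traversable patch.
    depth = patches[:, 2].mean(axis=(1, 2)).reshape(rows, cols)
    depth = np.where(is_floor, depth, np.inf)
    r, c = np.unravel_index(np.argmin(depth), depth.shape)
    return (top + r * PATCH, c * PATCH), float(depth[r, c])


if __name__ == "__main__":
    # Random stand-in data (an NYU-v2 frame would be loaded here instead).
    cloud = np.random.rand(480, 640, 3).astype(np.float32) * 5.0
    print(closest_floor_patch(cloud, PatchClassifier().eval()))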