Reference search

Deep learning approach for detecting fake images using texture variation network

<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd">
  <record>
    <leader>00000cab a2200000   4500</leader>
    <controlfield tag="001">MAP20230014301</controlfield>
    <controlfield tag="003">MAP</controlfield>
    <controlfield tag="005">20231214132812.0</controlfield>
    <controlfield tag="008">230627|20230501esp|||p      |0|||b|eng d</controlfield>
    <datafield tag="040" ind1=" " ind2=" ">
      <subfield code="a">MAP</subfield>
      <subfield code="b">spa</subfield>
      <subfield code="d">MAP</subfield>
    </datafield>
    <datafield tag="084" ind1=" " ind2=" ">
      <subfield code="a">922.134</subfield>
    </datafield>
    <datafield tag="245" ind1="0" ind2="0">
      <subfield code="a">Deep learning approach for detecting fake images using texture variation network </subfield>
      <subfield code="c">Haseena S...[et al.]</subfield>
    </datafield>
    <datafield tag="520" ind1=" " ind2=" ">
      <subfield code="a">Face manipulation technology is rapidly evolving, making it impossible for human eyes to recognize fake faces in photos. Convolutional Neural Network (CNN) discriminators, on the other hand, can quickly achieve high accuracy in distinguishing fake from real face photos. In this paper, we investigate how CNN models distinguish between fake and real faces. According to our findings, face forgery detection heavily relies on the variation in the texture of the images. As a result of the aforementioned discovery, we propose a deep texture variation network, a new model for robust face fraud detection based on convolution and pyramid pooling. Convolution combines pixel intensity and pixel gradient information to create a stationary representation of composition difference information. Simultaneously, multi-scale information fusion based on the pyramid pooling can prevent the texture features from being destroyed. The proposed deep texture variation network outperforms previous techniques on a variety of datasets, including FaceForensics++, DeeperForensics-1.0, CelebDF, and DFDC. The proposed model is less susceptible to image distortion, such as JPEG compression and blur, which is important in this field.</subfield>
    </datafield>
    <datafield tag="650" ind1=" " ind2="4">
      <subfield code="0">MAPA20080541408</subfield>
      <subfield code="a">Imagen</subfield>
    </datafield>
    <datafield tag="650" ind1=" " ind2="4">
      <subfield code="0">MAPA20080611200</subfield>
      <subfield code="a">Inteligencia artificial</subfield>
    </datafield>
    <datafield tag="650" ind1=" " ind2="4">
      <subfield code="0">MAPA20080592028</subfield>
      <subfield code="a">Modelos de análisis</subfield>
    </datafield>
    <datafield tag="650" ind1=" " ind2="4">
      <subfield code="0">MAPA20080560980</subfield>
      <subfield code="a">Variaciones</subfield>
    </datafield>
    <datafield tag="773" ind1="0" ind2=" ">
      <subfield code="w">MAP20200034445</subfield>
      <subfield code="g">24/06/2023 Volume 26, Number 71 - June 2023, 14 p.</subfield>
      <subfield code="x">1988-3064</subfield>
      <subfield code="t">Revista Iberoamericana de Inteligencia Artificial</subfield>
      <subfield code="d"> : IBERAMIA, Sociedad Iberoamericana de Inteligencia Artificial , 2018-</subfield>
    </datafield>
    <datafield tag="856" ind1="0" ind2="0">
      <subfield code="y">MORE INFORMATION</subfield>
      <subfield code="u">
mailto:centrodocumentacion@fundacionmapfre.org?subject=Consulta%20de%20una%20publicaci%C3%B3n%20&body=Necesito%20m%C3%A1s%20informaci%C3%B3n%20sobre%20este%20documento%3A%20%0A%0A%5Banote%20aqu%C3%AD%20el%20titulo%20completo%20del%20documento%20del%20que%20desea%20informaci%C3%B3n%20y%20nos%20pondremos%20en%20contacto%20con%20usted%5D%20%0A%0AGracias%20%0A
</subfield>
    </datafield>
  </record>
</collection>
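
The 520 abstract above describes the approach only at a high level: a convolution stage that fuses pixel-intensity and pixel-gradient information, followed by pyramid pooling so multi-scale texture cues are not averaged away. The sketch below is an illustrative reading of that description, not the authors' published architecture; the Sobel-filter gradients, channel widths, pooling scales, and classifier head are all assumptions introduced here.

# Illustrative sketch only. It mirrors the abstract's two ideas
# (intensity+gradient fusion by convolution, multi-scale pyramid pooling);
# every concrete layer choice below is an assumption, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextureVariationBlock(nn.Module):
    """Fuses pixel intensities with their spatial gradients (assumed Sobel)."""

    def __init__(self, in_channels: int = 3, out_channels: int = 32):
        super().__init__()
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        kernel = torch.stack([sobel_x, sobel_y]).unsqueeze(1)  # (2, 1, 3, 3)
        # One x- and one y-gradient filter per input channel (depthwise).
        self.register_buffer("sobel", kernel.repeat(in_channels, 1, 1, 1))
        self.in_channels = in_channels
        # Convolution over concatenated intensity + gradient channels.
        self.fuse = nn.Conv2d(in_channels * 3, out_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        grads = F.conv2d(x, self.sobel, padding=1, groups=self.in_channels)
        return F.relu(self.fuse(torch.cat([x, grads], dim=1)))


class PyramidPooling(nn.Module):
    """Pools the feature map at several grid sizes and concatenates them."""

    def __init__(self, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [F.adaptive_avg_pool2d(x, s).flatten(1) for s in self.scales]
        return torch.cat(feats, dim=1)


class FakeFaceDetector(nn.Module):
    """Binary real/fake classifier built from the two blocks above."""

    def __init__(self):
        super().__init__()
        self.texture = TextureVariationBlock(3, 32)
        self.pool = PyramidPooling((1, 2, 4))
        feat_dim = 32 * sum(s * s for s in (1, 2, 4))  # 32 channels x 21 cells
        self.head = nn.Linear(feat_dim, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.pool(self.texture(x)))


if __name__ == "__main__":
    logits = FakeFaceDetector()(torch.randn(1, 3, 224, 224))
    print(logits.shape)  # torch.Size([1, 2])

In this reading, the gradient channels make local texture variation explicit before fusion, and pooling at several scales keeps both fine-grained and global texture statistics available to the classifier, which is the property the abstract credits for robustness to compression and blur.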