
dc.contributor.author: Bhattacharjee, B.
dc.contributor.author: Debnath, B.
dc.contributor.author: Das, J.C.
dc.contributor.author: Kar, S.
dc.contributor.author: Banerjee, N.
dc.contributor.author: Mallik, S.
dc.contributor.author: De, D.
dc.date.accessioned: 2024-08-07T19:42:22Z
dc.date.available: 2024-08-07T19:42:22Z
dc.date.issued: 2023-03-10
dc.identifier.citation: Bhattacharjee, B.; Debnath, B.; Das, J.C.; Kar, S.; Banerjee, N.; Mallik, S.; De, D. Predicting the Future Appearances of Lost Children for Information Forensics with Adaptive Discriminator-Based FLM GAN. Mathematics 2023, 11, 1345. https://doi.org/10.3390/math11061345
dc.identifier.issn: 2227-7390
dc.identifier.doi: 10.3390/math11061345
dc.identifier.uri: http://hdl.handle.net/10150/673936
dc.description.abstract: This article proposes an adaptive discriminator-based GAN (generative adversarial network) model architecture with different scaling and augmentation policies to investigate and identify cases of lost children even after several years (as human facial morphology changes over the years). A uniform probability distribution with combined random and auto-augmentation techniques is analyzed to generate the future appearance of lost children’s faces. X-flip and rotation are applied periodically during pixel blitting to improve pixel-level accuracy. The images were generated by the generator with anisotropic scaling. Bilinear interpolation was carried out during up-sampling by setting the padding reflection during geometric transformation; the four nearest data points were used to estimate the interpolated value at a new point. The color transformation was applied with the luma flip on the rotation matrices, spread log-normally for saturation. The luma-flip components use the brightness and color information of each pixel as chrominance. The various scaling operations and modifications, combined with the StyleGan ADA architecture, were implemented on an NVIDIA V100 GPU. The FLM method yields a BRISQUE score between 10 and 30. The article uses the MSE, RMSE, PSNR, and SSIM parameters for comparison with state-of-the-art models. Under the Universal Quality Index (UQI), the FLM model-generated output maintains high quality. The proposed model obtains overall scores of ERGAS (12 k–23 k), SCC (0.001–0.005), RASE (1 k–4 k), SAM (0.2–0.5), and VIFP (0.02–0.09). © 2023 by the authors.
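The abstract's up-sampling step estimates each new pixel from its four nearest grid neighbours via bilinear interpolation. A minimal sketch of that estimation, using NumPy and a hypothetical `bilinear_sample` helper (not from the paper's code):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Estimate the value at fractional coordinates (x, y) from the
    four nearest integer grid points (bilinear interpolation)."""
    # Integer corners surrounding (x, y), clamped to the image bounds
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    dx, dy = x - x0, y - y0
    # Blend horizontally along the top and bottom rows, then vertically
    top = (1 - dx) * img[y0, x0] + dx * img[y0, x1]
    bottom = (1 - dx) * img[y1, x0] + dx * img[y1, x1]
    return (1 - dy) * top + dy * bottom

img = np.array([[0.0, 1.0],
                [2.0, 3.0]])
print(bilinear_sample(img, 0.5, 0.5))  # midpoint of all four corners: 1.5
```

In practice a framework routine (e.g. a bilinear resize with reflection padding, as the abstract describes) would apply this weighting over the whole image rather than one point at a time.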
dc.language.iso: en
dc.publisher: MDPI
dc.rights: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: deep learning
dc.subject: GAN
dc.subject: lost children
dc.subject: StyleGan ADA
dc.title: Predicting the Future Appearances of Lost Children for Information Forensics with Adaptive Discriminator-Based FLM GAN
dc.type: Article
dc.type: text
dc.contributor.department: Department of Pharmacology & Toxicology, The University of Arizona
dc.identifier.journal: Mathematics
dc.description.note: Open access journal
dc.description.collectioninformation: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
dc.eprint.version: Final Published Version
dc.source.journaltitle: Mathematics
refterms.dateFOA: 2024-08-07T19:42:22Z


Files in this item

Name: mathematics-11-01345-v2.pdf
Size: 26.16 MB
Format: PDF
Description: Final Published Version
