Show simple item record

dc.contributor.advisor: Morrison, Clayton
dc.contributor.author: KC, Dharma Raj
dc.creator: KC, Dharma Raj
dc.date.accessioned: 2024-06-04T01:57:34Z
dc.date.available: 2024-06-04T01:57:34Z
dc.date.issued: 2024
dc.identifier.citation: KC, Dharma Raj. (2024). Conditional Graph Generative Models for Code and Texture Generation (Doctoral dissertation, University of Arizona, Tucson, USA).
dc.identifier.uri: http://hdl.handle.net/10150/672444
dc.description.abstract: In this dissertation, I present two novel graph generative frameworks: the first for generating code abstract syntax trees (ASTs) from input binary code sequences, and the second for generating novel textures for input 3D meshes. Both frameworks are developed using insights from advances in neural machine translation, in the form of Transformers (multi-headed local and global self-attention), and from conditional graph generative neural networks. In the first part of the dissertation, I describe the framework for inferring ASTs from binary sequences. Existing approaches to building decompilers and binary analysis tools are time-consuming and labor-intensive to implement and are not easily adapted to new languages; existing neural approaches tend to be limited to small code sequences. To address these challenges, we develop a novel, state-of-the-art framework for learning to generate abstract syntax trees from input binary sequences. In the second part of the dissertation, I describe the framework for generating novel, high-quality textures for input 3D meshes. Creating high-quality texture assets for a given 3D mesh model is tedious and time-consuming, yet has numerous applications in 3D simulation, gaming, and augmented and virtual reality. Existing automated approaches to this problem either require expensive 3D part segmentation or deform the original input mesh. To address these challenges, we develop a new framework that learns to generate novel textures for 3D mesh models from a collection of 3D meshes and 2D real-world images, without deforming the input mesh. Both frameworks are demonstrated to achieve state-of-the-art performance in their respective applications while also providing increased adaptive flexibility.
dc.language.iso: en
dc.publisher: The University of Arizona.
dc.rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Code generation
dc.subject: Generative adversarial networks
dc.subject: Graph
dc.subject: Texture generation
dc.subject: Transformer
dc.title: Conditional Graph Generative Models for Code and Texture Generation
dc.type: Electronic Dissertation
dc.type: text
thesis.degree.grantor: University of Arizona
thesis.degree.level: doctoral
dc.contributor.committeemember: Barnard, Kobus
dc.contributor.committeemember: Surdeanu, Mihai
dc.contributor.committeemember: Zhang, Chicheng
thesis.degree.discipline: Graduate College
thesis.degree.discipline: Computer Science
thesis.degree.name: Ph.D.
refterms.dateFOA: 2024-06-04T01:57:34Z


Files in this item

Name: azu_etd_21218_sip1_m.pdf
Size: 6.415 MB
Format: PDF
