TRAINING NON-TYPICAL CHARACTER MODELS FOR STABLE DIFFUSION UTILIZING OPEN SOURCE AIS
Publisher
The University of Arizona.
Rights
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract
This thesis explores the potential and limitations of using AI-driven techniques, specifically Stable Diffusion and LoRA, in character design and rendering. The study focuses on creating a unique 3D character with distinct design elements, then training an AI model to understand and reproduce the character accurately in response to varied text prompts, emotional expressions, and artistic styles. The research methodology combines modeling and rigging in Blender, exporting the character to Unity, generating training data, training the AI model using LoRA with the Protogen v2.2 base model, and testing the model's performance in Stable Diffusion.

The findings demonstrate the AI model's ability to learn the character's design and generate consistent, accurate renders in response to diverse prompts. However, the study also reveals challenges and limitations, such as the need for careful selection of training data, optimization of model parameters, and mitigation of potential overfitting or generalization issues. Additionally, the AI's handling of certain artistic choices, such as the absence of a nose or a specific skin tone, raises questions about its ability to capture unique design decisions. Overall, this thesis offers valuable insights into the applications and challenges of AI-driven character design and rendering in the digital art landscape.
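As context for the testing step described in the abstract, the following is a minimal sketch of loading a trained character LoRA on top of a Stable Diffusion base model using the Hugging Face diffusers library (a tool not named in the thesis). The Hub model identifier, the LoRA filename, and the trigger word "mycharacter" are illustrative assumptions, not artifacts from this work.

import torch
from diffusers import StableDiffusionPipeline

# Load the base model the LoRA was trained against
# (hypothetical Hub id for Protogen v2.2).
pipe = StableDiffusionPipeline.from_pretrained(
    "darkstorm2150/Protogen_v2.2_Official_Release",
    torch_dtype=torch.float16,
).to("cuda")

# Attach the character LoRA weights (hypothetical local file).
pipe.load_lora_weights(".", weight_name="character_lora.safetensors")

# Prompt with the character's trigger token plus a style/emotion variation,
# scaling the LoRA's influence via cross_attention_kwargs.
image = pipe(
    "mycharacter, smiling, watercolor style",
    num_inference_steps=30,
    guidance_scale=7.5,
    cross_attention_kwargs={"scale": 0.8},
).images[0]
image.save("render.png")

Varying the prompt while holding the LoRA scale fixed is one way to probe the consistency and style-transfer behavior the abstract describes.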
Type
Electronic thesis
text
Degree Name
B.S.
Degree Level
bachelors
Degree Program
Information Science and Arts
Honors College