Publisher
The University of Arizona.
Rights
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract
Large language models (LLMs) have shown impressive performance across various tasks, from simple text classification to complex coding. However, the scale of their training data and its internet-based sources have raised concerns about their true generalizability and performance. The primary concern is whether the test partitions of datasets used to evaluate these models have leaked into the training data, a phenomenon known as "data contamination." In this dissertation, we propose three novel methodologies to detect and estimate data contamination in fully black-box LLMs, each discussed in a separate chapter. All our techniques hinge on memorization for detection. Our first approach attempts to replicate dataset instances through model generations to confirm the presence of contamination. The second approach reformulates data contamination detection as a quiz-like task: the original dataset instance and its word-level perturbations serve as quiz options, and the task is to identify the option containing the original dataset instance. A correct answer therefore indicates prior exposure to the data, and quiz performance is used to estimate the level of contamination. In the final method, we verify the estimates from the second method by adapting the replication-based technique from our first method; the adaptation employs in-context learning to surface more memorization before replication. We also examine data contamination and its effects on downstream performance in few-shot and many-shot scenarios, moving beyond zero-shot. Our findings reveal that data contamination is far more prevalent than previously believed. Specifically, we discovered that the test partitions of many datasets commonly used to evaluate LLMs were indeed part of their training data. Notably, we found higher levels of contamination than those officially reported by model providers. Finally, we observed that the tangibility and impact of data contamination are greater in few-shot and many-shot scenarios than in zero-shot settings.
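For illustration, a minimal sketch of the quiz-style contamination check described above follows. It assumes a hypothetical query_llm(prompt: str) -> str wrapper around a black-box model API; the prompt template and the word-level perturbation scheme are illustrative assumptions, not the dissertation's exact protocol.

# Minimal sketch of the quiz-style contamination check (illustrative only).
# Assumes a hypothetical query_llm(prompt: str) -> str wrapper around a
# black-box LLM API.
import random
import string


def perturb(text: str, n_words: int = 3, seed: int = 0) -> str:
    """Replace a few words with random tokens to build a distractor option."""
    rng = random.Random(seed)
    words = text.split()
    for idx in rng.sample(range(len(words)), min(n_words, len(words))):
        words[idx] = "".join(rng.choices(string.ascii_lowercase, k=len(words[idx])))
    return " ".join(words)


def quiz_accuracy(instances: list[str], query_llm) -> float:
    """Fraction of quiz items where the model picks the original instance.

    Accuracy well above chance (1/4 with four options) suggests the instances
    were memorized, i.e., likely present in the training data.
    """
    correct = 0
    for i, original in enumerate(instances):
        # One original option plus three word-level perturbations.
        options = [original] + [perturb(original, seed=i * 10 + k) for k in range(3)]
        rng = random.Random(i)
        rng.shuffle(options)
        gold = options.index(original)
        prompt = (
            "Which option is an exact instance from the dataset?\n"
            + "\n".join(f"{chr(65 + j)}) {opt}" for j, opt in enumerate(options))
            + "\nAnswer with a single letter."
        )
        answer = query_llm(prompt).strip().upper()[:1]
        correct += int(answer == chr(65 + gold))
    return correct / len(instances)

In this sketch, contamination is inferred from how far quiz_accuracy exceeds the 25% chance baseline over a dataset's test partition.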
Type
text
Electronic Dissertation
Degree Name
Ph.D.
Degree Level
doctoral
Degree Program
Graduate College
Computer Science
