
    Advancing Neural Networks Towards Realistic Settings Using Few-Shot

    File: azu_etd_20241_sip1_m.pdf (PDF, 51.09 MB)
    Author: Hess, Samuel Thomas
    Issue Date: 2022
    Keywords: Deep Neural Networks; Explainable Artificial Intelligence; Few-Shot Learning; Lifelong Learning; Online Learning
    Advisor: Ditzler, Gregory
    Publisher: The University of Arizona.
    Rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
    Abstract
    Neural networks have shown remarkable performance across many tasks, including classification, object detection, and image segmentation. Advances in high-performance computing have enabled neural networks to train on extremely large datasets, resulting in superior performance, often outperforming humans in many tasks. In fact, conventional supervised learning neural networks trained with large volumes of labeled data can produce highly accurate models to classify images, videos, and audio signals. Despite the success of neural networks, their deployment and evaluation are limited to the classes and experiences observed during training. This success, however, poses a serious challenge when large labeled datasets are not available for training: these models are not expected to achieve the same performance if there are only a few labeled samples per class. To address this sample-size limitation, a rapidly evolving area of research known as few-shot learning has emerged. Specifically, few-shot learning classifies unlabeled data from novel classes with only one or "a few" labeled exemplary samples. Unfortunately, few-shot learning comes with its own challenges, including reduced classification accuracy relative to supervised counterparts, requirements on the overall size of the training data, classifier explainability, and evaluation assumptions that can quickly break down in many real-world applications. It is against this background that this dissertation presents five contributions that expand few-shot performance, explainability, and applicability to novel tasks. Specifically, our contributions are: (1) A novel few-shot network that improves classification accuracy over prior models by learning to weight features conditioned on the samples.
Conventional techniques perform a one-way comparison of an unlabeled query to a labeled support set; the soft weight network, however, allows two-way cross-comparisons of both query-to-support and support-to-query, which is shown to improve the performance of a few-shot model. (2) A new application and a novel few-shot network, namely OrderNet, that can accurately learn an ordering of data given a small labeled dataset. Through pairwise subsampling and episodic training, OrderNet was shown to significantly reduce the amount of training data required to achieve regression accuracy. (3) A new approach for eXplainable Artificial Intelligence (XAI), namely ProtoShotXAI, that uses a few-shot architecture to explain black-box neural networks and is the first approach directly applicable to the explanation of few-shot neural networks. (4) A novel similarity metric for a few-shot network that achieves state-of-the-art performance on inductive few-shot tasks. The metric is motivated by a fast approximation of exponentially distributed features in the final layer of a trained few-shot classifier and by maximum log-likelihood estimation. State-of-the-art 1-shot transductive performance is also achieved on imbalanced data using a simple iterative approach with our similarity metric. (5) A novel framework for online detection and classification using few-shot classifiers. In contrast to related work, our lifelong learning framework assumes a continuous data stream of unlabeled and imbalanced data. Additionally, our approach continuously refines classes as new data becomes available while respecting computational and storage constraints. We demonstrate the capabilities of our proposed approach on benchmark data streams and achieve competitive detection performance and state-of-the-art online classification accuracy.
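    The one-way query-to-support comparison that the abstract contrasts against can be sketched as a nearest-prototype classifier: embed the labeled support samples, average each class into a prototype, and assign a query to the class of the nearest prototype. This is a minimal, generic illustration of that baseline scheme only, not the author's soft weight network; the 2-D "embeddings" and label values are arbitrary assumptions for the toy episode.

    ```python
    import numpy as np

    def prototypes(support_feats, support_labels):
        """Average the embedded support samples of each class into one prototype."""
        classes = sorted(set(support_labels))
        protos = np.stack([
            support_feats[np.array(support_labels) == c].mean(axis=0)
            for c in classes
        ])
        return classes, protos

    def classify(query_feat, classes, protos):
        """Assign the query to the class with the nearest (Euclidean) prototype."""
        dists = np.linalg.norm(protos - query_feat, axis=1)
        return classes[int(np.argmin(dists))]

    # Toy 5-way 1-shot episode with hypothetical 2-D embeddings.
    support = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [2., 2.]])
    labels = [0, 1, 2, 3, 4]
    cls, protos = prototypes(support, labels)
    print(classify(np.array([0.9, 0.1]), cls, protos))  # nearest to class 1
    ```

    A two-way scheme of the kind described in contribution (1) would additionally condition the feature weighting on comparisons running from support back to query, rather than treating the support set as fixed.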
    Type: text; Electronic Dissertation
    Degree Name: Ph.D.
    Degree Level: doctoral
    Degree Program: Graduate College; Electrical & Computer Engineering
    Degree Grantor: University of Arizona
    Collections: Dissertations
