    Interpretable Natural Language Processing with Applications to Information Extraction

    Name: azu_etd_20230_sip1_m.pdf
    Size: 2.540 MB
    Format: PDF
    Author
    Tang, Zheng
    Issue Date
    2022
    Advisor
    Surdeanu, Mihai
    
    Publisher
    The University of Arizona.
    Rights
    Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
    Abstract
    Interpretability is critical for many NLP applications. Many such applications (e.g., information extraction, sentiment analysis) inform high-stakes decisions in areas such as government policy, finance, and law. In these scenarios, machines must explain the information they produce if they are to be deployed in the real world. In this dissertation, we present approaches for information extraction that mitigate the tension between generalization and explainability by jointly training for the two goals.

    First, we investigate an approach that uses an encoder-decoder architecture, which jointly trains a classifier for information extraction and a rule decoder that generates syntactico-semantic rules explaining the classifier's decisions. We evaluate the proposed approach on two different information extraction tasks and show that the decoder generates interpretable rules that serve as accurate explanations for the classifier's decisions and, importantly, that the joint training generally improves the performance of the classifier. We also show that our approach can be used for semi-supervised learning, and that its performance improves when it is trained on automatically labeled data generated by a rule-based system.

    Second, we investigate an approach that uses a multi-task learning architecture, which jointly trains a classifier for relation extraction and a sequence model that labels the words in the relation's context that explain the decisions of the relation classifier. We also convert the model outputs to rules to bring global explanations to this approach. The sequence model is trained using a hybrid strategy: supervised, when supervision from pre-existing patterns is available, and semi-supervised otherwise. In the latter situation, we treat the sequence model's labels as latent variables and learn the assignment that maximizes the performance of the relation classifier. We evaluate the proposed approach on two relation extraction datasets and show that the sequence model provides labels that serve as accurate explanations for the relation classifier's decisions and, importantly, that the joint training generally improves the performance of the relation classifier. We also evaluate the generated rules and show that they are a valuable addition to the manually written rules, bringing the rule-based system much closer to the neural models.

    Third, we explore two further uses of the model outputs: (1) converting them to rules to provide global explanations for this approach, and (2) using them for bootstrapping when labeled data is scarce. Our globally explainable models approach the performance of the neural ones within a reasonable gap.
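    As a rough illustration of the joint-training idea described in the abstract, the sketch below shows a shared encoder feeding (a) a relation classifier and (b) a token-level tagger whose labels act as explanations, both optimized with one combined loss. This is a minimal, assumption-laden sketch in a PyTorch style, not the dissertation's actual implementation; all module names, dimensions, and the loss weight `alpha` are illustrative.

    ```python
    # Minimal sketch (illustrative only) of jointly training a classifier and an
    # explanation head: a shared encoder feeds a sentence-level relation classifier
    # and a per-token tagger; one combined loss trains both. Names and sizes are
    # assumptions, not the dissertation's architecture.
    import torch
    import torch.nn as nn

    class JointExtractorExplainer(nn.Module):
        def __init__(self, vocab_size=10000, emb_dim=128, hidden_dim=256,
                     num_relations=5, num_tag_labels=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            # Shared encoder over the sentence containing the candidate relation.
            self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                                   bidirectional=True)
            # Head (a): predicts the relation label for the whole sentence.
            self.rel_head = nn.Linear(2 * hidden_dim, num_relations)
            # Head (b): labels each word (e.g., O / trigger / argument) as an explanation.
            self.tag_head = nn.Linear(2 * hidden_dim, num_tag_labels)

        def forward(self, token_ids):
            states, _ = self.encoder(self.embed(token_ids))  # (B, T, 2H)
            rel_logits = self.rel_head(states.mean(dim=1))   # pooled sentence repr.
            tag_logits = self.tag_head(states)               # per-token explanation labels
            return rel_logits, tag_logits

    model = JointExtractorExplainer()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    rel_loss_fn = nn.CrossEntropyLoss()
    tag_loss_fn = nn.CrossEntropyLoss()
    alpha = 0.5  # assumed weight balancing the two objectives

    # Toy batch: 2 sentences of 6 tokens, with gold relation and token labels.
    tokens = torch.randint(0, 10000, (2, 6))
    gold_relation = torch.tensor([1, 3])
    gold_tags = torch.randint(0, 3, (2, 6))

    optimizer.zero_grad()
    rel_logits, tag_logits = model(tokens)
    loss = rel_loss_fn(rel_logits, gold_relation) + \
           alpha * tag_loss_fn(tag_logits.reshape(-1, 3), gold_tags.reshape(-1))
    loss.backward()
    optimizer.step()
    ```

    The point of sharing the encoder is that the explanation head and the classifier are forced to rely on the same representation, which is the sense in which joint training couples accuracy and explainability in the abstract above.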
    Type
    Electronic Dissertation
    text
    Degree Name
    Ph.D.
    Degree Level
    doctoral
    Degree Program
    Graduate College
    Computer Science
    Degree Grantor
    University of Arizona
    Collections
    Dissertations
