A Study of Adversarial Attacks Against an LSTM Language Model and the Impact of Normalization in SNN
Author
Liang, Zhengzhong
Issue Date
2018
Keywords
Adversarial Learning
Language Model
Long Short-Term Memory
Normalization
Spiking Neural Network
Advisor
Ditzler, Gregory
Publisher
The University of Arizona.
Rights
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Embargo
Release after 08/15/2020
Abstract
Artificial Neural Networks (ANNs) have been applied successfully in many application-driven fields; however, some aspects of ANNs remain poorly understood. One such area is learning an ANN in the presence of an adversary. In this setting, it is assumed that an attacker can manipulate the training data (also referred to as a causative, or poisoning, attack) or the testing data (also referred to as an exploratory attack) to disrupt the network's normal functionality. In turn, the defender aims to reduce the impact of such attacks as much as possible. The first part of this thesis focuses on causative attacks against a Long Short-Term Memory (LSTM) language model, where the attacker is assumed to change only the training text. We study the behavior of the LSTM language model under different causative attacks and propose several simple measures that can reduce their impact. Our results show that the poisoning ratio, the poisoning position, and the method used to generate the poisoned text all influence the performance of the LSTM language model. Furthermore, we show that proper use of dropout and gradient clipping can reduce the impact of poisoned training data to some extent.

We also contribute to understanding how to learn a Spiking Neural Network (SNN) globally. SNNs are a type of ANN; however, they are much more biologically realistic than other ANNs. SNNs have not been widely adopted because several critical issues, including the training of SNNs and the encoding/decoding of their signals, are not well studied. In the second part of this thesis, we build an SNN-based image classifier to study the encoding/decoding of signals and to compare several learning rules for training an SNN. The results reveal that (i) classical STDP learning windows generally achieve the best performance across different decoding schemes; (ii) the first-spike decoding classifier has worse accuracy than the count decoding classifier when no normalization rules are applied, although it consumes much less time; and (iii) the performance of the first-spike decoding classifier can be greatly improved with proper use of normalization rules.
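As a concrete illustration of the two mitigation measures named in the first part, the following is a minimal sketch, not the thesis's experimental code, of an LSTM language model trained with dropout and gradient clipping in PyTorch; all layer sizes and hyperparameters here are illustrative assumptions.

```python
# Minimal sketch (not the thesis code): an LSTM language model trained with
# dropout and gradient clipping, the two mitigation measures discussed above.
# All sizes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, embed_size=128, hidden_size=256, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.drop = nn.Dropout(dropout)          # dropout regularizes the model
        self.lstm = nn.LSTM(embed_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, vocab_size)

    def forward(self, tokens):
        x = self.drop(self.embed(tokens))
        out, _ = self.lstm(x)
        return self.fc(self.drop(out))           # logits over the next-token vocabulary

model = LSTMLanguageModel(vocab_size=10000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(inputs, targets, clip_norm=1.0):
    """One training step; inputs/targets are (batch, seq_len) token-id tensors."""
    optimizer.zero_grad()
    logits = model(inputs)                       # (batch, seq_len, vocab)
    loss = criterion(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    loss.backward()
    # Gradient clipping bounds each update, limiting how far a batch of
    # poisoned text can pull the parameters in a single step.
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    optimizer.step()
    return loss.item()
```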
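The classical STDP learning window referenced in result (i) is, in its standard pair-based form, an exponential function of the spike-time difference. The sketch below shows that standard form with illustrative parameter values; it is not necessarily the exact rule used in the thesis.

```python
# Standard pair-based STDP window (illustrative parameters, not the thesis's
# exact rule): pre-before-post spiking potentiates, post-before-pre depresses.
import numpy as np

def stdp_window(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change as a function of delta_t = t_post - t_pre (in ms)."""
    return np.where(delta_t > 0,
                    a_plus * np.exp(-delta_t / tau_plus),    # potentiation
                    -a_minus * np.exp(delta_t / tau_minus))  # depression

print(stdp_window(np.array([-15.0, 5.0])))  # depression, then potentiation
```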
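The two decoding schemes compared in results (ii) and (iii) can be sketched as follows, assuming, purely as an illustration rather than the thesis's actual setup, one output neuron per class and spike trains recorded as a binary array of shape (num_classes, num_timesteps).

```python
# Minimal sketch (not the thesis code) of count decoding vs. first-spike
# decoding, assuming one output neuron per class and a binary spike array
# of shape (num_classes, num_timesteps).
import numpy as np

def count_decode(spikes):
    """Predict the class whose output neuron fired the most spikes."""
    return int(np.argmax(spikes.sum(axis=1)))

def first_spike_decode(spikes):
    """Predict the class whose output neuron fired first.

    Cheaper in practice because simulation can stop at the first spike;
    neurons that never fire are treated as firing after the last timestep.
    """
    num_steps = spikes.shape[1]
    first_times = np.where(spikes.any(axis=1),
                           np.argmax(spikes > 0, axis=1),  # index of first spike
                           num_steps)                      # never fired
    return int(np.argmin(first_times))

# Toy example: 3 output neurons (classes), 8 timesteps.
spikes = np.array([[0, 0, 1, 0, 1, 1, 0, 1],   # class 0: 4 spikes, first at t=2
                   [0, 1, 0, 0, 1, 0, 0, 0],   # class 1: 2 spikes, first at t=1
                   [0, 0, 0, 0, 0, 0, 0, 0]])  # class 2: silent
print(count_decode(spikes))        # -> 0 (most spikes)
print(first_spike_decode(spikes))  # -> 1 (earliest spike)
```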
Type
text
Electronic Thesis
Degree Name
M.S.
Degree Level
masters
Degree Program
Graduate College
Electrical & Computer Engineering