Show simple item record

dc.contributor.advisor: Ditzler, Gregory
dc.contributor.author: Liang, Zhengzhong
dc.creator: Liang, Zhengzhong
dc.date.accessioned: 2018-10-12T00:20:03Z
dc.date.available: 2018-10-12T00:20:03Z
dc.date.issued: 2018
dc.identifier.uri: http://hdl.handle.net/10150/630156
dc.description.abstract: Artificial Neural Networks (ANNs) have been applied to many application-driven fields and have proven quite successful; however, some aspects of ANNs are not well understood. One such area is learning an ANN in the presence of an adversary. In this context, it is assumed that the attacker can manipulate the training data (a causative, or poisoning, attack) or the testing data (an exploratory attack) to disrupt the network's normal functionality, while the defender aims to reduce the impact of such attacks as much as possible. The first part of this thesis focuses on causative attacks against a Long Short-Term Memory (LSTM) language model, where the attacker is assumed to be able to change only the training text. We study the behavior of the LSTM language model under different causative attacks and propose several simple measures that reduce their impact. Our results show that the poisoning ratio, the poisoning position, and the way the poisoned text is generated all influence the performance of the LSTM language model. Furthermore, we show that proper use of dropout and gradient clipping can reduce the impact of poisoned training data to some extent. We also contribute to understanding how to globally learn a Spiking Neural Network (SNN). SNNs are a type of ANN, but they are far more biologically realistic than other ANNs; they have not been widely adopted because several critical issues remain poorly studied, among them the training of SNNs and the encoding/decoding of signals. In the second part of this thesis, we build an SNN-based image classifier to study signal encoding/decoding and to compare several learning rules for training an SNN. The results reveal that (i) classical STDP learning windows generally obtain the best performance across the decoding schemes; (ii) the first-spike decoding classifier is less accurate than the count decoding classifier when no normalization rules are applied, although it consumes much less time; and (iii) the performance of the first-spike decoding classifier can be greatly improved by proper use of normalization rules.
dc.language.iso: en
dc.publisher: The University of Arizona.
dc.rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
dc.subject: Adversarial Learning
dc.subject: Language Model
dc.subject: Long Short-Term Memory
dc.subject: Normalization
dc.subject: Spiking Neural Network
dc.title: A Study of Adversarial Attacks Against an LSTM Language Model and the Impact of Normalization in SNN
dc.type: text
dc.type: Electronic Thesis
thesis.degree.grantor: University of Arizona
thesis.degree.level: masters
dc.contributor.committeemember: Tandon, Ravi
dc.contributor.committeemember: Koyluoglu, Onur Ozan
dc.contributor.committeemember: Fellous, Jean-Marc
dc.description.release: Release after 08/15/2020
thesis.degree.discipline: Graduate College
thesis.degree.discipline: Electrical & Computer Engineering
thesis.degree.name: M.S.
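
The two mitigations the abstract names for poisoned training text, dropout and gradient clipping, amount to a couple of lines in an ordinary training loop. The sketch below is a minimal illustration assuming a PyTorch LSTM language model; the class name, layer sizes, and clipping threshold are illustrative assumptions, not the settings used in the thesis.

    # Minimal sketch (not the thesis's exact setup): a PyTorch LSTM language
    # model whose training step applies the two mitigations mentioned in the
    # abstract, dropout and gradient clipping.
    import torch
    import torch.nn as nn

    class LSTMLanguageModel(nn.Module):
        def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, dropout_p=0.5):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.dropout = nn.Dropout(dropout_p)            # mitigation 1: dropout
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.decoder = nn.Linear(hidden_dim, vocab_size)

        def forward(self, tokens):
            x = self.dropout(self.embed(tokens))
            out, _ = self.lstm(x)
            return self.decoder(self.dropout(out))

    def train_step(model, optimizer, tokens, targets, clip_norm=0.25):
        """One update on a batch that may contain poisoned sequences."""
        model.train()
        optimizer.zero_grad()
        logits = model(tokens)                              # (batch, seq, vocab)
        loss = nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        loss.backward()
        # mitigation 2: gradient clipping bounds how far any single batch,
        # poisoned or not, can move the parameters
        torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
        optimizer.step()
        return loss.item()

The abstract also contrasts two readouts for the SNN image classifier, count decoding and first-spike decoding. Below is a minimal sketch of the two rules with made-up spike trains standing in for the classifier's output neurons; the function names and the presentation window are assumptions for illustration only.

    # Illustrative only: spike_times[i] lists the spike times (in seconds) of
    # output neuron i during one stimulus presentation window.
    import numpy as np

    def count_decode(spike_times):
        """Predict the class whose output neuron fired the most spikes."""
        counts = np.array([len(t) for t in spike_times])
        return int(np.argmax(counts))

    def first_spike_decode(spike_times, window=0.1):
        """Predict the class whose output neuron spiked first; cheaper to
        evaluate, since simulation can stop at the first output spike."""
        firsts = np.array([t[0] if len(t) > 0 else window for t in spike_times])
        return int(np.argmin(firsts))

    spikes = [[0.030, 0.040, 0.055], [0.050], [0.020, 0.045]]
    print(count_decode(spikes))        # -> 0 (most spikes)
    print(first_spike_decode(spikes))  # -> 2 (earliest spike)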


Files in this item

Name: azu_etd_16561_sip1_m.pdf
Size: 2.572Mb
Format: PDF
