dc.contributor.author: Gupta, S.
dc.contributor.author: Golota, R.
dc.contributor.author: Ditzler, G.
dc.date.accessioned: 2021-09-09T21:27:41Z
dc.date.available: 2021-09-09T21:27:41Z
dc.date.issued: 2021
dc.identifier.citation: Gupta, S., Golota, R., & Ditzler, G. (2021). Attack Transferability Against Information-Theoretic Feature Selection. IEEE Access.
dc.identifier.issn: 2169-3536
dc.identifier.doi: 10.1109/ACCESS.2021.3105555
dc.identifier.uri: http://hdl.handle.net/10150/661443
dc.description.abstract: Machine learning (ML) is vital to many application-driven fields, such as image and signal classification, cyber-security, and health sciences. Unfortunately, many of these fields can easily have their training data tampered with by an adversary to thwart an ML algorithm's objective. Further, the adversary can impact any stage in an ML pipeline (e.g., preprocessing, learning, and classification). Recent work has shown that many models can be attacked by poisoning the training data, and the impact of the poisoned data can be quite significant. Prior work on adversarial feature selection has shown that such attacks can damage feature selection (FS). Filter FS algorithms, one class of FS methods, are widely used for their ability to model nonlinear relationships, their classifier independence, and their lower computational requirements. An important security question for these widely used approaches is whether filter FS algorithms are robust to attacks crafted against other FS methods. In this work, we focus on information-theoretic filter FS algorithms such as MIM, MIFS, and mRMR, and the impact that gradient-based attacks can have on their selections. Experiments on five benchmark datasets demonstrate that the stability of different information-theoretic algorithms can be significantly degraded by injecting poisoned data into the training dataset.
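
As a rough illustration of the abstract's setting: MIM-style filter selection ranks features by their mutual information with the label, and poisoning the training data can change which features survive the ranking. The sketch below (Python, assuming scikit-learn; the label-flip "poisoning" and all names are placeholder assumptions, not the paper's gradient-based attack) selects the top-k features on clean and on crudely poisoned data and reports a Jaccard stability score.

# Minimal MIM-style selection plus a crude poisoning stability check (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

def mim_select(X, y, k, random_state=0):
    # MIM criterion: rank features by mutual information with the label, keep the top k.
    mi = mutual_info_classif(X, y, random_state=random_state)
    return set(np.argsort(mi)[::-1][:k])

def jaccard(a, b):
    # Set-overlap measure of selection stability (1.0 = identical feature sets).
    return len(a & b) / len(a | b)

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=30, n_informative=8, random_state=0)

clean_sel = mim_select(X, y, k=8)

# Stand-in "poisoning": duplicate 5% of the points with flipped labels.
n_poison = int(0.05 * len(y))
idx = rng.choice(len(y), size=n_poison, replace=False)
X_poison = np.vstack([X, X[idx]])
y_poison = np.concatenate([y, 1 - y[idx]])

poisoned_sel = mim_select(X_poison, y_poison, k=8)
print("Jaccard stability of MIM-selected features:", jaccard(clean_sel, poisoned_sel))

A Jaccard score noticeably below 1.0 indicates that the selected subset shifted under poisoning, the kind of stability degradation the abstract reports, although the paper's gradient-based attacks are considerably stronger than random label flips.
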
dc.language.iso: en
dc.publisher: Institute of Electrical and Electronics Engineers Inc.
dc.rights: Copyright © 2021 The Author(s). This work is licensed under a Creative Commons Attribution 4.0 License.
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Adversarial Machine Learning
dc.subject: Feature extraction
dc.subject: Feature Selection
dc.subject: Information Theory
dc.subject: Machine learning algorithms
dc.subject: Pipelines
dc.subject: Stability analysis
dc.subject: Task analysis
dc.subject: Training
dc.subject: Training data
dc.title: Attack Transferability Against Information-Theoretic Feature Selection
dc.type: Article
dc.type: text
dc.contributor.department: Department of Electrical & Computer Engineering, University of Arizona
dc.identifier.journal: IEEE Access
dc.description.note: Open access journal
dc.description.collectioninformation: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
dc.eprint.version: Final published version
dc.source.journaltitle: IEEE Access
refterms.dateFOA: 2021-09-09T21:27:41Z


Files in this item

Name: Attack_Transferability_Against ...
Size: 1.421 MB
Format: PDF
Description: Final Published Version

