Name:
softmax_regression_representat ...
Embargo:
2026-02-03
Size:
2.431 MB
Format:
PDF
Description:
Final Accepted Manuscript
Affiliation
Department of Electrical & Computer Engineering, The University of Arizona
Department of Biomedical Engineering, The University of Arizona
BIO5 Institute, The University of Arizona
Issue Date
2024-02-03
Publisher
Elsevier Ltd
Citation
Li, H., Chen, X., Ditzler, G., Roveda, J., & Li, A. (2024). Knowledge distillation under ideal joint classifier assumption. Neural Networks, 106160.
Rights
© 2024 Elsevier Ltd. All rights reserved.
Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract
Knowledge distillation constitutes a potent methodology for condensing substantial neural networks into more compact and efficient counterparts. Within this context, softmax regression representation learning serves as a widely embraced approach, leveraging a pre-established teacher network to guide the learning process of a diminutive student network. Notably, despite the extensive inquiry into the efficacy of softmax regression representation learning, the intricate underpinnings governing the knowledge transfer mechanism remain inadequately elucidated. This study introduces the ‘Ideal Joint Classifier Knowledge Distillation’ (IJCKD) framework, an overarching paradigm that not only furnishes a lucid and exhaustive comprehension of prevailing knowledge distillation techniques but also establishes a theoretical underpinning for prospective investigations. Employing mathematical methodologies derived from domain adaptation theory, this investigation conducts a comprehensive examination of the error boundary of the student network contingent upon the teacher network. Consequently, our framework facilitates efficient knowledge transference between teacher and student networks, thereby accommodating a diverse spectrum of applications.
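For readers unfamiliar with the setup the abstract refers to, the following is a minimal sketch of softmax-regression-representation-learning-style distillation: the student's features are projected into the teacher's feature space and re-scored by the teacher's frozen softmax classifier. It assumes PyTorch, that both networks return a (features, logits) pair and expose a `fc` classifier head, and uses illustrative loss weights; it is not the paper's IJCKD formulation.

```python
# Sketch of softmax-regression-representation-learning-style distillation.
# Assumes PyTorch; module attributes and loss weights are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SRRLDistiller(nn.Module):
    """Trains a student so that its features, fed through the frozen teacher
    classifier, reproduce the teacher's softened predictions."""

    def __init__(self, student, teacher, student_dim, teacher_dim,
                 alpha=1.0, beta=1.0, temperature=4.0):
        super().__init__()
        self.student = student            # assumed to return (features, logits)
        self.teacher = teacher.eval()     # frozen teacher, same interface assumed
        for p in self.teacher.parameters():
            p.requires_grad_(False)
        # Linear connector mapping student features into the teacher's feature space.
        self.connector = nn.Linear(student_dim, teacher_dim)
        self.alpha, self.beta, self.T = alpha, beta, temperature

    def forward(self, x, y):
        s_feat, s_logits = self.student(x)
        with torch.no_grad():
            t_feat, t_logits = self.teacher(x)

        proj = self.connector(s_feat)          # student features in teacher space
        # Reuse the teacher's frozen classifier head (assumed to be `teacher.fc`).
        joint_logits = self.teacher.fc(proj)

        ce = F.cross_entropy(s_logits, y)      # ordinary task loss
        feat = F.mse_loss(proj, t_feat)        # feature-matching term
        kd = F.kl_div(                         # match teacher's softened predictions
            F.log_softmax(joint_logits / self.T, dim=1),
            F.softmax(t_logits / self.T, dim=1),
            reduction="batchmean",
        ) * self.T ** 2
        return ce + self.alpha * feat + self.beta * kd
```

In this reading, the teacher's classifier acts as the shared (joint) decision boundary, so the student only has to learn a representation that the teacher's head already classifies well; the paper's analysis bounds the student's error in terms of the teacher under such an assumption.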
Note
24-month embargo; first published 03 February 2024
EISSN
1879-2782
PubMed ID
38330746
Version
Final accepted manuscript
DOI
10.1016/j.neunet.2024.106160
Related articles
- Multi-view Teacher-Student Network.
- Authors: Tian Y, Sun S, Tang J
- Issue date: 2022 Feb
- Leveraging different learning styles for improved knowledge distillation in biomedical imaging.
- Authors: Niyaz U, Sambyal AS, Bathula DR
- Issue date: 2024 Jan
- A deep learning knowledge distillation framework using knee MRI and arthroscopy data for meniscus tear detection.
- Authors: Ying M, Wang Y, Yang K, Wang H, Liu X
- Issue date: 2023
- Teacher-student complementary sample contrastive distillation.
- Authors: Bao Z, Huang Z, Gou J, Du L, Liu K, Zhou J, Chen Y
- Issue date: 2024 Feb
- Learning Student Networks via Feature Embedding.
- Authors: Chen H, Wang Y, Xu C, Xu C, Tao D
- Issue date: 2021 Jan