Citation
Deng, Y., Chai, C., Cao, L., Tang, N., Wang, J., Fan, J., ... & Wang, G. (2024). MisDetect: Iterative Mislabel Detection using Early Loss. Proceedings of the VLDB Endowment, 17(6), 1159-1172.
Rights
Copyright is held by the owner/author(s). Publication rights licensed to the VLDB Endowment. This work is licensed under the Creative Commons BY-NC-ND 4.0 International License.
Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract
Supervised machine learning (ML) models trained on data with mislabeled instances often produce inaccurate results due to label errors. Traditional methods of detecting mislabeled instances rely on data proximity, where an instance is considered mislabeled if its label is inconsistent with those of its neighbors. However, these methods often perform poorly because an instance does not always share its neighbors' labels. ML-based methods instead use trained models to differentiate between mislabeled and clean instances. However, these methods struggle to achieve high accuracy, since the models may already have overfit the mislabeled instances. In this paper, we propose a novel framework, MisDetect, that detects mislabeled instances during model training. MisDetect leverages the early loss observation to iteratively identify and remove mislabeled instances, applying influence-based verification in this process to enhance detection accuracy. Moreover, MisDetect automatically determines when the early loss is no longer effective in detecting mislabeled instances, at which point the iterative detection process terminates. Finally, for training instances whose status MisDetect is still uncertain about, it automatically produces pseudo labels to learn a binary classification model and leverages the generalization ability of that model to decide whether they are mislabeled. Our experiments on 15 datasets show that MisDetect outperforms 10 baseline methods, demonstrating its effectiveness in detecting mislabeled instances.
Note
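The early loss observation described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; it is a toy setup (logistic regression trained by SGD on synthetic two-cluster data, with one label deliberately flipped) showing that an instance whose label contradicts the data distribution keeps a high loss during the first few training epochs, while clean instances' losses drop quickly. All names here (`early_loss_scores`, the hyperparameters) are illustrative assumptions.

```python
import numpy as np

def early_loss_scores(X, y, epochs=3, lr=0.1, seed=0):
    """Average per-instance cross-entropy loss over the first few
    epochs of logistic-regression SGD training (the "early loss").
    Mislabeled instances tend to retain a high loss early on,
    before the model has had time to memorize them."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    loss_sum = np.zeros(len(y))
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            # Record this instance's loss at the moment it is visited.
            p = 1.0 / (1.0 + np.exp(-(X[i] @ w + b)))
            loss_sum[i] += -(y[i] * np.log(p + 1e-12)
                             + (1 - y[i]) * np.log(1 - p + 1e-12))
            # Standard SGD update for logistic regression.
            g = p - y[i]
            w -= lr * g * X[i]
            b -= lr * g
    return loss_sum / epochs

# Two well-separated clusters; flip one label to plant a mislabel.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)),
               rng.normal(2, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
y[5] = 1  # instance 5 is now mislabeled
scores = early_loss_scores(X, y)
suspect = int(np.argmax(scores))  # the highest-early-loss instance
```

In the full MisDetect framework this signal is used iteratively (detect, remove, retrain), combined with influence-based verification and a pseudo-label classifier for the uncertain remainder; the sketch above shows only the core early-loss ranking.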
Open access article
ISSN
2150-8097
Version
Final published version
DOI
10.14778/3648160.3648161