Exploring transformers and time lag features for predicting changes in mood over time
Citation
John Culnan, Damian Romero Diaz, and Steven Bethard. 2022. Exploring transformers and time lag features for predicting changes in mood over time. In Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology, pages 226–231, Seattle, USA. Association for Computational Linguistics.
Journal
CLPsych 2022 - 8th Workshop on Computational Linguistics and Clinical Psychology, Proceedings
Rights
Copyright © 2022 Association for Computational Linguistics. This is an open access article licensed under a Creative Commons Attribution 4.0 International License.
Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract
This paper presents transformer-based models created for the CLPsych 2022 shared task. Using posts from Reddit users over a period of time, we aim to predict changes in mood from post to post. We test models that preserve timeline information through explicit ordering of posts, as well as models that do not order posts but instead encode the length of time between a user's posts as features. We find that a model with temporal information may provide slight benefits over the same model without such information, although a RoBERTa transformer model provides enough information to make similar predictions without custom-encoded time information. © 2022 Association for Computational Linguistics.
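The time-lag features described in the abstract can be illustrated with a short sketch. The snippet below is a hypothetical reconstruction, not the authors' code: it assumes roberta-base via the Hugging Face transformers library, measures the lag between consecutive posts in hours, and concatenates that scalar with each post's [CLS] embedding.

```python
# Hypothetical sketch: append a time-lag feature (hours since the user's
# previous post) to RoBERTa [CLS] embeddings. Model choice, lag unit, and
# concatenation are illustrative assumptions, not the paper's implementation.
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
encoder = RobertaModel.from_pretrained("roberta-base")
encoder.eval()

def encode_timeline(posts, timestamps):
    """posts: list of post texts; timestamps: epoch seconds, sorted ascending."""
    features = []
    for i, post in enumerate(posts):
        tokens = tokenizer(post, truncation=True, return_tensors="pt")
        with torch.no_grad():
            cls = encoder(**tokens).last_hidden_state[:, 0, :]  # shape [1, 768]
        # Time lag since the previous post, in hours; 0 for the first post.
        lag_hours = 0.0 if i == 0 else (timestamps[i] - timestamps[i - 1]) / 3600.0
        lag = torch.tensor([[lag_hours]])
        features.append(torch.cat([cls, lag], dim=-1))  # shape [1, 769]
    return torch.cat(features, dim=0)  # [num_posts, 769], input to a classifier
```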
Note
Open access journal
ISBN
9781955917872
Version
Final published version
DOI
10.18653/v1/2022.clpsych-1.21