Identifying Latent Attributes from Video Scenes Using Knowledge Acquired From Large Collections of Text Documents
Field | Value | Language |
dc.contributor.advisor | Cohen, Paul R. | en_US |
dc.contributor.advisor | Surdeanu, Mihai | en_US |
dc.contributor.author | Tran, Anh Xuan | |
dc.creator | Tran, Anh Xuan | en_US |
dc.date.accessioned | 2014-10-14T19:16:46Z | |
dc.date.available | 2014-10-14T19:16:46Z | |
dc.date.issued | 2014 | |
dc.identifier.uri | http://hdl.handle.net/10150/332735 | |
dc.description.abstract | Peter Drucker, an influential writer and philosopher in the field of management theory and practice, once claimed that “the most important thing in communication is hearing what isn't said.” A similar principle holds for video scene understanding. In almost every non-trivial video scene, the most important elements, such as the motives and intentions of the actors, cannot be directly observed, yet identifying these latent attributes is crucial to a full understanding of the scene. In short, latent attributes matter. In this work, we explore the task of identifying latent attributes in video scenes, focusing on the mental states of participant actors. We propose a novel approach based on large text collections as background knowledge and minimal information about the videos, such as activity and actor types, as query context. We formalize the task and define a measure of merit that accounts for both the semantic relatedness of mental state terms and their distribution weights. We develop and test several largely unsupervised information extraction models that identify the mental state labels of human participants in video scenes, given some contextual information about the scenes. We show that these models produce complementary information; their combination significantly outperforms the individual models and improves over several baseline methods on two different datasets. We present an extensive analysis of our models and close with a discussion of our findings, along with a roadmap for future research. | |
dc.language.iso | en_US | en |
dc.publisher | The University of Arizona. | en_US |
dc.rights | Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, or presentation (such as public display or performance) of protected items is prohibited except with permission of the author. | en_US |
dc.subject | computer vision | en_US |
dc.subject | information extraction | en_US |
dc.subject | information retrieval | en_US |
dc.subject | mental state inference | en_US |
dc.subject | natural language processing | en_US |
dc.subject | Computer Science | en_US |
dc.subject | artificial intelligence | en_US |
dc.title | Identifying Latent Attributes from Video Scenes Using Knowledge Acquired From Large Collections of Text Documents | en_US |
dc.type | text | en |
dc.type | Electronic Dissertation | en |
thesis.degree.grantor | University of Arizona | en_US |
thesis.degree.level | doctoral | en_US |
dc.contributor.committeemember | Cohen, Paul R. | en_US |
dc.contributor.committeemember | Surdeanu, Mihai | en_US |
dc.contributor.committeemember | Barnard, Kobus | en_US |
dc.contributor.committeemember | McAllister, Ken S. | en_US |
thesis.degree.discipline | Graduate College | en_US |
thesis.degree.discipline | Computer Science | en_US |
thesis.degree.name | Ph.D. | en_US |
refterms.dateFOA | 2018-08-31T21:02:48Z | |
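The abstract outlines the approach at a high level: given minimal query context about a scene (activity and actor types), mine a large text collection for candidate mental-state labels and rank them by a score combining distribution weight and semantic relatedness. The dissertation's actual corpus, lexicon, retrieval machinery, and relatedness measure are not given in this record, so the following is only a minimal illustrative sketch under toy assumptions: an in-memory corpus, a hand-picked mental-state lexicon, and a co-occurrence-based relatedness proxy, all of which are hypothetical stand-ins.

```python
from collections import Counter

# Hypothetical stand-ins; the dissertation's actual text collection,
# lexicon, and relatedness measure are not specified in this record.
CORPUS = [
    "the runner felt determined and focused as the race began",
    "spectators were excited while the runner stayed focused",
    "after the fall the runner was frustrated but determined",
]
MENTAL_STATE_LEXICON = {"determined", "focused", "excited", "frustrated", "afraid"}

def retrieve(corpus, context_terms):
    """Keep only snippets that mention every query-context term."""
    return [s for s in corpus if all(t in s.split() for t in context_terms)]

def relatedness(term, context_terms, corpus):
    """Toy relatedness proxy: fraction of corpus snippets where the term
    co-occurs with at least one query-context term."""
    hits = sum(
        1 for s in corpus
        if term in s.split() and any(t in s.split() for t in context_terms)
    )
    return hits / len(corpus)

def rank_mental_states(corpus, context_terms, top_k=3):
    """Score candidate labels by distribution weight (relative frequency
    in the retrieved snippets) times relatedness to the query context."""
    snippets = retrieve(corpus, context_terms)
    counts = Counter(
        tok for s in snippets for tok in s.split()
        if tok in MENTAL_STATE_LEXICON
    )
    total = sum(counts.values()) or 1  # guard against empty retrieval
    scored = {
        term: (count / total) * relatedness(term, context_terms, corpus)
        for term, count in counts.items()
    }
    return sorted(scored.items(), key=lambda kv: -kv[1])[:top_k]

# Query context: minimal scene information (here, just the actor type).
print(rank_mental_states(CORPUS, ["runner"]))
```

On this toy corpus the sketch ranks “determined” and “focused” above “excited” and “frustrated” for the runner context. Per the abstract, the actual system builds several such largely unsupervised extractors over large text collections and combines them, with the combination significantly outperforming any individual model.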