Detecting Foreground in Videos via Posterior Regularized Robust Bayesian Tensor Factorization
Publisher: The University of Arizona.
Rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Embargo: Thesis not available (per author's request)
Abstract: Foreground detection is a critical step for separating moving objects from the background in video processing. Tensor factorization has been used in foreground detection due to its ability to process complex high-dimensional data, such as color images and videos. However, traditional tensor factorization often lacks the ability to quantify uncertainty. Bayesian tensor factorization can measure uncertainty by considering the distributions of the tensor factorization model parameters. In addition, domain knowledge is commonly available and could improve the accuracy of foreground detection in a Bayesian tensor factorization model if it can be appropriately incorporated. In this work, a new Bayesian tensor factorization model, named Posterior Regularized Bayesian Robust Tensor Factorization (PR-BRTF), is proposed, incorporating characteristics of the dynamic foreground as a sparsity-inducing posterior regularization term. Furthermore, variational Bayesian inference is combined with the L1 norm to induce sparsity while keeping inference efficient. Experiments on real-world case studies have shown the performance improvement of the proposed model over benchmarks.
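The abstract describes separating a video into a structured background and an L1-sparse moving foreground. As a rough illustration only, the sketch below implements the classical deterministic analogue of this idea, a robust-PCA-style low-rank + sparse split with soft-thresholding; the thesis's actual PR-BRTF model is a Bayesian tensor method with posterior regularization, and the function name, toy data, and parameter choices here are assumptions for illustration, not the author's method.

```python
import numpy as np

def lowrank_sparse_split(Y, rank=1, lam=0.1, n_iter=50):
    """Illustrative low-rank + sparse split (not PR-BRTF itself).

    Alternates a truncated SVD, which models the static background,
    with elementwise soft-thresholding (the L1 proximal operator),
    which models the sparse moving foreground. lam controls sparsity.
    """
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        # Background update: best rank-r approximation of Y - S.
        U, s, Vt = np.linalg.svd(Y - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Foreground update: soft-threshold the residual (L1 prox).
        R = Y - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S

# Toy "video": 20 frames of a flat 10x10 background with one bright
# pixel sweeping across; each column of Y is one vectorized frame.
rng = np.random.default_rng(0)
frames = np.full((20, 10, 10), 0.5)
for t in range(20):
    frames[t, 4, t % 10] = 2.0  # moving foreground pixel
Y = frames.reshape(20, -1).T + 0.01 * rng.standard_normal((100, 20))

L, S = lowrank_sparse_split(Y, rank=1, lam=0.2)
mask = np.abs(S) > 0.5  # detected foreground support per frame
```

The soft-threshold step is exactly where the L1 norm induces sparsity: residual entries smaller than `lam` (sensor noise, mild illumination change) are zeroed out, while the large moving-object entries survive into the foreground mask.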
Degree Program: Graduate College
Systems & Industrial Engineering