Predictive Modeling for Spatio-Temporal Data: From Gaussian Process to Deep Neural Networks
Author
Chen, Xi
Issue Date
2024
Keywords
deep learning
Gaussian process
Predictive modeling
spatio-temporal data
trajectory prediction
uncertainty quantification
Advisor
Head, Larry
Publisher
The University of Arizona.
Rights
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract
Spatio-temporal (ST) data is ubiquitous across engineering applications such as sensor networks and transportation systems. This type of data integrates timestamps with geographic coordinates, providing a comprehensive view of how events or variables change in both the temporal and spatial dimensions. Existing models for ST data fall into two categories: statistical machine learning models and deep learning models. While statistical models provide interpretable results, they may face challenges with data complexity and scalability. In contrast, deep learning excels at capturing complex patterns but demands significant data and computational resources. We explore both statistical machine learning and deep learning models to effectively handle spatio-temporal data across several engineering applications.

In the first application, we focus on predicting the radiation patterns of 3D-printed antennas for antenna design automation. We extend the standard Gaussian process (GP) model to accommodate spatially structured inputs by incorporating spatial closeness into the kernel function. We compare different kernel functions and use the fitted GP model together with Bayesian optimization to find optimal designs for specified design objectives. Results show that our proposed kernel outperforms previous ones on both 2D and 3D spatially structured inputs.

In the second application, we introduce a deep learning framework for modeling trajectory data from multiple sources in a connected and autonomous vehicle (CAV) environment. We leverage Transformers and Graph Neural Networks (GNNs) to capture temporal and spatial relations, respectively. Recognizing the distinct characteristics of data from sensor and communication technologies, we employ source-specific encoders to capture temporal dependencies. The trajectory dataset is collected in the CARLA simulator with synthesized data errors injected. Numerical experiments demonstrate that, in a mixed traffic flow scenario, integrating data from different sources enhances understanding of the environment and notably improves trajectory prediction accuracy, particularly at high connected vehicle (CV) market penetration rates.

In the third application, we extend the trajectory prediction task to a Vehicle-to-Infrastructure (V2I) setting, leveraging trajectory data from both the vehicle and infrastructure perspectives. We propose a conformal vehicle trajectory prediction framework with multi-view data integration. For the trajectory data from each view, we employ established GNN-based models to capture temporal dependencies, agent-agent interactions, and agent-lane relations. We then use a cross-graph attention module to fuse node features from the different views. The predicted multimodal trajectories are calibrated by a post-hoc conformal prediction module to obtain valid and efficient confidence regions, which is crucial for safety-critical tasks. The whole framework is evaluated on a real-world V2I dataset, and its performance demonstrates its effectiveness and advantages over existing benchmarks.
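For illustration only, the sketch below shows one way the first application's idea could look in code: a GP whose kernel gives the spatial coordinates of the radiation-pattern samples their own, separately tuned length scales (one simple way to encode spatial closeness), followed by a single upper-confidence-bound step of Bayesian optimization. The synthetic data, the anisotropic-RBF kernel choice, and the scikit-learn usage are assumptions made for this sketch, not the dissertation's implementation.

```python
# Minimal sketch (assumed, not the dissertation's code): GP over inputs that
# stack design parameters with spatial coordinates, plus one BO step.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# X = [design parameters (2 dims) | spatial coordinates (2 dims)], y = gain (illustrative)
X = rng.uniform(0.0, 1.0, size=(200, 4))
y = np.sin(3 * X[:, 0]) + 0.5 * np.cos(4 * X[:, 2] + X[:, 3]) + 0.05 * rng.normal(size=200)

# Separate length scales per dimension; the spatial dimensions play the role
# of the "spatial closeness" term in the kernel.
kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0, 1.0, 1.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# One Bayesian-optimization step: pick the candidate maximizing an upper
# confidence bound of the GP posterior.
candidates = rng.uniform(0.0, 1.0, size=(1000, 4))
mu, sigma = gp.predict(candidates, return_std=True)
best = candidates[np.argmax(mu + 2.0 * sigma)]
print("next design to evaluate:", best)
```

Similarly, the post-hoc conformal prediction module in the third application can be illustrated with a split-conformal calibration step. The nonconformity score used here (minimum final-displacement error over the K predicted modes) and the name `conformal_radius` are assumptions for the sketch; the dissertation's module may define scores and regions differently.

```python
# Minimal split-conformal sketch (assumed simplification) for multimodal
# trajectory prediction: calibrate a radius so that circles of that radius
# around the predicted endpoints cover ~(1 - alpha) of true endpoints.
import numpy as np

def conformal_radius(pred_endpoints, true_endpoints, alpha=0.1):
    """pred_endpoints: (N, K, 2) calibration predictions; true_endpoints: (N, 2)."""
    # Score = distance from the truth to the closest predicted mode.
    scores = np.linalg.norm(pred_endpoints - true_endpoints[:, None, :], axis=-1).min(axis=1)
    n = len(scores)
    # Finite-sample-corrected quantile level from split conformal prediction.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level)

# Illustrative usage with random calibration data.
rng = np.random.default_rng(0)
preds = rng.normal(size=(500, 6, 2))
truth = rng.normal(size=(500, 2))
print("calibrated radius:", conformal_radius(preds, truth, alpha=0.1))
```

Under the usual exchangeability assumption between calibration and test data, regions built with this radius attain approximately the target coverage regardless of the underlying prediction model, which is what makes the calibration post-hoc.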
Type
Electronic Dissertation
text
Degree Name
Ph.D.
Degree Level
doctoral
Degree Program
Graduate College
Systems & Industrial Engineering