Levine, Joshua A.
Affiliation: University of Arizona, Department of Computer Science
Citation: Wang, Z., Cashman, D., Li, M., Li, J., Berger, M., Levine, J. A., Chang, R., & Scheidegger, C. (2021). NeuralCubes: Deep Representations for Visual Data Exploration. Proceedings - 2021 IEEE International Conference on Big Data, Big Data 2021.
Rights: © 2021 IEEE.
Collection Information: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at firstname.lastname@example.org.
Abstract: Visual exploration of large multi-dimensional datasets has seen tremendous progress in recent years, allowing users to express rich data queries that produce informative visual summaries, all in real time. Techniques based on data cubes are some of the most promising approaches. However, these techniques usually require a large memory footprint for large datasets. To tackle this problem, we present NeuralCubes: neural networks that predict results for aggregate queries, similar to data cubes. NeuralCubes learns a function that takes as input a given query, for instance, a geographic region and temporal interval, and outputs the result of the query. The learned function serves as a real-time, low-memory approximator for aggregation queries. Our models are small enough to be sent to the client side (e.g. the web browser for a web-based application) for evaluation, enabling data exploration of large datasets without a database or network connection. We demonstrate the effectiveness of NeuralCubes through extensive experiments on a variety of datasets and discuss how NeuralCubes opens up opportunities for new types of visualization and interaction.
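The abstract describes learning a function that maps query parameters (e.g. a geographic region and a temporal interval) to the corresponding aggregate result. The following is a minimal, hedged sketch of that idea, assuming a fixed-size query encoding (normalized latitude/longitude bounds and an hour-of-day interval) and a small feed-forward network with untrained weights; the encoding, architecture, and training details here are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

# Hypothetical query encoding: [lat_min, lat_max, lon_min, lon_max, hour_start, hour_end],
# each normalized to [0, 1]. The real NeuralCubes encoding may differ.
def encode_query(lat_range, lon_range, hour_range):
    lat = [(v + 90.0) / 180.0 for v in lat_range]
    lon = [(v + 180.0) / 360.0 for v in lon_range]
    hrs = [v / 24.0 for v in hour_range]
    return np.array(lat + lon + hrs, dtype=np.float32)

class TinyAggregateNet:
    """Small MLP mapping an encoded query to one predicted aggregate (e.g. a count).

    Untrained illustration only; in practice the weights would be fit to
    (query, true aggregate) pairs sampled from the underlying dataset,
    and the trained model shipped to the client for local evaluation.
    """
    def __init__(self, in_dim=6, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden)).astype(np.float32)
        self.b1 = np.zeros(hidden, dtype=np.float32)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1)).astype(np.float32)
        self.b2 = np.zeros(1, dtype=np.float32)

    def predict(self, x):
        h = np.maximum(x @ self.W1 + self.b1, 0.0)  # ReLU hidden layer
        return float(h @ self.W2 + self.b2)         # scalar aggregate estimate

net = TinyAggregateNet()
q = encode_query((40.5, 41.0), (-74.3, -73.7), (8, 10))  # a region + time window
pred = net.predict(q)  # answered locally, without touching the database
```

Because evaluating such a network needs only a few small matrix multiplications, a trained model of this shape could plausibly run client-side (e.g. in a browser), which matches the memory and connectivity benefits the abstract claims.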
Version: Final accepted manuscript
Sponsors: National Science Foundation