Fast Approximate Score Computation on Large-Scale Distributed Data for Learning Multinomial Bayesian Networks
Affiliation: University of Arizona, Department of Computer Science
Keywords: Approximate score computation
Publisher: Association for Computing Machinery
Citation: Anas Katib, Praveen Rao, Kobus Barnard, and Charles Kamhoua. 2019. Fast Approximate Score Computation on Large-Scale Distributed Data for Learning Multinomial Bayesian Networks. ACM Trans. Knowl. Discov. Data 13, 2, Article 14 (March 2019), 40 pages. https://doi.org/10.1145/3301304
Rights: © 2019 Association for Computing Machinery
Collection Information: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at firstname.lastname@example.org.
Abstract: In this article, we focus on the problem of learning a Bayesian network over distributed data stored in a commodity cluster. Specifically, we address the challenge of computing the scoring function over distributed data in an efficient and scalable manner, which is a fundamental task during learning. While exact score computation can be done using MapReduce-style computation, our goal is to compute approximate scores much faster, with probabilistic error bounds, and in a scalable manner. We propose a novel approach designed to achieve the following: (a) decentralized score computation using the principle of gossiping; (b) lower resource consumption via a probabilistic approach for maintaining scores using the properties of a Markov chain; and (c) effective distribution of tasks during score computation on large datasets by synergistically combining well-known hashing techniques. We conduct a theoretical analysis of our approach in terms of the convergence speed of the statistics required for score computation, and of memory and network bandwidth consumption. We also discuss how our approach can efficiently recompute scores when new data become available. We conducted a comprehensive evaluation of our approach and compared it with MapReduce-style computation using datasets of different characteristics on a 16-node cluster. When the MapReduce-style computation provided exact statistics for score computation, it was nearly 10 times slower than our approach. Although it ran faster on randomly sampled datasets than on the entire datasets, it performed worse than our approach in terms of accuracy. Our approach achieved high accuracy (below 6% average relative error) in estimating the statistics for approximate score computation on all the tested datasets. In conclusion, our approach provides a feasible tradeoff between computation time and accuracy for fast approximate score computation on large-scale distributed data.
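The gossip-based aggregation mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' actual protocol, only the generic pairwise-averaging idea it builds on: each node holds a local count (e.g., of a variable/parent configuration in its data shard), random pairs repeatedly average their estimates, and every estimate converges to the global mean, from which a node can recover the global count as mean times the number of nodes. All names below are hypothetical.

```python
import random

def gossip_average(values, rounds=500, seed=0):
    """Pairwise gossip: in each step, two randomly chosen nodes
    replace their local estimates with the average of the two.
    The sum over all nodes is preserved, so every estimate
    converges to the global mean of the initial values."""
    rng = random.Random(seed)
    est = list(values)
    n = len(est)
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)   # pick a random pair of nodes
        avg = (est[i] + est[j]) / 2.0
        est[i] = est[j] = avg
    return est

# Hypothetical local counts of one (variable, parent-configuration)
# event, one entry per node in an 8-node cluster.
local_counts = [12, 7, 0, 5, 3, 9, 4, 8]
estimates = gossip_average(local_counts)
n = len(local_counts)
# Each node's estimate of the cluster-wide count: mean * n.
global_estimates = [e * n for e in estimates]
```

After enough rounds, every node's `global_estimate` is close to the true total (48 here) without any node ever seeing the full dataset, which is the decentralization property the paper exploits for score computation.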
Version: Final accepted manuscript
Sponsors: U.S. Air Force Summer Faculty Fellowship Program; University of Missouri Research Board; National Science Foundation (NSF); King Abdullah Scholarship Program (Saudi Arabia)