Show simple item record

dc.contributor.author: Katib, Anas
dc.contributor.author: Rao, Praveen
dc.contributor.author: Barnard, Kobus
dc.contributor.author: Kamhoua, Charles
dc.date.accessioned: 2019-12-06T21:15:07Z
dc.date.available: 2019-12-06T21:15:07Z
dc.date.issued: 2019-06
dc.identifier.citation: Anas Katib, Praveen Rao, Kobus Barnard, and Charles Kamhoua. 2019. Fast Approximate Score Computation on Large-Scale Distributed Data for Learning Multinomial Bayesian Networks. ACM Trans. Knowl. Discov. Data 13, 2, Article 14 (March 2019), 40 pages. https://doi.org/10.1145/3301304 [en_US]
dc.identifier.issn: 1556-4681
dc.identifier.doi: 10.1145/3301304
dc.identifier.uri: http://hdl.handle.net/10150/636328
dc.description.abstract: In this article, we focus on the problem of learning a Bayesian network over distributed data stored in a commodity cluster. Specifically, we address the challenge of computing the scoring function over distributed data in an efficient and scalable manner, which is a fundamental task during learning. While exact score computation can be done using MapReduce-style computation, our goal is to compute approximate scores much faster, with probabilistic error bounds and in a scalable manner. We propose a novel approach designed to achieve the following: (a) decentralized score computation using the principle of gossiping; (b) lower resource consumption via a probabilistic approach for maintaining scores using the properties of a Markov chain; and (c) effective distribution of tasks during score computation (on large datasets) by synergistically combining well-known hashing techniques. We conduct theoretical analysis of our approach in terms of the convergence speed of the statistics required for score computation, and of memory and network bandwidth consumption. We also discuss how our approach can efficiently recompute scores when new data become available. We conducted a comprehensive evaluation of our approach and compared it with MapReduce-style computation, using datasets of different characteristics on a 16-node cluster. When the MapReduce-style computation provided exact statistics for score computation, it was nearly 10 times slower than our approach. Although it ran faster on randomly sampled datasets than on the entire datasets, it performed worse than our approach in terms of accuracy. Our approach achieved high accuracy (below 6% average relative error) in estimating the statistics for approximate score computation on all the tested datasets. In conclusion, it provides a feasible tradeoff between computation time and accuracy for fast approximate score computation on large-scale distributed data. [en_US]
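The decentralized, gossip-based estimation described in the abstract can be illustrated with one standard gossip primitive, push-sum (Kempe et al.): each node repeatedly halves a running (sum, weight) pair with a randomly chosen peer, every node's ratio converges to the global mean, and the cluster-wide count follows. This is a minimal sketch of the principle only, not the paper's implementation; the `push_sum` function, the uniform random partner choice, and the 16-node setup (mirroring the paper's evaluation cluster) are assumptions.

```python
import random

def push_sum(local_counts, rounds=60, seed=7):
    """Simulate push-sum gossip: every node converges to an estimate of
    the cluster-wide total of a statistic without a central coordinator.
    Illustrative sketch only; not the authors' system."""
    rng = random.Random(seed)
    n = len(local_counts)
    s = [float(c) for c in local_counts]  # mass being averaged
    w = [1.0] * n                         # weights; s[i]/w[i] -> global mean
    for _ in range(rounds):
        new_s = [0.0] * n
        new_w = [0.0] * n
        for i in range(n):
            j = rng.randrange(n)          # random partner (j == i is harmless)
            # keep half of the (s, w) mass, push the other half to j;
            # total mass is conserved, so s[i]/w[i] converges to the mean
            new_s[i] += s[i] / 2
            new_w[i] += w[i] / 2
            new_s[j] += s[i] / 2
            new_w[j] += w[i] / 2
        s, w = new_s, new_w
    # each node's estimate of the global total: n * (estimated mean)
    return [n * s[i] / w[i] for i in range(n)]

# 16 nodes, each holding a local count of one variable configuration
rng = random.Random(1)
counts = [rng.randrange(100, 1000) for _ in range(16)]
estimates = push_sum(counts)
print("true total:", sum(counts))
print("node 0 gossip estimate:", round(estimates[0], 1))
```

In the paper's setting, counts estimated this way would serve as the sufficient statistics plugged into the multinomial scoring function; the scheme above shows only the gossip-averaging step, not the Markov-chain score maintenance or the hashing-based task distribution.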
dc.description.sponsorship: U.S. Air Force Summer Faculty Fellowship Program; University of Missouri Research Board; National Science Foundation (NSF) [1747751]; King Abdullah Scholarship Program (Saudi Arabia) [en_US]
dc.language.iso: en [en_US]
dc.publisher: Association for Computing Machinery [en_US]
dc.rights: © 2019 Association for Computing Machinery. [en_US]
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Approximate score computation [en_US]
dc.subject: Bayesian networks [en_US]
dc.subject: structure learning [en_US]
dc.subject: distributed data [en_US]
dc.subject: gossip algorithms [en_US]
dc.title: Fast Approximate Score Computation on Large-Scale Distributed Data for Learning Multinomial Bayesian Networks [en_US]
dc.type: Article [en_US]
dc.contributor.department: University of Arizona, Department of Computer Science [en_US]
dc.identifier.journal: ACM Transactions on Knowledge Discovery from Data [en_US]
dc.description.collectioninformation: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu. [en_US]
dc.eprint.version: Final accepted manuscript [en_US]
dc.source.volume: 13
dc.source.issue: 2
dc.source.beginpage: 1-40
refterms.dateFOA: 2019-12-06T21:15:08Z


Files in this item

Name: DiSC-TKDD-accepted-paper.pdf
Size: 3.681 MB
Format: PDF
Description: Final Accepted Manuscript

This item appears in the following Collection(s)

UA Faculty Publications