Computing accurate probabilistic estimates of one-D entropy from equiprobable random samples
Name: entropy-23-00740-v2.pdf
Size: 6.077 MB
Format: PDF
Description: Final Published Version
Affiliation: Hydrology and Atmospheric Sciences, The University of Arizona; GIDP Statistics and Data Science, The University of Arizona
Issue Date: 2021
Publisher: MDPI AG
Citation: Gupta, H. V., Ehsani, M. R., Roy, T., Sans-Fuentes, M. A., Ehret, U., & Behrangi, A. (2021). Computing accurate probabilistic estimates of one-D entropy from equiprobable random samples. Entropy, 23(6), 740.
Journal: Entropy
Rights: Copyright © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Collection Information: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract:
We develop a simple Quantile Spacing (QS) method for accurate probabilistic estimation of one-dimensional entropy from equiprobable random samples, and compare it with the popular Bin-Counting (BC) and Kernel Density (KD) methods. In contrast to BC, which uses equal-width bins with varying probability mass, the QS method uses estimates of the quantiles that divide the support of the data generating probability density function (pdf) into equal-probability-mass intervals. Whereas BC and KD each require optimal tuning of a hyper-parameter whose value varies with sample size and shape of the pdf, QS requires only specification of the number of quantiles to be used. Results indicate, for the class of distributions tested, that the optimal number of quantiles is a fixed fraction of the sample size (empirically determined to be ~0.25–0.35), and that this value is relatively insensitive to distributional form or sample size. This provides a clear advantage over BC and KD, since hyper-parameter tuning is not required. Further, unlike KD, there is no need to select an appropriate kernel type, and so QS is applicable to pdfs of arbitrary shape, including those with discontinuous slope and/or magnitude. Bootstrapping is used to approximate the sampling variability distribution of the resulting entropy estimate, and is shown to accurately reflect the true uncertainty. For the four distributional forms studied (Gaussian, Log-Normal, Exponential, and Bimodal Gaussian Mixture), expected estimation bias is less than 1% and uncertainty is low even for samples of as few as 100 data points; in contrast, the small-sample bias can be as large as −10% for KD and −50% for BC. We speculate that estimating quantile locations, rather than bin probabilities, results in more efficient use of the information in the data to approximate the underlying shape of an unknown data generating pdf.
Note: Open access journal
ISSN: 1099-4300
Version: Final published version
DOI: 10.3390/e23060740
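The idea described in the abstract can be illustrated with a short sketch. This is a hypothetical reading of the QS approach, not the authors' published algorithm: it places a number of equal-probability-mass quantiles equal to a fixed fraction (`alpha`, ~0.3 per the abstract) of the sample size, stabilizes the quantile estimates by averaging over bootstrap resamples (an assumption here; the abstract uses bootstrapping to characterize estimation uncertainty), and treats each inter-quantile interval as carrying probability mass 1/n_q, giving H ≈ mean(log(n_q · Δq)). The names `qs_entropy`, `alpha`, and `n_boot` are invented for illustration.

```python
import numpy as np

def qs_entropy(sample, alpha=0.3, n_boot=500, rng=None):
    """Sketch of a quantile-spacing entropy estimate (nats).

    alpha  : number of quantiles as a fraction of the sample size
             (the abstract reports ~0.25-0.35 as near-optimal).
    n_boot : bootstrap resamples used to stabilize the quantile
             estimates (illustrative assumption, not from the paper).
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(sample, dtype=float)
    n_q = max(int(alpha * x.size), 2)

    # Interior equal-probability-mass quantile levels.
    probs = np.linspace(0.0, 1.0, n_q + 1)[1:-1]

    # Average quantile estimates over bootstrap resamples of the data.
    q_sum = np.zeros(probs.size)
    for _ in range(n_boot):
        resample = rng.choice(x, size=x.size, replace=True)
        q_sum += np.quantile(resample, probs)
    q = np.concatenate(([x.min()], q_sum / n_boot, [x.max()]))

    # Each interval holds mass 1/n_q, so the implied piecewise-constant
    # density on an interval of width dq is (1/n_q)/dq, and
    # H = -sum(p * log(density)) ~= mean(log(n_q * dq)).
    dq = np.diff(q)
    return float(np.mean(np.log(n_q * dq)))
```

For a standard Gaussian sample the estimate should land near the analytical differential entropy 0.5·ln(2πe) ≈ 1.419; for a Uniform(0, 1) sample it should be near 0.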