Tight Concentrations and Confidence Sequences from the Regret of Universal Portfolio
Affiliation
Department of Computer Science, University of Arizona

Issue Date
2023-11-10

Keywords
Behavioral sciences
confidence sequence
Portfolios
Prediction algorithms
Random variables
regret
Tail
Testing
universal portfolio
Upper bound
Citation
F. Orabona and K.-S. Jun, "Tight Concentrations and Confidence Sequences From the Regret of Universal Portfolio," IEEE Transactions on Information Theory, vol. 70, no. 1, pp. 436-455, Jan. 2024, doi: 10.1109/TIT.2023.3330187.

Rights
© 2023 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License.

Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.

Abstract
A classic problem in statistics is the estimation of the expectation of random variables from samples. This gives rise to the tightly connected problems of deriving concentration inequalities and confidence sequences, i.e., confidence intervals that hold uniformly over time. Previous studies have shown that it is possible to convert the regret guarantee of an online learning algorithm into concentration inequalities, but these concentration results were not tight. In this paper, we show that the regret guarantees of universal portfolio algorithms, applied to the online learning problem of betting, give rise to new implicit time-uniform concentration inequalities for bounded random variables. The key feature of our concentration results is that they are centered around the maximum log wealth of the best fixed betting strategy in hindsight. We propose numerical methods to solve these implicit inequalities, yielding confidence sequences that enjoy the empirical Bernstein rate with the optimal asymptotic behavior while never being worse than Bernoulli-KL confidence bounds. We further show that our confidence sequences are never vacuous, even with one sample, for any given target failure rate δ ∈ (0,1). Our empirical study shows that our confidence bounds achieve state-of-the-art performance, especially in the small-sample regime.
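The abstract's construction can be sketched numerically: for samples in [0, 1], a candidate mean m stays in the confidence set as long as the log wealth of the best fixed betting strategy in hindsight does not exceed log(1/δ) plus a regret term. The following is a minimal illustration, not the paper's exact algorithm: it grid-searches candidate means and constant bets, and uses an illustrative (1/2)·log(t+1) + 1 stand-in for the universal portfolio regret bound rather than the paper's tight constants.

```python
import math

def betting_confidence_interval(xs, delta=0.05, m_grid=200, lam_grid=100):
    """Sketch of a betting-based confidence interval for the mean of
    samples xs in [0, 1]: keep each candidate mean m whose
    best-in-hindsight log wealth stays below log(1/delta) + regret."""
    t = len(xs)
    # Illustrative stand-in for a universal-portfolio regret bound.
    threshold = math.log(1.0 / delta) + 0.5 * math.log(t + 1.0) + 1.0
    kept = []
    for i in range(1, m_grid):
        m = i / m_grid
        # A constant bet lam must keep 1 + lam * (x - m) > 0 for x in [0, 1].
        lo_lam, hi_lam = -1.0 / (1.0 - m), 1.0 / m
        best = 0.0  # lam = 0 always yields log wealth 0
        for j in range(1, lam_grid):  # strictly interior bets
            lam = lo_lam + (hi_lam - lo_lam) * j / lam_grid
            best = max(best, sum(math.log1p(lam * (x - m)) for x in xs))
        if best <= threshold:
            kept.append(m)
    return (min(kept), max(kept)) if kept else (0.0, 1.0)
```

For instance, with 50 samples all equal to 0.5, the returned interval tightly brackets 0.5, since the best bet's wealth grows exponentially in t for any m away from the true mean. The actual method in the paper replaces the fixed-bet grid with the universal portfolio wealth and solves the implicit inequality more carefully.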
Note
Open access article

ISSN
0018-9448

Version
Final Published Version

DOI
10.1109/TIT.2023.3330187