Persistent link: https://www.econbiz.de/10010506518
Persistent link: https://www.econbiz.de/10011809730
Persistent link: https://www.econbiz.de/10001794322
Persistent link: https://www.econbiz.de/10001889620
Persistent link: https://www.econbiz.de/10001652615
We show that Bayesian posteriors concentrate on the outcome distributions that approximately minimize the Kullback–Leibler divergence from the empirical distribution, uniformly over sample paths, even when the prior does not have full support. This generalizes Diaconis and Freedman's (1990)...
Persistent link: https://www.econbiz.de/10014440089
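To make the concentration result above concrete, here is a minimal numerical sketch, not the paper's construction: a hypothetical finite model family (the support of the prior) that excludes the true outcome distribution, so the prior lacks full support, yet the posterior piles up on the model minimizing Kullback–Leibler divergence from the empirical distribution. All names and parameter values (p_star, models, n) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: three outcomes; the true distribution p_star is NOT
# in the model family, so the prior does not have full support.
p_star = np.array([0.5, 0.3, 0.2])

# Finite model family carried by the prior; values are illustrative.
models = np.array([
    [0.60, 0.20, 0.20],
    [0.40, 0.40, 0.20],
    [0.45, 0.35, 0.20],   # best KL fit to p_star among the four
    [0.20, 0.20, 0.60],
])
prior = np.full(len(models), 1.0 / len(models))

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for categorical distributions."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

n = 2000
x = rng.choice(3, size=n, p=p_star)
counts = np.bincount(x, minlength=3)
p_hat = counts / n                      # empirical distribution

# Log posterior = log prior + log likelihood. For i.i.d. categorical data,
# log likelihood = sum_k counts[k] * log q[k] = -n * (H(p_hat) + KL(p_hat||q)),
# and H(p_hat) is common to all models, so the posterior ranks models
# purely by their KL divergence from the empirical distribution.
log_post = np.log(prior) + counts @ np.log(models).T
post = np.exp(log_post - log_post.max())
post /= post.sum()

for q, w in zip(models, post):
    print(f"model {q}: KL(p_hat||q) = {kl(p_hat, q):.4f}, posterior = {w:.6f}")
```

With n = 2000 draws, essentially all posterior mass lands on the third model, whose KL divergence from the empirical distribution is smallest, even though no model in the family fits exactly.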
The expected value of the log of a Bayesian’s posterior assessment of the true state of nature, computed under the probability law of the true state, is always at least as large as the log of the prior assessment of that state.
Persistent link: https://www.econbiz.de/10011116198
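The inequality above follows from a standard one-line argument, sketched here in LaTeX for a discrete-state setup (the paper's framework may be more general): by Bayes' rule, the expected log posterior of the true state exceeds the log prior by exactly a Kullback–Leibler divergence, which is nonnegative.

```latex
% By Bayes' rule, for true state \theta^* and marginal likelihood p(x):
\[
  \log \pi(\theta^* \mid x)
    = \log \pi_0(\theta^*) + \log \frac{p(x \mid \theta^*)}{p(x)},
  \qquad p(x) = \sum_{\theta} \pi_0(\theta)\, p(x \mid \theta).
\]
% Taking expectations under the true law p(\cdot \mid \theta^*):
\[
  \mathbb{E}_{\theta^*}\!\left[\log \pi(\theta^* \mid x)\right]
    = \log \pi_0(\theta^*)
      + D_{\mathrm{KL}}\!\bigl(p(\cdot \mid \theta^*) \,\|\, p(\cdot)\bigr)
    \;\ge\; \log \pi_0(\theta^*),
\]
% since the Kullback-Leibler divergence is nonnegative (Gibbs' inequality).
```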
We show that the probability that Bayesian posteriors assign to the outcome distributions that do not "best fit" the empirical distribution in terms of Kullback–Leibler divergence converges to zero at a uniform and exponential rate, even when the prior does not have full support. This extends...
Persistent link: https://www.econbiz.de/10013235501
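A hedged sketch of where such a uniform exponential rate can come from, worked out in the special i.i.d. categorical case rather than the paper's general setting: the likelihood factors through the empirical distribution, so the posterior odds against a best-fitting model decay geometrically in the KL gap.

```latex
% With empirical distribution \hat{p}_n, the i.i.d. categorical likelihood
% of a model q factors through \hat{p}_n:
\[
  \prod_{i=1}^{n} q(x_i)
    = \exp\!\Bigl( -n \bigl[ H(\hat{p}_n)
        + D_{\mathrm{KL}}(\hat{p}_n \,\|\, q) \bigr] \Bigr),
\]
% so the posterior odds against a best-fitting model q^* are
\[
  \frac{\pi_n(q)}{\pi_n(q^*)}
    = \frac{\pi_0(q)}{\pi_0(q^*)}
      \exp\!\Bigl( -n \bigl[ D_{\mathrm{KL}}(\hat{p}_n \,\|\, q)
                           - D_{\mathrm{KL}}(\hat{p}_n \,\|\, q^*) \bigr] \Bigr),
\]
% and any model whose KL fit is worse by at least \varepsilon loses
% posterior mass at rate e^{-n\varepsilon}.
```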
This paper discusses the implications of learning theory for the analysis of Bayesian games. One goal is to illuminate the issues that arise when modeling situations where players learn about the distribution of Nature's move as well as about their opponents' strategies. A second...
Persistent link: https://www.econbiz.de/10014126917