We study learning and information acquisition by a Bayesian agent whose prior belief is misspecified in the sense that it assigns probability 0 to the true state of the world. At each instant, the agent takes an action and observes the corresponding payoff, which is the sum of a fixed but...
Persistent link: https://www.econbiz.de/10012010010
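A minimal sketch (not the paper's model) of the kind of misspecification described in the entry above: the agent's prior puts zero mass on the true payoff mean, yet Bayesian updating on noisy payoffs still concentrates on the support point closest to the truth. The Gaussian payoff structure and all parameter values are illustrative assumptions.

```python
# Illustrative sketch, not the paper's model: a Bayesian agent whose prior
# assigns zero probability to the true payoff mean updates on noisy payoffs.
import numpy as np

rng = np.random.default_rng(0)

true_mean = 0.3                    # true fixed payoff component (outside the prior's support)
support = np.array([0.0, 1.0])     # states the misspecified prior considers possible
log_post = np.log([0.5, 0.5])      # uniform prior over the two candidate states
sigma = 1.0                        # std dev of the payoff noise

for _ in range(2000):
    y = true_mean + sigma * rng.standard_normal()          # observed payoff
    log_post += -0.5 * ((y - support) / sigma) ** 2        # Gaussian log-likelihood update

posterior = np.exp(log_post - log_post.max())
posterior /= posterior.sum()
print(dict(zip(support, posterior.round(4))))
# The posterior piles onto 0.0, the candidate mean closest to the true mean 0.3
# (equivalently, the one with the smaller Kullback-Leibler divergence).
```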
We show that Bayesian posteriors concentrate on the outcome distributions that approximately minimize the Kullback-Leibler divergence from the empirical distribution, uniformly over sample paths, even when the prior does not have full support. This generalizes Diaconis and Freedman's (1990)...
Persistent link: https://www.econbiz.de/10014536861
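The concentration result described in the entry above can be illustrated numerically under simple finite-support assumptions: with a handful of candidate outcome distributions, none of which equals the truth, the posterior mass ends up on the candidate with the smallest Kullback-Leibler divergence from the empirical distribution. The candidates and sample size below are hypothetical.

```python
# Hypothetical finite-outcome illustration: candidate distributions, none of
# which equals the truth, and a Bayesian posterior that ends up on the candidate
# closest in KL divergence to the empirical distribution of the sample.
import numpy as np

rng = np.random.default_rng(1)

true_p = np.array([0.5, 0.3, 0.2])                    # true outcome distribution
candidates = np.array([[0.6, 0.2, 0.2],               # prior support (truth excluded)
                       [0.4, 0.4, 0.2],
                       [0.2, 0.3, 0.5]])
prior = np.full(len(candidates), 1 / len(candidates))
n = 5000

x = rng.choice(3, size=n, p=true_p)
counts = np.bincount(x, minlength=3)
empirical = counts / n

# Posterior over candidates: prior times the multinomial likelihood of the counts
log_post = np.log(prior) + counts @ np.log(candidates).T
posterior = np.exp(log_post - log_post.max())
posterior /= posterior.sum()

# KL(empirical || candidate) for each candidate
kl = (empirical * np.log(empirical / candidates)).sum(axis=1)

print("posterior:", posterior.round(4))
print("KL to empirical:", kl.round(4))
# The candidate with the smallest KL divergence from the empirical
# distribution carries essentially all of the posterior mass.
```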
We show that the probability that Bayesian posteriors assign to the outcome distributions that do not "best fit" the empirical distribution in terms of Kullback-Leibler divergence converges to zero at a uniform and exponential rate, even when the prior does not have full support. This extends...
Persistent link: https://www.econbiz.de/10013235501
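A rough numerical check of the exponential-rate statement above, under the same kind of illustrative finite-support assumptions: the log of the posterior mass on the candidates that do not best fit the data falls roughly linearly in the sample size, i.e. the mass itself vanishes at an exponential rate.

```python
# Illustrative check of the exponential rate: track the (log) posterior mass on
# the candidates that do not best fit the data as the sample grows.
import numpy as np

rng = np.random.default_rng(2)

true_p = np.array([0.5, 0.3, 0.2])
candidates = np.array([[0.6, 0.2, 0.2],
                       [0.4, 0.4, 0.2],
                       [0.2, 0.3, 0.5]])
log_lik = np.log(candidates)

def log_mass_off_best(lp):
    """Log of the posterior mass on every candidate except the best-fitting one."""
    z = lp.max() + np.log(np.exp(lp - lp.max()).sum())          # log normalizer
    others = np.delete(lp, lp.argmax())
    return others.max() + np.log(np.exp(others - others.max()).sum()) - z

x = rng.choice(3, size=20000, p=true_p)
for n in (2000, 5000, 10000, 20000):
    counts = np.bincount(x[:n], minlength=3)
    lp = counts @ log_lik.T          # flat prior; log posterior up to a constant
    print(f"n={n:6d}  log posterior mass off the best fit: {log_mass_off_best(lp):9.1f}")
# The printed log-mass decreases roughly linearly in n.
```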
Persistent link: https://www.econbiz.de/10011753039
We study learning and information acquisition by a Bayesian agent whose prior belief is misspecified in the sense that it assigns probability 0 to the true state of the world. At each instant, the agent takes an action and observes the corresponding payoff, which is the sum of a fixed but...
Persistent link: https://www.econbiz.de/10011744140
Persistent link: https://www.econbiz.de/10011953614
We model the joint distribution of choice probabilities and decision times in binary choice tasks as the solution to a problem of optimal sequential sampling, where the agent is uncertain of the utility of each action and pays a constant cost per unit time for gathering information. In the...
Persistent link: https://www.econbiz.de/10011272682
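An illustrative simulation of the sequential-sampling setup described in the entry above, with a fixed symmetric stopping boundary standing in for the optimal policy (the paper derives the optimal, generally time-varying, boundary; the constant boundary and all parameter values here are assumptions). It records the resulting choices and decision times.

```python
# Illustrative simulation only: a drift-diffusion evidence process with a fixed
# symmetric stopping boundary; the optimal boundary in the paper is generally
# time-varying, and every parameter value here is an assumption.
import numpy as np

rng = np.random.default_rng(3)

delta = 0.2        # true utility difference between the two actions (drift)
sigma = 1.0        # noise in the accumulated evidence
barrier = 1.5      # stopping boundary on the accumulated evidence
dt = 0.01
n_trials = 5000

choices = np.empty(n_trials, dtype=int)
times = np.empty(n_trials)

for i in range(n_trials):
    z, t = 0.0, 0.0
    while abs(z) < barrier:
        z += delta * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choices[i] = int(z > 0)      # 1 = chose the truly better action
    times[i] = t

print(f"P(correct choice) = {choices.mean():.3f}")
print(f"mean decision time = {times.mean():.2f}")
# With a constant boundary the choice probability follows the familiar logistic
# form 1 / (1 + exp(-2 * delta * barrier / sigma**2)).
```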
We study learning and information acquisition by a Bayesian agent who is misspecified in the sense that his prior belief assigns probability zero to the true state of the world. In our model, at each instant the agent takes an action and observes the corresponding payoff, which is the sum of the...
Persistent link: https://www.econbiz.de/10012999380
We model the joint distribution of choice probabilities and decision times in binary decisions as the solution to a problem of optimal sequential sampling, where the agent is uncertain of the utility of each action and pays a constant cost per unit time for gathering information. We show that...
Persistent link: https://www.econbiz.de/10014135927
We show that Bayesian posteriors concentrate on the outcome distributions that approximately minimize the Kullback–Leibler divergence from the empirical distribution, uniformly over sample paths, even when the prior does not have full support. This generalizes Diaconis and Freedman's (1990)...
Persistent link: https://www.econbiz.de/10014440089