In this paper we show that the approximation error of the optimal policy function in the stochastic dynamic programming problem, using the policies defined by the Bellman contraction method, is lower than a constant (which depends on the modulus of strong concavity of the one-period return...
Persistent link: https://www.econbiz.de/10005190025
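The abstract refers to the standard contraction property of the Bellman operator, under which successive approximations of the value function come with a computable error bound. As a minimal sketch (not the paper's method — the MDP below and all its numbers are hypothetical, and the bound shown is the usual a posteriori contraction bound γ/(1−γ)·‖v_{n+1} − v_n‖, not the policy-error constant derived in the paper):

```python
import numpy as np

# Hypothetical 2-state, 2-action Markov decision problem (illustrative numbers).
gamma = 0.9                                  # discount factor
P = [np.array([[0.8, 0.2], [0.3, 0.7]]),     # P[a][s, s'] transition matrices
     np.array([[0.5, 0.5], [0.9, 0.1]])]
R = [np.array([1.0, 0.0]),                   # R[a][s] one-period returns
     np.array([0.5, 0.8])]

def bellman(v):
    """One application of the Bellman operator T: a gamma-contraction in sup norm."""
    return np.max([R[a] + gamma * P[a] @ v for a in range(2)], axis=0)

v = np.zeros(2)
for n in range(500):
    v_new = bellman(v)
    # A posteriori contraction bound on the value-function error:
    # ||v_new - v*||_inf <= gamma / (1 - gamma) * ||v_new - v||_inf
    err_bound = gamma / (1 - gamma) * np.max(np.abs(v_new - v))
    v = v_new
    if err_bound < 1e-8:
        break
```

After the loop, `v` is within `err_bound` of the true fixed point in the sup norm, so `bellman(v)` is approximately `v` itself.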
Persistent link: https://www.econbiz.de/10005132805