Showing 1 - 10 of 10
This work concerns finite-state Markov decision chains endowed with the long-run average reward criterion. Assuming that the optimality equation has a solution, it is shown that a nearly optimal stationary policy, as well as an approximation to the optimal average reward within a specified error,...
Persistent link: https://www.econbiz.de/10010759438
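For reference, the optimality equation mentioned in the entry above has a standard form in the finite-state average-reward setting; the notation below (state space S, admissible actions A(x), reward r, transition law p, gain g, bias h) is assumed here rather than taken from the paper.

\[ g + h(x) = \max_{a \in A(x)} \Big[ r(x,a) + \sum_{y \in S} p(y \mid x,a)\, h(y) \Big], \qquad x \in S. \]

When a pair (g, h) solving this equation exists, g is the optimal average reward, and a stationary policy attaining the maximum on the right-hand side within a tolerance ε is nearly optimal, which is the kind of conclusion the entry refers to.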
In this paper we extend standard dynamic programming results for the risk-sensitive optimal control of discrete-time Markov chains to a new class of models. The state space is still finite, but the assumptions about the Markov transition matrix are now much less restrictive. Our results are then...
Persistent link: https://www.econbiz.de/10010759262
In this paper we use stochastic control techniques to study an optimal investment problem of an insurer that has the opportunity to invest in a risky asset. A closed-form solution is given when the risk preferences are exponential, as well as an estimate of the ruin probability when...
Persistent link: https://www.econbiz.de/10010759514
This work concerns controlled Markov chains with denumerable state space and discrete time parameter. The reward function is assumed to be ≤ 0, and the performance of a control policy is measured by the expected total-reward criterion. Within this context, sufficient conditions are given so that...
Persistent link: https://www.econbiz.de/10010847483
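As background for the entry above, the expected total-reward criterion with nonpositive rewards can be written as follows; the symbols (policy π, initial state x, reward function r, state-action process (X_t, A_t)) are assumed notation, not the paper's own.

\[ V(\pi, x) = E_x^{\pi}\!\left[ \sum_{t=0}^{\infty} r(X_t, A_t) \right], \]

which is well defined with values in [-∞, 0] precisely because the rewards are ≤ 0.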
This note concerns controlled Markov chains on a denumerable state space. The performance of a control policy is measured by the risk-sensitive average criterion, and it is assumed that (a) the simultaneous Doeblin condition holds, and (b) the system is communicating under the action of each...
Persistent link: https://www.econbiz.de/10010759218
This note concerns Markov decision processes on a discrete state space. It is supposed that the reward function is nonnegative, and that the decision maker has a nonnull constant risk sensitivity, so that random rewards are graded via the expectation of an exponential utility function. The...
Persistent link: https://www.econbiz.de/10010759334
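A brief sketch of the grading just mentioned: with a nonnull risk-sensitivity coefficient λ, a random reward Y is typically assessed through the expected exponential utility or, equivalently, its certainty equivalent; the notation here is assumed, not taken from the note.

\[ \mathcal{E}_\lambda(Y) = \frac{1}{\lambda} \log E\!\left[ e^{\lambda Y} \right]. \]

Roughly, λ < 0 corresponds to risk aversion toward rewards and λ > 0 to risk-seeking behavior, and the criterion approaches the plain expectation E[Y] as λ → 0.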
This note concerns discrete-time Markov decision processes with denumerable state space. A control policy is graded by the long-run expected average reward criterion, and the main feature of the model is that the reward function and the transition law depend on an unknown parameter. Besides...
Persistent link: https://www.econbiz.de/10010759346
This work is concerned with controlled Markov chains with finite state and action spaces. It is assumed that the decision maker has an arbitrary but constant risk sensitivity coefficient, and that the performance of a control policy is measured by the long-run average cost criterion. Within this...
Persistent link: https://www.econbiz.de/10010759540
We study controlled Markov chains with denumerable state space and bounded costs per stage. A (long-run) risk-sensitive average cost criterion, associated with an exponential utility function with a constant risk sensitivity coefficient, is used as a performance measure. The main assumption on the...
Persistent link: https://www.econbiz.de/10010759565
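To make the performance measure in the entry above concrete, the long-run risk-sensitive average cost associated with a constant risk-sensitivity coefficient λ ≠ 0 is commonly defined as below; the cost function C, policy π, initial state x, and process (X_t, A_t) are assumed notation.

\[ J_\lambda(\pi, x) = \limsup_{n \to \infty} \frac{1}{\lambda n} \log E_x^{\pi}\!\left[ \exp\!\left( \lambda \sum_{t=0}^{n-1} C(X_t, A_t) \right) \right]. \]

For costs, λ > 0 models risk aversion, and the boundedness of the costs per stage keeps each expectation finite.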
This work concerns discrete-time Markov decision processes with finite state space and bounded costs per stage. The decision maker ranks random costs via the expectation of the utility function associated with a constant risk sensitivity coefficient, and the performance of a control policy is...
Persistent link: https://www.econbiz.de/10010759568