An asymptotically optimal strategy for constrained multi-armed bandit problems
Year of publication: 2020
Authors: Chang, Hyeong Soo
Subject: Multi-armed bandit | Constrained stochastic optimization | Simulation optimization | Constrained Markov decision process | Theory | Mathematical programming | Simulation | Decision | Stochastic process | Markov chain | Dynamic programming | Probability theory
Related items:
- Rectangular sets of probability measures / Shapiro, Alexander (2016)
- Penalty-based algorithms for the stochastic obstacle scene problem / Aksakalli, Vural (2014)
- Distorted probability operator for dynamic portfolio optimization in times of socio-economic crisis / Uğurlu, Kerem (2023)
Further publications by Chang, Hyeong Soo:
- Simulation based algorithms for Markov decision processes / Chang, Hyeong Soo (2007)
- Multi-policy iteration with a distributed voting / Chang, Hyeong Soo (2004)