Simchi-Levi, David; Wang, Chonghuan - 2022
The multi-armed bandit framework is well known for its efficiency in online decision-making, in the sense of minimizing the loss of participants' welfare during experiments (i.e., the regret). In clinical trials and many other scenarios, the statistical power of inferring the treatment effects (i.e., the...