Evaluating the Precision of Estimators of Quantile-Based Risk Measures
This paper examines the precision of estimators of Quantile-Based Risk Measures (Value at Risk, Expected Shortfall, and spectral risk measures). It first addresses the question of how to estimate the precision of these estimators, and proposes a Monte Carlo method that is free of some of the limitations of existing approaches. It then investigates the distribution of risk estimators, and presents simulation results suggesting that the common practice of relying on asymptotic normality results might be unreliable at the sample sizes typically available to practitioners. Finally, it investigates the relationship between the precision of different risk estimators and the distribution of the underlying losses (or returns), and draws a number of useful conclusions.
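The abstract does not specify the paper's Monte Carlo method, so the following minimal Python sketch only illustrates the general idea it alludes to: gauging an estimator's precision by repeatedly resampling losses from an assumed distribution and summarising the spread of the resulting estimates. The loss distribution (Student-t), the sample size, the confidence level, and the function names (`var_estimate`, `es_estimate`, `mc_precision`) are illustrative assumptions, not the paper's choices.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def var_estimate(losses, alpha=0.99):
    # Historical-simulation VaR: the empirical alpha-quantile of the losses.
    return np.quantile(losses, alpha)

def es_estimate(losses, alpha=0.99):
    # Expected Shortfall: average of the losses at or beyond the VaR threshold.
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def mc_precision(estimator, sample_size=1000, trials=5000, df=5, alpha=0.99):
    """Monte Carlo assessment of an estimator's precision: draw many samples
    from an assumed loss distribution (here Student-t with `df` degrees of
    freedom), apply the estimator to each, and report the mean and standard
    deviation of the resulting estimates."""
    estimates = np.empty(trials)
    for i in range(trials):
        losses = rng.standard_t(df, size=sample_size)
        estimates[i] = estimator(losses, alpha)
    return estimates.mean(), estimates.std(ddof=1)

if __name__ == "__main__":
    for name, est in [("VaR", var_estimate), ("ES", es_estimate)]:
        mean, se = mc_precision(est)
        print(f"{name} (99%): mean estimate {mean:.3f}, "
              f"Monte Carlo standard error {se:.3f}")
```

The distribution of the estimates across trials can also be inspected directly (e.g. compared with a fitted normal), which is the kind of check behind the abstract's caution about relying on asymptotic normality at realistic sample sizes.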