Testing the Value of Probability Forecasts for Calibrated Combining
We combine the probability forecasts of real GDP declines from the U.S. Survey of Professional Forecasters, after trimming the forecasts that do not have "value" in the sense of Merton (1981). For this purpose, we propose a new test to evaluate probability forecasts that does not require converting the probabilities to binary forecasts before testing. The test accommodates serial correlation and skewness in the forecasts, and is implemented using a circular block bootstrap procedure. We find that the number of forecasters making valuable forecasts decreases sharply as the forecast horizon increases. The beta-transformed linear pool, based only on the valuable individual forecasts, is shown to outperform the simple average at all horizons on a number of performance measures, including calibration and sharpness.
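The combining scheme named in the abstract can be sketched as follows. A beta-transformed linear pool first takes a weighted average of the individual probability forecasts and then recalibrates it through a Beta CDF. This is a minimal illustration only: the Beta(2, 2) calibration map (whose CDF has the closed form 3x² − 2x³), the equal weights, and the example probabilities are all assumptions for demonstration; in the paper the beta parameters would be estimated from forecast-outcome data, and only forecasters passing the value test would enter the pool.

```python
def beta22_cdf(x: float) -> float:
    """Closed-form CDF of the Beta(2, 2) distribution: F(x) = 3x^2 - 2x^3 on [0, 1].

    Chosen here only because it avoids external dependencies; the actual
    calibration parameters would be estimated, not fixed a priori.
    """
    return 3 * x ** 2 - 2 * x ** 3


def beta_linear_pool(probs, weights=None):
    """Beta-transformed linear pool: Beta CDF applied to the weighted mean.

    probs   -- individual probability forecasts of the event (GDP decline)
    weights -- combining weights; defaults to a simple average
    """
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)  # equal weights
    linear_pool = sum(w * p for w, p in zip(weights, probs))
    return beta22_cdf(linear_pool)


# Three hypothetical surviving forecasters' probabilities of a decline:
print(beta_linear_pool([0.2, 0.4, 0.6]))  # linear pool 0.4 -> 0.352
```

Note how the transform pushes the pooled probability of 0.4 down to 0.352: with a suitably estimated Beta map, this recalibration is what lets the combined forecast beat the simple average on calibration while preserving sharpness.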
Year of publication: 2013
Authors: Lahiri, Kajal; Peng, Huaming; Zhao, Yongchen
Institution: University at Albany, SUNY, Department of Economics
Availability: freely available
Similar items by person:
- Machine Learning and Forecast Combination in Incomplete Panels, Lahiri, Kajal (2013)
- The yield spread puzzle and the information content of SPF forecasts, Lahiri, Kajal (2012)
- Quantifying Heterogeneous Survey Expectations: The Carlson-Parkin Method Revisited, Lahiri, Kajal (2013)
- More ...