Off-Training-Set Error for the Gibbs and the Bayes Optimal Generalizers
In this paper we analyze the average off-training-set behavior of the Bayes-optimal and Gibbs learning algorithms. We do this by exploiting the concept of refinement, a relation between probability distributions. For non-uniform sampling distributions, the expected off-training-set error of both learning algorithms can increase with training set size. However, we show that for uniform sampling and either algorithm, the expected error is a non-increasing function of training set size. For uniform sampling distributions, we also characterize the priors for which the expected error of the Bayes-optimal algorithm stays constant. In addition, we show that when the target function is fixed, expected off-training-set error can increase with training set size if and only if the expected error averaged over all targets decreases with training set size. Our results hold for arbitrary noise and arbitrary loss functions.
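To make the comparison concrete, here is a minimal Monte Carlo sketch (not from the paper; every name and parameter choice below is an illustrative assumption) that estimates the expected off-training-set zero-one error of a Gibbs learner and a Bayes-optimal learner on a tiny noise-free Boolean domain, with a uniform sampling distribution and a hypothetical non-uniform prior over target functions:

```python
import itertools
import random
from collections import defaultdict

# Toy setup (an assumption for this sketch, not the paper's formal framework):
# binary functions on a small input space, noiseless labels, zero-one loss,
# uniform sampling distribution over inputs.

N_INPUTS = 4                       # |X| = 4, so there are 2^4 = 16 target functions
INPUTS = list(range(N_INPUTS))
FUNCS = list(itertools.product([0, 1], repeat=N_INPUTS))  # all Boolean targets

def prior(f):
    """Hypothetical non-uniform prior favoring functions with few 1s."""
    return 0.5 ** sum(f)

def posterior(train):
    """Posterior over targets consistent with the noise-free training set."""
    weights = {f: (prior(f) if all(f[x] == y for x, y in train) else 0.0) for f in FUNCS}
    total = sum(weights.values())
    return {f: w / total for f, w in weights.items()}

def ots_error(predict, target, train_xs):
    """Zero-one loss averaged over off-training-set inputs (uniform sampling)."""
    off = [x for x in INPUTS if x not in train_xs]
    if not off:
        return 0.0
    return sum(predict(x) != target[x] for x in off) / len(off)

def experiment(m, n_trials=2000, rng=random.Random(0)):
    gibbs_err, bayes_err = 0.0, 0.0
    for _ in range(n_trials):
        target = rng.choices(FUNCS, weights=[prior(f) for f in FUNCS])[0]
        train = [(x, target[x]) for x in rng.choices(INPUTS, k=m)]
        train_xs = {x for x, _ in train}
        post = posterior(train)
        # Gibbs learner: sample a single hypothesis from the posterior.
        h = rng.choices(list(post), weights=list(post.values()))[0]
        # Bayes-optimal learner: predict the posterior-majority label at each input.
        p1 = defaultdict(float)
        for f, w in post.items():
            for x in INPUTS:
                p1[x] += w * f[x]
        gibbs_err += ots_error(lambda x: h[x], target, train_xs)
        bayes_err += ots_error(lambda x: int(p1[x] > 0.5), target, train_xs)
    return gibbs_err / n_trials, bayes_err / n_trials

for m in range(5):
    g, b = experiment(m)
    print(f"m={m}  Gibbs OTS error ~ {g:.3f}   Bayes-optimal OTS error ~ {b:.3f}")
```

Under these assumptions the printed errors should be non-increasing in the training set size m, consistent with the uniform-sampling result stated above; replacing `prior` with the uniform prior over all 16 functions keeps both errors near 0.5 for every m, an example of a prior for which the expected error stays constant.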
Year of publication: 1995-02
Authors: Grossman, Tal; Knill, Emanuel; Wolpert, David
Institutions: Santa Fe Institute
Similar items by person
- Use of Bad Training Data for Better Predictions / Grossman, Tal (1995)
- Neural Net Representations of Empirical Protein Potentials / Grossman, Tal (1996)
- Noise Sensitivity Signatures for Model Selection / Grossman, Tal (1995)