Showing 1 - 10 of 19
This paper proves that one cannot build a computer which can, for any physical system, take the specification of that system's state as input and then correctly predict its future state before that state actually occurs. Loosely speaking, this means that one cannot build a physical computer...
Persistent link: https://www.econbiz.de/10005837689
This paper presents two Bayesian alternatives to the chi-squared test for determining whether two categorical data sets were generated from the same underlying distribution. It then discusses such alternatives for the Kolmogorov-Smirnov test, which is often used when the data sets consist...
Persistent link: https://www.econbiz.de/10005790642
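One common Bayesian way to compare two categorical data sets, sketched below as an illustration (not necessarily either of the paper's two alternatives), is a Bayes factor between "both sets share one distribution" and "each set has its own distribution", with symmetric Dirichlet priors on the cell probabilities:

    import numpy as np
    from scipy.special import gammaln

    def log_dirichlet_multinomial(counts, alpha):
        """Log marginal likelihood of a categorical count vector under a
        symmetric Dirichlet(alpha) prior on the cell probabilities."""
        counts = np.asarray(counts, dtype=float)
        k, n = counts.size, counts.sum()
        return (gammaln(k * alpha) - gammaln(k * alpha + n)
                + np.sum(gammaln(counts + alpha) - gammaln(alpha)))

    def log_bayes_factor_same_vs_different(counts_a, counts_b, alpha=1.0):
        """log P(data | one shared distribution) - log P(data | two distributions)."""
        same = log_dirichlet_multinomial(np.add(counts_a, counts_b), alpha)
        diff = (log_dirichlet_multinomial(counts_a, alpha)
                + log_dirichlet_multinomial(counts_b, alpha))
        return same - diff

    # Example: two 4-category count vectors; a positive value favours a shared distribution.
    print(log_bayes_factor_same_vs_different([10, 5, 3, 2], [9, 6, 4, 1]))

The Dirichlet concentration alpha is a free choice in this sketch; the qualitative conclusion can depend on it.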
This paper presents a Bayesian "correction" to the familiar quadratic loss bias-plus-variance formula. It then discusses some other loss-function-specific aspects of supervised learning. It ends by presenting a version of the bias-plus-variance formula appropriate for log loss.
Persistent link: https://www.econbiz.de/10005790667
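For reference, the familiar quadratic-loss bias-plus-variance formula that the paper's Bayesian correction modifies can be written, at a fixed input x and with \bar y(x) \equiv \mathbb{E}[y \mid x], as

    \mathbb{E}_{D,\,y}\big[(y - f_D(x))^2\big]
      = \mathbb{E}\big[(y - \bar y(x))^2\big]
      + \big(\bar y(x) - \mathbb{E}_D[f_D(x)]\big)^2
      + \mathbb{E}_D\big[(f_D(x) - \mathbb{E}_D[f_D(x)])^2\big],

i.e. intrinsic noise plus squared bias plus the variance of the learned predictor f_D over training sets D.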
In bagging [Bre94a] one uses bootstrap replicates of the training set [Efr79, ET93] to try to improve a learning algorithm's performance. The computational requirements for estimating the resultant generalization error on a test set by means of cross-validation are often prohibitive; for...
Persistent link: https://www.econbiz.de/10005790753
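A standard way to sidestep that cross-validation cost, shown below as a minimal sketch (not claimed to be the estimator the paper proposes), is to reuse the bootstrap replicates themselves: each point's error is measured only with the replicates that happened to leave it out, i.e. "out-of-bag" estimation.

    import numpy as np

    rng = np.random.default_rng(0)

    def fit_linear(X, y):
        # least-squares linear base learner with an intercept
        A = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef

    def predict_linear(coef, X):
        return np.column_stack([np.ones(len(X)), X]) @ coef

    # toy regression data
    X = rng.uniform(-1, 1, size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.3 * rng.normal(size=200)

    n, B = len(X), 50
    oob_sum = np.zeros(n)      # running sum of out-of-bag predictions per point
    oob_cnt = np.zeros(n)      # number of replicates that left each point out

    for _ in range(B):
        idx = rng.integers(0, n, size=n)          # one bootstrap replicate
        coef = fit_linear(X[idx], y[idx])
        out = np.setdiff1d(np.arange(n), idx)     # points absent from this replicate
        oob_sum[out] += predict_linear(coef, X[out])
        oob_cnt[out] += 1

    mask = oob_cnt > 0
    oob_mse = np.mean((y[mask] - oob_sum[mask] / oob_cnt[mask]) ** 2)
    print("out-of-bag MSE estimate:", oob_mse)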
For any real-world generalization problem, there are always many generalizers which could be applied to the problem. This chapter discusses some algorithmic techniques for dealing with this multiplicity of possible generalizers. All of these techniques rely on partitioning the provided learning...
Persistent link: https://www.econbiz.de/10005790777
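Stacked generalization is one well-known technique of this partition-based kind; the sketch below is only an illustration (toy data and simple polynomial level-0 generalizers chosen for brevity) of how held-out predictions on partitions of the learning set become the training data for a level-1 combiner.

    import numpy as np

    rng = np.random.default_rng(2)

    # toy regression learning set
    x = rng.uniform(-1, 1, 100)
    y = np.sin(3 * x) + 0.2 * rng.normal(size=100)

    def poly_generalizer(degree):
        """A level-0 generalizer: fit a polynomial of the given degree."""
        def fit(xs, ys):
            coefs = np.polyfit(xs, ys, degree)
            return lambda xq: np.polyval(coefs, xq)
        return fit

    level0 = [poly_generalizer(d) for d in (1, 3, 7)]   # the multiplicity of generalizers

    # Level-1 data: each generalizer's held-out predictions on the learning set.
    k = 5
    folds = np.array_split(rng.permutation(len(x)), k)
    Z = np.zeros((len(x), len(level0)))
    for held_out in folds:
        train = np.setdiff1d(np.arange(len(x)), held_out)
        for j, fit in enumerate(level0):
            model = fit(x[train], y[train])
            Z[held_out, j] = model(x[held_out])

    # Level-1 generalizer: a least-squares linear combination of the level-0 outputs.
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)

    # Final predictor: refit the level-0 generalizers on all data, combine with w.
    full_models = [fit(x, y) for fit in level0]
    def stacked_predict(xq):
        return np.column_stack([m(xq) for m in full_models]) @ w

    print("weights on the level-0 generalizers:", w)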
We explore the 2-armed bandit with Gaussian payoffs as a theoretical model for optimization. We formulate the problem from a Bayesian perspective, and provide the optimal strategy for both 1 and 2 pulls. We present regions of parameter space where a greedy strategy is provably optimal. We also...
Persistent link: https://www.econbiz.de/10005790787
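For a single remaining pull the Bayes-optimal action is simply the arm with the larger posterior mean payoff; the sketch below assumes Gaussian priors on the arms' mean payoffs and a known payoff variance (the numbers are placeholders), and says nothing about the subtler 2-pull case analysed in the paper.

    import numpy as np

    def posterior_mean(prior_mean, prior_var, obs, obs_var):
        """Posterior mean of an arm's payoff mean: Gaussian prior on the mean,
        Gaussian payoffs with known variance obs_var."""
        obs = np.asarray(obs, dtype=float)
        n = obs.size
        if n == 0:
            return prior_mean
        precision = 1.0 / prior_var + n / obs_var
        return (prior_mean / prior_var + obs.sum() / obs_var) / precision

    # prior beliefs about the two arms' mean payoffs, plus any payoffs seen so far
    arm_priors = [(0.0, 1.0), (0.5, 4.0)]      # (prior mean, prior variance) per arm
    arm_obs    = [[0.8, 1.1], []]              # observed payoffs per arm
    payoff_var = 1.0                           # known payoff (sampling) variance

    means = [posterior_mean(m, v, obs, payoff_var)
             for (m, v), obs in zip(arm_priors, arm_obs)]
    print("posterior means:", means, "-> pull arm", int(np.argmax(means)))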
We show that all algorithms that search for an extremum of a cost function perform exactly the same when averaged over all possible cost functions. In particular, if algorithm A outperforms algorithm B on some cost functions, then loosely speaking there must exist exactly as many other...
Persistent link: https://www.econbiz.de/10005790828
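The averaged-over-all-cost-functions statement can be checked by brute force on a toy search space; the sketch below enumerates every cost function on three points and shows that two different deterministic, non-revisiting search orders achieve the same average best-found value (a numerical illustration, not the paper's proof).

    import itertools
    import numpy as np

    # Every cost function from a 3-point search space to the values {0, 1, 2}.
    X = [0, 1, 2]
    all_costs = list(itertools.product([0, 1, 2], repeat=len(X)))   # 27 functions

    def run(order, cost, m=2):
        """Evaluate the first m points of `order` and return the lowest cost seen."""
        return min(cost[x] for x in order[:m])

    alg_A = [0, 1, 2]          # one fixed, non-revisiting search order
    alg_B = [2, 0, 1]          # a different fixed order

    avg_A = np.mean([run(alg_A, c) for c in all_costs])
    avg_B = np.mean([run(alg_B, c) for c in all_costs])
    print(avg_A, avg_B)        # identical: averaged over all cost functions, neither wins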
As defined in MacLennan (1987), a "field computer" is a (spatial) continuum-limit neural net. This paper investigates field computers whose dynamics is also continuum-limit, being governed by a purely linear integro-differential equation. Such systems are motivated both as a means of...
Persistent link: https://www.econbiz.de/10005790877
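One concrete form such purely linear continuum-limit dynamics can take (an assumed example for illustration; the paper's setting may be more general) is

    \frac{\partial \phi(x,t)}{\partial t} \;=\; \int K(x,x')\,\phi(x',t)\,dx',

where \phi(x,t) is the continuum analogue of the activations and the kernel K plays the role of the weight matrix.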
This paper proves that for no prior probability distribution does the bootstrap (BS) distribution equal the predictive distribution, for all Bernoulli trials of some fixed size. It then proves that for no prior will the BS give the same first two moments as the predictive distribution for all...
Persistent link: https://www.econbiz.de/10005790883
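The two distributions in question are easy to compare numerically for a single data set and prior: with k successes in n Bernoulli trials, the bootstrap success count in a size-n resample is Binomial(n, k/n), while the predictive success count in n future trials under a Beta(a, b) prior is Beta-Binomial(n, a + k, b + n - k). The sketch below (a uniform Beta(1,1) prior chosen only as an example) exhibits the mismatch in the first two moments.

    import numpy as np
    from scipy.special import comb, betaln

    def bootstrap_pmf(n, k):
        """Bootstrap distribution of the success count in a size-n resample
        of n Bernoulli trials that contained k successes: Binomial(n, k/n)."""
        p = k / n
        j = np.arange(n + 1)
        return comb(n, j) * p**j * (1 - p)**(n - j)

    def predictive_pmf(n, k, a, b):
        """Posterior predictive success count in n future trials under a
        Beta(a, b) prior: Beta-Binomial(n, a + k, b + n - k)."""
        a_post, b_post = a + k, b + n - k
        j = np.arange(n + 1)
        return comb(n, j) * np.exp(betaln(j + a_post, n - j + b_post)
                                   - betaln(a_post, b_post))

    n, k = 10, 3
    j = np.arange(n + 1)
    for name, pmf in [("bootstrap", bootstrap_pmf(n, k)),
                      ("predictive", predictive_pmf(n, k, a=1.0, b=1.0))]:
        mean = (j * pmf).sum()
        var = ((j - mean) ** 2 * pmf).sum()
        print(f"{name:10s} mean={mean:.3f} var={var:.3f}")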
In supervised learning it is commonly believed that penalizing complex functions helps one avoid "overfitting" functions to data, and therefore improves generalization. It is also commonly believed that cross-validation is an effective way to choose amongst algorithms for fitting functions to...
Persistent link: https://www.econbiz.de/10005790885
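The two beliefs can at least be exercised on toy data; the sketch below picks a polynomial degree once with an AIC-style complexity penalty (an assumed stand-in for "penalizing complex functions", not the paper's criterion) and once by cross-validation, then checks each choice on fresh data.

    import numpy as np

    rng = np.random.default_rng(3)

    def make_data(n):
        x = rng.uniform(-1, 1, n)
        return x, np.sin(3 * x) + 0.3 * rng.normal(size=n)

    x_train, y_train = make_data(40)
    x_test, y_test = make_data(2000)      # fresh data to check what actually generalizes

    def train_mse(d):
        c = np.polyfit(x_train, y_train, d)
        return np.mean((y_train - np.polyval(c, x_train)) ** 2)

    def cv_mse(d, k=5):
        folds = np.array_split(rng.permutation(len(x_train)), k)
        errs = []
        for held in folds:
            tr = np.setdiff1d(np.arange(len(x_train)), held)
            c = np.polyfit(x_train[tr], y_train[tr], d)
            errs.append(np.mean((y_train[held] - np.polyval(c, x_train[held])) ** 2))
        return float(np.mean(errs))

    degrees = range(1, 10)
    n = len(x_train)
    penalized = {d: n * np.log(train_mse(d)) + 2 * (d + 1) for d in degrees}  # AIC-style
    crossval  = {d: cv_mse(d) for d in degrees}

    for name, scores in [("penalty", penalized), ("cross-validation", crossval)]:
        d = min(scores, key=scores.get)
        c = np.polyfit(x_train, y_train, d)
        test = np.mean((y_test - np.polyval(c, x_test)) ** 2)
        print(f"{name:17s} picks degree {d}, test MSE {test:.3f}")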