This paper proves that one cannot build a computer which can, for any physical system, take the specification of that system's state as input and then correctly predict its future state before that state actually occurs. Loosely speaking, this means that one cannot build a physical computer...
Persistent link: https://www.econbiz.de/10005837689
For systems usually characterized as complex/living/intelligent, the spatio-temporal patterns exhibited on different scales differ markedly from one another. (E.g., the biomass distribution of a human body looks very different depending on the spatial scale at which one examines that biomass.)...
Persistent link: https://www.econbiz.de/10005739991
The "evidence" procedure for setting hyperparameters is essentially the same as the techniques of ML-II and generalized maximum likelihood. Unlike those older techniques, however, the evidence procedure has been justified (and used) as an approximation to the hierarchical Bayesian calculation....
Persistent link: https://www.econbiz.de/10005739998
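As a concrete illustration of the evidence procedure in its most familiar setting, here is a minimal sketch for Bayesian linear regression, where the prior precision alpha is set by maximizing the log marginal likelihood on a grid. The model, data, and variable names are our own assumptions, not the paper's.

```python
# Evidence (ML-II) hyperparameter selection for Bayesian linear regression,
# assuming a Gaussian prior w ~ N(0, alpha^-1 I) and Gaussian noise of
# known precision beta. alpha is chosen to maximize the log evidence.
import numpy as np

def log_evidence(Phi, t, alpha, beta):
    """Log marginal likelihood ln p(t | alpha, beta)."""
    N, M = Phi.shape
    A = alpha * np.eye(M) + beta * Phi.T @ Phi        # posterior precision
    m = beta * np.linalg.solve(A, Phi.T @ t)          # posterior mean
    E = 0.5 * beta * np.sum((t - Phi @ m) ** 2) + 0.5 * alpha * m @ m
    return (0.5 * M * np.log(alpha) + 0.5 * N * np.log(beta)
            - E - 0.5 * np.linalg.slogdet(A)[1] - 0.5 * N * np.log(2 * np.pi))

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 30)
t = np.sin(np.pi * x) + rng.normal(0, 0.2, 30)
Phi = np.vander(x, 6)                                 # polynomial features
beta = 1 / 0.2 ** 2                                   # noise precision, assumed known
alphas = np.logspace(-4, 2, 100)
best = max(alphas, key=lambda a: log_evidence(Phi, t, a, beta))
print(f"evidence-optimal alpha ~ {best:.3g}")
```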
Part I: Bayes Estimators and the Shannon Entropy. This paper is the first of two on the problem of estimating a function of a probability distribution from a finite set of samples of that distribution. In this paper a Bayesian analysis of this problem is presented, the optimal properties of the...
Persistent link: https://www.econbiz.de/10005623641
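A minimal sketch of the Bayes (posterior-mean) estimator of Shannon entropy from counts, assuming a symmetric Dirichlet prior; the closed form below is the standard Dirichlet posterior expectation of H in nats, shown alongside the plug-in estimate for contrast. Naming is ours.

```python
import numpy as np
from scipy.special import digamma

def bayes_entropy(counts, a=1.0):
    """Posterior mean of H under a symmetric Dirichlet(a) prior, in nats."""
    ai = np.asarray(counts, dtype=float) + a   # Dirichlet posterior parameters
    A = ai.sum()
    return digamma(A + 1.0) - np.sum(ai / A * digamma(ai + 1.0))

counts = [12, 7, 3, 0, 1]
print(f"Bayes estimate : {bayes_entropy(counts):.4f} nats")
p = np.array(counts) / np.sum(counts)
print(f"Plug-in (MLE)  : {-(p[p > 0] * np.log(p[p > 0])).sum():.4f} nats")
```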
This paper presents two Bayesian alternatives to the chi-squared test for determining whether two categorical data sets were generated from the same underlying distribution. It then discusses such alternatives for the Kolmogorov-Smirnov test, which is often used when the data sets consist...
Persistent link: https://www.econbiz.de/10005790642
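The paper's exact estimators are not reproduced here, but one standard Bayesian construction for the categorical case compares Dirichlet-multinomial marginal likelihoods under "same distribution" and "different distributions" models. A sketch, assuming a symmetric Dirichlet(alpha) prior:

```python
import numpy as np
from scipy.special import gammaln

def log_multibeta(a):
    """Log of the multivariate beta function B(a)."""
    return np.sum(gammaln(a)) - gammaln(np.sum(a))

def log_bayes_factor(n1, n2, alpha=1.0):
    """log Bayes factor: 'same distribution' vs 'different distributions'."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    a = np.full_like(n1, alpha)
    return (log_multibeta(a + n1 + n2) + log_multibeta(a)
            - log_multibeta(a + n1) - log_multibeta(a + n2))

print(log_bayes_factor([30, 10, 5], [28, 12, 6]))   # > 0 favors "same"
print(log_bayes_factor([30, 10, 5], [5, 10, 30]))   # < 0 favors "different"
```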
This paper presents a Bayesian "correction" to the familiar quadratic loss bias-plus-variance formula. It then discusses some other loss-function-specific aspects of supervised learning. It ends by presenting a version of the bias-plus-variance formula appropriate for log loss.
Persistent link: https://www.econbiz.de/10005790667
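For reference, here is the familiar frequentist quadratic-loss decomposition that the paper corrects (expected loss = noise + bias^2 + variance), estimated by Monte Carlo over training sets; the target function, learner, and sizes are illustrative, and this is the standard formula rather than the paper's Bayesian version.

```python
import numpy as np

rng = np.random.default_rng(0)
f_star = lambda x: np.sin(3 * x)             # true regression function
sigma = 0.3                                  # noise standard deviation
x0 = 0.5                                     # fixed test input

def fit_and_predict(deg=3, n=20):
    """Train a degree-`deg` polynomial on a fresh noisy sample, predict at x0."""
    x = rng.uniform(-1, 1, n)
    y = f_star(x) + rng.normal(0, sigma, n)
    return np.polyval(np.polyfit(x, y, deg), x0)

preds = np.array([fit_and_predict() for _ in range(5000)])
bias2 = (preds.mean() - f_star(x0)) ** 2
var = preds.var()
print(f"noise={sigma**2:.4f}  bias^2={bias2:.4f}  variance={var:.4f}")
print(f"sum={sigma**2 + bias2 + var:.4f}")   # approximates the expected loss
```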
In bagging [Bre94a] one uses bootstrap replicates of the training set [Efr79, ET93] to try to improve a learning algorithm's performance. The computational requirements for estimating the resultant generalization error on a test set by means of cross-validation are often prohibitive; for...
Persistent link: https://www.econbiz.de/10005790753
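One well-known low-cost alternative to cross-validation in this setting is the out-of-bag estimate, where each training point is scored only by the bootstrap replicates that omitted it; the truncated abstract does not confirm this is the paper's exact proposal, so treat the sketch (with an illustrative polynomial base learner) as an assumption-laden illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, B = 100, 50                                   # sample size, bootstrap replicates
x = rng.uniform(-1, 1, n)
y = np.sin(3 * x) + rng.normal(0, 0.3, n)

oob_sum = np.zeros(n)                            # accumulated OOB predictions
oob_cnt = np.zeros(n)                            # how many replicates left i out
for _ in range(B):
    idx = rng.integers(0, n, n)                  # one bootstrap replicate
    coef = np.polyfit(x[idx], y[idx], 3)
    out = np.setdiff1d(np.arange(n), idx)        # points not drawn this round
    oob_sum[out] += np.polyval(coef, x[out])
    oob_cnt[out] += 1

mask = oob_cnt > 0
oob_mse = np.mean((y[mask] - oob_sum[mask] / oob_cnt[mask]) ** 2)
print(f"OOB estimate of the bagged model's MSE: {oob_mse:.4f}")
```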
For any real-world generalization problem, there are always many generalizers which could be applied to the problem. This chapter discusses some algorithmic techniques for dealing with this multiplicity of possible generalizers. All of these techniques rely on partitioning the provided learning...
Persistent link: https://www.econbiz.de/10005790777
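A minimal sketch of one such partition-based technique, plain k-fold cross-validation used to select among several candidate generalizers; the candidates (polynomial fits of different degrees) and the data are illustrative stand-ins, not the chapter's examples.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 60)
y = np.sin(3 * x) + rng.normal(0, 0.3, 60)
folds = np.array_split(rng.permutation(60), 5)   # partition the learning set

def cv_error(degree):
    """Mean held-out squared error of a degree-`degree` polynomial fit."""
    errs = []
    for test in folds:
        train = np.setdiff1d(np.arange(60), test)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((y[test] - np.polyval(coef, x[test])) ** 2))
    return np.mean(errs)

scores = {d: cv_error(d) for d in (1, 3, 5, 9)}
print(scores)
print("selected generalizer: degree", min(scores, key=scores.get))
```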
We explore the 2-armed bandit with Gaussian payoffs as a theoretical model for optimization. We formulate the problem from a Bayesian perspective, and provide the optimal strategy for both 1 and 2 pulls. We present regions of parameter space where a greedy strategy is provably optimal. We also...
Persistent link: https://www.econbiz.de/10005790787
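A Monte Carlo sketch of the setting, assuming Gaussian priors on each arm's mean and known payoff noise: it compares pulling the higher-prior-mean arm first against exploring the more uncertain arm first over a 2-pull horizon, with a greedy second pull after a conjugate posterior update. The parameter point is an arbitrary illustration, not a region identified in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
mu  = np.array([0.0, -0.1])     # prior means of the two arms
tau = np.array([0.1,  1.0])     # prior std devs (arm 1 is more uncertain)
sig = 0.5                       # known payoff noise std dev

def total_payoff(first, trials=200_000):
    theta = rng.normal(mu, tau, (trials, 2))            # draw true arm means
    r1 = rng.normal(theta[:, first], sig)               # first pull's payoff
    # Conjugate Gaussian update of the pulled arm's posterior mean:
    post = mu[first] + tau[first]**2 / (tau[first]**2 + sig**2) * (r1 - mu[first])
    means = np.tile(mu, (trials, 1))
    means[:, first] = post
    second = means.argmax(axis=1)                       # greedy second pull
    r2 = rng.normal(theta[np.arange(trials), second], sig)
    return (r1 + r2).mean()

print("greedy first  :", total_payoff(first=0))
print("explore first :", total_payoff(first=1))
```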
We show that all algorithms that search for an extremum of a cost function perform exactly the same when averaged over all possible cost functions. In particular, if algorithm A outperforms algorithm B on some cost functions, then loosely speaking there must exist exactly as many other...
Persistent link: https://www.econbiz.de/10005790828
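The averaging claim can be checked exhaustively on a toy space. The sketch below runs two arbitrary deterministic, non-retracing searchers over all 16 cost functions f: {0,1,2,3} -> {0,1} and confirms that the histograms of best-cost-so-far trajectories coincide; the two algorithms are our own illustrations.

```python
from itertools import product
from collections import Counter

X = range(4)

def fixed_order(f):
    """Visit points left to right, regardless of observed costs."""
    for x in X:
        yield f[x]

def adaptive(f):
    """Visit the lowest unvisited point after seeing a 0, else the highest."""
    unvisited, last = list(X), 0
    while unvisited:
        x = unvisited.pop(0) if last == 0 else unvisited.pop()
        last = f[x]
        yield last

def trajectory(alg, f):
    """Best cost seen after each of the four evaluations."""
    best, out = 2, []
    for y in alg(f):
        best = min(best, y)
        out.append(best)
    return tuple(out)

funcs = list(product((0, 1), repeat=4))          # all 16 cost functions
hist_a = Counter(trajectory(fixed_order, f) for f in funcs)
hist_b = Counter(trajectory(adaptive, f) for f in funcs)
print(hist_a == hist_b)                          # True: identical when averaged
```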