Minimax lower bounds for concept learning state, for example, that for each sample size $n$ and learning rule $g_n$, there exists a distribution of the observation $X$ and a concept $C$ to be learnt such that the expected error of $g_n$ is at least a constant times $V/n$, where $V$ is the VC dimension of the...
Persistent link: https://www.econbiz.de/10014089349
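In the notation of the abstract above, a bound of this type reads as follows (a sketch of the statement's form only; the loss $L$ and the constant $c$ are illustrative assumptions, not taken from the record):

$$\inf_{g_n}\;\sup_{X,\,C}\;\mathbb{E}\,L(g_n)\;\ge\;c\,\frac{V}{n},$$

where the supremum runs over distributions of the observation $X$ and concepts $C$ in the class, and $c>0$ is a universal constant.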
We present a new general concentration-of-measure inequality and illustrate its power with applications in random combinatorics. The results also apply directly to some problems in learning theory.
Persistent link: https://www.econbiz.de/10005704883
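For orientation, the best-known inequality of this genre is the bounded-differences (McDiarmid) inequality, reproduced here as background; the record's new inequality is a generalization and is not quoted:

$$\Pr\left\{\left|g(X_1,\dots,X_n)-\mathbb{E}\,g(X_1,\dots,X_n)\right|\ge t\right\}\;\le\;2\exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n}c_i^2}\right),$$

valid for independent $X_1,\dots,X_n$ whenever changing the $i$-th argument of $g$ alters its value by at most $c_i$.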
We introduce a simple new hypothesis testing procedure, which, based on an independent sample drawn from a certain density, detects which of $k$ nominal densities the true density is closest to, under the total variation ($L_1$) distance. We obtain a density-free uniform exponential bound for...
Persistent link: https://www.econbiz.de/10005704891
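A minimal numerical sketch of this kind of selection rule, assuming a histogram estimate and a Riemann-sum approximation of the $L_1$ distances (all names and the estimator are illustrative; this is not the procedure analyzed in the record):

```python
import numpy as np

def closest_density(sample, nominal_pdfs, grid):
    """Pick the nominal density closest in L1 (total variation)
    to a histogram estimate built from the sample."""
    # Histogram estimate of the unknown density, interpolated onto the grid.
    hist, edges = np.histogram(sample, bins=50, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    est = np.interp(grid, centers, hist, left=0.0, right=0.0)
    # Riemann-sum approximation of the L1 distance to each nominal pdf.
    dx = grid[1] - grid[0]
    dists = [np.sum(np.abs(est - pdf(grid))) * dx for pdf in nominal_pdfs]
    return int(np.argmin(dists))

# Example: two nominal normal densities, data drawn from the first.
rng = np.random.default_rng(0)
grid = np.linspace(-6, 6, 2001)
f0 = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
f1 = lambda x: np.exp(-(x - 2)**2 / 2) / np.sqrt(2 * np.pi)
sample = rng.normal(0.0, 1.0, size=500)
print(closest_density(sample, [f0, f1], grid))  # expected: 0
```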
Let a class $\mathcal{F}$ of densities be given. We draw an i.i.d. sample from a density $f$ which may or may not be in $\mathcal{F}$. After every $n$, one must make a guess whether $f \in \mathcal{F}$ or not. A class is almost surely testable if there exists a testing sequence such that, for any $f$, we make...
Persistent link: https://www.econbiz.de/10005704913
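The definition breaks off above; in its usual form (assumed here, since the excerpt is truncated, and not taken from the record), it reads: $\mathcal{F}$ is almost surely testable if there exist tests $T_n = T_n(X_1,\dots,X_n)$, each guessing whether $f \in \mathcal{F}$, such that for every density $f$, with probability one only finitely many of the guesses $T_1, T_2, \dots$ are wrong.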
We investigate on-line prediction of individual sequences. Given a class of predictors, the goal is to predict as well as the best predictor in the class, where the loss is measured by the self-information (logarithmic) loss function. The excess loss (regret) is closely related to the redundancy...
Persistent link: https://www.econbiz.de/10005708008
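Under the logarithmic loss, the regret discussed above takes the standard form (notation assumed):

$$\mathrm{Regret}_n \;=\; \sum_{t=1}^{n}\bigl(-\log p_t(x_t)\bigr)\;-\;\inf_{q\in\mathcal{Q}}\sum_{t=1}^{n}\bigl(-\log q_t(x_t)\bigr),$$

where $p_t$ is the forecaster's predictive distribution for the $t$-th symbol, $\mathcal{Q}$ is the reference class of predictors, and $x_1,\dots,x_n$ is the individual sequence; for this loss, the regret coincides with the coding redundancy of the associated code.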
We derive a new inequality for uniform deviations of averages from their means. The inequality is a common generalization of previous results of Vapnik and Chervonenkis (1974) and Pollard (1986). Using the new inequality we obtain tight bounds for empirical loss minimization learning.
Persistent link: https://www.econbiz.de/10005827454
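As background for the generalization mentioned above, the classical Vapnik-Chervonenkis uniform-deviation bound reads (this is the earlier inequality, not the new one from the record):

$$\Pr\left\{\sup_{A\in\mathcal{A}}\left|\frac{1}{n}\sum_{i=1}^{n}\mathbf{1}\{X_i\in A\}-\Pr\{X\in A\}\right| > t\right\}\;\le\;4\,s(\mathcal{A},2n)\,e^{-nt^2/8},$$

where $s(\mathcal{A},2n)$ is the shatter coefficient of the class $\mathcal{A}$.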
For the standard kernel density estimate, it is known that one can tune the bandwidth such that the expected $L_1$ error is within a constant factor of the optimal $L_1$ error (obtained when one is allowed to choose the bandwidth with knowledge of the density). In this paper, we pose the same problem...
Persistent link: https://www.econbiz.de/10005827507
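A small simulation sketch of the benchmark used above: the oracle bandwidth is the one minimizing the exact $L_1$ error, computable here only because the true density is known (the Gaussian setting and all names are illustrative assumptions):

```python
import numpy as np

def kde(sample, grid, h):
    """Gaussian kernel density estimate on a grid with bandwidth h."""
    z = (grid[:, None] - sample[None, :]) / h
    return np.exp(-z**2 / 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
grid = np.linspace(-6, 6, 2001)
dx = grid[1] - grid[0]
true_pdf = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)
sample = rng.normal(size=200)

# L1 error as a function of the bandwidth; the oracle picks the minimizer.
bandwidths = np.linspace(0.05, 1.5, 30)
errors = [np.abs(kde(sample, grid, h) - true_pdf).sum() * dx for h in bandwidths]
print("oracle bandwidth:", bandwidths[int(np.argmin(errors))])
```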
We show that every finite N-player normal form game possesses a correlated equilibrium with a precise lower bound on the number of outcomes to which it assigns zero probability. In particular, the largest games with a unique fully supported correlated equilibrium are two-player games; moreover,...
Persistent link: https://www.econbiz.de/10005772039
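For reference, a distribution $\mu$ over outcome profiles is a correlated equilibrium when no player gains by deviating from a recommended action, i.e., for every player $i$ and every pair of actions $a_i, a_i'$ (standard definition, notation assumed):

$$\sum_{a_{-i}}\mu(a_i,a_{-i})\,\bigl[u_i(a_i,a_{-i})-u_i(a_i',a_{-i})\bigr]\;\ge\;0.$$

The record's bound concerns the number of outcomes $a$ with $\mu(a)=0$, i.e., how many outcomes such an equilibrium can be guaranteed to exclude from its support.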
We obtain minimax lower bounds on the regret for the classical two-armed bandit problem. We provide a finite-sample minimax version of the well-known $\log n$ asymptotic lower bound of Lai and Robbins. Also, in contrast to the $\log n$ asymptotic results on the regret, we show that the minimax...
Persistent link: https://www.econbiz.de/10005772047
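The asymptotic benchmark mentioned above is the Lai-Robbins bound, which in the two-armed case states (notation assumed) that every consistent allocation rule satisfies

$$\liminf_{n\to\infty}\frac{\mathbb{E}\,R_n}{\log n}\;\ge\;\frac{\Delta}{D(p\,\Vert\,p^{*})},$$

where $\Delta$ is the gap between the two mean rewards, $p$ and $p^{*}$ are the reward distributions of the inferior and the superior arm, and $D$ denotes the Kullback-Leibler divergence.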
We consider adaptive sequential lossy coding of bounded individual sequences when the performance is measured by the sequentially accumulated mean squared distortion. The encoder and the decoder are connected via a noiseless channel of capacity $R$ and both are assumed to have zero delay. No...
Persistent link: https://www.econbiz.de/10005772112
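Written out, the sequentially accumulated mean squared distortion used as the performance measure above is (notation assumed):

$$D_n \;=\; \frac{1}{n}\sum_{t=1}^{n}\bigl(x_t-\hat{x}_t\bigr)^2,$$

where $x_1,x_2,\dots$ is the bounded individual sequence and $\hat{x}_t$ is the reproduction the zero-delay decoder emits at time $t$ from the symbols received over the rate-$R$ channel.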