In statistical machine learning, the standard measure of a model's accuracy is the prediction error, i.e. the expected loss on future examples. When the data distribution is unknown, this error cannot be computed exactly, but several resampling methods, such as K-fold cross-validation, can be used to obtain an...
Persistent link: https://www.econbiz.de/10005417557
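As a rough, generic illustration of such a resampling estimate (not the record's own analysis), the sketch below computes a K-fold cross-validation estimate of the expected loss in NumPy; the ridge learner, squared loss, and K=10 are placeholder assumptions for the example.

    import numpy as np

    def kfold_cv_error(X, y, fit, predict, loss, K=10, seed=0):
        # Estimate the expected loss on future examples by averaging
        # the held-out error over K folds.
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(len(y)), K)
        fold_errors = []
        for k in range(K):
            test = folds[k]
            train = np.concatenate([folds[j] for j in range(K) if j != k])
            model = fit(X[train], y[train])
            fold_errors.append(loss(y[test], predict(model, X[test])).mean())
        return float(np.mean(fold_errors)), np.array(fold_errors)

    # Placeholder choices: ridge least squares and squared loss.
    fit = lambda X, y: np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ y)
    predict = lambda w, X: X @ w
    loss = lambda y, yhat: (y - yhat) ** 2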
The similarity between objects is a fundamental element of many learning algorithms. Most non-parametric methods take this similarity to be fixed, but much recent work has shown the advantages of learning it, in particular to exploit the local invariances in the data or to capture the possibly...
Persistent link: https://www.econbiz.de/10005417543
In this paper, we study and put under a common framework a number of non-linear dimensionality reduction methods, such as Locally Linear Embedding, Isomap, Laplacian Eigenmaps and kernel PCA, which are based on performing an eigen-decomposition (hence the name 'spectral'). That framework also...
Persistent link: https://www.econbiz.de/10005417545
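As a minimal illustration of the eigen-decomposition these spectral methods share, here is a kernel PCA sketch on a centered Gaussian kernel matrix; the Gaussian kernel, bandwidth, and two-dimensional embedding are assumptions for the example, not details taken from the record.

    import numpy as np

    def kernel_pca_embed(X, d=2, sigma=1.0):
        # Pairwise Gaussian kernel matrix.
        sq = np.sum(X ** 2, axis=1)
        K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma ** 2))
        n = len(X)
        H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
        Kc = H @ K @ H                           # kernel centered in feature space
        vals, vecs = np.linalg.eigh(Kc)          # eigen-decomposition (ascending order)
        top = np.argsort(vals)[::-1][:d]
        # Embedding coordinates: eigenvectors scaled by the square roots of their eigenvalues.
        return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))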
We perform a theoretical investigation of the variance of the cross-validation estimate of the generalization error that takes into account the variability due to the choice of training sets and test examples. This allows us to propose two new estimators of this variance. We show, via...
Persistent link: https://www.econbiz.de/10005417549
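For context only: the naive variance estimate below treats the K fold errors as independent, which is exactly what the overlap between training sets invalidates; it is the baseline such analyses improve on, not one of the estimators proposed in the record.

    import numpy as np

    def naive_cv_variance(fold_errors):
        # Sample variance of the fold errors divided by K, i.e. the variance
        # formula for a mean of independent terms. It ignores the correlation
        # between folds induced by overlapping training sets.
        fold_errors = np.asarray(fold_errors, dtype=float)
        return fold_errors.var(ddof=1) / len(fold_errors)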
In this paper, we show a direct equivalence between spectral clustering and kernel PCA, and how both are special cases of a more general learning problem, that of learning the principal eigenfunctions of a kernel, when the functions are from a Hilbert space whose inner product is defined with...
Persistent link: https://www.econbiz.de/10005417565
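A toy sketch of the shared computation (not the record's general eigenfunction framework): spectral clustering takes the leading eigenvectors of a normalized affinity matrix, while kernel PCA performs the analogous eigen-decomposition on a centered kernel matrix; the Gaussian affinity and k=2 are assumptions.

    import numpy as np

    def gaussian_affinity(X, sigma=1.0):
        sq = np.sum(X ** 2, axis=1)
        return np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma ** 2))

    def spectral_embedding(X, k=2, sigma=1.0):
        # Leading eigenvectors of D^{-1/2} K D^{-1/2}; in spectral clustering these
        # rows are then grouped with k-means. Kernel PCA does the same kind of
        # eigen-decomposition on a centered, unnormalized K.
        K = gaussian_affinity(X, sigma)
        d = K.sum(axis=1)
        Kn = K / np.sqrt(np.outer(d, d))
        vals, vecs = np.linalg.eigh(Kn)
        return vecs[:, np.argsort(vals)[::-1][:k]]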
We describe an interesting application of the principle of local learning to density estimation. Locally weighted fitting of a Gaussian with a regularized full covariance matrix yields a density estimator which displays improved behavior in the case where much of the probability mass is...
Persistent link: https://www.econbiz.de/10005417569
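A rough sketch of a locally weighted Gaussian fit at a single query point, assuming an isotropic Gaussian weighting kernel and a simple ridge term on the covariance; it is only meant to unpack the phrase "locally weighted fitting of a Gaussian with a regularized full covariance matrix", not to reproduce the record's estimator.

    import numpy as np

    def local_gaussian_fit(X, x0, bandwidth=1.0, reg=1e-3):
        # Weight training points by their distance to the query x0, then
        # compute the weighted mean and a regularized full covariance.
        w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * bandwidth ** 2))
        w = w / w.sum()
        mu = w @ X
        Xc = X - mu
        cov = (Xc * w[:, None]).T @ Xc + reg * np.eye(X.shape[1])
        return mu, cov

    def gaussian_logpdf(x, mu, cov):
        # Log-density of the locally fitted Gaussian, evaluated at x.
        diff = x - mu
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (len(mu) * np.log(2 * np.pi) + logdet
                       + diff @ np.linalg.solve(cov, diff))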
We aim at modelling fat-tailed densities whose distributions are unknown but potentially asymmetric. In this context, the standard normality assumption is not appropriate. In order to make as few distributional assumptions as possible, we use a non-parametric algorithm to model the center of...
Persistent link: https://www.econbiz.de/10005417570
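For the non-parametric part only, a minimal one-dimensional Gaussian kernel density estimate such as the one below could model the center of the distribution; how the excerpt handles the fat tails is cut off above and is not guessed at here.

    import numpy as np

    def kde_logpdf(x, data, bandwidth=0.5):
        # Generic Gaussian kernel density estimate (one-dimensional).
        z = (x - np.asarray(data, dtype=float)) / bandwidth
        dens = np.mean(np.exp(-0.5 * z ** 2)) / (np.sqrt(2 * np.pi) * bandwidth)
        return np.log(dens)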
We consider sequential data sampled from an unknown process, so that the data are not necessarily i.i.d. We develop a measure of generalization for such data and consider a recently proposed approach to optimizing hyper-parameters, based on the computation of the gradient of a model...
Persistent link: https://www.econbiz.de/10005417575
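To make the phrase "gradient of a model selection criterion with respect to hyper-parameters" concrete, here is a toy i.i.d. ridge-regression example in which the validation loss is differentiated with respect to the ridge penalty; the record's sequential, non-i.i.d. setting and its generalization measure are not reproduced.

    import numpy as np

    def val_loss_and_grad(Xtr, ytr, Xva, yva, lam):
        # Closed-form ridge fit: w(lam) = (Xtr'Xtr + lam*I)^{-1} Xtr'ytr.
        A = Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1])
        w = np.linalg.solve(A, Xtr.T @ ytr)
        resid = Xva @ w - yva
        loss = np.mean(resid ** 2)
        # dw/dlam = -A^{-1} w, so the chain rule gives the criterion's gradient.
        dw_dlam = -np.linalg.solve(A, w)
        grad = 2.0 / len(yva) * resid @ (Xva @ dw_dlam)
        return loss, grad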
Multi-task learning is a process used to learn domain-specific bias. It consists in simultaneously training models on different tasks derived from the same domain and forcing them to exchange domain information. This transfer of knowledge is performed by imposing constraints on the parameters...
Persistent link: https://www.econbiz.de/10005417579
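As one generic way to "impose constraints on the parameters" of simultaneously trained models (not necessarily the constraint used in the record), the sketch below fits one linear model per task while penalizing each task's weights for deviating from their common mean; the penalty strengths, learning rate, and linear models are assumptions for the example.

    import numpy as np

    def multitask_ridge(tasks, lam_share=1.0, lam_l2=1e-2, lr=1e-2, steps=2000):
        # tasks: list of (X, y) pairs drawn from the same domain.
        d = tasks[0][0].shape[1]
        W = np.zeros((len(tasks), d))            # one weight vector per task
        for _ in range(steps):
            mean_w = W.mean(axis=0)              # shared information: the average weights
            for t, (X, y) in enumerate(tasks):
                grad = 2 * X.T @ (X @ W[t] - y) / len(y)   # task-specific squared loss
                grad += 2 * lam_share * (W[t] - mean_w)    # pull towards the shared mean
                grad += 2 * lam_l2 * W[t]                  # ordinary weight decay
                W[t] -= lr * grad
        return W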
In this paper, we lay the basis for a multi-type asset portfolio management technique that makes no assumptions about the distributions of the financial data. The neural-network-based model tries to capture patterns in the evolution of the market. Furthermore, the model allows a...
Persistent link: https://www.econbiz.de/10005417585