Many univariate robust estimators are based on quantiles. As Fernholz (J. Stat. Plan. Inference 57(1), 29–38, 1997) already pointed out theoretically, smoothing the empirical distribution function with an appropriate kernel and bandwidth can reduce the variance and mean squared error...
Persistent link: https://www.econbiz.de/10010994282
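The smoothing idea can be sketched in a few lines. The function below replaces the empirical CDF by an average of Gaussian CDFs and inverts it by bisection; the Gaussian kernel, the function name, and the bisection search are illustrative assumptions, not Fernholz's construction.

```python
import math

def smoothed_quantile(data, p, h, tol=1e-9):
    """p-th quantile of a kernel-smoothed empirical CDF.

    The empirical CDF is replaced by an average of Gaussian CDFs of
    bandwidth h centred at the observations; the quantile is found by
    bisection. Illustrative sketch only (Gaussian kernel assumed).
    """
    def cdf(x):
        # Average of Gaussian CDFs Phi((x - xi)/h) over the sample.
        return sum(0.5 * (1.0 + math.erf((x - xi) / (h * math.sqrt(2.0))))
                   for xi in data) / len(data)

    lo, hi = min(data) - 10 * h, max(data) + 10 * h
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Unlike the raw empirical quantile, this estimate varies smoothly with the data, which is what makes the variance reduction possible.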
First, we derive the maximal breakdown value of regression equivariant estimators in two-way contingency tables under the loglinear independence model. We then prove that the L1 estimator attains this maximal breakdown value. Finally, we illustrate how these results can be generalized towards...
Persistent link: https://www.econbiz.de/10005211779
Kernel Based Regression (KBR) minimizes a convex risk over a possibly infinite dimensional reproducing kernel Hilbert space. Recently, it was shown that KBR with a least squares loss function may have some undesirable properties from a robustness point of view: even very small amounts of...
Persistent link: https://www.econbiz.de/10008521101
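The non-robustness of least-squares KBR is easy to demonstrate on a standard instance, Gaussian-kernel ridge regression. In this minimal sketch (parameter values and names are illustrative, not from the paper), contaminating a single response value shifts the whole fit substantially.

```python
import numpy as np

def kernel_ridge_fitted(X, y, gamma=10.0, lam=1e-3):
    # Least-squares KBR with a Gaussian kernel on 1-D inputs:
    # solve (K + lam*n*I) alpha = y, fitted values are K @ alpha.
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
    n = len(X)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return K @ alpha

X = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * X)
fit_clean = kernel_ridge_fitted(X, y)

y_dirty = y.copy()
y_dirty[0] += 100.0  # a single gross outlier in the response
fit_dirty = kernel_ridge_fitted(X, y_dirty)

# The fitted curve is dragged towards the outlier.
shift = np.max(np.abs(fit_dirty - fit_clean))
```

The unbounded least-squares loss is what lets one observation dominate; replacing it with a bounded or Lipschitz loss is the usual robust alternative.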
The outlier sensitivity of classical principal component analysis (PCA) has spurred the development of robust techniques. Existing robust PCA methods like ROBPCA work best if the non-outlying data have an approximately symmetric distribution. When the original variables are skewed, too many...
Persistent link: https://www.econbiz.de/10005131091
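The projection-pursuit idea underlying robust PCA can be sketched as follows: search directions and keep the one maximizing a robust spread measure of the projected data (here the MAD). This is only an illustration of the general idea; it is not the ROBPCA algorithm, and all names and parameters are assumptions.

```python
import numpy as np

def mad(x):
    # Median absolute deviation: a robust measure of spread.
    return np.median(np.abs(x - np.median(x)))

def robust_first_pc(X, n_dirs=500, seed=0):
    """First robust principal direction via crude projection pursuit.

    Random unit directions are scored by the MAD of the projected,
    median-centred data; the highest-scoring direction is returned.
    Illustrative sketch, not the ROBPCA method.
    """
    rng = np.random.default_rng(seed)
    Xc = X - np.median(X, axis=0)
    best, best_scale = None, -1.0
    for _ in range(n_dirs):
        d = rng.standard_normal(X.shape[1])
        d /= np.linalg.norm(d)
        s = mad(Xc @ d)
        if s > best_scale:
            best, best_scale = d, s
    return best

# Demo: most of the robust spread lies along the first coordinate.
rng = np.random.default_rng(1)
X = np.column_stack([10.0 * rng.standard_normal(300),
                     rng.standard_normal(300)])
d = robust_first_pc(X)
```

Because the MAD ignores extreme projections, a few outliers cannot pull the chosen direction towards themselves the way they pull the classical first principal component.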
Deepest regression (DR) is a method for linear regression introduced by P. J. Rousseeuw and M. Hubert (1999, J. Amer. Statist. Assoc. 94, 388–402). The DR method is defined as the fit with largest regression depth relative to the data. In this paper we show that DR is a robust method, with...
Persistent link: https://www.econbiz.de/10005093712
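For simple (bivariate) regression, the regression depth of a candidate line can be computed by a brute-force scan over candidate tilting points, following the Rousseeuw–Hubert characterization: depth is the smallest number of observations the line must pass when tilted to vertical at some pivot. The function below is an illustrative sketch, not their implementation.

```python
def regression_depth(a, b, pts):
    """Regression depth of the line y = a + b*x w.r.t. the points pts.

    For each candidate pivot v, count the observations the line must
    pass when tilted to vertical at v, on either side; depth is the
    minimum such count. Brute-force O(n^2) sketch for simple regression.
    """
    res = [(x, y - (a + b * x)) for x, y in pts]
    # Pivots: one position left of all data, plus each observed x-value.
    pivots = [min(x for x, _ in res) - 1.0] + [x for x, _ in res]
    depth = len(pts)
    for v in pivots:
        lpos = sum(1 for x, r in res if x <= v and r >= 0)
        lneg = sum(1 for x, r in res if x <= v and r <= 0)
        rpos = sum(1 for x, r in res if x > v and r >= 0)
        rneg = sum(1 for x, r in res if x > v and r <= 0)
        depth = min(depth, lpos + rneg, lneg + rpos)
    return depth
```

A fit passing exactly through all the data attains the maximal depth n, while a fit that can be tilted away after passing only a few points has small depth; DR picks the line maximizing this quantity.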
Motivated by the notion of regression depth (Rousseeuw and Hubert, 1996) we introduce the catline, a new method for simple linear regression. At any bivariate data set Zn = {(xi, yi); i = 1, ..., n} its regression depth is at least n/3. This lower bound is attained for data lying on a convex or...
Persistent link: https://www.econbiz.de/10005093787