We will study causal relationships of a known form between random variables. Given a model, we distinguish one or more dependent (endogenous) variables Y = (Y1, . . . , Yl), l ∈ N, which the model explains, and independent (exogenous, explanatory) variables X = (X1, . . . , Xp), p ∈ N,...
Persistent link: https://www.econbiz.de/10010296407
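The setup described in the abstract above can be illustrated with a small simulation; the linear functional form, dimensions, and coefficient values here are assumptions chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
p, n = 3, 100                                  # p exogenous variables, n observations
X = rng.normal(size=(n, p))                    # independent (exogenous) variables X1..Xp
beta = np.array([0.5, -1.0, 2.0])              # hypothetical model coefficients
noise = rng.normal(scale=0.1, size=n)
Y = X @ beta + noise                           # dependent (endogenous) variable explained by X
```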
Many methods of computational statistics lead to matrix-algebra or numerical-mathematics problems. For example, the least squares method in linear regression reduces to solving a system of linear equations. The principal components method is based on finding eigenvalues and eigenvectors of a...
Persistent link: https://www.econbiz.de/10010296419
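The two reductions mentioned in the abstract above can be sketched in a few lines; the simulated data and design matrix are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + 0.1 * rng.normal(size=50)

# Least squares reduces to solving a system of linear equations,
# the normal equations (X'X) b = X'y.
beta = np.linalg.solve(X.T @ X, X.T @ y)

# Principal components reduce to the eigendecomposition of the
# sample covariance matrix.
S = np.cov(X[:, 1:], rowvar=False)
eigvals, eigvecs = np.linalg.eigh(S)
```

In practice one would call `np.linalg.lstsq` (or a QR/SVD-based routine) rather than form the normal equations explicitly, since X'X can be ill-conditioned; the version above simply makes the reduction to a linear system visible.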
Most dimension reduction methods based on nonparametric smoothing are highly sensitive to outliers and to data coming from heavy-tailed distributions. We show that the MAVE and OPG methods recently proposed by Xia et al. (2002) can be made robust in a relatively straightforward way...
Persistent link: https://www.econbiz.de/10010296438
Classical parametric estimation methods applied to nonlinear regression and limited-dependent-variable models are very sensitive to misspecification and data errors. On the other hand, semiparametric and nonparametric methods, which are not restricted by parametric assumptions, require more data...
Persistent link: https://www.econbiz.de/10010310330
The Nadaraya-Watson estimator of regression is known to be highly sensitive to the presence of outliers in the sample. A possible way of robustification is to use local L-estimates of regression. Whereas local L-estimation is traditionally done using an empirical conditional...
Persistent link: https://www.econbiz.de/10010310509