This paper first reduces the problem of detecting structural breaks in a random walk to that of finding the best subset of explanatory variables in a regression model and then tailors various subset selection criteria to this specific problem. Of particular interest are those new criteria, which...
Persistent link: https://www.econbiz.de/10010998427
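The snippet above frames break detection as choosing which step-dummy regressors to include in a regression. Below is a minimal, hypothetical sketch of that reduction, not the paper's tailored criteria: each candidate break date contributes a step dummy to a regression on the random walk's increments, and a plain BIC picks the best single-dummy subset. The simulated data, break date, and the bic helper are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a random walk whose drift shifts at t = 120 (illustrative break date).
n, t_break = 200, 120
drift = np.where(np.arange(n) >= t_break, 0.8, 0.0)
y = np.cumsum(drift + rng.normal(size=n))

dy = np.diff(y)            # increments of the random walk
m = len(dy)

def bic(resid, k):
    """Gaussian BIC for a regression with k fitted parameters."""
    return m * np.log(np.mean(resid ** 2)) + k * np.log(m)

# Candidate regressors: a constant plus one step dummy per candidate break date.
# Exhaustive search over single-dummy subsets; the no-break model is the baseline.
best_score, best_date = bic(dy - dy.mean(), 1), None
for t0 in range(10, m - 10):                    # trim the sample edges
    X = np.column_stack([np.ones(m), (np.arange(m) >= t0).astype(float)])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    score = bic(dy - X @ beta, 2)
    if score < best_score:
        best_score, best_date = score, t0

print("estimated break date:", best_date)       # should land near t = 120
```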
We review variable selection and variable screening in high-dimensional linear models. A major focus is an empirical comparison of various estimation methods with respect to true and false positive selection rates, based on 128 different sparse scenarios from semi-real data (real data...
Persistent link: https://www.econbiz.de/10010998445
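As a rough illustration of the kind of comparison described above, not the paper's 128 scenarios or its full set of methods, the sketch below fits a cross-validated lasso to one synthetic sparse scenario and reports true and false positive selection rates. The problem sizes and the use of scikit-learn's LassoCV are assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)

# One synthetic sparse scenario: n = 100 observations, p = 500 predictors,
# of which only the first 5 are truly active.
n, p, s = 100, 500, 5
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:s] = 2.0
y = X @ beta + rng.normal(size=n)

model = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_ != 0)

true_support = set(range(s))
tpr = len(true_support & set(selected)) / s            # true positive rate
fpr = len(set(selected) - true_support) / (p - s)      # false positive rate
print(f"selected {len(selected)} variables, TPR={tpr:.2f}, FPR={fpr:.3f}")
```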
Results on cross category effects obtained by explanatory market basket analyses may be biased as studies typically investigate only a small fraction of the retail assortment (Chib et al. in Advances in econometrics, vol 16. Econometric models in marketing. JAI, Amsterdam, pp 57–92, 2002). We...
Persistent link: https://www.econbiz.de/10010998447
In this paper, we study the estimation and variable selection of the sufficient dimension reduction space for survival data via a new combination of $L_1$ penalty and the refined outer product of gradient method (rOPG; Xia et al. in J R Stat Soc Ser B 64:363–410, 2002), called SH-OPG...
Persistent link: https://www.econbiz.de/10010998460
Locally weighted regression is a technique that predicts the response for new data items from their neighbors in the training data set, where closer data items are assigned higher weights in the prediction. However, the original method may suffer from overfitting and fail to select the relevant...
Persistent link: https://www.econbiz.de/10010998498
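The snippet describes locally weighted regression only in general terms. A minimal sketch of the baseline technique, not the paper's regularized variant, is given below: a Gaussian kernel turns distances to the query point into weights, and a weighted least-squares fit produces the local prediction. The function name, bandwidth, and data are illustrative.

```python
import numpy as np

def loess_predict(x0, X, y, bandwidth=0.3):
    """Predict y at x0 by a locally weighted linear fit:
    nearby training points receive higher (Gaussian-kernel) weights."""
    d = np.linalg.norm(X - x0, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)          # closer points weigh more
    Xd = np.column_stack([np.ones(len(X)), X])       # add an intercept column
    W = np.diag(w)
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return np.r_[1.0, x0] @ beta

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)
print(loess_predict(np.array([0.5]), X, y))          # close to sin(1.5) ≈ 0.997
```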
An efficient algorithm is derived for solving the quantile regression problem combined with a group sparsity promoting penalty. The group sparsity of the regression parameters is achieved by using a $\ell_{1,\infty}$-norm penalty (or constraint) on the regression...
Persistent link: https://www.econbiz.de/10010998543
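To make the objective concrete, the sketch below writes out the quantile (pinball) loss plus an $\ell_{1,\infty}$ group penalty and hands it to a generic convex solver. This is not the efficient algorithm the paper derives; the group structure, penalty weight, and use of cvxpy are assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)

# Synthetic data: 4 groups of 5 predictors each; only group 0 is active.
n, p = 200, 20
groups = [range(0, 5), range(5, 10), range(10, 15), range(15, 20)]
X = rng.normal(size=(n, p))
y = X[:, :5] @ np.array([1.0, -1.0, 0.5, 0.5, 2.0]) + rng.normal(size=n)

tau, lam = 0.5, 5.0                        # quantile level and penalty weight
beta = cp.Variable(p)
b0 = cp.Variable()
r = y - X @ beta - b0

# Pinball (check) loss for quantile regression plus an l_{1,inf} group penalty:
# each group is charged the maximum absolute coefficient it contains.
loss = cp.sum(cp.maximum(tau * r, (tau - 1) * r))
penalty = sum(cp.norm(beta[list(g)], "inf") for g in groups)
cp.Problem(cp.Minimize(loss + lam * penalty)).solve()

print(np.round(beta.value, 2))             # inactive groups shrink toward zero
```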
In this paper, a Bayesian hierarchical model for variable selection and estimation in the context of binary quantile regression is proposed. Existing approaches to variable selection in a binary classification context are sensitive to outliers, heteroskedasticity or other anomalies of the latent...
Persistent link: https://www.econbiz.de/10010847927
Varying-coefficient models are useful tools for analyzing longitudinal data. They can effectively describe the relationship between predictors and responses that are measured repeatedly. We consider the problem of selecting variables in varying-coefficient models via the adaptive elastic net...
Persistent link: https://www.econbiz.de/10011241294
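As a rough sketch of the modelling idea, not the paper's adaptive elastic net estimator, the code below expands each varying coefficient in a B-spline basis of the index variable, which turns the model into a linear regression on interaction columns, and then applies an elastic net with adaptive weights taken from a pilot ridge fit. The spline settings, weighting scheme, and scikit-learn calls are assumptions.

```python
import numpy as np
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import Ridge, ElasticNetCV

rng = np.random.default_rng(4)

# Varying-coefficient model y = b1(u)*x1 + b2(u)*x2 + noise; x3..x5 are irrelevant.
n, p = 300, 5
u = rng.uniform(0, 1, n)                       # index variable (e.g. time)
X = rng.normal(size=(n, p))
y = np.sin(2 * np.pi * u) * X[:, 0] + (1 + u) * X[:, 1] + 0.3 * rng.normal(size=n)

# Expand each coefficient function in a B-spline basis of u, turning the
# varying-coefficient model into a linear model in the interaction columns.
B = SplineTransformer(degree=3, n_knots=6).fit_transform(u[:, None])   # (n, K)
Z = np.hstack([X[:, [j]] * B for j in range(p)])                       # (n, p*K)

# Adaptive weights from a pilot ridge fit, applied by rescaling the columns.
pilot = Ridge(alpha=1.0).fit(Z, y).coef_
w = 1.0 / (np.abs(pilot) + 1e-3)
fit = ElasticNetCV(cv=5, l1_ratio=0.9).fit(Z / w, y)
gamma = fit.coef_ / w                          # map back to the original scale

K = B.shape[1]
group_norm = [np.linalg.norm(gamma[j * K:(j + 1) * K]) for j in range(p)]
print(np.round(group_norm, 2))                 # variables 3-5 should be near 0
```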