Associated with every popular nonlinear estimation method is at least one "artificial" linear regression. We define an artificial regression in terms of three conditions that it must satisfy. Then we show how artificial regressions can be useful for numerical optimization, testing hypotheses,...
Persistent link: https://www.econbiz.de/10011940607
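The entry above concerns artificial regressions. As a hedged illustration of the general idea, and not the paper's own exposition, the Python sketch below uses the Gauss-Newton regression (GNR) for a nonlinear least-squares model: the regressand is the current residual vector, the regressors are the derivatives of the regression function, the OLS coefficients supply the optimization step, and the final regression delivers the usual covariance matrix estimate. The model, data, and starting values are made up for demonstration.

# Illustrative sketch (not from the paper): the Gauss-Newton regression (GNR),
# a canonical artificial regression for nonlinear least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0.0, 2.0, n)
beta_true = np.array([1.5, 0.8])
y = beta_true[0] * np.exp(beta_true[1] * x) + rng.normal(scale=0.3, size=n)

def f(beta):
    return beta[0] * np.exp(beta[1] * x)

def jacobian(beta):
    # Derivatives of the regression function with respect to each parameter
    return np.column_stack([np.exp(beta[1] * x),
                            beta[0] * x * np.exp(beta[1] * x)])

beta = np.array([1.0, 0.5])          # crude starting values
for _ in range(20):
    resid = y - f(beta)              # GNR regressand: current residuals
    X = jacobian(beta)               # GNR regressors: Jacobian at current beta
    step, *_ = np.linalg.lstsq(X, resid, rcond=None)
    beta = beta + step               # GNR coefficients give the update step
    if np.max(np.abs(step)) < 1e-10:
        break

# At the NLS estimates the GNR coefficients are numerically zero, and the usual
# OLS covariance matrix from that final GNR estimates the covariance matrix of
# the NLS estimator.
resid = y - f(beta)
X = jacobian(beta)
s2 = resid @ resid / (n - X.shape[1])
cov = s2 * np.linalg.inv(X.T @ X)
print("NLS estimates:", beta)
print("Std. errors from final GNR:", np.sqrt(np.diag(cov)))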
We first propose procedures for estimating the rejection probabilities for bootstrap tests in Monte Carlo experiments without actually computing a bootstrap test for each replication. These procedures are only about twice as expensive as estimating rejection probabilities for asymptotic tests....
Persistent link: https://www.econbiz.de/10011940622
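The entry above contrasts the proposed procedures with the brute-force alternative of computing a full bootstrap test inside every Monte Carlo replication. The sketch below shows that conventional, expensive design for a simple placeholder test (a t test that a population mean is zero); it requires roughly R*(B+1) test statistics instead of the R needed for an asymptotic test. The DGP, sample size, and bootstrap size are made up.

# Illustrative sketch (not the paper's procedure): the conventional, costly way
# to estimate the rejection probability of a bootstrap test by Monte Carlo,
# computing a full bootstrap test inside every replication.
import numpy as np

rng = np.random.default_rng(1)
n, R, B, alpha = 50, 1000, 199, 0.05

def t_stat(z):
    return np.sqrt(len(z)) * z.mean() / z.std(ddof=1)

rejections = 0
for _ in range(R):                       # Monte Carlo replications
    y = rng.exponential(size=n) - 1.0    # skewed data with true mean zero
    tau = t_stat(y)
    centred = y - y.mean()               # bootstrap DGP imposes the null
    boot = np.array([t_stat(rng.choice(centred, size=n, replace=True))
                     for _ in range(B)])
    p_boot = (np.abs(boot) >= np.abs(tau)).mean()   # symmetric bootstrap p-value
    rejections += p_boot < alpha

# Roughly R*(B+1) test statistics are computed here, versus R for an asymptotic
# test -- the overhead the proposed procedures avoid.
print("Estimated rejection probability:", rejections / R)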
We first propose procedures for estimating the rejection probabilities for bootstrap tests in Monte Carlo experiments without actually computing a bootstrap test for each replication. These procedures are only about twice as expensive as estimating rejection probabilities for asymptotic tests....
Persistent link: https://www.econbiz.de/10005688294
Associated with every popular nonlinear estimation method is at least one "artificial" linear regression. We define an artificial regression in terms of three conditions that it must satisfy. Then we show how artificial regressions can be useful for numerical optimization, testing hypotheses,...
Persistent link: https://www.econbiz.de/10005653239
Associated with every popular nonlinear estimation method is at least one "artificial" linear regression. We define an artificial regression in terms of three conditions that it must satisfy. Then we show how artificial regressions can be useful for numerical optimization, testing hypotheses,...
Persistent link: https://www.econbiz.de/10005787824
Despite much recent work on the finite-sample properties of estimators and tests for linear regression models with a single endogenous regressor and weak instruments, little attention has been paid to tests for overidentifying restrictions in these circumstances. We study asymptotic tests for...
Persistent link: https://www.econbiz.de/10010368288
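The entry above concerns tests of overidentifying restrictions with a single endogenous regressor. As a hedged reference point, not the paper's own tests, the sketch below computes the classical Sargan statistic for a simulated linear IV model with three instruments; the design, including the instrument-strength parameter pi, is made up.

# Illustrative sketch (not the paper's tests): the classical Sargan statistic
# for overidentifying restrictions in a linear IV model with one endogenous
# regressor estimated by two-stage least squares.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, pi = 400, 0.2                      # pi controls instrument strength
z = rng.normal(size=(n, 3))           # three instruments, two overidentifying
Z = np.column_stack([np.ones(n), z])
e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
x = z @ np.full(3, pi) + e[:, 1]      # endogenous regressor
y = 1.0 + 0.5 * x + e[:, 0]
X = np.column_stack([np.ones(n), x])

# Two-stage least squares: project X onto the instrument space, then OLS
PZ = Z @ np.linalg.solve(Z.T @ Z, Z.T)
beta_iv = np.linalg.solve(X.T @ PZ @ X, X.T @ PZ @ y)
u = y - X @ beta_iv

# Sargan statistic: u'P_Z u / (u'u / n), asymptotically chi-squared with
# (number of instruments - number of regressors) degrees of freedom
sargan = (u @ PZ @ u) / (u @ u / n)
df = Z.shape[1] - X.shape[1]
print("Sargan statistic:", sargan, "p-value:", stats.chi2.sf(sargan, df))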
Inference using difference-in-differences with clustered data requires care. Previous research has shown that t tests based on a cluster-robust variance estimator (CRVE) severely over-reject when there are few treated clusters, that different variants of the wild cluster bootstrap can...
Persistent link: https://www.econbiz.de/10011583198
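The entry above refers to the cluster-robust variance estimator (CRVE) and the wild cluster bootstrap. The sketch below, which is illustrative rather than the paper's procedure, implements a restricted (null-imposed) wild cluster bootstrap-t for a single coefficient with Rademacher weights drawn at the cluster level; the DGP, number of clusters, and cluster sizes are made up.

# Illustrative sketch (not the paper's procedure): a restricted wild cluster
# bootstrap-t for one regression coefficient, with Rademacher weights drawn
# once per cluster.
import numpy as np

rng = np.random.default_rng(3)
G, m = 20, 30                               # clusters and cluster size
cluster = np.repeat(np.arange(G), m)
n = G * m
d = rng.normal(size=n) + rng.normal(size=G)[cluster]   # regressor with cluster component
u = rng.normal(size=n) + rng.normal(size=G)[cluster]   # clustered disturbances
y = 1.0 + 0.0 * d + u                        # true coefficient on d is zero
X = np.column_stack([np.ones(n), d])

def crve_t(y, X, cluster, col):
    """t statistic for X[:, col] using the usual cluster-robust (CV1) variance."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    u = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(cluster):
        sg = X[cluster == g].T @ u[cluster == g]
        meat += np.outer(sg, sg)
    G, n, k = len(np.unique(cluster)), X.shape[0], X.shape[1]
    V = G / (G - 1) * (n - 1) / (n - k) * XtX_inv @ meat @ XtX_inv
    return beta[col] / np.sqrt(V[col, col])

t_hat = crve_t(y, X, cluster, 1)

# Impose the null (coefficient on d equals zero): restricted fit on the constant
beta_r = y.mean()
u_r = y - beta_r

B, t_boot = 399, []
for _ in range(B):
    v = rng.choice([-1.0, 1.0], size=G)[cluster]   # one Rademacher draw per cluster
    y_star = beta_r + v * u_r
    t_boot.append(crve_t(y_star, X, cluster, 1))

p_value = np.mean(np.abs(np.array(t_boot)) >= np.abs(t_hat))
print("CRVE t statistic:", t_hat, "wild cluster bootstrap p-value:", p_value)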
Inference using large datasets is not nearly as straightforward as conventional econometric theory suggests when the disturbances are clustered, even with very small intra-cluster correlations. The information contained in such a dataset grows much more slowly with the sample size than it would...
Persistent link: https://www.econbiz.de/10011583208
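One standard way to see why clustering slows the growth of information, consistent with the entry above but not taken from it, is the design effect for a sample mean with equal-sized clusters: under intra-cluster correlation rho, the variance of the mean is inflated by 1 + (m - 1) * rho, so the effective number of independent observations is n / (1 + (m - 1) * rho). The sample sizes and rho = 0.05 in the sketch below are made up, and the pattern of growing cluster sizes is only one possible scenario.

# Illustrative arithmetic (not from the paper): with equal-sized clusters and
# intra-cluster correlation rho, the variance of a sample mean is inflated by
# the design effect 1 + (m - 1) * rho relative to independent sampling.
rho, sigma2 = 0.05, 1.0
for n, m in [(10_000, 10), (100_000, 100), (1_000_000, 1_000)]:
    deff = 1 + (m - 1) * rho
    var_mean = sigma2 * deff / n
    print(f"n = {n:>9,}  cluster size = {m:>5}  "
          f"design effect = {deff:7.2f}  effective n = {n / deff:10.1f}")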
When there are few treated clusters in a pure treatment or difference-in-differences setting, t tests based on a cluster-robust variance estimator (CRVE) can severely over-reject. Although procedures based on the wild cluster bootstrap often work well when the number of treated clusters is not...
Persistent link: https://www.econbiz.de/10011939455
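The entry above, like the earlier one on difference-in-differences, starts from the finding that CRVE-based t tests over-reject when few clusters are treated. The small Monte Carlo below is a hedged qualitative illustration of that phenomenon, not a replication of the paper: with only 2 of 20 clusters treated and a true treatment effect of zero, the rejection rate at the nominal 5% level typically comes out well above 5%. The design is made up.

# Illustrative Monte Carlo (not from the paper): rejection frequency of a
# CRVE-based t test at the 5% level when only a few clusters are treated and
# the true treatment effect is zero.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
G, m, treated, R = 20, 25, 2, 2000
cluster = np.repeat(np.arange(G), m)
d = (cluster < treated).astype(float)        # cluster-level treatment dummy
X = np.column_stack([np.ones(G * m), d])
XtX_inv = np.linalg.inv(X.T @ X)
crit = stats.t.ppf(0.975, G - 1)             # usual t(G-1) critical value

rejections = 0
for _ in range(R):
    u = rng.normal(size=G * m) + rng.normal(size=G)[cluster]  # clustered errors
    y = 1.0 + u                               # true treatment effect is zero
    beta = XtX_inv @ X.T @ y
    res = y - X @ beta
    meat = sum(np.outer(X[cluster == g].T @ res[cluster == g],
                        X[cluster == g].T @ res[cluster == g])
               for g in range(G))
    V = G / (G - 1) * (G * m - 1) / (G * m - 2) * XtX_inv @ meat @ XtX_inv
    rejections += abs(beta[1]) / np.sqrt(V[1, 1]) > crit

print("Rejection rate at nominal 5% level:", rejections / R)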
We propose several Lagrange Multiplier tests of logit and probit models, which may be inexpensively computed by artificial linear regressions. These may be used to test for omitted variables and heteroskedasticity. We argue that one of these tests is likely to have better small-sample...
Persistent link: https://www.econbiz.de/10011940421
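The entry above describes LM tests for logit and probit models computed via artificial regressions. The sketch below gives one common form of such a test for an omitted variable in a logit model, using the explained sum of squares from an artificial regression evaluated at the restricted estimates; it is an illustration under made-up data, not necessarily the paper's exact statistic.

# Illustrative sketch (not necessarily the paper's exact statistic): an LM test
# for an omitted variable in a logit model computed from an artificial regression.
# The restricted model uses x1 only; z is the candidate omitted variable.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 500
x1 = rng.normal(size=n)
z = rng.normal(size=n)
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x1)))   # z is irrelevant under the null
y = (rng.uniform(size=n) < p_true).astype(float)

X = np.column_stack([np.ones(n), x1])    # restricted regressors
Z = z.reshape(-1, 1)                      # regressors under test

# Restricted logit MLE by Newton-Raphson
beta = np.zeros(X.shape[1])
for _ in range(50):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)
    step = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break
p = 1.0 / (1.0 + np.exp(-X @ beta))       # fitted probabilities at the MLE

# Artificial regression: for the logit model f_t = F_t(1 - F_t) = V_t, so the
# regressand is (y - p)/sqrt(V) and the regressors are sqrt(V) * [X, Z]; the
# explained sum of squares serves as the LM statistic.
V = p * (1.0 - p)
r = (y - p) / np.sqrt(V)
R_mat = np.sqrt(V)[:, None] * np.column_stack([X, Z])
coef, *_ = np.linalg.lstsq(R_mat, r, rcond=None)
ess = r @ r - (r - R_mat @ coef) @ (r - R_mat @ coef)

df = Z.shape[1]
print("LM statistic:", ess, "p-value:", stats.chi2.sf(ess, df))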