Inference using difference-in-differences with clustered data requires care. Previous research has shown that t tests based on a cluster-robust variance estimator (CRVE) severely over-reject when there are few treated clusters, that different variants of the wild cluster bootstrap can...
Persistent link: https://www.econbiz.de/10011583198
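As context for the cluster-robust variance estimator (CRVE) referred to in the entry above, here is a minimal numpy sketch of the standard Liang-Zeger sandwich estimator with the usual CV1 small-sample adjustment. It illustrates the general technique, not the procedures studied in the paper; the function name, simulated data, and adjustment factor are assumptions of this sketch.

```python
import numpy as np

def ols_crve(y, X, cluster_ids):
    """OLS with a cluster-robust (Liang-Zeger sandwich) variance estimator.

    y: (n,) response, X: (n, k) regressors, cluster_ids: (n,) cluster labels.
    Returns coefficient estimates and their CRVE covariance matrix.
    """
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ (X.T @ y)
    resid = y - X @ beta

    # "Meat" of the sandwich: sum over clusters of X_g' u_g u_g' X_g
    clusters = np.unique(cluster_ids)
    G = len(clusters)
    meat = np.zeros((k, k))
    for g in clusters:
        idx = cluster_ids == g
        s_g = X[idx].T @ resid[idx]          # score sum for cluster g
        meat += np.outer(s_g, s_g)

    # Finite-sample adjustment commonly used with the CV1 estimator
    adj = (G / (G - 1)) * ((n - 1) / (n - k))
    V = adj * XtX_inv @ meat @ XtX_inv
    return beta, V

# Illustrative use with simulated data (all names here are hypothetical)
rng = np.random.default_rng(0)
G, n_per = 20, 30
cluster_ids = np.repeat(np.arange(G), n_per)
X = np.column_stack([np.ones(G * n_per), rng.normal(size=G * n_per)])
u = rng.normal(size=G * n_per) + np.repeat(rng.normal(size=G), n_per)
y = X @ np.array([1.0, 0.5]) + u
beta, V = ols_crve(y, X, cluster_ids)
t_stat = beta[1] / np.sqrt(V[1, 1])   # typically compared to a t(G-1) distribution
```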
Inference using large datasets is not nearly as straightforward as conventional econometric theory suggests when the disturbances are clustered, even with very small intra-cluster correlations. The information contained in such a dataset grows much more slowly with the sample size than it would...
Persistent link: https://www.econbiz.de/10011583208
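The entry above concerns how slowly information accumulates under clustering. A back-of-the-envelope way to see the point is the textbook design-effect calculation for equal-sized clusters; this is a standard approximation, not the paper's analysis, and the numbers below are purely illustrative.

```python
import numpy as np

def effective_sample_size(n, cluster_size, rho):
    """Textbook design-effect calculation for equal-sized clusters.

    With intra-cluster correlation rho and m observations per cluster, the
    variance of a sample mean is inflated by roughly 1 + (m - 1) * rho, so the
    effective number of independent observations is n divided by that factor.
    """
    deff = 1.0 + (cluster_size - 1) * rho
    return n / deff

# Even a tiny rho matters once clusters are large:
# n = 1,000,000 observations in clusters of 10,000 with rho = 0.01
print(effective_sample_size(1_000_000, 10_000, 0.01))   # roughly 9,900
```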
When there are few treated clusters in a pure treatment or difference-in-differences setting, t tests based on a cluster-robust variance estimator (CRVE) can severely over-reject. Although procedures based on the wild cluster bootstrap often work well when the number of treated clusters is not...
Persistent link: https://www.econbiz.de/10011939455
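For reference, the sketch below implements the ordinary restricted wild cluster bootstrap with Rademacher weights for a t-test on one coefficient, reusing the ols_crve helper from the CRVE sketch above. It shows the baseline procedure that the entries above find unreliable with few treated clusters; it is not the modified or subcluster bootstraps proposed in the paper, and the function names and defaults are assumptions of this sketch.

```python
import numpy as np
# Reuses ols_crve from the CRVE sketch above (Liang-Zeger sandwich, CV1 adjustment).

def cluster_t(y, X, cluster_ids, j):
    """Cluster-robust t statistic for coefficient j."""
    beta, V = ols_crve(y, X, cluster_ids)
    return beta[j] / np.sqrt(V[j, j])

def wild_cluster_bootstrap_p(y, X, cluster_ids, j, B=999, seed=0):
    """Restricted wild cluster bootstrap p-value for H0: beta_j = 0,
    using Rademacher weights drawn at the cluster level."""
    rng = np.random.default_rng(seed)
    t_hat = cluster_t(y, X, cluster_ids, j)

    # Re-estimate with the null imposed (drop column j) to get restricted residuals
    X_r = np.delete(X, j, axis=1)
    beta_r, *_ = np.linalg.lstsq(X_r, y, rcond=None)
    fitted_r, u_r = X_r @ beta_r, y - X_r @ beta_r

    clusters = np.unique(cluster_ids)
    t_star = np.empty(B)
    for b in range(B):
        v = rng.choice([-1.0, 1.0], size=len(clusters))     # one weight per cluster
        u_star = u_r * v[np.searchsorted(clusters, cluster_ids)]
        y_star = fitted_r + u_star
        t_star[b] = cluster_t(y_star, X, cluster_ids, j)

    # Symmetric bootstrap p-value based on |t|
    return np.mean(np.abs(t_star) >= np.abs(t_hat))
```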
We propose several Lagrange Multiplier tests of logit and probit models, which may be inexpensively computed by artificial linear regressions. These may be used to test for omitted variables and heteroskedasticity. We argue that one of these tests is likely to have better small-sample...
Persistent link: https://www.econbiz.de/10011940421
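To make the idea of an artificial-regression LM test concrete, here is a sketch of the usual binary response model regression for testing omitted variables in a logit: the explained sum of squares from the artificial regression serves as the test statistic. This is one standard textbook form, not necessarily the exact variants proposed in the paper; X1 is assumed to contain a constant, and statsmodels is used only for the restricted fit.

```python
import numpy as np
import statsmodels.api as sm

def lm_test_logit_omitted(y, X1, X2):
    """LM test of H0: coefficients on X2 are zero in a logit of y on [X1, X2],
    computed from an artificial (binary response model) regression.

    Fit the restricted logit, then regress (y - F) / sqrt(F (1 - F)) on
    f * [X1, X2] / sqrt(F (1 - F)), where F and f are the logistic cdf and pdf
    at the restricted index. The explained sum of squares is asymptotically
    chi-squared with X2.shape[1] degrees of freedom under H0.
    """
    res = sm.Logit(y, X1).fit(disp=0)          # restricted estimates
    index = X1 @ res.params
    F = 1.0 / (1.0 + np.exp(-index))           # logistic cdf
    f = F * (1.0 - F)                          # logistic pdf at the index
    w = np.sqrt(F * (1.0 - F))                 # sqrt of Var(y_i) under the model

    dep = (y - F) / w
    regressors = (f / w)[:, None] * np.column_stack([X1, X2])
    b, *_ = np.linalg.lstsq(regressors, dep, rcond=None)
    ess = np.sum((regressors @ b) ** 2)        # explained sum of squares
    return ess                                 # compare to chi2(X2.shape[1])
```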
We examine several modified versions of the heteroskedasticity-consistent covariance matrix estimator of Hinkley and White. On the basis of sampling experiments which compare the performance of quasi t statistics, we find that one estimator, based on the jackknife, performs better in small...
Persistent link: https://www.econbiz.de/10011940422
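The estimators compared in the entry above belong to what is now usually called the HC0-HC3 family, with HC3 being the jackknife-flavored variant. A compact numpy sketch of that family follows; the HC0-HC3 labels are the modern convention rather than the paper's own notation.

```python
import numpy as np

def hc_covariances(y, X):
    """OLS coefficient covariance matrices under the HC0-HC3 family.

    HC0 is the basic White estimator; HC1 applies a degrees-of-freedom
    correction; HC2 and HC3 rescale squared residuals by (1 - h_i) and
    (1 - h_i)^2 respectively, the latter being closely related to the jackknife.
    """
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ (X.T @ y)
    u = y - X @ beta
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)   # leverage values h_i

    def sandwich(weights):
        meat = (X * weights[:, None]).T @ X
        return XtX_inv @ meat @ XtX_inv

    u2 = u ** 2
    return {
        'HC0': sandwich(u2),
        'HC1': (n / (n - k)) * sandwich(u2),
        'HC2': sandwich(u2 / (1.0 - h)),
        'HC3': sandwich(u2 / (1.0 - h) ** 2),
    }
```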
Non-nested hypothesis tests provide a way to test the specification of an econometric model against the evidence provided by one or more non-nested alternatives. This paper surveys the recent literature on non-nested hypothesis testing in the context of regression and related models. Much of the...
Persistent link: https://www.econbiz.de/10011940423
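One of the best-known procedures in this literature is the J test for non-nested linear regressions: fitted values from the alternative model are added to the null model and their coefficient is tested. A minimal sketch is below; the argument names are illustrative, and both regressor matrices are assumed to include a constant.

```python
import numpy as np
import statsmodels.api as sm

def j_test(y, X_h1, X_h2):
    """J test of H1: y = X_h1 b + u against the non-nested alternative
    H2: y = X_h2 g + v.

    Fitted values from H2 are added as an extra regressor in H1; a
    significant coefficient on them is evidence against H1.
    """
    yhat_h2 = X_h2 @ np.linalg.lstsq(X_h2, y, rcond=None)[0]
    augmented = np.column_stack([X_h1, yhat_h2])
    res = sm.OLS(y, augmented).fit()
    return res.params[-1], res.tvalues[-1]     # coefficient and t statistic
```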
Associated with every popular nonlinear estimation method is at least one "artificial" linear regression. We define an artificial regression in terms of three conditions that it must satisfy. Then we show how artificial regressions can be useful for numerical optimization, testing hypotheses,...
Persistent link: https://www.econbiz.de/10011940607
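The canonical example of such an artificial regression is the Gauss-Newton regression (GNR) associated with nonlinear least squares: regressing the residuals on the Jacobian of the regression function yields both an optimization step and, at the NLS estimates, a covariance matrix estimate. The sketch below uses an illustrative exponential model; it is an example of the general idea, not code from the paper.

```python
import numpy as np

def gauss_newton_regression(y, x, beta):
    """One Gauss-Newton regression (GNR) for the illustrative nonlinear model
    y_i = b0 * exp(b1 * x_i) + u_i, evaluated at the current parameter vector.

    Regressing the residuals on the Jacobian of the regression function gives
    (i) a step direction for numerical optimization and (ii), at the NLS
    estimates, the usual estimate of the coefficient covariance matrix.
    """
    b0, b1 = beta
    fitted = b0 * np.exp(b1 * x)
    resid = y - fitted
    # Jacobian of the regression function with respect to (b0, b1)
    Z = np.column_stack([np.exp(b1 * x), b0 * x * np.exp(b1 * x)])

    step, *_ = np.linalg.lstsq(Z, resid, rcond=None)   # GNR coefficients
    s2 = np.sum((resid - Z @ step) ** 2) / (len(y) - len(beta))
    cov = s2 * np.linalg.inv(Z.T @ Z)                  # valid at the NLS solution
    return step, cov

# Illustrative iteration: beta_new = beta + step, repeated until the step is tiny
```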
We first propose procedures for estimating the rejection probabilities for bootstrap tests in Monte Carlo experiments without actually computing a bootstrap test for each replication. These procedures are only about twice as expensive as estimating rejection probabilities for asymptotic tests....
Persistent link: https://www.econbiz.de/10011940622
The fast double bootstrap, or FDB, is a procedure for calculating bootstrap P values that is much more computationally efficient than the double bootstrap itself. In many cases, it can provide more accurate results than ordinary bootstrap tests. For the fast double bootstrap to be valid, the...
Persistent link: https://www.econbiz.de/10011940645
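As usually described, the FDB computes one second-level bootstrap statistic per first-level bootstrap sample and uses the second level to adjust the critical value. A generic sketch for an upper-tail test follows; the two resampling callables are placeholders that a model-specific implementation would have to supply.

```python
import numpy as np

def fdb_pvalue(tau_hat, draw_bootstrap_stat, draw_second_level_stat, B=999, seed=0):
    """Fast double bootstrap (FDB) p-value for an upper-tail test.

    draw_bootstrap_stat(rng) must return one first-level bootstrap statistic
    together with whatever is needed to resample from that bootstrap sample;
    draw_second_level_stat(rng, sample) returns one second-level statistic.
    Both callables are placeholders for a model-specific implementation.
    """
    rng = np.random.default_rng(seed)
    tau1 = np.empty(B)
    tau2 = np.empty(B)
    for b in range(B):
        tau1[b], sample_b = draw_bootstrap_stat(rng)
        tau2[b] = draw_second_level_stat(rng, sample_b)

    # Ordinary bootstrap p-value from the first-level statistics
    p1 = np.mean(tau1 > tau_hat)
    # The 1 - p1 quantile of the second-level statistics ...
    q = np.quantile(tau2, 1.0 - p1)
    # ... becomes the critical value against which the first level is compared
    return np.mean(tau1 > q)
```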
We study several tests for the coefficient of the single right-hand-side endogenous variable in a linear equation estimated by instrumental variables. We show that all the test statistics--Student's t, Anderson-Rubin, Kleibergen's K, and likelihood ratio (LR)--can be written as functions of six...
Persistent link: https://www.econbiz.de/10011940646
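Of the statistics named in the entry above, the Anderson-Rubin test is the simplest to write down: under the hypothesized value of the coefficient, the implied error should be uncorrelated with the instruments. The sketch below gives the F-form of the statistic for a single endogenous regressor, assuming any exogenous regressors have already been partialled out; the argument names are illustrative.

```python
import numpy as np

def anderson_rubin(y, x, Z, beta0):
    """Anderson-Rubin test of H0: beta = beta0 in y = x * beta + u, where x is
    a single endogenous regressor and Z is an (n, q) matrix of instruments.

    Under H0, y - x * beta0 should be uncorrelated with the instruments, so we
    regress it on Z and test the joint significance of the q coefficients.
    """
    n, q = Z.shape
    v = y - x * beta0
    coef, *_ = np.linalg.lstsq(Z, v, rcond=None)
    fitted = Z @ coef
    ess = np.sum(fitted ** 2)
    ssr = np.sum((v - fitted) ** 2)
    return (ess / q) / (ssr / (n - q))     # F-form; compare to F(q, n - q)
```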