When economists analyze a well-conducted RCT or natural experiment and find a statistically significant effect, they conclude the null of no effect is unlikely to be true. But how frequently is this conclusion warranted? The answer depends on the proportion of tested nulls that are true and the...
Persistent link: https://www.econbiz.de/10014372423
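A back-of-the-envelope version of the calculation behind the entry above, assuming the only inputs are the share of true nulls, the significance level, and the power of the test (all numbers below are illustrative, not the paper's estimates):

    # Probability that a "statistically significant" finding comes from a true null,
    # given a prior share of true nulls, a significance level, and test power.
    def prob_null_given_significant(share_true_nulls, alpha=0.05, power=0.80):
        false_positives = share_true_nulls * alpha          # true nulls rejected by chance
        true_positives = (1 - share_true_nulls) * power     # false nulls correctly rejected
        return false_positives / (false_positives + true_positives)

    # Illustrative: if half of all tested nulls are true, roughly 5.9% of significant
    # results would nonetheless correspond to true nulls.
    print(prob_null_given_significant(0.5))   # ~0.059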
This paper describes a process for automatically generating academic finance papers using large language models (LLMs). It demonstrates the process' efficacy by producing hundreds of complete papers on stock return predictability, a topic particularly well-suited for our illustration. We first...
Persistent link: https://www.econbiz.de/10015195009
We generalize the seminal Gibbons-Ross-Shanken test to the empirically relevant case where the number of test assets far exceeds the number of observations. In such a setting, one needs to use a regularized estimator of the covariance matrix of test assets, which leads to biases in the original...
Persistent link: https://www.econbiz.de/10015361441
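A minimal sketch of the classical GRS statistic that the entry above generalizes, using a Ledoit-Wolf shrinkage estimator of the residual covariance as a stand-in regularizer when the number of test assets is large; the bias corrections derived in the paper are not implemented here.

    import numpy as np
    from scipy.stats import f
    from sklearn.covariance import LedoitWolf  # one possible regularized estimator

    def grs_statistic(excess_returns, factors):
        """Classical GRS test of zero alphas; excess_returns is T x N, factors is T x K."""
        T, N = excess_returns.shape
        K = factors.shape[1]
        X = np.column_stack([np.ones(T), factors])             # factor regressions with intercept
        B, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
        alphas = B[0]                                           # N-vector of intercepts
        resid = excess_returns - X @ B
        sigma = LedoitWolf().fit(resid).covariance_             # shrinkage estimate of Sigma
        f_mean = factors.mean(axis=0)
        omega = np.atleast_2d(np.cov(factors, rowvar=False, bias=True))  # K x K factor covariance (MLE)
        quad_alpha = alphas @ np.linalg.solve(sigma, alphas)
        quad_factor = f_mean @ np.linalg.solve(omega, f_mean)
        stat = (T - N - K) / N * quad_alpha / (1 + quad_factor)
        # Exact F(N, T-N-K) distribution holds only in the unregularized case with T > N + K.
        pval = f.sf(stat, N, T - N - K)
        return stat, pval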
A growing number of central authorities use assignment mechanisms to allocate students to schools in a way that reflects student preferences and school priorities. However, most real-world mechanisms incentivize students to strategically misreport their preferences. In this paper, we provide an...
Persistent link: https://www.econbiz.de/10014544713
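For context on the entry above, the canonical strategy-proof benchmark in this literature is student-proposing deferred acceptance; the sketch below is that textbook algorithm, not the mechanism or result proposed in the paper.

    def deferred_acceptance(student_prefs, school_priorities, capacities):
        """Student-proposing DA. student_prefs: {student: [schools, best first]};
        school_priorities: {school: [students, best first]}; capacities: {school: int}."""
        rank = {s: {st: i for i, st in enumerate(order)} for s, order in school_priorities.items()}
        next_choice = {st: 0 for st in student_prefs}        # index of next school to propose to
        held = {s: [] for s in school_priorities}            # tentatively accepted students
        unmatched = set(student_prefs)
        while unmatched:
            st = unmatched.pop()
            if next_choice[st] >= len(student_prefs[st]):
                continue                                     # student has exhausted the list
            school = student_prefs[st][next_choice[st]]
            next_choice[st] += 1
            held[school].append(st)
            held[school].sort(key=lambda x: rank[school][x]) # keep best applicants by priority
            if len(held[school]) > capacities[school]:
                unmatched.add(held[school].pop())            # reject the lowest-priority applicant
        return {s: set(students) for s, students in held.items()}

    # Illustrative toy instance (hypothetical names, not from the paper):
    match = deferred_acceptance(
        {"a": ["s1", "s2"], "b": ["s1", "s2"], "c": ["s1"]},
        {"s1": ["c", "a", "b"], "s2": ["a", "b", "c"]},
        {"s1": 1, "s2": 1},
    )
    print(match)   # {'s1': {'c'}, 's2': {'a'}}; student b stays unassigned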
We propose a new specification test to assess the validity of the judge leniency design. We characterize a set of sharp testable implications, which exploit all the relevant information in the observed data distribution to detect violations of the judge leniency design assumptions. The proposed...
Persistent link: https://www.econbiz.de/10014544734
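A minimal sketch of the judge leniency design itself, assuming a long-format dataset with hypothetical column names: leave-one-out judge leniency serves as the instrument and the just-identified IV (Wald) ratio gives the point estimate. This is the design whose validity the paper tests, not the paper's specification test.

    import numpy as np
    import pandas as pd

    def judge_iv_estimate(df):
        """df has columns 'judge', 'detained' (treatment D), and 'outcome' (Y);
        assumes every judge handles more than one case."""
        # Leave-one-out leniency: each case's instrument is its judge's mean
        # treatment rate computed excluding the case itself.
        g = df.groupby("judge")["detained"]
        total, count = g.transform("sum"), g.transform("count")
        z = (total - df["detained"]) / (count - 1)
        # Just-identified IV (Wald) estimate: cov(Z, Y) / cov(Z, D).
        return np.cov(z, df["outcome"])[0, 1] / np.cov(z, df["detained"])[0, 1]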
Panel or grouped data are often used to allow for unobserved individual heterogeneity in econometric models via fixed effects. In this paper, we discuss identification of a panel data model in which the unobserved heterogeneity both enters additively and interacts with treatment variables. We...
Persistent link: https://www.econbiz.de/10014322772
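One parameterization consistent with the description above, in which the unobserved effect \alpha_i enters additively and also scales the treatment effect (the paper's exact specification may differ):

    y_{it} = \alpha_i + (\delta + \lambda\,\alpha_i)\, d_{it} + x_{it}'\beta + \varepsilon_{it}

Under this form the individual-specific treatment effect is \delta + \lambda\,\alpha_i, so the same unobserved heterogeneity appears both as a level shift and as variation in the response to treatment.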
Models defined by moment inequalities have become a standard modeling framework for empirical economists, spreading over a wide range of fields within economics. From the point of view of an empirical researcher, the literature on inference in moment inequality models is large and complex,...
Persistent link: https://www.econbiz.de/10014247961
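A minimal sketch of one of the simplest procedures in this literature, a max-t test of H0: E[m_j(W, theta)] <= 0 for all j with a least-favorable multiplier-bootstrap critical value; the paper surveys many refinements beyond this baseline.

    import numpy as np

    def moment_inequality_test(m, alpha=0.05, draws=2000, seed=0):
        """m: n x J matrix of moment functions evaluated at a candidate theta.
        Tests H0: E[m_j] <= 0 for all j. Returns (statistic, critical value, reject)."""
        rng = np.random.default_rng(seed)
        n, J = m.shape
        m_bar = m.mean(axis=0)
        s = m.std(axis=0, ddof=1)
        stat = np.sqrt(n) * np.max(m_bar / s)
        # Least-favorable critical value: multiplier bootstrap of the recentered moments.
        xi = rng.standard_normal((draws, n))
        boot = np.sqrt(n) * (xi @ (m - m_bar)) / (n * s)     # draws x J bootstrap statistics
        crit = np.quantile(boot.max(axis=1), 1 - alpha)
        return stat, crit, stat > crit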
Econometric software packages typically report a fixed number of decimal digits for coefficient estimates and their associated standard errors. This practice misses the opportunity to use rounding rules that convey statistical precision. Using insights from the theory of testing statistical hypotheses...
Persistent link: https://www.econbiz.de/10014486216
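An illustrative rule of this kind (not necessarily the paper's): report the standard error to two significant digits and round the coefficient to the same decimal place, so the displayed precision tracks the statistical precision.

    import math

    def format_estimate(coef, se, sig_digits=2):
        """Round the standard error to `sig_digits` significant digits and the
        coefficient to the same decimal place (an illustrative convention)."""
        decimals = sig_digits - 1 - math.floor(math.log10(abs(se)))
        return round(coef, decimals), round(se, decimals)

    print(format_estimate(1.23456, 0.0478))   # (1.235, 0.048)
    print(format_estimate(152.7, 41.9))       # (153.0, 42.0)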
In a fuzzy regression discontinuity (RD) design, the probability of treatment jumps when a running variable (R) passes a threshold (R0). Fuzzy RD estimates are obtained via a procedure analogous to two-stage least squares (2SLS), where an indicator I(R ≥ R0) plays the role of the instrument....
Persistent link: https://www.econbiz.de/10015421923
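A minimal sketch of the fuzzy RD point estimate described above: within a bandwidth, fit the reduced form and the first stage by local linear regression and take the ratio of the two jumps at R0 (equivalently, 2SLS with the threshold indicator as instrument); bandwidth choice, kernel weighting, and inference are omitted.

    import numpy as np

    def fuzzy_rd_estimate(y, d, r, cutoff, bandwidth):
        """y: outcome, d: treatment, r: running variable (1-D arrays)."""
        keep = np.abs(r - cutoff) <= bandwidth
        y, d, r = y[keep], d[keep], r[keep]
        z = (r >= cutoff).astype(float)                          # threshold indicator, the "instrument"
        rc = r - cutoff
        X = np.column_stack([np.ones_like(rc), z, rc, z * rc])   # separate slopes on each side
        jump_y = np.linalg.lstsq(X, y, rcond=None)[0][1]         # reduced-form jump in Y at R0
        jump_d = np.linalg.lstsq(X, d, rcond=None)[0][1]         # first-stage jump in Pr(D=1) at R0
        return jump_y / jump_d                                   # fuzzy RD estimate (LATE at the cutoff)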
We propose a statistical model of differences in beliefs in which heterogeneous investors are represented as different machine learning model specifications. Each investor forms return forecasts from their own specific model using data inputs that are available to all investors. We measure...
Persistent link: https://www.econbiz.de/10014337816
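A minimal sketch of the modeling idea in the entry above, assuming "investors" are represented by two off-the-shelf scikit-learn specifications trained on the same commonly available inputs, with disagreement measured as the cross-sectional dispersion of their forecasts; the paper's specifications and disagreement measure may differ.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 10))                  # common data inputs (signals)
    y = 0.1 * X[:, 0] + 0.5 * rng.standard_normal(500)  # synthetic future returns

    # Each "investor" is a different model specification fit to the same data.
    investors = [Ridge(alpha=1.0), RandomForestRegressor(n_estimators=200, random_state=0)]
    forecasts = np.column_stack([m.fit(X, y).predict(X) for m in investors])

    # Disagreement: dispersion of return forecasts across investors for each observation.
    disagreement = forecasts.std(axis=1)
    print(disagreement.mean())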