Evaluating a targeted social program when placement is decentralized
An assessment of the welfare gains from a targeted social program can be seriously biased unless it takes proper account of the endogeneity of program participation. Bias comes from two sources of placement endogeneity: the purposive targeting of the geographic areas that receive the program, and the targeting of individual recipients within selected areas. Decentralization of program placement decisions is common, because centralized placement is administratively costly and local groups and governments are likely to be better informed about who most needs help. But full decentralization is uncommon; the center typically retains control of broad geographic targeting.

The authors argue that partial decentralization of program placement decisions creates control and instrumental variables useful for identifying program benefits. The central allocation to a local level of government is presumably based on observable indicators. That allocation will also influence the allocation to individuals, but it is unlikely to determine outcomes at the individual level conditional on individual program participation. So, with suitable controls for the welfare-relevant geographic characteristics determining program placement decisions, the center's allocation across areas can be used as an instrumental variable for individual participation.

The authors use Bangladesh's Food for Education program to illustrate their approach. A single post-intervention cross-sectional household survey was used to identify the impact of the program on school attendance, using geographic placement at the village level as an instrument for individual program participation. To deal with bias from the endogeneity of village selection, the authors used a detailed community survey, coordinated with the household survey, to control for likely sources of heterogeneity in geographic influences on school attendance, consistent with prior information on how the government targeted the program geographically.

They found that the program had a significant and sizable impact on school attendance: at mean points, the program's incentive increased attendance by 24 percent of the maximum feasible days of schooling. A regression estimator that ignored the purposive program placement substantially underestimated the program's impact. Indeed, the simplest possible control-group method, which assumes that nonparticipants provide a valid counterfactual, performed much better than a regression method treating placement as exogenous.
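The identification strategy described above amounts to a two-stage least squares estimator in which the center's allocation to a village instruments for an individual's program participation, with the village-level welfare indicators entering as exogenous controls. The following is a minimal sketch of that setup using the linearmodels package; all column names (attendance, participates, village_allocation, and the controls) are hypothetical placeholders, not the authors' actual data or specification.

```python
# A minimal 2SLS sketch of the identification idea, assuming a merged
# household-and-community data set; all column names are hypothetical.
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("household_and_community_survey.csv")  # hypothetical file

# Outcome: individual school attendance (e.g., share of feasible days attended).
dependent = df["attendance"]

# Exogenous controls: household characteristics plus the village-level welfare
# indicators thought to drive the center's geographic targeting.
exog = df[["hh_income", "parent_education",
           "village_poverty_rate", "village_school_access"]].assign(const=1.0)

# Endogenous regressor: individual participation in the program.
endog = df["participates"]

# Instrument: the center's allocation to the individual's village, which shifts
# individual participation but, conditional on the controls, should not
# directly determine individual attendance.
instruments = df["village_allocation"]

result = IV2SLS(dependent, exog, endog, instruments).fit(
    cov_type="clustered", clusters=df["village_id"]
)
print(result.summary)
```

Clustering the standard errors at the village level reflects that the instrument varies only across villages, not across individuals within a village.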