We examine the theory and the practical behavior of Bayesian and bootstrap methods for generating error bands on impulse responses in dynamic linear models. The Bayesian intervals have a firmer theoretical foundation in small samples, are easier to compute, and by classical criteria perform about as well in small samples as the best bootstrap intervals. Bootstrap intervals based directly on the simulated small-sample distribution of an estimator, without bias correction, perform very badly. We show that a method that has been used to extend standard algorithms for Bayesian intervals in reduced-form models to the overidentified case is incorrect, and we show how to obtain correct Bayesian intervals for that case.
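To make the reduced-form Bayesian procedure referred to above concrete, the sketch below illustrates the standard Monte Carlo algorithm for such intervals in a just-identified (recursive) reduced-form VAR: draw the error covariance from its inverse-Wishart posterior, draw the coefficients from their conditional normal posterior, compute orthogonalized impulse responses for each draw, and form pointwise percentile bands. This is a minimal illustrative sketch, not the paper's code; the flat prior, the simulated bivariate VAR(1) data, the 68% band level, and all variable names are assumptions made only for the example.

```python
# Illustrative sketch: Monte Carlo posterior bands for impulse responses of a
# reduced-form VAR under a flat prior (Normal-inverse-Wishart posterior).
# All data, names, and settings below are assumptions for demonstration.
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)

# --- simulate a small bivariate VAR(1) data set (illustrative only) ---
T, n, p = 200, 2, 1
A_true = np.array([[0.6, 0.1], [0.2, 0.5]])
Sigma_true = np.array([[1.0, 0.3], [0.3, 1.0]])
chol_true = np.linalg.cholesky(Sigma_true)
y = np.zeros((T, n))
for t in range(1, T):
    y[t] = y[t - 1] @ A_true.T + rng.standard_normal(n) @ chol_true.T

# --- OLS estimation of the reduced form: Y = X B + U ---
Y = y[p:]                                            # (T-p) x n
X = np.hstack([y[p - 1:-1], np.ones((T - p, 1))])    # lagged y plus constant
k = X.shape[1]
XtX_inv = np.linalg.inv(X.T @ X)
B_hat = XtX_inv @ X.T @ Y                            # k x n OLS coefficients
U = Y - X @ B_hat
S = U.T @ U                                          # residual cross-product matrix

def impulse_responses(A, Sigma, horizon):
    """Orthogonalized responses Theta_h = A^h @ chol(Sigma) for a VAR(1)."""
    P = np.linalg.cholesky(Sigma)
    out = np.empty((horizon + 1, n, n))
    Ah = np.eye(n)
    for h in range(horizon + 1):
        out[h] = Ah @ P
        Ah = A @ Ah
    return out

# --- posterior simulation: Sigma | data ~ IW, vec(B) | Sigma, data ~ Normal ---
horizon, n_draws = 12, 1000
draws = np.empty((n_draws, horizon + 1, n, n))
for d in range(n_draws):
    Sigma_d = invwishart.rvs(df=T - p - k, scale=S, random_state=rng)
    cov_B = np.kron(Sigma_d, XtX_inv)                # posterior covariance of vec(B)
    vecB = rng.multivariate_normal(B_hat.flatten(order="F"), cov_B)
    B_d = vecB.reshape(k, n, order="F")
    A_d = B_d[:n].T                                  # lag-coefficient matrix, n x n
    draws[d] = impulse_responses(A_d, Sigma_d, horizon)

# --- pointwise 68% posterior bands (an illustrative coverage choice) ---
lower, upper = np.percentile(draws, [16, 84], axis=0)
print("response of variable 1 to shock 1 at horizon 4:",
      lower[4, 0, 0], upper[4, 0, 0])
```

A parametric bootstrap of the kind discussed above would instead resimulate data from the OLS estimates, re-estimate on each artificial sample, and read bands off the resulting distribution of estimated responses; the abstract's point is that doing so without bias correction performs poorly relative to the posterior-simulation approach sketched here.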