To fulfill their responsibilities, governments rely on administrators and employees who, simply because they are human, are prone to individual and group decision-making errors. These errors have at times produced both major tragedies and minor inefficiencies. One potential strategy for overcoming cognitive limitations and group fallibilities is to invest in artificial intelligence (AI) tools that allow for the automation of governmental tasks, thereby reducing reliance on human decision-making. Yet as much as AI tools show promise for improving public administration, automation itself can fail or generate controversy. Public administrators thus face the question of when exactly they should use automation. This paper considers the justifications for governmental reliance on AI along with the legal concerns raised by such reliance. Comparing AI-driven automation with a status quo that relies on human decision-making, the paper provides public administrators with guidance for making decisions about AI use. After explaining why prevailing legal doctrines present no intrinsic obstacle to governmental use of AI, the paper presents considerations for administrators to weigh in choosing when and how to automate existing processes. It recommends that administrators ask whether their contemplated uses meet the preconditions for the deployment of AI tools and whether these tools are in fact likely to outperform the status quo. In moving forward, administrators should also consider the possibility that a contemplated AI use will generate public or legal controversy, and plan accordingly. The promise and legality of automated administration ultimately depend on making responsible decisions about when and how to deploy this technology.