The study of causality spans numerous research areas and applications, with celebrated successes in a range of fields. Yet, despite decades of progress, fundamental challenges remain for answering the most pressing questions in policy and practice. Much can go wrong in efforts to use the latest administrative data sets to determine “what works,” where simple associations are often mistaken for cause and effect. And even when carefully controlled experiments can be used to learn about causal relationships, estimating what works on average can miss important variation in effects. Causal inference researchers are at the forefront of developing the tools needed to inform research and practice across a range of settings.
Even when researchers develop these tools, however, meaningful barriers stand in the way of widespread adoption of best practices. First, new methods tend to be developed in silos, targeted to a narrow group of empirical researchers and often created by methodologists who are not directly involved in data analysis. Second, new methods may not be made available in user-friendly software or explained in ways that are clear and compelling to applied researchers.