Summary
In this repository you’ll find data for the ACIC 2023 competition. This README will provide all the information you need.
With this competition, we aim to search for the most appropriate methods for forecasting counterfactuals. Traditional forecasting methods, built to minimise observed error, are not designed for counterfactuals. Traditional counterfactual estimators, built for unbiased estimates of causal estimands, are not designed for forecasts. In this competition, we can collectively explore the trade-off between these methods and advance our understanding of both forecasting and counterfactual estimation.
We welcome your participation and contributions to this exciting competition and look forward to advancing the field together.
In short:
- We want to understand the trade-off between forecasting and counterfactual estimation
- The task is to make 30 predictions per unit: five future time steps, each under six different treatments. The loss is MSE over all units x 30 predictions.
- The (simulated) data is here, in this repository
- Competition runs March and April 2023. Results released at ACIC 2023
- Submissions here
- Enjoy 🙂
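To make the loss concrete, here is a minimal sketch of the MSE described above. The array shape `(n_units, 5, 6)` and the function name are assumptions for illustration; the actual submission format is defined by the competition materials.

```python
import numpy as np

def competition_loss(pred, truth):
    """MSE over all units x 30 predictions (5 steps x 6 treatments).

    pred, truth: arrays of shape (n_units, 5, 6). This layout is an
    assumption for illustration, not the official submission format.
    """
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    assert pred.shape == truth.shape and pred.shape[1:] == (5, 6)
    return np.mean((pred - truth) ** 2)

# Example: 100 units, 5 forecast steps, 6 treatments
rng = np.random.default_rng(0)
truth = rng.normal(size=(100, 5, 6))
pred = truth + 1.0  # a constant error of 1 everywhere
print(competition_loss(pred, truth))  # 1.0
```

Note that the mean is taken over every entry, so each of the 30 predictions per unit contributes equally, whether or not that treatment was the one actually observed.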
Why we made this competition
This competition addresses a critical problem in industry. Decision-makers, often algorithmic, rely on forecasts of outcomes under different interventions to inform their choices. For example, a retailer setting prices wants forecasts of outcomes under each candidate price.
The current widely used approach is supervised learning, as seen in the M5 competition. A benefit of this approach is its scalability. However, supervised learning falls short because it fits predictions only to outcomes observed under the intervention actually taken. The loss function never measures error for interventions not taken.
Another approach uses observational causal inference techniques, as seen in prior ACIC competitions. This method measures the difference in outcomes between interventions but does not focus on the levels of future outcomes, which are crucial for optimisation tasks.
Reinforcement learning offers a theoretical solution to this problem, but its practical implementation is hindered by large, complex state spaces and legacy systems. Incremental improvements to legacy systems can provide a way forward, but leave us with the challenge of forecasting counterfactuals.
To address this challenge, participants in the ACIC23 competition will be asked to forecast both observed and counterfactual outcomes. The competition will provide a deeper understanding of the relationship between supervised learning and counterfactual estimation and help improve decision-making processes.
Dates
- March 1, 2023. Competition opens
- April 30, 2023. Competition closes
- May 24–26, 2023. Results released at ACIC23
Contact
Please use this form, or open an issue in this repository.