Primary Submission Category: Causal Discovery

Identifying Changes in Causal Relationships for Time Series Data

Authors: Stephen Savas

Presenting Author: Stephen Savas*

Existing causal discovery algorithms can learn causal graphs from time-series data, and several have recently been extended to handle changes in causal relationships; for example, a drug may lose its effectiveness as a patient develops a tolerance. In practice, however, these models are often impractical: they make strong assumptions about the underlying data (such as a single change point or periods of stationarity), or they require substantial computing resources. To adjust for a changing causal graph, it is important to detect when an existing graph no longer fits the data, at which point users can retrain on recent data that reflects the changed ground truth. By retesting an existing graph rather than assuming it will change, users can apply whichever causal discovery algorithm best matches their own assumptions and time-complexity needs, and retrain only when necessary to produce an updated graph. This paper proposes a novel method for detecting when a causal graph no longer accurately represents the data. The approach applies to causal discovery methods that estimate causal effects, and works by analyzing the observed noise of a time series relative to the model's predictions; when the noise level becomes abnormal, the model is likely no longer valid. Such a test makes causal discovery more practical to apply than existing methods allow.
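To make the idea concrete, the kind of residual-noise monitoring described above can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: the function name, the variance-based test, the window sizes, and the threshold are all illustrative assumptions. It compares the rolling variance of a model's residuals against a baseline period and flags the first point where the noise looks abnormal, which is the signal that the existing causal graph may need retraining.

```python
import numpy as np

def detect_noise_shift(residuals, baseline_len=100, window=30, z_thresh=3.0):
    """Illustrative sketch: flag when model residual noise becomes abnormal.

    residuals    -- 1-D array of (observed - model prediction) values over time
    baseline_len -- initial period assumed to reflect a valid causal graph
    window       -- rolling window length for the variance estimate
    z_thresh     -- how many standard errors above baseline counts as abnormal
    """
    residuals = np.asarray(residuals, dtype=float)
    base_var = residuals[:baseline_len].var()
    # Approximate standard error of a sample variance under normality:
    # Var(s^2) ~= 2 * sigma^4 / (n - 1)
    se = np.sqrt(2.0 * base_var**2 / (window - 1))
    for t in range(baseline_len, len(residuals) - window + 1):
        win_var = residuals[t:t + window].var()
        if (win_var - base_var) / se > z_thresh:
            return t  # noise abnormal from here; graph likely stale
    return None  # no abnormal noise detected
```

In this toy setup, a detection simply marks the point from which recent data should be used to rerun the user's preferred causal discovery algorithm; the original graph is kept as long as no shift is flagged.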