Primary Submission Category: LLM and Causality

A New Frontier at the Intersection of Causality and LLMs

Authors: Emre Kiciman, Robert Ness, Amit Sharma, Chenhao Tan

Presenting Author: Emre Kiciman*

Correct causal reasoning requires domain knowledge beyond observed data. Consequently, correctly framing and answering cause-and-effect questions in medicine, science, law, and engineering begins with working closely with domain experts to capture their human understanding of system dynamics and mechanisms. This practice is labor-intensive, limited by expert availability, and a significant bottleneck to the widespread application of causal methods.

In this talk, we will delve into the causal capabilities of large language models (LLMs), discussing recent studies and benchmarks of their ability to retrieve and apply causal knowledge, as well as the limitations of their causal reasoning. Most notably, LLMs present the first instance of general-purpose assistance for constructing causal arguments, including generating causal graphical models and identifying contextual information from natural language. This promises to reduce the human effort and error in end-to-end causal inference and reasoning, broadening the practical usage of causal methods. Ultimately, by capturing common sense and domain knowledge, we believe LLMs are a catalyst for a new frontier: facilitating translation between real-world scenarios, the causal questions they raise, and the formal, data-driven methods that answer them.
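As an illustration of the workflow the abstract describes, the sketch below shows one way LLM-elicited pairwise causal judgments might be assembled into a causal graph. This is a hypothetical example, not the authors' method: the `llm_judgments` dict stands in for answers an LLM might return to prompts such as "Does A cause B? Answer yes/no," and the variable names are invented for illustration.

```python
# Hypothetical sketch: turning LLM-elicited pairwise causal judgments into a
# causal graph, then checking that the result is a DAG. The judgments dict is
# a stand-in for real LLM responses.
from collections import defaultdict

llm_judgments = {
    ("smoking", "lung cancer"): "yes",
    ("lung cancer", "smoking"): "no",
    ("asbestos exposure", "lung cancer"): "yes",
    ("smoking", "asbestos exposure"): "no",
}

def build_causal_graph(judgments):
    """Keep only the directed edges the (hypothetical) LLM affirmed."""
    graph = defaultdict(set)
    for (cause, effect), answer in judgments.items():
        if answer.strip().lower() == "yes":
            graph[cause].add(effect)
    return dict(graph)

def is_acyclic(graph):
    """Depth-first cycle check: a usable causal graphical model must be a DAG."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)

    def visit(node):
        color[node] = GRAY  # on the current DFS path
        for nxt in graph.get(node, ()):
            if color[nxt] == GRAY:      # back edge: cycle found
                return False
            if color[nxt] == WHITE and not visit(nxt):
                return False
        color[node] = BLACK  # fully explored
        return True

    return all(visit(n) for n in list(graph) if color[n] == WHITE)

graph = build_causal_graph(llm_judgments)
print(graph)
print(is_acyclic(graph))  # True for this example
```

In practice the elicited edges would still be reviewed by a domain expert; the point of the talk is that the LLM reduces, rather than replaces, that expert effort.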