Introduction to causal inference, counterfactual frameworks and intuition

We begin by motivating the use of causal inference methods and introducing, at a conceptual level, the foundations of causal reasoning: counterfactual frameworks, causal graphs, and potential outcomes methods. Using these concepts, we show how the simple and familiar randomized experiment addresses the challenges of causal inference.

Patterns and predictions are not enough

We discuss how machine learning methods today focus on correlational analyses and prediction, and why this is insufficient when we need to understand causal mechanisms and design interventions. We give examples where such correlational and predictive analyses can fail, showing that these failures are instances of a phenomenon called Simpson's Paradox.
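As a concrete sketch, the snippet below reproduces the classic pattern behind Simpson's Paradox with illustrative numbers (patterned after the well-known kidney-stone example, not taken from the tutorial itself): a treatment that looks worse in aggregate is better within every subgroup, because treatment assignment is entangled with case severity.

```python
# Illustrative numbers showing Simpson's Paradox: treatment A wins within
# every severity subgroup, yet loses in aggregate, because A was given
# mostly to severe cases and B mostly to mild ones.
import pandas as pd

df = pd.DataFrame({
    "severity":  ["mild", "mild", "severe", "severe"],
    "treatment": ["A", "B", "A", "B"],
    "recovered": [81, 234, 192, 55],
    "total":     [87, 270, 263, 80],
})
df["rate"] = df["recovered"] / df["total"]

# Within each severity group, treatment A has the higher recovery rate...
print(df.pivot(index="severity", columns="treatment", values="rate"))

# ...but aggregated over groups, A appears worse than B.
agg = df.groupby("treatment")[["recovered", "total"]].sum()
print(agg["recovered"] / agg["total"])
```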

We discuss how correlational analyses are insufficient for answering "what if?" and "why?" questions, and how these questions are critical for many of the tasks that social computing and computational social science care about: from estimating the impacts of changes to online social feeds and recommender systems, to understanding societally critical domains such as healthcare, education, and governance.

Counterfactual framework for reasoning about causality

We continue our introduction by presenting the counterfactual framework. Intuitively, the counterfactual framework measures causal effects by comparing outcomes in two almost-identical worlds---imagine two parallel universes, identical in every way up until the point where some ``treatment'' occurs in one world but not the other. Any subsequent difference between the two worlds is, logically, a consequence of this treatment.
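In standard potential-outcomes notation (one common way to formalize this intuition; the symbols below are our choice, not the tutorial's), the two worlds correspond to a unit's two potential outcomes, and the causal effect is their difference:

```latex
% Y_i(1): unit i's outcome in the world where it receives the treatment.
% Y_i(0): its outcome in the otherwise-identical world where it does not.
\[
  \tau_i = Y_i(1) - Y_i(0)
  \qquad \text{(individual-level causal effect)}
\]
\[
  \mathrm{ATE} = \mathbb{E}\bigl[\, Y_i(1) - Y_i(0) \,\bigr]
  \qquad \text{(average treatment effect)}
\]
% Fundamental problem of causal inference: for any given unit we observe
% only one of Y_i(1), Y_i(0); the other is the counterfactual.
```

Only one of the two potential outcomes is ever observed for a given unit; the other is the counterfactual, which is why causal effects cannot simply be read off observed data.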

Brief introduction to causal graphs and potential outcomes

Building upon the counterfactual framework, we introduce causal graphs, which are a tool for formalizing implicit assumptions about causal mechanisms (e.g., encoding domain knowledge about causal mechanisms into an analysis); and potential outcomes methods, which are statistical tools for estimating causal effects.
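To make the connection concrete, here is a minimal simulation sketch (with a hypothetical data-generating process, not one from the tutorial) for the simple confounded graph Z → T, Z → Y, T → Y: the naive comparison of treated and untreated outcomes is biased, while adjusting for the confounder recovers the effect.

```python
# Minimal sketch tying a causal graph to a potential-outcomes estimate.
# Assumed (hypothetical) graph:  Z -> T,  Z -> Y,  T -> Y.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.binomial(1, 0.5, n)                    # confounder
t = rng.binomial(1, 0.2 + 0.6 * z, n)          # treatment, more likely when z = 1
y = 1.0 * t + 2.0 * z + rng.normal(0, 1, n)    # outcome; true effect of t is 1.0

# Naive comparison ignores the back-door path T <- Z -> Y and is biased upward.
naive = y[t == 1].mean() - y[t == 0].mean()

# Back-door adjustment: estimate the effect within each stratum of Z,
# then average over the distribution of Z.
adjusted = sum(
    (y[(t == 1) & (z == v)].mean() - y[(t == 0) & (z == v)].mean()) * (z == v).mean()
    for v in (0, 1)
)

print(f"naive:    {naive:.2f}")     # noticeably larger than 1.0
print(f"adjusted: {adjusted:.2f}")  # close to the true effect of 1.0
```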

Intuition: Counterfactual frameworks measure causal effects by comparing outcomes in two similar worlds, identical in every way up until the point where a “treatment” is experienced in one but not the other.

Randomized experiments: The gold standard for causal inference

We close our introduction by presenting the randomized experiment as the simplest method for causal inference. We describe the randomized experiment in the language of the counterfactual framework, providing a causal graph and associated potential outcomes formulation, and show how this conceptually clean and simple method addresses the challenges of causal inference.
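Continuing the hypothetical simulation above, the sketch below shows why randomization works: assigning treatment by a coin flip severs the arrow from the confounder into the treatment, so the simple difference in means recovers the causal effect.

```python
# Minimal sketch: with randomized treatment assignment, the difference in
# means is an unbiased estimate of the average treatment effect, even though
# the same background covariate Z still affects the outcome.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

z = rng.binomial(1, 0.5, n)                    # same background covariate as before
t = rng.binomial(1, 0.5, n)                    # treatment assigned by a coin flip
y = 1.0 * t + 2.0 * z + rng.normal(0, 1, n)    # same outcome model; true effect 1.0

# Because T is independent of Z, the two arms are comparable "parallel worlds"
# on average, and the simple difference in means recovers the causal effect.
diff_in_means = y[t == 1].mean() - y[t == 0].mean()
print(f"difference in means: {diff_in_means:.2f}")  # close to 1.0
```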

Still, randomized experiments are sometimes too costly, unethical, or otherwise infeasible. True randomized experiments can also be difficult to design, for example when a study group contains very different individuals, or when experimenters must resort to indirect manipulations (e.g., encouragement designs). How do we answer causal questions in such settings? Methods for doing so are the focus of the remainder of the tutorial.
