Introduction
In many applied contexts we are interested in the behavior of the estimator and the interpretation of the estimate. Our starting point for this discussion is a probability space. Well, actually several of them, but they can be understood as transformations of the original one, so let's start there.
- Unobserved
  \[\Big( \mathcal{Y} \times \mathcal{Y} \times \mathcal{X} \times \mathcal{D}, \ \mathcal{F}, \ \mathbb{P}_0\Big)\]
- Observed
  \[\Big( \mathcal{Y} \times \mathcal{X} \times \mathcal{D}, \ \mathcal{F}, \ \mathbb{P}\Big)\]
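One way to make the "transformations" remark from the introduction concrete (a sketch; the map below is my reading, not something spelled out here) is to view the observed space as the image of the unobserved space under the coordinate map that reveals only the realized outcome, with \(\mathbb{P}\) the pushforward of \(\mathbb{P}_0\):

\[\begin{align*} &T :: \mathcal{Y} \times \mathcal{Y} \times \mathcal{X} \times \mathcal{D} \to \mathcal{Y} \times \mathcal{X} \times \mathcal{D} \\ &T \ (y_0 \ y_1 \ x \ d) = \big(d y_1 + (1-d) y_0, \ x, \ d\big) \\ &\mathbb{P}(A) = \mathbb{P}_0\big(T^{-1}(A)\big) \quad \text{for } A \in \mathcal{F} \end{align*}\]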
Taking the unobserved space as our \(\Omega\), we can then define the random variables of interest as follows:
- \(Y_{i0}\)
  \[\begin{align*} &Y_{i0}:: \Omega \to \mathbb{R} \\ &Y_{i0} \ (y_0 \ \_ \ \_ \ \_) = y_0 \end{align*}\]
- \(Y_{i1}\)
  \[\begin{align*} &Y_{i1}:: \Omega \to \mathbb{R} \\ &Y_{i1} \ (\_ \ y_1 \ \_ \ \_) = y_1 \end{align*}\]
- \(D_i\)
  \[\begin{align*} &D_{i}:: \Omega \to \mathbb{R} \\ &D_{i} \ (\_ \ \_ \ \_ \ d) = d \end{align*}\]
- \(Y_i\)
  \[\begin{align*} &Y_{i}:: \Omega \to \mathbb{R} \\ &Y_{i} \ (y_0 \ y_1 \ \_ \ d) = d y_1 + (1-d) y_0 \end{align*}\]
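Since the definitions above are already written in a pattern-matching style, here is a minimal runnable Haskell sketch of the same objects (all names, and the choice to code everything as a Double, are mine):

```haskell
-- A point of the unobserved sample space: (y0, y1, x, d).
-- Everything is coded as a Double for simplicity; d is assumed to be 0 or 1.
type Omega = (Double, Double, Double, Double)

-- Potential outcomes: projections onto the first two coordinates.
y_i0, y_i1 :: Omega -> Double
y_i0 (y0, _, _, _) = y0
y_i1 (_, y1, _, _) = y1

-- Treatment indicator: projection onto the last coordinate.
d_i :: Omega -> Double
d_i (_, _, _, d) = d

-- Observed outcome: the switching equation d*y1 + (1-d)*y0.
y_i :: Omega -> Double
y_i (y0, y1, _, d) = d * y1 + (1 - d) * y0

main :: IO ()
main = do
  let omega = (1.0, 3.0, 0.5, 1.0)  -- a treated unit: d = 1
  print (y_i omega)                 -- 3.0, i.e. the treated potential outcome
```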
We can say that treatment is independent of the potential outcomes if the corresponding \(\sigma\)-algebras are independent. More intuitively, this is equivalent to the joint distribution factoring into the product of the marginals (as shown here).
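Spelling this out (a sketch, assuming the \(\sigma\)-algebras in question are \(\sigma(D_i)\) and \(\sigma(Y_{i0}, Y_{i1})\)): independence requires, for all Borel sets \(A \subseteq \mathbb{R}\) and \(B \subseteq \mathbb{R}^2\),

\[\mathbb{P}_0\big(D_i \in A, \ (Y_{i0}, Y_{i1}) \in B\big) = \mathbb{P}_0\big(D_i \in A\big)\,\mathbb{P}_0\big((Y_{i0}, Y_{i1}) \in B\big),\]

i.e. the joint distribution of treatment and the potential outcomes factors into the product of its marginals, the usual \(D_i \perp (Y_{i0}, Y_{i1})\).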
Working Across Probability Spaces
Independence
You will often hear that treatment is independent of the potential outcomes. While you probably have an intuitive sense of what this means, it can be helpful to define it formally. To start, recall the two probability spaces defined above; independence is a statement about \(\mathbb{P}_0\) on the unobserved space.
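To make this concrete, here is a toy, finite check in Haskell (entirely illustrative: the distribution, the decision to drop \(x\), and all names are mine), where \(\mathbb{P}_0\) is built as a product measure, so the factorization holds by construction:

```haskell
-- A finite toy version of the unobserved space: each entry is
-- ((y0, y1, d), probability).  x is dropped to keep the table small.
p0 :: [((Double, Double, Double), Double)]
p0 = [ ((y0, y1, d), py * pd)
     | ((y0, y1), py) <- [((1, 3), 0.5), ((2, 2), 0.5)]  -- marginal of (Y0, Y1)
     , (d, pd)        <- [(0, 0.4), (1, 0.6)]            -- marginal of D
     ]

-- Probability of an event, i.e. a predicate on sample points.
prob :: ((Double, Double, Double) -> Bool) -> Double
prob event = sum [p | (w, p) <- p0, event w]

-- Check the factorization P(D = 1, Y1 > 2) = P(D = 1) * P(Y1 > 2).
main :: IO ()
main = do
  let lhs = prob (\(_, y1, d) -> d == 1 && y1 > 2)
      rhs = prob (\(_, _, d) -> d == 1) * prob (\(_, y1, _) -> y1 > 2)
  print (lhs, rhs)  -- both ~0.3, because p0 was built as a product measure
```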
Expectations
Many terms/properties can be understood as working across multiple probability spaces.
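For example (a sketch under the independence condition above, and assuming both treatment values occur with positive probability), a conditional expectation computed on the observed space can be rewritten as an expectation over the unobserved one:

\[\begin{align*} \mathbb{E}[Y_i \mid D_i = 1] &= \mathbb{E}[D_i Y_{i1} + (1-D_i) Y_{i0} \mid D_i = 1] \\ &= \mathbb{E}[Y_{i1} \mid D_i = 1] \\ &= \mathbb{E}[Y_{i1}], \end{align*}\]

where the last step uses independence. The analogous argument gives \(\mathbb{E}[Y_i \mid D_i = 0] = \mathbb{E}[Y_{i0}]\), so the difference in observed conditional means equals \(\mathbb{E}[Y_{i1}] - \mathbb{E}[Y_{i0}]\).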
To Do
Is \(f(A)\) a measurable set?