Monotone Class Theorem for Sets

Unless otherwise specified, all classes of sets in this article refer to classes of subsets of $\Omega$.

The Monotone Class Theorem

In many situations, we hope to prove that every element in some $\sigma$-algebra $\mathcal{F}$ satisfies a certain property $P$, but it is often difficult to write down the explicit form of every element in this $\sigma$-algebra $\mathcal{F}$ (for example, the Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R})$ on $\mathbb{R}$, product $\sigma$-algebras, etc.). However, we can often clearly write down what kind of simple class of sets generates them, for example:

$$
\mathcal{B}(\mathbb{R}) = \sigma\left( \mathcal{C}\right),
$$

where

$$
\mathcal{C} = \left\lbrace (a, b]\ ;\ a\le b \right\rbrace
$$

is a semiring. Another example is the product $\sigma$-algebra corresponding to $\sigma$-algebras $\mathcal{F}_1, \ \mathcal{F}_2$:

$$
\mathcal{F}_1\times \mathcal{F}_2 = \sigma(J_0),
$$

where

$$
J_0 = \left\lbrace A_1\times A_2\ ;\ A_1\in \mathcal{F}_1, \ A_2\in \mathcal{F}_2 \right\rbrace
$$

is a semialgebra. In general, it is easy to prove that every element of $\mathcal{C}$ or $J_0$ satisfies the property $P$. How, then, can we extend such a conclusion to the $\sigma$-algebras they generate, namely $\mathcal{B}(\mathbb{R})$ or $\mathcal{F}_1\times \mathcal{F}_2$? This is where the good set principle and the monotone class theorem come in. In later study, we will gradually see that they play a central role in advanced probability theory: they carry us from the simple to the complex, from the special to the general, from the finite to the infinite, and from the discrete to the continuous. It is fair to say that the monotone class theorem has been, is, and will remain extremely important!

The Good Set Principle

We hope to prove that some $\sigma$-algebra $\mathcal{F} = \sigma(\mathcal{C})$ satisfies a certain property $P$. The basic idea of the good set principle is to define such a “good set”

$$
\mathcal G = \{A\in \mathcal{F}\ ;\ A \text{ satisfies property } P\},
$$

and clearly $\mathcal G\subset \mathcal{F}$. If we can prove that $\mathcal G \supset \mathcal{F}$, then we have proved that $\mathcal G = \mathcal{F}$, that is, every element in $\mathcal{F}$ satisfies the property $P$.

Then how should we prove that $\mathcal G \supset \mathcal{F}$? A natural idea is that if we can prove:

  1. $\mathcal G\supset \mathcal{C}$,
  2. $\mathcal G$ is a $\sigma$-algebra,

then we have

$$
\mathcal G = \sigma(\mathcal G)\supset \sigma(\mathcal{C}) = \mathcal{F}.
$$

Unfortunately, except in some extremely simple situations, we generally cannot show that $\mathcal G$ is a $\sigma$-algebra. Usually, we can only prove that it is a $\lambda$-class. This is where the monotone class theorem comes in.

The Set Version of the Monotone Class Theorem

The monotone class theorem comes in two versions: a set version and a function version. We first introduce the set version, which includes Dynkin’s $\pi-\lambda$ theorem and Halmos’s monotone class theorem. Of the two, the $\pi-\lambda$ theorem is the more commonly used, so this article will focus on it.

Then what does the set version of the monotone class theorem say? Dynkin’s $\pi-\lambda$ theorem mainly tells us that when a class of sets is a $\pi$-class, the $\lambda$-class generated by it is the same as the $\sigma$-algebra generated by it. Then what is the use of the set version of the monotone class theorem? If the good set $\mathcal G$ defined before is a $\lambda$-class, and $\mathcal{C}$ is a $\pi$-class, then we have

$$
\mathcal G = \lambda(\mathcal G) \supset \lambda(\mathcal{C}) = \sigma(\mathcal{C}) = \mathcal{F},
$$

and thus we obtain the result we want. Moreover, as later examples will show, the conditions that the good set $\mathcal G$ is a $\lambda$-class and that $\mathcal{C}$ is a $\pi$-class are usually easy to verify.

Theorem 1 ($\pi-\lambda$ theorem) If a class of sets $\mathcal{C}$ is a $\pi$-class, then $\lambda(\mathcal{C}) = \sigma(\mathcal{C})$.

Proof: First, $\sigma(\mathcal{C})$ is a $\sigma$-algebra, hence in particular a $\lambda$-class containing $\mathcal{C}$. Since $\lambda(\mathcal{C})$ is the smallest $\lambda$-class containing $\mathcal{C}$, we get $\lambda(\mathcal{C}) \subset\sigma(\mathcal{C})$. Thus we only need to prove that $\lambda(\mathcal{C}) \supset\sigma(\mathcal{C})$. Similarly, since $\sigma(\mathcal{C})$ is the smallest $\sigma$-algebra containing $\mathcal{C}$, it suffices to prove that $\lambda(\mathcal{C})$ is a $\sigma$-algebra. Since a class of sets that is both a $\pi$-class and a $\lambda$-class is a $\sigma$-algebra, it in turn suffices to prove that $\lambda(\mathcal{C})$ is a $\pi$-class, namely,

$$
A, B\in \lambda(\mathcal{C}) \Longrightarrow A\cap B\in \lambda(\mathcal{C}).
$$

Below we use the good set principle to prove the above conclusion. Let

$$
\mathcal G = \{A\in \lambda(\mathcal{C})\ ;\ \forall B\in \lambda(\mathcal{C}), \ A\cap B\in \lambda(\mathcal{C})\} \subset \lambda(\mathcal{C}).
$$

Then we still need to prove that $\mathcal G$ is a $\lambda$-class and $\mathcal G\supset \mathcal{C}$.

We first prove that $\mathcal G$ is a $\lambda$-class. Indeed,

(1) $\Omega\in \mathcal G$ is obvious.

(2) If $A, B\in \mathcal G$ and $A\subset B$, then we need to prove that $B\backslash A \in \mathcal G$. Since $A, B\in \mathcal G$, we know that $A, B\in \lambda(\mathcal{C})$ and for every $D\in \lambda(\mathcal{C})$, we have $A\cap D\in \lambda(\mathcal{C})$ and $B\cap D\in \lambda(\mathcal{C})$. Since $A\subset B$, we have $A\cap D\subset B\cap D$. Since $\lambda(\mathcal{C})$ is a $\lambda$-class, it follows that

$$
B\backslash A\in \lambda(\mathcal{C}), \quad (B\backslash A)\cap D = (B\cap D)\backslash (A\cap D)\in \lambda(\mathcal{C}),
$$

hence $B\backslash A\in \mathcal G$.

(3) If $A_n\in \mathcal G$ and $A_n\uparrow A$, we need to prove that $A\in \mathcal G$. Since $A_n\in \mathcal G$, we know that $A_n\in \lambda(\mathcal{C})$ and for every $D\in \lambda(\mathcal{C})$, we have $A_n\cap D\in \lambda(\mathcal{C})$. Since $\lambda(\mathcal{C})$ is a $\lambda$-class, it follows that

$$
A\in \lambda(\mathcal{C}), \quad A_n\cap D\uparrow A\cap D\in \lambda(\mathcal{C}).
$$

Therefore, $A\in \mathcal G$.

Hence $\mathcal G$ is indeed a $\lambda$-class. Next we still need to prove that $\mathcal G\supset \mathcal{C}$, namely, for every $B\in \mathcal{C}$, we have $B\in \lambda(\mathcal{C})$ and for every $A\in \lambda(\mathcal{C})$, we have $A\cap B\in \lambda(\mathcal{C})$. Clearly, for every $B\in \mathcal{C}$, we have $B\in \lambda(\mathcal{C})$. Let

$$
\mathcal G_2 = \{ A\in \lambda(\mathcal{C})\ ;\ \forall B\in \mathcal{C}, \ A\cap B\in \lambda(\mathcal{C})\} \subset \lambda(\mathcal{C}).
$$

Similarly, it is easy to verify that $\mathcal G_2$ is also a $\lambda$-class, and since $\mathcal{C}$ is a $\pi$-class, we know that $\mathcal G_2\supset \mathcal{C}$. Hence

$$
\lambda(\mathcal{C})\supset \mathcal G_2 = \lambda(\mathcal G_2)\supset \lambda(\mathcal{C}).
$$

Therefore, $\mathcal G_2 = \lambda(\mathcal{C})$. That is, for every $A\in \lambda(\mathcal{C})$ and every $B\in \mathcal{C}$, we have $A\cap B\in \lambda(\mathcal{C})$. Therefore, $\mathcal G\supset \mathcal{C}$.

Combining the above, we have $\mathcal G = \lambda(\mathcal{C})$, and the conclusion follows.
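The theorem is easy to experiment with on a finite sample space, where both generated classes can be computed by brute force. Below is a small Python sketch (the helper names `close_lambda` and `close_sigma` are my own, not from the text); it exploits the fact that on a finite $\Omega$ increasing unions stabilize, so the generated $\lambda$-class is simply the closure under containing $\Omega$ and taking proper differences.

```python
def close_lambda(omega, gen):
    """Smallest lambda-class on a finite omega containing gen.
    It must contain omega and be closed under proper differences;
    on a finite space increasing unions stabilize, so they add nothing."""
    fam = {frozenset(omega)} | {frozenset(s) for s in gen}
    changed = True
    while changed:
        changed = False
        for a in list(fam):
            for b in list(fam):
                if a <= b and (b - a) not in fam:
                    fam.add(b - a)
                    changed = True
    return fam

def close_sigma(omega, gen):
    """Smallest sigma-algebra on a finite omega containing gen:
    close under complements and (finite = countable here) unions."""
    om = frozenset(omega)
    fam = {om, frozenset()} | {frozenset(s) for s in gen}
    changed = True
    while changed:
        changed = False
        for a in list(fam):
            if (om - a) not in fam:
                fam.add(om - a)
                changed = True
            for b in list(fam):
                if (a | b) not in fam:
                    fam.add(a | b)
                    changed = True
    return fam

omega = {1, 2, 3, 4}
pi_class = [{1}, {1, 2}, {1, 2, 3}]   # closed under intersection
assert close_lambda(omega, pi_class) == close_sigma(omega, pi_class)

not_pi = [{1, 2}, {2, 3}]             # {1,2} ∩ {2,3} = {2} is missing
assert len(close_lambda(omega, not_pi)) == 6    # strictly smaller than...
assert len(close_sigma(omega, not_pi)) == 16    # ...the full power set
```

For the $\pi$-class the two closures coincide (both give the full power set here), while for the non-$\pi$-class the $\lambda$-class has only 6 sets against the 16 of the $\sigma$-algebra: exactly the gap that the $\pi$-class hypothesis closes.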

Similarly, we may prove Halmos’s monotone class theorem.

Theorem 2 (Halmos’s monotone class theorem) If a class of sets $\mathcal{C}$ is an algebra, then $m(\mathcal{C}) = \sigma(\mathcal{C})$, where $m(\mathcal{C})$ denotes the monotone class generated by $\mathcal{C}$.

Proof: Left as an exercise.

Examples of Applications of the Set Version of the Monotone Class Theorem

The set version of the monotone class theorem is an important foundation of advanced probability theory. Its importance is no less than the role of the definition of the limit of a sequence in mathematical analysis (although this may be a slight exaggeration). It is an important guarantee for making probability theory rigorous. In this section, we will use several examples to experience the importance of the monotone class theorem, and in later study we will continue to appreciate this point.

Application 1: Uniqueness of Measure Extension

Theorem 3 (Uniqueness of measure extension) If $\mathcal{C}$ is a $\pi$-class, and $\mathbb{P}, \ \mathbb{Q}$ are probability measures on $\sigma(\mathcal{C})$, and their restrictions to $\mathcal{C}$ are equal, namely, $\forall A\in \mathcal{C}$ we have $\mathbb{P}(A) = \mathbb{Q}(A)$, then $\mathbb{P}\equiv \mathbb{Q}$, namely, $\forall A\in \sigma(\mathcal{C})$, we have $\mathbb{P}(A) = \mathbb{Q}(A)$.

Proof: Let

$$
\mathcal G = \{A \in \sigma(\mathcal{C}) \ |\ \mathbb{P}(A) = \mathbb{Q}(A) \} \subset \sigma(\mathcal{C}).
$$

Then, using the finite subtractivity and continuity from below of probability measures, we can prove that $\mathcal G$ is a $\lambda$-class. Hence, by the $\pi-\lambda$ theorem and the fact that $\mathcal{C}$ is a $\pi$-class, we have

$$
\mathcal G = \lambda(\mathcal G) \supset \lambda(\mathcal{C}) = \sigma(\mathcal{C}).
$$

Therefore, $\mathcal G = \sigma(\mathcal{C})$. The conclusion follows.

Remark: In the above example, we can only prove that $\mathcal G$ is a $\lambda$-class, rather than a $\sigma$-algebra, because the finite subtractivity of probability measures only holds for proper differences. From this one can also see the huge role of the monotone class theorem. In addition, this theorem is an important foundation for the uniqueness part of the Carathéodory extension theorem in the future. With uniqueness of extension, we can guarantee that some common measures defined by extension (such as the Lebesgue measure, the Lebesgue-Stieltjes measure, and measures on product spaces) are well-defined.
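To see concretely why the $\pi$-class hypothesis cannot be dropped, here is a small Python check on a four-point space (the two measures are illustrative numbers of my own choosing): two probability measures can agree on a generating class that is not a $\pi$-class and still differ on the generated $\sigma$-algebra.

```python
from fractions import Fraction as F

# Point masses of two probability measures on Omega = {1, 2, 3, 4}.
P = {1: F(1, 4), 2: F(1, 4), 3: F(1, 4), 4: F(1, 4)}   # uniform
Q = {1: F(1, 2), 2: F(0, 1), 3: F(1, 2), 4: F(0, 1)}

def measure(m, A):
    """Measure of a finite set A under point masses m."""
    return sum(m[x] for x in A)

# C generates the full power set, but it is NOT a pi-class:
# {1,2} ∩ {2,3} = {2} does not belong to C.
C = [{1, 2}, {2, 3}]
assert all(measure(P, A) == measure(Q, A) for A in C)  # P = Q on C
assert measure(P, {2}) != measure(Q, {2})              # yet P != Q on sigma(C)
```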

Application 2: Independence

Presumably everyone has encountered the concept of independence of events in elementary probability theory. It is defined as follows:

Definition 1 (Independence of events) Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. A family of events $A_t \in \mathcal{F}, \ t \in T$ is said to be mutually independent if for every $n \in \mathbb{N}$ and every distinct $t_1, \cdots, t_n \in T$, we have

$$
\mathbb{P}(A_{t_1} \cap \cdots \cap A_{t_n}) = \mathbb{P}(A_{t_1}) \cdots \mathbb{P}(A_{t_n}).
$$

Now we extend the concept of independence of events to classes of sets and random elements.

Definition 2 (Independence of classes of sets) Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. A family of classes of sets $\mathcal H_t \subset \mathcal{F}, \ t \in T$ is said to be mutually independent if for every $A_t \in \mathcal H_t$, the events $A_t, \ t \in T$ are mutually independent.

Definition 3 (Independence of random elements) Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, and let $(E_t, \mathcal H_t), \ t \in T$ be a family of measurable spaces. Let $X_t \in \mathcal{F} / \mathcal H_t, \ t \in T$ be a family of random elements. Then $X_t, \ t \in T$ are said to be mutually independent if $\sigma(X_t) = X_t^{-1}(\mathcal H_t), \ t \in T$ are mutually independent.

Definition 4 (Independence of a random element and a class of sets) Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, let $(E, \mathcal H)$ be a measurable space, let $X \in \mathcal{F} / \mathcal H$ be a random element, and let $\mathcal G \subset \mathcal{F}$ be a $\sigma$-algebra. Then $X$ and $\mathcal G$ are said to be independent if $\sigma(X)$ and $\mathcal G$ are independent.

Recall that in elementary probability theory, the independence of random variables is defined as follows:

Definition 5 (Independence of random variables in elementary probability theory) We say that random variables $X_1, \cdots, X_m$ are mutually independent if for every $n \in \mathbb{N}$ and every distinct $t_1, \cdots, t_n \in \{1, \cdots, m\}$, we have

$$
F_{X_{t_1}, \cdots, X_{t_n}}(x_1, \cdots, x_n) = F_{X_{t_1}}(x_1) \cdots F_{X_{t_n}}(x_n),
$$

where

$$
F_{X_{t_1}, \cdots, X_{t_n}}(x_1, \cdots, x_n) = \mathbb{P}(X_{t_1} \le x_1, \cdots, X_{t_n} \le x_n)
$$

is the joint distribution function, and

$$
F_{X_{t_j}}(x_j) = \mathbb{P}(X_{t_j} \le x_j), \quad j = 1, 2, \cdots, n
$$

are the marginal distribution functions.

Thus we naturally want to ask the following question:

Question 1 What is the relationship between the definition of independence of random variables in elementary probability theory and the definition of independence of random variables in advanced probability theory?

In fact, the definition of independence of random variables in advanced probability theory is equivalent to that in elementary probability theory, but the notion of independence in advanced probability theory is not restricted to random variables; it extends to random elements. To explain this, let us first look concretely at the definition of independence of random variables in advanced probability theory. For random variables $X_1, \cdots, X_m$ (so each $E_t = \mathbb{R}$ and $\mathcal H_t = \mathcal{B}(\mathbb{R})$), it says the following: for all Borel sets $B_{t_1}, \cdots, B_{t_n} \in \mathcal{B}(\mathbb{R})$, every $n \in \mathbb{N}$, and every distinct $t_1, \cdots, t_n \in \{1, \cdots, m\}$, we have that

$$
X_{t_1}^{-1}(B_{t_1}), \cdots, X_{t_n}^{-1}(B_{t_n})
$$

are mutually independent, namely,
$$
\mathbb{P}(X_{t_1} \in B_{t_1}, \cdots, X_{t_n} \in B_{t_n}) = \mathbb{P}(X_{t_1} \in B_{t_1}) \cdots \mathbb{P}(X_{t_n} \in B_{t_n}).
$$

Therefore, when we take $\mathcal H_t = \{(-\infty, x_t] \ ;\ x_t \in \mathbb{R}\}$, we immediately obtain the definition of independence of random variables in elementary probability theory. Conversely, in order to derive the definition of independence of random variables in advanced probability theory from the definition in elementary probability theory, we still need the following extension theorem for independent classes.

Theorem 4 (Extension theorem for independent classes) Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. If a family of classes of sets $\mathcal H_t \subset \mathcal{F}, \ t \in T$ are mutually independent, then $\lambda(\mathcal H_t), \ t \in T$ are mutually independent. If furthermore $\mathcal H_t, \ t \in T$ are $\pi$-classes, then $\sigma(\mathcal H_t), \ t \in T$ are mutually independent.

Proof: For every $n \in \mathbb{N}$ and every $t_1, \cdots, t_n \in T$, let

$$
\mathcal G_{t_1} = \{ A_{t_1} \in \lambda(\mathcal H_{t_1}) \ ;\ \mathbb{P}(A_{t_1} \cap \cdots \cap A_{t_n}) = \mathbb{P}(A_{t_1}) \cdots \mathbb{P}(A_{t_n}), \ \forall A_{t_2} \in \mathcal H_{t_2}, \cdots, A_{t_n} \in \mathcal H_{t_n} \}.
$$

It is easy to prove that $\mathcal G_{t_1}$ is a $\lambda$-class and $\mathcal G_{t_1} \supset \mathcal H_{t_1}$. Therefore, $\mathcal G_{t_1} = \lambda(\mathcal H_{t_1})$.

Furthermore, let

$$
\mathcal G_{t_2} = \{ A_{t_2} \in \lambda(\mathcal H_{t_2}) \ ;\ \mathbb{P}(A_{t_1} \cap \cdots \cap A_{t_n}) = \mathbb{P}(A_{t_1}) \cdots \mathbb{P}(A_{t_n}), \ \forall A_{t_1} \in \lambda(\mathcal H_{t_1}), A_{t_3} \in \mathcal H_{t_3}, \cdots, A_{t_n} \in \mathcal H_{t_n} \}.
$$

It is also easy to prove that $\mathcal G_{t_2}$ is a $\lambda$-class and $\mathcal G_{t_2} \supset \mathcal H_{t_2}$. Therefore, $\mathcal G_{t_2} = \lambda(\mathcal H_{t_2})$. Continuing in this way, we obtain that $\lambda(\mathcal H_t), \ t \in T$ are mutually independent.

If furthermore $\mathcal H_t, \ t \in T$ are $\pi$-classes, then by the monotone class theorem $\lambda(\mathcal H_t) = \sigma(\mathcal H_t)$, hence $\sigma(\mathcal H_t), \ t \in T$ are mutually independent.

With the extension theorem for independent classes, we now answer Question 1. We still need to prove that Definition 5 implies Definition 3. Let $\mathcal H_t = \{(-\infty, x_t] \ ;\ x_t \in \mathbb{R}\}$. It is easy to prove that this is a $\pi$-class and that $\sigma(\mathcal H_t) = \mathcal{B}(\mathbb{R})$. Definition 5 tells us that $\mathcal H_t, \ t \in T$ are mutually independent. Hence, by the extension theorem for independent classes, we immediately obtain Definition 3.

From the above analysis, we also see that through the monotone class theorem, we extend a property on the special $\pi$-class $\mathcal H_t = \{(-\infty, x_t] \ ;\ x_t \in \mathbb{R}\}$ to the general $\sigma$-algebra $\mathcal{B}(\mathbb{R})$. From this, one can also see that the monotone class theorem realizes the transition from simple to complicated and from special to general.

But why is independence defined in this way in advanced probability theory? Besides extending to random elements, this definition is formally stronger than the condition in elementary probability theory: it applies directly to all Borel sets, not just to half-lines. As a result, the definition in advanced probability theory can be used to prove properties that cannot even be cleanly stated in elementary probability theory. If I merely assert this, you may well not believe me, so let us look at a few examples.

Theorem 5 Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, let $(E_t, \mathcal H_t), \ t \in T$ be a family of measurable spaces, and let $X_t \in \mathcal{F} / \mathcal H_t, \ t \in T$ be a family of random elements. If $X_t, \ t \in T$ are mutually independent, then for every $g_t \in \mathcal H_t / \mathcal{B}(\mathbb{R})$, we have that $g_t(X_t), \ t \in T$ are mutually independent.

Proof: For every $n \in \mathbb{N}$ and every $t_1, \cdots, t_n \in T$, and every $B_{t_1}, \cdots, B_{t_n} \in \mathcal{B}(\mathbb{R})$, we have

$$
\mathbb{P}(g_{t_1}(X_{t_1}) \in B_{t_1}, \cdots, g_{t_n}(X_{t_n}) \in B_{t_n}) = \mathbb{P}(X_{t_1} \in g_{t_1}^{-1}(B_{t_1}), \cdots, X_{t_n} \in g_{t_n}^{-1}(B_{t_n}))
$$

$$
= \mathbb{P}(X_{t_1} \in g_{t_1}^{-1}(B_{t_1})) \cdots \mathbb{P}(X_{t_n} \in g_{t_n}^{-1}(B_{t_n})) = \mathbb{P}(g_{t_1}(X_{t_1}) \in B_{t_1}) \cdots \mathbb{P}(g_{t_n}(X_{t_n}) \in B_{t_n}).
$$

Therefore, $g_t(X_t), \ t \in T$ are mutually independent.
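For discrete random variables this factorization can be verified exactly. The following Python sketch (with illustrative choices of $g$ and $h$ of my own) computes the joint law of $(g(X), h(Y))$ for independent $X, Y$ and checks that it is the product of the marginals.

```python
from fractions import Fraction as F
from itertools import product
from collections import defaultdict

# X uniform on {-1, 0, 1}, Y uniform on {1, 2}, independent.
# Theorem 5 says g(X) and h(Y) are independent for measurable g, h;
# here g(x) = x*x and h(y) = y % 2 serve as concrete examples.
g = lambda x: x * x
h = lambda y: y % 2

joint = defaultdict(F)                     # law of (g(X), h(Y))
for x, y in product((-1, 0, 1), (1, 2)):
    joint[(g(x), h(y))] += F(1, 3) * F(1, 2)

gm = defaultdict(F)                        # marginal of g(X)
hm = defaultdict(F)                        # marginal of h(Y)
for (a, b), pr in joint.items():
    gm[a] += pr
    hm[b] += pr

# Exact factorization of the joint law into the marginals.
assert all(joint[(a, b)] == gm[a] * hm[b] for a in gm for b in hm)
```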

Question 2 In elementary probability theory, if random variables $X_1, X_2, \cdots, X_5$ are mutually independent, then why are $X_1 + X_2$ and $X_3 + X_4 + X_5$ independent? Why are $X_1 X_2$ and $X_3 X_4 X_5$ independent?

In fact, Question 2 cannot be answered using only the distribution-function definition from elementary probability theory. But with the definition of independence from advanced probability theory and the monotone class theorem, it can be explained very well. We have the following lemma.

Lemma 1 If the $\pi$-classes $\mathcal E_i, \ i=1, 2, 3, 4, 5$ are mutually independent, then $\sigma(\mathcal E_1 \cup \mathcal E_2)$ and $\sigma(\mathcal E_3 \cup \mathcal E_4 \cup \mathcal E_5)$ are mutually independent.

Proof: Let $\mathcal G_1$ and $\mathcal G_2$ be the classes obtained respectively by closing the classes $\mathcal E_1 \cup \mathcal E_2$ and $\mathcal E_3 \cup \mathcal E_4 \cup \mathcal E_5$ under finite intersections, namely,

$$
\mathcal G_1 = (\mathcal E_1 \cup \mathcal E_2)_{\cap f}, \quad \mathcal G_2 = (\mathcal E_3 \cup \mathcal E_4 \cup \mathcal E_5)_{\cap f}.
$$

Clearly, $\mathcal G_1, \mathcal G_2$ are both $\pi$-classes, and since $\mathcal E_i, \ i=1, 2, 3, 4, 5$ are mutually independent and are all $\pi$-classes, it is easy to see that $\mathcal G_1, \mathcal G_2$ are mutually independent. Hence, by the extension theorem for independent classes, $\sigma(\mathcal G_1)$ and $\sigma(\mathcal G_2)$ are independent. Moreover,
$$
\sigma(\mathcal G_1) = \sigma(\mathcal E_1 \cup \mathcal E_2), \quad \sigma(\mathcal G_2) = \sigma(\mathcal E_3 \cup \mathcal E_4 \cup \mathcal E_5).
$$

Therefore, $\sigma(\mathcal E_1 \cup \mathcal E_2)$ and $\sigma(\mathcal E_3 \cup \mathcal E_4 \cup \mathcal E_5)$ are mutually independent.

Answer to Question 2 According to Lemma 1, take $\mathcal E_i = \sigma(X_i)$. Then $\sigma(X_1, X_2) = \sigma(\sigma(X_1) \cup \sigma(X_2))$ and $\sigma(X_3, X_4, X_5) = \sigma(\sigma(X_3) \cup \sigma(X_4) \cup \sigma(X_5))$ are mutually independent. Moreover,

$$
X_1 + X_2 \in \sigma(X_1, X_2), \quad X_3 + X_4 + X_5 \in \sigma(X_3, X_4, X_5)
$$

$$
X_1 X_2 \in \sigma(X_1, X_2), \quad X_3 X_4 X_5 \in \sigma(X_3, X_4, X_5)
$$

hence $X_1 + X_2$ and $X_3 + X_4 + X_5$ are independent, and $X_1 X_2$ and $X_3 X_4 X_5$ are independent.
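For five independent fair coin flips, the independence of $X_1 + X_2$ and $X_3 + X_4 + X_5$ can also be checked exactly by enumerating all $2^5$ outcomes; here is a Python sketch using exact rational arithmetic.

```python
from fractions import Fraction as F
from itertools import product
from collections import defaultdict

# X1, ..., X5 are i.i.d. fair coin flips with values 0/1.
joint = defaultdict(F)                  # law of (X1+X2, X3+X4+X5)
for xs in product((0, 1), repeat=5):
    joint[(xs[0] + xs[1], xs[2] + xs[3] + xs[4])] += F(1, 2) ** 5

u = defaultdict(F)                      # marginal of X1+X2   (Bin(2, 1/2))
v = defaultdict(F)                      # marginal of X3+X4+X5 (Bin(3, 1/2))
for (a, b), pr in joint.items():
    u[a] += pr
    v[b] += pr

# The joint law factorizes exactly: the two sums are independent.
assert all(joint[(a, b)] == u[a] * v[b] for a in u for b in v)
```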

We can further generalize Lemma 1 to the general case, namely, we have the following theorem.

Theorem 6 If the $\pi$-classes $\mathcal E_t, \ t \in T$ are mutually independent, and

$$
T = \biguplus_{i \in I} S_i,
$$

then

$$
\sigma\left( \bigcup_{t \in S_i} \mathcal E_t \right), \quad i \in I
$$

are also mutually independent.

Proof Left as an exercise.

Finally, let us look at one last example.

Definition 6 (Independent increments) A family of random elements $X_t, \ t \in T$ is said to have independent increments if for every $n \in \mathbb{N}$ and every $t_0 < t_1 < \cdots < t_n$ in $T$,

$$
X_{t_0}, \ X_{t_1} - X_{t_0}, \cdots, \ X_{t_n} - X_{t_{n-1}}
$$

are independent.

Theorem 7 Let $X_t, \ t \in T$ be random elements, and write $\mathcal{F}_t = \sigma(X_s, \ s \le t)$. Then $X_t, \ t \in T$ has independent increments if and only if for every $s < t$, $X_t - X_s$ is independent of $\mathcal{F}_s$.

Proof: The sufficiency is obvious, so we only prove necessity. Let

$$
\mathcal G = \{ A \in \mathcal{F}_s \mid A \text{ is independent of } X_t - X_s \},
$$

then it is easy to prove that $\mathcal G$ is a $\lambda$-class (left as an exercise). Let

$$
\mathcal{A} = \bigcup_{n \ge 1} \ \bigcup_{0 \le s_0 < s_1 < \cdots < s_n = s} \sigma(X_{s_0}, \cdots, X_{s_n}),
$$

then it is also easy to prove that $\mathcal{A}$ is a $\pi$-class. Next we prove that $\mathcal{A} \subset \mathcal G$.

Indeed, since $X_t, \ t \in T$ has independent increments, for every $0 \le s_0 < s_1 < \cdots < s_n = s < t$, we have

$$
\sigma(X_{s_0}), \ \sigma(X_{s_1} - X_{s_0}), \cdots, \ \sigma(X_{s_n} - X_{s_{n-1}}), \ \sigma(X_t - X_s)
$$

mutually independent. Since $\sigma(X_{s_0}, \cdots, X_{s_n}) = \sigma\big(\sigma(X_{s_0}) \cup \sigma(X_{s_1} - X_{s_0}) \cup \cdots \cup \sigma(X_{s_n} - X_{s_{n-1}})\big)$, it follows from Theorem 6 that $\sigma(X_{s_0}, \cdots, X_{s_n})$ is independent of $X_t - X_s$, and therefore $\mathcal{A} \subset \mathcal G$. Thus, by the monotone class theorem,

$$
\mathcal G = \lambda(\mathcal G) \supset \lambda(\mathcal{A}) = \sigma(\mathcal{A}) = \mathcal{F}_s,
$$

and hence $\mathcal G = \mathcal{F}_s$, that is, for every $s < t$, $X_t - X_s$ is independent of $\mathcal{F}_s$.
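Theorem 7 can be illustrated exactly on a three-step simple random walk, whose steps are i.i.d. fair $\pm 1$ variables: the increment $S_3 - S_2$ is independent of the path $(S_1, S_2)$, and hence of $\mathcal{F}_2$. Here is a Python sketch with exact rational arithmetic.

```python
from fractions import Fraction as F
from itertools import product
from collections import defaultdict

# Simple random walk S_n = xi_1 + ... + xi_n, i.i.d. fair +/-1 steps.
# Check exactly that the increment S_3 - S_2 (= xi_3) is independent
# of the path (S_1, S_2), which generates F_2.
joint = defaultdict(F)                  # law of ((S_1, S_2), S_3 - S_2)
for steps in product((-1, 1), repeat=3):
    s1 = steps[0]
    s2 = s1 + steps[1]
    inc = steps[2]
    joint[((s1, s2), inc)] += F(1, 8)

path = defaultdict(F)                   # marginal of the path (S_1, S_2)
incr = defaultdict(F)                   # marginal of the increment
for (p_, i_), pr in joint.items():
    path[p_] += pr
    incr[i_] += pr

# Exact factorization: the increment is independent of the past.
assert all(joint[(p_, i_)] == path[p_] * incr[i_] for p_ in path for i_ in incr)
```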

The cover image of this article was taken at Singapore Changi Airport, Singapore.

Author: Handstein Wang

Posted on: 2024-12-21

Updated on: 2024-12-21
