Posted by: yglima | September 25, 2010

## ERT14: Factors, conditional expectation, disintegration and relative product of measures

This post develops necessary machinery to define relative notions of compact and weak mixing systems, seen in the previous posts. These notions are the basic objects used by Furstenberg in his structural theorem, which we shall investigate next month.

A basic concept in Mathematics, exploited in all areas, is Divide and Conquer: in order to analyse a mathematical object, one tries to investigate its smaller components which, in some specific sense, are simpler than the initial one and at the same time give sufficient information to the understanding of the larger structure.

An example of this philosophy occurs when one considers closed invariant subsets of a continuous transformation. For example, if such a subset is minimal, then every point in it has a dense orbit and, in particular, is recurrent. This simple idea yields a proof of Birkhoff’s theorem on the existence of recurrent points for continuous transformations of compact metric spaces.

Another example is Thurston’s Geometrization Theorem, in which a compact three-dimensional manifold is broken into a connected sum of prime three-manifolds, whose pieces carry one of 8 different geometric structures: spherical, euclidean, hyperbolic and so on.

In terms of measure theory, one has the notion of ergodicity, in which there are no non-trivial invariant sets. In this post, we will discuss another notion in measure theory, that of

1. Factors and extensions

Given two probability measure spaces ${\mathbb X=(X,\mathcal A,\mu)}$ and ${\mathbb Y=(Y,\mathcal B,\nu)}$, we usually refer to a measure-preserving map ${\pi:\mathbb X\rightarrow\mathbb Y}$ as a spatial map from ${X}$ to ${Y}$ such that

1. it is measurable, that is, ${\pi^{-1}\mathcal B\subset\mathcal A}$ and
2. it preserves measure, that is, ${\mu(\pi^{-1}B)=\nu(B)}$ for all ${B\in\mathcal B}$.

In many situations, the underlying spaces ${X}$ and ${Y}$ play a secondary role, merely providing the means for defining the ${\sigma}$-algebras ${\mathcal A}$ and ${\mathcal B}$ and the measures ${\mu}$ and ${\nu}$. It will be advantageous to consider the boolean algebras (defined in ERT11) ${\hat{\mathcal A}}$ and ${\hat{\mathcal B}}$ and maps defined from ${\hat{\mathcal B}}$ to ${\hat{\mathcal A}}$. With this, many problems regarding sets of zero measure and transformations not defined in every point are not anymore obstructions for the development of the theory and we only keep the essence of the object in study.

Definition 1 A homomorphism ${\pi:\mathbb X=(X,\mathcal A,\mu)\rightarrow\mathbb Y=(Y,\mathcal B,\nu)}$ between two probability measure spaces is a map ${\pi^{-1}:\hat{\mathcal B}\rightarrow\hat{\mathcal A}}$ such that

1. ${\pi^{-1}\left(\hat{B_1}\cup\hat{B_2}\right)=\pi^{-1}\hat{B_1}\cup\pi^{-1}\hat{B_2}}$, for ${\hat{B_1},\hat{B_2}\in\hat{\mathcal B}}$,
2. ${\pi^{-1}\left(\hat Y\backslash\hat B\right)=\hat X\backslash\pi^{-1}\hat B}$, for ${\hat B\in\hat{\mathcal B}}$, and
3. ${\mu\left(\pi^{-1}\hat B\right)=\nu(\hat B)}$, for ${\hat B\in\hat{\mathcal B}}$.

In this situation, we say ${\mathbb Y}$ is a factor of ${\mathbb X}$ or that ${\mathbb X}$ is an extension of ${\mathbb Y}$. If ${\pi^{-1}}$ is a bijection, ${\pi}$ is called an isomorphism.

We remark that, provided properties (i) to (iii) hold, the map ${\pi^{-1}}$ is necessarily an injection between the two boolean algebras. We leave this as an exercise to the reader. Now, let’s see some

Examples.

1. Every measure space ${\mathbb X=(X,\mathcal A,\mu)}$ has a trivial factor: let ${Y=\{0\}}$ with the trivial ${\sigma}$-algebra ${\mathcal B}$ and probability ${\nu}$. Then ${\pi^{-1}:\hat{\mathcal B}\rightarrow\hat{\mathcal A}}$ defined by ${\pi^{-1}\hat 0=\hat X}$ is a homomorphism.
2. Every ${\sigma}$-algebra ${\mathcal B\subset\mathcal A}$ defines the factor ${\mathbb Y=(X,\mathcal B,\mu)}$ via the inclusion map ${\pi^{-1}:\hat{\mathcal B}\rightarrow\hat{\mathcal A}}$.

Actually, the last example represents the general situation.

Lemma 2 If ${\pi:\mathbb X=(X,\mathcal A,\mu)\rightarrow\mathbb Y=(Y,\mathcal B,\nu)}$ is a homomorphism, then there exists a ${\sigma}$-algebra ${\mathcal{A'}\subset\mathcal A}$ and an isomorphism between ${\mathbb Y}$ and ${(X,\mathcal{A'},\mu).}$

Proof: Define ${\hat{\mathcal A'}=\pi^{-1}\hat{\mathcal B}}$. The map ${\pi^{-1}:\hat{\mathcal B}\rightarrow\hat{\mathcal A'}}$ is, by definition, an isomorphism. $\Box$

In all cases of interest, the homomorphism ${\pi^{-1}}$ is induced by a spatial map ${\pi}$ from ${X}$ to ${Y}$. From the spectral point of view, ${\pi}$ induces an isometric inclusion from ${L^p(\mathbb Y)}$ into ${L^p(\mathbb X)}$ given by the composition ${f\mapsto f\circ \pi}$, for every ${1\le p\le+\infty}$. We denote the image of ${f}$ under this map by ${f^\pi}$. By Lemma 2, ${L^p(\mathbb Y)^{\pi}}$ is isometric to ${L^p(X,\mathcal A',\mu)}$.

When ${p=2}$, we have an inclusion of a closed subspace of a Hilbert space, and so it is natural to ask what the orthogonal projection from ${L^2(\mathbb X)}$ onto ${L^2(X,\mathcal A',\mu)\cong L^2(\mathbb Y)^\pi}$ is.

2. Conditional expectation

Definition 3 If ${\mathbb X=(X,\mathcal A,\mu)}$ is a probability measure space and ${\mathcal B\subset \mathcal A}$ a ${\sigma}$-algebra, the conditional expectation of a function ${f\in L^1(\mathbb X)}$ with respect to ${\mathcal B}$ is the unique ${\mathcal B}$-measurable map ${\mathbb E(f|\mathcal B):X\rightarrow\mathbb C}$ such that$\displaystyle \int_B \mathbb E(f|\mathcal B)d\mu=\int_B fd\mu\,,\ \ \forall\,B\in\mathcal B.$

If ${\pi:\mathbb X\rightarrow\mathbb Y=(Y,\mathcal B,\nu)}$ is a homomorphism, the conditional expectation of ${f\in L^1(\mathbb X)}$ is the unique function ${g\in L^1(\mathbb Y)}$ for which ${g^\pi=\mathbb E(f|\pi^{-1}\mathcal B)}$. We denote ${g}$ by ${\mathbb E(f|\mathbb Y)}$.

By the above definition, ${\mathbb E(f|\mathbb Y)}$ is the unique function from ${Y}$ to ${\mathbb C}$ for which $\displaystyle \int_{\pi^{-1}B}fd\mu=\int_B\mathbb E(f|\mathbb Y)d\nu,\ \ \forall\,B\in\mathcal B. \ \ \ \ \ (1)$
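To make equation (1) concrete, here is a minimal sketch on a finite probability space (all data made up for illustration, not taken from the post): the factor map collapses pairs of points, and (1) forces the conditional expectation to be the ${\mu}$-average of ${f}$ over each fiber.

```python
from fractions import Fraction

# Illustrative finite example: X = {0,...,5} with the uniform measure,
# and a factor map pi collapsing X onto Y = {0,1,2}.
X = range(6)
mu = {x: Fraction(1, 6) for x in X}           # uniform probability on X
pi = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2}     # spatial factor map X -> Y
f  = {0: 1, 1: 3, 2: 0, 3: 4, 4: 5, 5: 2}     # an integrable function on X

# pushforward measure nu(y) = mu(pi^{-1} y)
nu = {y: sum(mu[x] for x in X if pi[x] == y) for y in set(pi.values())}

# E(f|Y)(y) is forced by (1) to be the mu-average of f over the fiber of y
E = {y: sum(f[x] * mu[x] for x in X if pi[x] == y) / nu[y] for y in nu}

# check (1) for B = {2}: the integral of f over pi^{-1}B equals
# the integral of E(f|Y) over B
assert sum(f[x] * mu[x] for x in X if pi[x] == 2) == E[2] * nu[2]
```

Exact rational arithmetic makes the identity (1) hold on the nose, with no floating-point caveats.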

Remarks. In the proofs below, we sometimes assume, by means of Lemma 2, that the factor is defined by a sub-${\sigma}$-algebra ${\mathcal B\subset\mathcal A}$.

Let us check that the above equality really defines a unique function. Consider the complex measure ${\lambda}$ defined on ${\mathcal B}$ by the identity

$\displaystyle \lambda(B)=\int_B fd\mu,\ \ \forall\,B\in\mathcal B.$

Clearly, ${\lambda}$ is absolutely continuous with respect to the restriction ${\mu_{\mathcal B}}$ of ${\mu}$ to ${\mathcal B}$. By the Radon-Nikodym theorem, there is a unique ${\mathcal B}$-measurable density function ${\rho:X\rightarrow\mathbb C}$ such that

$\displaystyle \lambda(B)=\int_B \rho d\mu\,,\ \ \forall\,B\in\mathcal B.$

This proves ${\mathbb E(f|\mathcal B)}$ (and so ${\mathbb E(f|\mathbb Y)}$) is well-defined and so, for every ${1\le p\le+\infty}$, there is a map

$\displaystyle \mathbb E(\cdot|\mathbb Y):L^p(\mathbb X)\rightarrow L^p(\mathbb Y).$

It is clearly a linear operator, which is actually bounded. For this last assertion, we make use of Jensen’s inequality.

Theorem 4 Let ${\psi:\mathbb R\rightarrow\mathbb R}$ be a convex function. If ${f\in L^1(\mathbb X)}$ is such that ${\psi\circ f\in L^1(\mathbb X)}$, then $\displaystyle \psi(\mathbb E(f|\mathbb Y))\le\mathbb E(\psi\circ f|\mathbb Y).$

For a proof, consult any basic probability textbook. Applying Jensen’s inequality to the function ${\psi(t)=|t|^p}$, the boundedness of ${\mathbb E(\cdot|\mathbb Y)}$ follows directly:

$\displaystyle \begin{array}{rcl} \|\mathbb E(f|\mathbb Y)\|_{L^p(\mathbb Y)}^p&=&\int_{Y}\psi(\mathbb E(f|\mathbb Y))d\nu\\ &&\\ &\le&\int_{Y}\mathbb E(\psi\circ f|\mathbb Y)d\nu\\ &&\\ &=&\int_{X}\psi\circ fd\mu\\ &&\\ &=&\|f\|_{L^p(\mathbb X)}^p. \end{array}$

We thus have the

Proposition 5 For every ${1\le p\le+\infty}$, the conditional expectation ${\mathbb E(\cdot|\mathbb Y):L^p(\mathbb X)\rightarrow L^p(\mathbb Y)}$ is a linear weak contraction.
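As a sanity check of Proposition 5 (again with made-up finite data), one can verify numerically that averaging over fibers never increases the ${L^p}$ norm:

```python
# On a finite space the conditional expectation averages f over fibers,
# and averaging can only shrink the L^p norm, by Jensen's inequality
# applied to psi(t) = |t|^p.  Illustrative data, not from the post.
mu = [1/6] * 6                      # uniform measure on X = {0,...,5}
pi = [0, 0, 1, 1, 2, 2]             # factor map X -> Y = {0,1,2}
f  = [1.0, -3.0, 0.5, 4.0, -5.0, 2.0]

nu = [sum(mu[x] for x in range(6) if pi[x] == y) for y in range(3)]
Ef = [sum(f[x] * mu[x] for x in range(6) if pi[x] == y) / nu[y] for y in range(3)]

def lp_norm(vals, weights, p):
    return sum(abs(v) ** p * w for v, w in zip(vals, weights)) ** (1 / p)

for p in (1, 2, 4):
    # ||E(f|Y)||_p <= ||f||_p, with a small tolerance for float rounding
    assert lp_norm(Ef, nu, p) <= lp_norm(f, mu, p) + 1e-12
```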

Beyond its intrinsic importance, the good feature of Proposition 5 is that many properties of the conditional expectation can now be proved by means of approximation. We’ll usually do this when dealing with products of functions. The next proposition collects the main properties of the conditional expectation.

Proposition 6 The conditional expectation operator ${\mathbb E(\cdot|\mathbb Y):L^1(\mathbb X)\rightarrow L^1(\mathbb Y)}$ has the following properties:

1. ${\mathbb E(1|\mathbb Y)=1}$,
2. ${\mathbb E(f|\mathbb Y)\ge 0}$ whenever ${f\ge 0}$,
3. ${\mathbb E(f^\pi|\mathbb Y)=f}$ for every ${f\in L^1(\mathbb Y)}$,
4. ${\mathbb E(g^\pi f|\mathbb Y)=g\cdot\mathbb E(f|\mathbb Y)}$ whenever ${g^\pi f\in L^1(\mathbb X)}$, and
5. ${\int fd\mu=\int \mathbb E(f|\mathbb Y)d\nu}$.
6. If ${\pi':\mathbb Y\rightarrow\mathbb Z}$ is another homomorphism, then ${\mathbb E(f|\mathbb Z)=\mathbb E(\mathbb E(f|\mathbb Y)|\mathbb Z)}$.

Proof: (i). Clear.

(ii). Assume, via Lemma 2, that ${\mathbb Y=(X,\mathcal B,\mu)}$ with ${\mathcal B\subset\mathcal A}$, and let ${f\ge 0}$. Given ${\varepsilon>0}$, consider the ${\mathcal B}$-measurable set ${A_\varepsilon=\mathbb E(f|\mathcal B)^{-1}(-\infty,-\varepsilon)}$. We have

$\displaystyle \begin{array}{rcl} -\varepsilon\cdot\mu(A_\varepsilon)&\ge&\int_{A_\varepsilon}\mathbb E(f|\mathcal B)d\mu\\ &&\\ &=&\int_{A_\varepsilon}fd\mu\\ &&\\ &\ge&0, \end{array}$

implying that ${\mu(A_\varepsilon)=0}$. As ${\varepsilon>0}$ is arbitrary, ${\mathbb E(f|\mathbb Y)\ge 0}$.

(iii) ${f^{\pi}}$ is ${\pi^{-1}\mathcal B}$-measurable and so

$\displaystyle \mathbb E(f^\pi|\pi^{-1}\mathcal B)=f^{\pi}\ \Longrightarrow\ \mathbb E(f|\mathbb Y)=f.$

(iv). First, assume ${g=\chi_{B_1}}$ for some ${B_1\in\mathcal B}$. Then ${g^{\pi}=\chi_{\pi^{-1}B_1}}$. For arbitrary ${B_2\in\mathcal B}$,

$\displaystyle \begin{array}{rcl} \int_{B_2}\mathbb E(g^\pi f|\mathbb Y)d\nu&=&\int_{\pi^{-1}B_2}g^{\pi}fd\mu\\ &&\\ &=&\int_{\pi^{-1}B_1\cap\pi^{-1}B_2}fd\mu\\ &&\\ &=&\int_{\pi^{-1}(B_1\cap B_2)}fd\mu\\ &&\\ &=&\int_{B_1\cap B_2}\mathbb E(f|\mathbb Y)d\nu\\ &&\\ &=&\int_{B_2}g\cdot\mathbb E(f|\mathbb Y)d\nu \end{array}$

and so the equality is true in this case. For the general one, proceed by approximation.

(v). Take ${B=Y}$ in relation (1).

(vi). Let ${\mathbb Z=(Z,\mathcal C,\tau)}$. For every ${C\in\mathcal C}$, we have

$\displaystyle \begin{array}{rcl} \int_C\mathbb E(\mathbb E(f|\mathbb Y)|\mathbb Z)d\tau&=&\int_{\pi'^{-1}C}\mathbb E(f|\mathbb Y)d\nu\\ &&\\ &=&\int_{\pi^{-1}\pi'^{-1}C}fd\mu\\ &&\\ &=&\int_{(\pi'\circ\pi)^{-1}C}fd\mu\\ &&\\ &=&\int_C\mathbb E(f|\mathbb Z)d\tau. \end{array}$

$\Box$

Until now, we didn’t say what the conditional expectation has to do with the orthogonal projection we set out to investigate in Section 1. Let ${P}$ denote the orthogonal projection from ${L^2(\mathbb X)}$ onto ${L^2(\mathbb Y)}$. It is a simple fact that

Lemma 7 Restricted to ${L^2(\mathbb X)}$, ${\mathbb E(\cdot|\mathbb Y)}$ is equal to ${P}$.

Proof: Assume ${\mathbb X=(X,\mathcal A,\mu)}$ and ${\mathbb Y=(X,\mathcal B,\mu)}$. For each ${B\in\mathcal B}$, we have

$\displaystyle \begin{array}{rcl} (f-\mathbb E(f|\mathcal B),\chi_B)&=&\int_X (f-\mathbb E(f|\mathcal B))\cdot\chi_Bd\mu\\ &&\\ &=&\int_B fd\mu-\int_B \mathbb E(f|\mathcal B)d\mu\\ &&\\ &=&0, \end{array}$

by definition. As the linear span of ${(\chi_B)_{B\in\mathcal B}}$ is dense in ${L^2(\mathbb Y)}$, we conclude the proof. $\Box$
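Lemma 7 can also be seen concretely in the finite setting (an illustrative sketch, not from the post): projecting ${f}$ onto the span of the fiber indicators, in the inner product weighted by ${\mu}$, returns exactly the fiber averages, and the residual is orthogonal to every indicator.

```python
# Project f onto span{chi_{pi^{-1}y}} with the inner product
# <u,v> = sum_x u(x) v(x) mu(x).  Distinct fibers give orthogonal
# indicators, so the projection formula
#   P f = sum_y (<f, chi_y> / <chi_y, chi_y>) chi_y
# applies, and P f comes out equal to the fiber averages E(f|B).
mu = [1/6] * 6
pi = [0, 0, 1, 1, 2, 2]
f  = [1.0, -3.0, 0.5, 4.0, -5.0, 2.0]

def inner(u, v):
    return sum(a * b * m for a, b, m in zip(u, v, mu))

chis = [[1.0 if pi[x] == y else 0.0 for x in range(6)] for y in range(3)]

Pf = [0.0] * 6
for chi in chis:
    c = inner(f, chi) / inner(chi, chi)   # coefficient along chi
    Pf = [p + c * ind for p, ind in zip(Pf, chi)]

fiber_avg = [sum(f[x] for x in range(6) if pi[x] == y) / 2 for y in range(3)]
assert all(abs(Pf[x] - fiber_avg[pi[x]]) < 1e-12 for x in range(6))
# the residual f - Pf is orthogonal to every indicator, as in Lemma 7
assert all(abs(inner([a - b for a, b in zip(f, Pf)], chi)) < 1e-12 for chi in chis)
```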

3. Measure-preserving systems

We now turn our attention to measure-preserving systems, which are our case of interest. In this section, we return to the usual notation of blackboard letters representing a mps. Consider two mps ${\mathbb X=(X,\mathcal A,\mu,T)}$ and ${\mathbb Y=(Y,\mathcal B,\nu,S)}$.

Definition 8 A homomorphism ${\pi}$ from ${\mathbb X}$ to ${\mathbb Y}$ is a homomorphism ${\pi:(X,\mathcal A,\mu)\rightarrow (Y,\mathcal B,\nu)}$ between the measure spaces such that ${\pi\circ T=S\circ\pi}$. In this case, we say ${\mathbb Y}$ is a factor of ${\mathbb X}$ or that ${\mathbb X}$ is an extension of ${\mathbb Y}$.

Roughly speaking, what happens is that, if one considers the fibers ${\pi^{-1}y\subset X}$ of elements of ${Y}$, then ${T}$ maps the fiber over ${y}$ to the fiber over ${Sy}$, so the dynamics of ${T}$ among the fibers is exactly the dynamics of ${S}$. In other words, saying that ${\mathbb X}$ is an extension of ${\mathbb Y}$ means that, after collapsing each fiber to a point, the remaining dynamics is that of ${S}$. This is exactly what we wanted: to decompose the dynamics of ${T}$ into a simpler one. Again, let’s see some

Examples.

1. Every mps ${\mathbb X=(X,\mathcal A,\mu,T)}$ has a trivial factor: let ${(Y,\mathcal B,\nu)}$ be the trivial measure space defined in Example 1 and ${S}$ the spatial map in ${Y}$ such that ${S0=0}$. The map ${\pi:\mathbb X\rightarrow\mathbb Y}$ given by ${\pi x=0}$, ${\forall\,x\in X}$, is a homomorphism.
2. Consider a mps ${\mathbb X=(X,\mathcal A,\mu,T)}$ and a ${\sigma}$-algebra ${\mathcal B\subset\mathcal A}$ such that ${T^{-1}\mathcal B\subset\mathcal B}$. This defines another mps ${\mathbb Y=(X,\mathcal B,\mu,T)}$ that is a factor of the initial one via the identity map ${{\rm id}:\mathbb X\rightarrow\mathbb Y}$.
3. Given two mps ${\mathbb X}$ and ${\mathbb Y}$, the product mps ${\mathbb X\times\mathbb Y}$ (see ERT8) is an extension of both ${\mathbb X}$ and ${\mathbb Y}$ via the projections.
4. Given two measure spaces ${(X,\mathcal A,\mu)}$ and ${(Y,\mathcal B,\nu)}$, let ${\mathbb X=(X,\mathcal A,\mu,S)}$ be a mps and ${U:(X\times Y,\mathcal A\otimes\mathcal B,\mu\times\nu)\rightarrow (Y,\mathcal B,\nu)}$ a measure-preserving transformation. They induce the skew-product system ${\mathbb Z=(X\times Y,\mathcal A\otimes\mathcal B,\mu\times\nu,T)}$ defined by$\displaystyle T(x,y)=(Sx,U(x,y))$which is an extension of ${\mathbb X}$ via the projection ${\pi:\mathbb Z\rightarrow\mathbb X}$ on the first coordinate.
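A classical instance of Example 4 (standard, though not spelled out in the post) is the skew shift on the 2-torus: ${S(x)=x+\alpha}$ and ${U(x,y)=y+x}$, both mod 1. The sketch below iterates the system and checks the intertwining relation ${\pi\circ T=S\circ\pi}$ along an orbit.

```python
alpha = 0.3  # hypothetical rotation number; any value works for this check

def S(x):                 # base rotation on X = [0, 1)
    return (x + alpha) % 1.0

def T(point):             # skew product T(x, y) = (Sx, U(x, y)), U(x, y) = y + x
    x, y = point
    return (S(x), (y + x) % 1.0)

def proj(point):          # pi: Z -> X, projection on the first coordinate
    return point[0]

p = (0.1, 0.5)
for _ in range(100):      # pi o T = S o pi holds along the whole orbit
    assert abs(proj(T(p)) - S(proj(p))) < 1e-12
    p = T(p)
```

The first coordinate evolves autonomously under ${S}$, which is precisely what makes ${\mathbb X}$ a factor of the skew product.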

As in Lemma 2, we have a classification of factors.

Lemma 9 For every homomorphism ${\pi:\mathbb X=(X,\mathcal A,\mu,T)\rightarrow\mathbb Y}$ between mps, there exists a ${\sigma}$-algebra ${\mathcal A'\subset\mathcal A}$ such that ${\mathbb Y}$ is isomorphic to ${(X,\mathcal A',\mu,T)}$.

Proof: Let ${\mathcal A'=\pi^{-1}\mathcal B}$. We have

$\displaystyle T^{-1}\mathcal A'=T^{-1}\pi^{-1}\mathcal B=\pi^{-1}S^{-1}\mathcal B\subset\pi^{-1}\mathcal B=\mathcal A'$

and so ${(X,\mathcal A',\mu,T)}$ is a mps. The same isomorphism between measure spaces defined in Lemma 2 represents an isomorphism between the mps ${\mathbb Y}$ and ${(X,\mathcal A',\mu,T)}$, due to the assumption that ${\pi}$ intertwines ${T}$ and ${S}$. $\Box$

When one has a homomorphism between mps, conditional expectations have the additional property of being compatible with ${T}$ and ${S}$, according to the

Proposition 10 Let ${\pi:\mathbb X=(X,\mathcal A,\mu,T)\rightarrow\mathbb Y=(Y,\mathcal B,\nu,S)}$ be a homomorphism. Then, for each ${f\in L^1(\mathbb X)}$, one has$\displaystyle S\mathbb E(f|\mathbb Y)=\mathbb E(Tf|\mathbb Y).$

Proof: Fix ${B\in\mathcal B}$. Applying a change of variable, we have

$\displaystyle \begin{array}{rcl} \int_B S\mathbb E(f|\mathbb Y)d\nu&=&\int_{SB}\mathbb E(f|\mathbb Y)d\nu\\ &&\\ &=&\int_{\pi^{-1}(SB)}fd\mu\\ &&\\ &=&\int_{T(\pi^{-1}B)}fd\mu\\ &&\\ &=&\int_{\pi^{-1}B}Tfd\mu\\ &&\\ &=&\int_B\mathbb E(Tf|\mathbb Y)d\nu. \end{array}$

As ${S\mathbb E(f|\mathbb Y)\in L^1(\mathbb Y)}$, the uniqueness of ${\mathbb E(Tf|\mathbb Y)}$ guarantees the result. $\Box$
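Proposition 10 can be illustrated on a toy mps (a hypothetical finite example): take ${X=\mathbb Z/6}$ with ${Tx=x+1}$, the factor ${Y=\mathbb Z/3}$ via ${\pi(x)=x\bmod 3}$, and ${Sy=y+1}$; writing ${(Tf)(x)=f(Tx)}$ and ${(Sg)(y)=g(Sy)}$, the conditional expectation commutes with the dynamics.

```python
# Toy check of Proposition 10 on X = Z/6 -> Y = Z/3 (made-up data).
n, m = 6, 3
f = [1.0, 4.0, 2.0, 7.0, 0.0, 5.0]

def cond_exp(g):
    """E(g|Y)(y): average of g over the fiber {x : x mod 3 == y}."""
    return [sum(g[x] for x in range(n) if x % m == y) / (n // m) for y in range(m)]

Tf  = [f[(x + 1) % n] for x in range(n)]             # Tf = f o T
lhs = [cond_exp(f)[(y + 1) % m] for y in range(m)]   # (S E(f|Y))(y) = E(f|Y)(Sy)
rhs = cond_exp(Tf)                                   # E(Tf|Y)
assert lhs == rhs                                    # Proposition 10
```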

4. Disintegration of measures

We now examine what takes place above the points ${y\in Y}$, that is, on the fibers ${X_y=\pi^{-1}y}$ of a homomorphism ${\pi:\mathbb X\rightarrow\mathbb Y}$ between measure spaces: ${\mu}$ should contain ${\nu}$ as a sub-measure, in some sense. This is what happens when one considers the disintegration of ${\mu}$ with respect to ${\nu}$.

Theorem 11 (Rokhlin) Let ${\pi:(X,\mathcal A,\mu)\rightarrow(Y,\mathcal B,\nu)}$ be a homomorphism, where ${X}$ is a compact metric space. Then there exists a measurable map ${y\mapsto \mu_y}$ from ${Y}$ to ${\mathcal M(X)=\{\text{probability measures on }(X,\mathcal A)\}}$ satisfying, for every ${f\in L^1(X,\mathcal A,\mu)}$, the following conditions:

1. ${f\in L^1(X,\mathcal A,\mu_y)}$ for ${\nu}$-a.e. ${y\in Y}$ and $\displaystyle \mathbb E(f|\mathbb Y)(y)=\int_X fd\mu_y\,, \ \ \nu\text{-a.e. }y\in Y. \ \ \ \ \ (2)$
2. ${\int_Y\left(\int_X fd\mu_y\right)d\nu(y)=\int_X fd\mu}$.

For a proof of Rokhlin’s theorem, we refer the reader to Furstenberg’s book or to lecture notes by Marcelo Viana.

We call each ${\mu_y}$ a conditional measure and write

$\displaystyle \mu=\int \mu_yd\nu(y)$

for the disintegration of ${\mu}$ with respect to ${\pi}$. By (i), these conditional measures capture the essence of the conditional expectation. It is a simple task to check that the disintegration is unique almost surely. This implies, among other things, the following
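In the finite setting, the disintegration is nothing more than normalizing ${\mu}$ on each fiber. The sketch below (illustrative data, not from the post) builds the conditional measures ${\mu_y}$ and checks properties (i) and (ii) of Theorem 11.

```python
from fractions import Fraction

# A finite measure mu on X = {0,...,5}, a factor map pi onto Y = {0,1,2}.
mu = {0: Fraction(1, 12), 1: Fraction(3, 12), 2: Fraction(2, 12),
      3: Fraction(2, 12), 4: Fraction(1, 12), 5: Fraction(3, 12)}
pi = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2}

# pushforward measure nu = pi_* mu
nu = {}
for x, wt in mu.items():
    nu[pi[x]] = nu.get(pi[x], 0) + wt

# conditional measures: mu_y is mu restricted to the fiber of y, normalized
mu_y = {y: {x: mu[x] / nu[y] for x in mu if pi[x] == y} for y in nu}

f = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6}

# property (i): integrating f against mu_0 gives E(f|Y)(0)
assert sum(f[x] * mu_y[0][x] for x in mu_y[0]) == \
       sum(f[x] * mu[x] for x in mu if pi[x] == 0) / nu[0]

# property (ii): integrating the fiberwise integrals against nu recovers
# the integral of f against mu
total = sum(nu[y] * sum(f[x] * mu_y[y][x] for x in mu_y[y]) for y in nu)
assert total == sum(f[x] * mu[x] for x in mu)
```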

Proposition 12 Let ${\pi:(X,\mathcal A,\mu,T)\rightarrow(Y,\mathcal B,\nu,S)}$ be a homomorphism of mps and ${\mu=\int \mu_yd\nu(y)}$ be the disintegration of ${\mu}$ with respect to ${\pi}$. Then$\displaystyle T_*\mu_y=\mu_{Sy}\,,\ \ \text{for }\nu\text{-a.e. }y\in Y.$

Proof: It is enough to check that ${\mu=\int T_*\mu_{S^{-1}y}d\nu(y)}$ is a disintegration satisfying (i) of Theorem 11. In fact, if ${f\in L^1(X,\mathcal A,\mu)}$, by a change of variables and (2),

$\displaystyle \begin{array}{rcl} \int_X fd(T_*\mu_y)&=&\int_X Tfd\mu_y\\ &&\\ &=&\mathbb E(Tf|\mathbb Y)(y) \end{array}$

which, by Proposition 10, is equal to

$\displaystyle \begin{array}{rcl} S\mathbb E(f|\mathbb Y)(y)&=&\mathbb E(f|\mathbb Y)(Sy)\\ &&\\ &=&\int_X fd\mu_{Sy} \end{array}$

and so

$\displaystyle \int_X fd(T_*\mu_y)=\int_X fd\mu_{Sy}\,,\ \ \forall\,f\in L^1(X,\mathcal A,\mu),$

proving that ${T_*\mu_y=\mu_{Sy}}$ for ${\nu}$-a.e. ${y\in Y}$. $\Box$

5. Relative product of measure spaces

Suppose now that ${\mathbb X_1=(X_1,\mathcal A_1,\mu_1)}$, ${\mathbb X_2=(X_2,\mathcal A_2,\mu_2)}$, ${\mathbb Y=(Y,\mathcal B,\nu)}$ are measure spaces and ${\pi_1:\mathbb X_1\rightarrow\mathbb Y}$, ${\pi_2:\mathbb X_2\rightarrow\mathbb Y}$ homomorphisms. One can form the fibre product space ${\mathbb X_1\times_Y\mathbb X_2=(X_1\times X_2,\mathcal A_1\otimes\mathcal A_2,\mu_1\times_{\mathbb Y}\mu_2)}$, where ${\mu_1\times_{\mathbb Y}\mu_2}$ is defined via the disintegrations ${\mu_1=\int \mu_{1,y}d\nu}$ and ${\mu_2=\int \mu_{2,y}d\nu}$ by the integration formula

$\displaystyle \mu_1\times_{\mathbb Y}\mu_2=\int_Y (\mu_{1,y}\times\mu_{2,y})d\nu(y).$

This measure space is, in a natural way, an extension of ${\mathbb Y}$. Actually, if ${\theta_1:\mathbb X_1\times_{\mathbb Y}\mathbb X_2\rightarrow\mathbb X_1}$ is the projection map on the first coordinate, then ${\pi_1\circ\theta_1:\mathbb X_1\times_{\mathbb Y}\mathbb X_2\rightarrow\mathbb Y}$ is a homomorphism.

Definition 13 The measure space ${\mathbb X_1\times_Y\mathbb X_2=(X_1\times X_2,\mathcal A_1\otimes\mathcal A_2,\mu_1\times_{\mathbb Y}\mu_2)}$ is called the product of ${\mathbb X_1}$ and ${\mathbb X_2}$ relative to ${\mathbb Y}$.

Remarks.

1. ${\mu_1\times_{\mathbb Y}\mu_2}$ is supported on the fibre-product set$\displaystyle X_1\times_{\mathbb Y}X_2=\{(x_1,x_2)\,;\,\pi_1(x_1)=\pi_2(x_2)\}.$
2. It is a simple task to check that if ${\theta_2}$ is defined in an analogous manner, then ${\pi_1\circ\theta_1=\pi_2\circ\theta_2}$, which proves that ${\mathbb X_1\times_{\mathbb Y}\mathbb X_2}$ is unambiguously an extension of ${\mathbb Y}$. One also checks that, if ${\pi_3:\mathbb X_3\rightarrow\mathbb Y}$ is another extension, then the operation of relative product with respect to ${\mathbb Y}$ is associative:$\displaystyle (\mathbb X_1\times_{\mathbb Y}\mathbb X_2)\times_{\mathbb Y}\mathbb X_3= \mathbb X_1\times_{\mathbb Y}(\mathbb X_2\times_{\mathbb Y}\mathbb X_3).$We denote this measure space simply as ${\mathbb X_1\times_{\mathbb Y}\mathbb X_2\times_{\mathbb Y}\mathbb X_3}$. See Furstenberg’s book for further details.

From the probabilistic point of view, the relative product provides a probability space on which both ${\sigma}$-algebras of events ${\mathcal A_1}$ and ${\mathcal A_2}$ are represented and are independent conditioned upon ${\mathcal B}$. This is corroborated by the next

Proposition 14 If ${f_1\in L^2(\mathbb X_1)}$ and ${f_2\in L^2(\mathbb X_2)}$, then$\displaystyle \mathbb E(f_1\otimes f_2|\mathbb Y)=\mathbb E(f_1|\mathbb Y)\cdot\mathbb E(f_2|\mathbb Y),$

where the first expectation refers to the extension ${\pi_1\circ\theta_1:\mathbb X_1\times_{\mathbb Y}\mathbb X_2\rightarrow\mathbb Y}$, the second to ${\pi_1:\mathbb X_1\rightarrow\mathbb Y}$ and the third to ${\pi_2:\mathbb X_2\rightarrow\mathbb Y}$.

Proof: By Rokhlin’s theorem,

$\displaystyle \begin{array}{rcl} \mathbb E(f_1\otimes f_2|\mathbb Y)(y)&=&\int_{X_1\times X_2}(f_1\otimes f_2)d(\mu_{1,y}\times\mu_{2,y})\\ &&\\ &=&\int_{X_1}f_1d\mu_{1,y}\cdot \int_{X_2}f_2d\mu_{2,y}\\ &&\\ &=&\mathbb E(f_1|\mathbb Y)(y)\cdot\mathbb E(f_2|\mathbb Y)(y). \end{array}$

$\Box$
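Proposition 14 can be checked by hand on finite spaces (the data below is made up): build ${\mu_1\times_{\mathbb Y}\mu_2}$ fiberwise from the disintegrations and compare both sides.

```python
from fractions import Fraction

# Two extensions of Y = {0, 1}, given directly by their disintegrations:
# mu_{i,y} is a probability on the fiber of X_i over y.
Y  = [0, 1]
nu = {0: Fraction(1, 2), 1: Fraction(1, 2)}
mu1_y = {0: {'a': Fraction(1, 3), 'b': Fraction(2, 3)}, 1: {'c': 1}}
mu2_y = {0: {'p': Fraction(1, 2), 'q': Fraction(1, 2)}, 1: {'r': 1}}

f1 = {'a': 2, 'b': 5, 'c': 3}
f2 = {'p': 4, 'q': 0, 'r': 1}

def E(f, disint):
    """E(f|Y)(y) via Rokhlin: integrate f against the conditional measure."""
    return {y: sum(f[x] * wt for x, wt in disint[y].items()) for y in Y}

# E(f1 (x) f2 | Y)(y): integrate f1(x1) f2(x2) against mu_{1,y} x mu_{2,y}
E12 = {y: sum(f1[x1] * f2[x2] * w1 * w2
              for x1, w1 in mu1_y[y].items()
              for x2, w2 in mu2_y[y].items()) for y in Y}

prod = {y: E(f1, mu1_y)[y] * E(f2, mu2_y)[y] for y in Y}
assert E12 == prod                      # Proposition 14

# the relative product measure mu1 x_Y mu2 has total mass 1
mass = sum(nu[y] * w1 * w2 for y in Y
           for w1 in mu1_y[y].values() for w2 in mu2_y[y].values())
assert mass == 1
```

Note that conditioned on each ${y}$, the two coordinates are independent by construction, which is exactly the probabilistic content of the relative product.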

Previous posts: ERT0, ERT1, ERT2, ERT3, ERT4, ERT5, ERT6, ERT7, ERT8, ERT9, ERT10, ERT11, ERT12, ERT13.
