Posted by: matheuscmss | July 31, 2013

Furstenberg’s theorem on the Poisson boundaries of lattices of SL(n,R) (part II)

Last time we introduced Poisson boundaries hoping to use them to distinguish between lattices of {SL(2,\mathbb{R})} and {SL(n,\mathbb{R})}, {n\geq 3}. More precisely, we followed Furstenberg to define and construct Poisson boundaries as a tool that would allow us to prove the following statement:

Theorem 1 (Furstenberg (1967)) A lattice of {SL(2,\mathbb{R})} can’t be realized as a lattice in {SL(n,\mathbb{R})} for {n\geq 3} (or, in the language introduced in the previous post, {SL(n,\mathbb{R})}, {n\geq 3}, can’t envelope a discrete group enveloped by {SL(2,\mathbb{R})}).

Here, we recall that, very roughly speaking, these Poisson boundaries {(B,\nu)} were certain “maximal” topological objects attached to locally compact groups with probability measures {(G,\mu)} in such a way that the points in the boundary {B} were almost sure limits of {\mu}-random walks on {G} and the probability measure {\nu} was a {\mu}-stationary measure giving the distribution of the points at which {\mu}-random walks hit the boundary.

For this second (final) post, we will discuss (below the fold) some examples of Poisson boundaries and, after that, we will sketch the proof of Theorem 1.

Remark 1 The basic references for this post are the same ones from the previous post, namely, Furstenberg’s survey, his original articles and A. Furman’s survey.

1. Some examples of Poisson boundaries

1.1. Abelian groups

The Poisson boundary of a locally compact Abelian group {G} with respect to any probability measure {\mu} is trivial.

Indeed, in terms of the characterization of Poisson boundaries via {\mu}-harmonic functions, this amounts to saying that bounded {\mu}-harmonic functions on locally compact Abelian groups are constant, a fact proved by Choquet-Deny here.

More generally, there is a natural notion of random walk entropy {h(G,\mu)} (cf. Furman’s survey) allowing one to characterize the triviality of the Poisson boundary: more precisely, Kaimanovich-Vershik showed here that a discrete countable group {G} equipped with a probability measure {\mu} has trivial Poisson boundary if and only if {h(G,\mu)=0}.

Remark 2 In addition to these results, it is worth mentioning that:

  • Furstenberg showed that any probability measure {\mu} on a locally compact non-amenable group {G} whose support {\textrm{supp}(\mu)} generates {G} admits bounded non-constant {\mu}-harmonic functions;
  • Kaimanovich-Vershik and Rosenblatt showed that given a locally compact amenable group {G}, there exists a (symmetric) probability measure {\mu} with full support {\textrm{supp}(\mu)=G} such that all bounded {\mu}-harmonic functions on {G} are constant.

See Furman’s survey for more comments on these results (as well as related ones).

1.2. Free group {F_2} on 2 generators

Let {F_2} be the free group on {2} generators {a} and {b} equipped with the Bernoulli probability measure {\mu(\{\ast\})=1/4} for {\ast\in\{a, a^{-1}, b, b^{-1}\}}. Last time, during the naive description of the Poisson boundary, we saw that the space

\displaystyle \Omega_2=\{w_1\dots w_n\dots: w_{i}w_{i+1}\neq 1 \textrm{ and } w_i\in\{a, a^{-1}, b, b^{-1}\}\}

of infinite words {w_1w_2\dots} on {a, a^{-1}, b, b^{-1}} satisfying the non-cancellation condition {w_iw_{i+1}\neq 1} for all {i\in\mathbb{N}} was a natural candidate for a boundary of {(F_2,\mu)} (as far as {\mu}-random walks were concerned).

1.2.1. Computation of the Poisson boundary of {(F_2,\mu)}

As the reader can imagine, {\Omega_2} equipped with an adequate measure {\nu} is the Poisson boundary of {(F_2,\mu)}. The proof of this fact goes along the following lines. Firstly, let us show that {\Omega_2} is an {F_2}-space, i.e.,

Lemma 2 {F_2} acts continuously on {\Omega_2} (for the topology of pointwise convergence).

Proof: The action of {F_2} on {\Omega_2} is defined by continuity: given {g\in F_2} and a sequence {g_n\in F_2} with {g_n\rightarrow\omega\in\Omega_2}, let {g\omega:=\lim\limits_{n\rightarrow\infty} gg_n}. In other words, if {g=w_1\dots w_l}, then {g\omega} is the infinite word obtained by concatenation of {g} and {\omega}, and then collapsing if necessary: e.g., if {\omega=abab\dots}, then

  • {a\omega=aabab\dots}, {b\omega=babab\dots} and {b^{-1}\omega=b^{-1}abab\dots}, and
  • {a^{-1}\omega=a^{-1}abab\dots=bab\dots}

This proves the lemma. \Box
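
The concatenation-and-collapse rule in this proof is easy to experiment with. Here is a minimal Python sketch (purely illustrative and not part of the post's argument; the encoding of generators as characters `a`, `A`, `b`, `B`, with capitals denoting inverses, is our own convention):

```python
# Illustrative sketch: elements of F_2 as reduced words over 'a','A','b','B',
# where 'A' stands for a^{-1} and 'B' for b^{-1}.
INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def reduce_word(letters):
    """Collapse adjacent canceling pairs w_i w_{i+1} = 1 until none remain."""
    out = []
    for x in letters:
        if out and out[-1] == INV[x]:
            out.pop()          # a canceling pair disappears
        else:
            out.append(x)
    return out

def act(g, omega_prefix):
    """g acting on (a finite prefix of) an infinite word: concatenate, then collapse."""
    return ''.join(reduce_word(list(g) + list(omega_prefix)))

# the examples from the proof, with omega = abab...
print(act('a', 'abab'))   # 'aabab'
print(act('A', 'abab'))   # 'bab' (the pair a^{-1}a collapses)
```

Since only finitely many letters can collapse, acting on a long enough prefix already determines the image word, which is exactly why the action is continuous for pointwise convergence.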

Next, let us consider a stationary sequence {x_n} of independent {F_2}-valued random variables with distribution {\mu} (so that {x_n\in\{a,a^{-1},b,b^{-1}\}}). We claim that {x_1\dots x_n} converges to a point {z_1} of {\Omega_2} with probability {1}:

Lemma 3 {x_1\dots x_n\rightarrow z_1\in\Omega_2} with probability {1}.

Proof: The basic idea to get convergence is simple: we write the product {x_1x_2\dots x_nx_{n+1}\dots}, we delete all pairs {x_ix_{i+1}} canceling out (i.e., {x_ix_{i+1}=1}), and we continue to delete “canceling pairs” until we get a point {z_1} in {\Omega_2}.

Of course, this procedure fails to produce a limit point {z_1} only if the product {x_1\dots x_n\dots} keeps degenerating indefinitely.

However, this possibility occurs only with zero probability thanks to the transience of the random walk {x_1\dots x_n} on the free group {F_2} (i.e., the fact that this random walk wanders to {\infty} with full probability; cf., e.g., this classical paper of Kesten).

Equivalently, letting {\ell(g)} denote the length of an element {g\in F_2} (i.e., the minimal length of products expressing {g} in terms of the generators {\{a, a^{-1}, b, b^{-1}\}}), the transience of {x_1\dots x_n} means that {\ell(x_1\dots x_n)\rightarrow\infty} as {n\rightarrow\infty} with probability {1}. In particular, with probability {1}, for each {k}, we can select the largest value {n_k\in\mathbb{N}} of {n} such that {\ell(x_1\dots x_n)=k}, so that the right multiplication of {x_1\dots x_{n_k}} by {x_{n_k+1}\dots} does not change the first {k} letters of the product {x_1\dots x_n\dots}, and, a fortiori, we get (with probability {1}) a well-defined limit word that we can denote without any ambiguity

\displaystyle z_1=x_1\dots x_n\dots

This proves the lemma. \Box
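
One can watch this stabilization numerically. The sketch below (an illustration, not part of the proof; capitals encode inverses) runs the same random walk from one seed for 2000 and then 4000 steps, and observes that the reduced length grows while the initial letters agree:

```python
import random

# 'A' stands for a^{-1}, 'B' for b^{-1}: an encoding chosen for this illustration
INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def walk_prefix(n, seed):
    """Reduced form of x_1...x_n for i.i.d. uniform steps on {a,a^{-1},b,b^{-1}}."""
    rng = random.Random(seed)
    word = []
    for _ in range(n):
        x = rng.choice('aAbB')
        if word and word[-1] == INV[x]:
            word.pop()         # cancellation
        else:
            word.append(x)
    return word

# same seed, so the longer run extends the shorter one step by step
w1 = walk_prefix(2000, seed=1)
w2 = walk_prefix(4000, seed=1)
print(len(w1), len(w2))        # lengths grow roughly like n/2 (transience)
print(w1[:10], w2[:10])        # with overwhelming probability the prefixes coincide
```

The stabilized prefix is the beginning of the limit word {z_1=x_1\dots x_n\dots}.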

An immediate corollary of the proof of this lemma (and the definition of pointwise convergence) is:

Corollary 4 With probability {1}, for any {\omega\in\Omega_2}, {x_1x_2\dots x_n\omega} converges to {z_1}. In particular, given any probability measure {\nu} on {\Omega_2}, one has

\displaystyle (x_1x_2\dots x_n)_*(\nu)\rightarrow\delta_{z_1}

Another way of phrasing this corollary is:

{\Omega_2} carries only one {\mu}-stationary measure {\nu}.

Indeed, from the limit point {z_1} provided by the previous lemma, we can define a sequence {\{z_k\}} of {\Omega_2}-valued random variables by setting {z_k=x_k x_{k+1}\dots} and it is not hard to check that {\{z_k\}} is a {\mu}-process (as defined in the previous post). Therefore, if {\nu} is a {\mu}-stationary probability measure on {\Omega_2}, the convergence asserted in the corollary and Proposition 7 of the previous post says that {\nu} is the distribution of a {\mu}-process. Furthermore, Proposition 5 of the previous post asserts that this {\mu}-process is exactly {\{z_k\}}, that is, the unique {\mu}-stationary measure {\nu} on {\Omega_2} is the distribution of the {\mu}-process {\{z_k\}}.

In summary, denoting by {\nu} the unique {\mu}-stationary measure on {\Omega_2} (namely, the distribution of {\{z_k\}}), our discussion so far shows that {(\Omega_2,\nu)} is a boundary of {(F_2,\mu)}. In fact, one can make this boundary entirely explicit because it is not hard to guess {\nu}:

Lemma 5 The unique {\mu}-stationary measure {\nu} on {\Omega_2} is the probability measure giving the maximal independence between the coordinates, i.e., {\omega_1} takes one of the four values {\{a,a^{-1},b,b^{-1}\}} with equal probability, {\omega_2} takes one of the three possible values {\{a,a^{-1},b,b^{-1}\}-\{\omega_1^{-1}\}} with equal probability, etc., so that the {\nu}-measure of a cylinder obtained by prescribing the first {n} entries is {1/(4\cdot 3^{n-1})}, i.e.,

\displaystyle \nu(\{\omega=\omega_1\omega_2\dots\in\Omega_2: \omega_1=\omega_1^*, \dots, \omega_n=\omega_n^*\})=\frac{1}{4\cdot 3^{n-1}}

Proof: From our previous discussion, it is sufficient to check that {\nu} is {\mu}-stationary, i.e., {\mu\ast\nu=\nu}. This fact is not hard to check: we just have to verify that the {\mu\ast\nu}-integral and the {\nu}-integral of characteristic functions of cylinders coincide, i.e., we have to compute (and show the equality of) the {\mu\ast\nu}-measure and the {\nu}-measure of any given cylinder

\displaystyle \Sigma=\{\omega=\omega_1\omega_2\dots\in\Omega_2: \omega_1=\omega_1^*, \dots, \omega_n=\omega_n^*\}.

In this direction, note that, by definition,

\displaystyle \mu\ast\nu(\Sigma)=\frac{1}{4}(\nu(a\Sigma)+\nu(a^{-1}\Sigma)+\nu(b\Sigma)+\nu(b^{-1}\Sigma)).

On the other hand, for {g\in\{a,a^{-1},b,b^{-1}\}}, we have that {g\Sigma} is a cylinder of size {n+1} (i.e., corresponding to the prescription of {n+1} entries) unless {g=(\omega_1^*)^{-1}}, in which case {g\Sigma} is a cylinder of size {n-1}. In particular,

\displaystyle \nu(a\Sigma)+\nu(a^{-1}\Sigma)+\nu(b\Sigma)+\nu(b^{-1}\Sigma) = 3\left(\frac{1}{4\cdot 3^{n}}\right)+\frac{1}{4\cdot 3^{n-2}} = \frac{1}{3^{n-1}}.

By plugging this into the previous equation, we deduce that

\displaystyle \mu\ast\nu(\Sigma)=\frac{1}{4\cdot 3^{n-1}}=\nu(\Sigma)

as desired. This proves the lemma. \Box
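
Since {\nu} is defined by explicit cylinder measures, the stationarity computation in this proof can be checked mechanically in exact arithmetic. A small illustrative sketch (our own; cylinders are encoded as tuples of letters, with `A`, `B` denoting inverses):

```python
from fractions import Fraction

INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def nu(cyl):
    """ν-measure of the cylinder prescribing the first n letters (a reduced word)."""
    return Fraction(1, 4 * 3 ** (len(cyl) - 1))

def nu_g_cyl(g, cyl):
    """ν(gΣ): a cylinder of size n+1, unless g cancels the first prescribed letter."""
    if g != INV[cyl[0]]:
        return nu((g,) + tuple(cyl))       # size n+1
    if len(cyl) > 1:
        return nu(tuple(cyl[1:]))          # cancellation: size n-1
    return Fraction(3, 4)                  # n = 1: gΣ is everything not starting with g^{-1}

def mu_conv_nu(cyl):
    """(μ∗ν)(Σ) = (1/4)(ν(aΣ) + ν(a^{-1}Σ) + ν(bΣ) + ν(b^{-1}Σ))."""
    return sum(nu_g_cyl(g, cyl) for g in 'aAbB') / 4

sigma = ('a', 'b', 'a')
print(nu(sigma), mu_conv_nu(sigma))   # both 1/36: μ∗ν = ν on this cylinder
```

(The degenerate case {n=1} does not occur in the displayed computation of the proof, but it is included so the check covers all cylinders.)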

At this point, it remains only to check that {(\Omega_2,\nu)} is the Poisson boundary, i.e., that all boundaries of {(F_2,\mu)} are equivariant images of {(\Omega_2,\nu)} and that the Poisson formula holds for {\mu}-harmonic functions.

We will skip the proof of this last fact because it would lead us far from the scope of this post. Instead, we refer to the Dynkin-Maljutov paper where it is shown that {(\Omega_2,\nu)} is the Martin boundary (and, a fortiori, the Poisson boundary).

1.2.2. A law of large numbers for {(F_2,\mu)}

Using our knowledge of the Poisson boundary {(\Omega_2,\nu)} of {(F_2,\mu)}, let us sketch a proof of the fact that

\displaystyle \ell(x_1\dots x_n)\sim n/2

as {n\rightarrow\infty} with probability {1} (where {\ell(g)} is the length of {g\in F_2}).

Firstly, let us observe that {g\nu} is absolutely continuous with respect to {\nu} for each {g\in F_2}. Indeed, since {\nu} is {\mu}-stationary, i.e.,

\displaystyle \nu=\frac{1}{4}(a\nu+a^{-1}\nu+b\nu+b^{-1}\nu)

our claim is true for the generators {a, a^{-1}, b, b^{-1}} of {F_2}, and, a fortiori, it is also true for all {g\in F_2}.

Actually, the Radon-Nikodym density {dg\nu/d\nu} is not hard to compute: if {\omega\in\Omega_2} is the limit of the sequence {g_n\in F_2}, then one can check by induction on {\ell(g)} that

\displaystyle \frac{dg\nu}{d\nu}(\omega)=3^{\ell(g_n)-\ell(g^{-1}g_n)}

for all {n} sufficiently large. In other words,

\displaystyle -\log\left(\frac{dg\nu}{d\nu}(\omega)\right)=\lim\limits_{n\rightarrow\infty}(\log 3)(\ell(g^{-1}g_n)-\ell(g_n)). \ \ \ \ \ (1)
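
On cylinders, the density {dg\nu/d\nu} is just a ratio of the explicit cylinder measures, so formula (1) can be tested directly for generators (an illustrative sketch with `A`, `B` denoting inverses, an encoding of our own; for a general {g} one would iterate over the letters of {g}):

```python
from fractions import Fraction

INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def reduce_word(letters):
    """Collapse adjacent canceling pairs until the word is reduced."""
    out = []
    for x in letters:
        if out and out[-1] == INV[x]:
            out.pop()
        else:
            out.append(x)
    return out

def nu(cyl):
    """ν-measure of the cylinder prescribing the letters of cyl."""
    return Fraction(1, 4 * 3 ** (len(cyl) - 1))

def density(g, omega_prefix):
    """(dgν/dν)(ω) for a generator g, as ν(g^{-1}Σ)/ν(Σ) on a cylinder Σ around ω."""
    sigma = list(omega_prefix)
    return nu(reduce_word([INV[g]] + sigma)) / nu(sigma)

# ω = abab...: the exponent ℓ(g_n) - ℓ(g^{-1}g_n) is 1 for g = a, and -1 for g = a^{-1}
print(density('a', 'abab'))   # 3
print(density('A', 'abab'))   # 1/3
```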

Consider now the quantity

\displaystyle \frac{1}{n}\ell(x_1\dots x_n) = \frac{1}{n}\sum\limits_{i=1}^n (\ell(x_i x_{i+1}\dots x_n)-\ell(x_{i+1}\dots x_n))

For each {i}, we know that {(\log 3)(\ell(x_i x_{i+1}\dots x_n)-\ell(x_{i+1}\dots x_n))\rightarrow-\log \frac{dx_i^{-1}\nu}{d\nu}(z_{i+1})}. From this, it is possible to prove (after some work [related to the fact that we want to let {i} and {n} vary...]) that

\displaystyle (\log 3)\lim\limits_{n\rightarrow\infty}\frac{1}{n}\ell(x_1\dots x_n) = \lim\limits_{n\rightarrow\infty}-\frac{1}{n}\sum\limits_{i=1}^n \log \frac{dx_i^{-1}\nu}{d\nu}(z_{i+1})

By Birkhoff’s ergodic theorem, the time-average on the right-hand side of this equality converges to the spatial-average with probability {1}, i.e.,

\displaystyle \lim\limits_{n\rightarrow\infty}-\frac{1}{n}\sum\limits_{i=1}^n \log \frac{dx_i^{-1}\nu}{d\nu}(z_{i+1})=-\int_{F_2}\int_{\Omega_2}\log\frac{dg^{-1}\nu}{d\nu}(\omega) d\nu(\omega)d\mu(g).

Using (1), we can compute this integral as follows. By definition of {\mu},

\displaystyle \int_{F_2}\int_{\Omega_2}-\log\frac{dg^{-1}\nu}{d\nu}(\omega) d\nu(\omega)d\mu(g)=\frac{1}{4}\sum\limits_{g\in\{a, a^{-1}, b, b^{-1}\}}\int_{\Omega_2}-\log\frac{dg^{-1}\nu}{d\nu}(\omega)d\nu(\omega)

Now, for each {\omega=\omega_1\omega_2\dots=\lim\limits_{n\rightarrow\infty}g_n\in\Omega_2} and {g\in\{a,a^{-1},b,b^{-1}\}}, we have that {\ell(gg_n)-\ell(g_n)=1} unless {g=\omega_1^{-1}}, in which case {\ell(gg_n)-\ell(g_n)=-1}. In particular, from (1), we get that

\displaystyle \sum\limits_{g\in\{a,a^{-1},b,b^{-1}\}}-\log\frac{dg^{-1}\nu}{d\nu}(\omega) = 2\log3

for each {\omega\in\Omega_2}. By plugging this into the previous equation, we obtain that

\displaystyle \int_{F_2}\int_{\Omega_2}-\log\frac{dg^{-1}\nu}{d\nu}(\omega) d\nu(\omega)d\mu(g)=\frac{1}{2}\log 3

and, hence,

\displaystyle \lim\limits_{n\rightarrow\infty}\frac{1}{n}\ell(x_1\dots x_n) = 1/2,

i.e., {\ell(x_1\dots x_n)\sim n/2} as {n\rightarrow\infty} with probability {1}.
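
This law of large numbers is easy to see in a simulation (illustrative only; the constant {1/2} reflects the drift: each step increases the reduced length with probability {3/4} and decreases it with probability {1/4}):

```python
import random

INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}   # 'A' = a^{-1}, 'B' = b^{-1}

def word_length(n, seed):
    """ℓ(x_1...x_n) for i.i.d. uniform steps on the four generators."""
    rng = random.Random(seed)
    word = []
    for _ in range(n):
        x = rng.choice('aAbB')
        if word and word[-1] == INV[x]:
            word.pop()         # the reduced length drops by 1
        else:
            word.append(x)     # the reduced length grows by 1
    return len(word)

n = 100_000
ratio = word_length(n, seed=42) / n
print(ratio)   # close to 0.5, in agreement with ℓ(x_1...x_n) ~ n/2
```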

1.2.3. A random walk on the free group {F_{\infty}} on {\infty} generators

The arguments of the previous subsubsections also apply to the free groups {F_r} on {r\in\mathbb{N}} generators equipped with the Bernoulli probability assigning equal probabilities to the {2r} singletons consisting of the generators and their inverses.

However, the simple-minded extension of this discussion to the free group {F_{\infty}} on countably many generators fails because we do not have a Bernoulli measure.

Nevertheless, it is possible to equip {F_{\infty}} with some probability measure {\mu_{\infty}} such that the Poisson boundary of {(F_{\infty},\mu_{\infty})} coincides with the Poisson boundary {(\Omega_2,\nu)} of {(F_2,\mu)}.

In fact, this is so because {F_{\infty}} “behaves like a lattice” of {F_2}. More concretely, the important fact about {F_{\infty}} is that it is isomorphic to the commutator subgroup {[F_2,F_2]} of {F_2}. Using this fact, we can formalize the idea that {F_{\infty}} is a lattice of {F_2} through the following lemma:

Lemma 6 {F_{\infty}} (or, more precisely, the commutator subgroup {[F_2,F_2]}) is a recurrence set for the random walk {u_n=x_1\dots x_n} on {F_2} in the sense that {u_n} meets {F_{\infty}\simeq[F_2,F_2]} infinitely often with probability {1}.

Proof: The quotient group {F_2/[F_2, F_2]} is the free Abelian group on {2} generators, i.e., {F_2/[F_2, F_2]\simeq\mathbb{Z}^2}. Now, by projecting to {F_2/[F_2, F_2]\simeq\mathbb{Z}^2} the random walk {u_n=x_1\dots x_n} on {F_2}, we obtain the simple random walk {v_n} on {\mathbb{Z}^2}. In this notation, the assertion that {[F_2, F_2]} is a recurrence set of {u_n} corresponds to the well-known fact that the simple random walk {v_n} on {\mathbb{Z}^2} returns infinitely often to the origin. \Box
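
The projection argument can be illustrated numerically: the image of {u_n} in {F_2/[F_2,F_2]\simeq\mathbb{Z}^2} is a simple random walk, and its returns to the origin are exactly the visits of {u_n} to the commutator subgroup. A quick sketch (our own illustration):

```python
import random

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # images of a, a^{-1}, b, b^{-1} in Z^2

def returns_to_origin(n_steps, rng):
    """Does the projected walk v_n on Z^2 (i.e., u_n mod [F_2,F_2]) revisit the origin?"""
    x = y = 0
    for _ in range(n_steps):
        dx, dy = rng.choice(STEPS)
        x, y = x + dx, y + dy
        if (x, y) == (0, 0):
            return True
    return False

rng = random.Random(7)
trials = 200
frac = sum(returns_to_origin(10_000, rng) for _ in range(trials)) / trials
print(frac)   # a sizable fraction of walks return; recurrence says frac → 1 as n_steps → ∞
```

(Recurrence on {\mathbb{Z}^2} is slow: the expected number of returns by time {n} grows only logarithmically, so even long simulations show fractions well below {1}.)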

From this lemma, we can construct (at least heuristically) a probability measure {\mu_{\infty}} on {F_{\infty}} whose Poisson boundary is the same as that of {(F_2,\mu)} (namely, {(\Omega_2,\nu)}). In fact, letting {u_n=x_1\dots x_n} be the random walk on {F_2}, we know that (with probability {1}) there is a subsequence {u_{n_k}} hitting {F_{\infty}}. On the other hand, {u_n} converges to a limit point {z_1\in\Omega_2}. In particular, it follows that the boundary points {z_1\in\Omega_2} are also limits of the {F_{\infty}}-valued random variables {\{u_{n_k}\}}. Thus, if one can show that {\{u_{n_k}\}} are independent random variables with a fixed distribution {\mu_{\infty}}, then {(F_{\infty},\mu_{\infty})} has Poisson boundary {(\Omega_2,\nu)}. Here, the keyword is the strong Markov property of {x_n}. More precisely, we notice that from {u_{n_k}} to {u_{n_{k+1}}} we multiply (to the right) by {x_{n_k+1}x_{n_k+2}\dots x_{n_{k+1}}}; since {u_{n_k}} and {u_{n_{k+1}}} belong to {F_{\infty}}, we have that {x_{n_k+1}x_{n_k+2}\dots x_{n_{k+1}}\in F_{\infty}}, and, in particular, this suggests that {\mu_{\infty}} is the distribution of the position of {u_n} at the first time that {u_n=x_1\dots x_n} enters {F_{\infty}}. Of course, to formalize this, we need to know that the entrance times {n_k} of {u_n} in {F_{\infty}} are themselves random variables (or rather that {x_{n_k+1}\dots x_{n_{k+1}}} and {x_{n_{k+1}+1}\dots x_{n_{k+2}}} are still independent random variables), and this is precisely a consequence of the strong Markov property of {\{x_n\}}.

Alternatively, one can show that {(F_{\infty},\mu_{\infty})} has Poisson boundary {(\Omega_2,\nu)} by working with harmonic functions. More precisely, our task consists in showing that the restrictions to {F_{\infty}} of {\mu}-harmonic functions on {F_2} are {\mu_{\infty}}-harmonic and that all {\mu_{\infty}}-harmonic functions on {F_{\infty}} arise in this way (by restriction of {\mu}-harmonic functions on {F_2}). In this direction, one recalls that, for each {g\in F_{\infty}}, the quantity {\mu_{\infty}(g)} is the probability that {x_1\dots x_n} takes value {g} at the first time it enters {F_{\infty}}, and then one shows the desired facts about {\mu}-harmonic functions versus {\mu_{\infty}}-harmonic functions with the aid of the following abstract lemma:

Lemma 7 Let {G} be a discrete group, {\mu} a probability measure on {G} and {\{x_n\}} a stationary sequence of independent random variables with distribution {\mu}. Suppose that {R\subset G} is a recurrence set of {u_n=x_1\dots x_n} and let {h=h(g)} be a {\mu}-harmonic function. Then,

\displaystyle h(g)=\sum\limits_{g^*\in R} \theta_g(g^*) h(g^*)

where {\theta_g} is the distribution of the first point of {R} hit by {gx_1\dots x_n}.

We refer to pages 31 and 32 of Furstenberg’s survey for a (short) proof of this lemma and its application to the computation of the Poisson boundary of {(F_{\infty},\mu_{\infty})}.

1.3. {SL(n,\mathbb{R})}, {n\geq 2}

Let {G=G_n=SL(n,\mathbb{R})}, {n\geq 2}, and denote by {K=K_n} the (maximal compact) subgroup of orthogonal matrices in {G} and by {P=P_n} the (Borel or minimal parabolic) subgroup of upper triangular matrices in {G}. From the Gram-Schmidt orthogonalization process, we know that {G=KP}.

Definition 8 A probability measure {\mu} on {G} is called spherical if

  • {\mu} is absolutely continuous with respect to Haar measure on {G} and
  • {\mu} is {K}-bi-invariant, i.e., {k\mu=\mu k=\mu} for all {k\in K}, or, equivalently, {\int_G f(k_1 g k_2)d\mu(g)=\int_G f(g)d\mu(g)} for all {k_1, k_2\in K}.

Let us now consider the homogeneous space {B=B_n=G/P} (of complete flags in {\mathbb{R}^n}), i.e., a space where {G} acts continuously and transitively (by left multiplication {g(hP)=(gh)P} on cosets {hP\in G/P}, of course).

Since {K} acts transitively on {B}, there exists a unique {K}-invariant probability measure on {B} that we will denote {m_B} (philosophically, this is comparable to the fact that the unique translation-invariant measure on the real line, up to scaling, is Lebesgue). Now, given any probability measure {\nu} on {B} and any spherical measure {\mu} on {G}, note that {\mu\ast\nu} is a {K}-invariant probability measure on {B}, and, a fortiori, {\mu\ast\nu=m_B}. In particular, {m_B} is the unique {\mu}-stationary measure on {B}.

In this context, Furstenberg proved the following theorem:

Theorem 9 (Furstenberg) {(B,m_B)} is the Poisson boundary of {(G,\mu)} whenever {\mu} is a spherical measure.

We will not comment on the proof of this theorem here: instead, we refer the reader to the original article of Furstenberg for more details (and, in fact, a more general result valid for all semi-simple Lie groups {G}).

An interesting consequence of this theorem is the following fact:

Corollary 10 The class of {\mu}-harmonic functions on {G} is the same for all spherical measures {\mu}.

Proof: By the definition of the Poisson boundary, the fact (ensured by Furstenberg’s theorem) that {(B,m_B)} is the Poisson boundary of {(G,\mu)} means that we have a Poisson formula

\displaystyle h(g)=\int_B\hat{h}(g\xi)dm_B(\xi)

giving all {\mu}-harmonic functions {h} on {G} as integrals of bounded measurable functions {\hat{h}} on {B}. Since the right-hand side of this equation does not depend on {\mu} (only on {m_B}), the corollary follows. \Box

From this corollary, it is natural to define a harmonic function on {G_n=SL(n,\mathbb{R})} as a {\mu}-harmonic function with respect to any spherical measure {\mu}.

For later use, let us describe more geometrically the Poisson boundaries {B_2} and {B_3} (resp.) of {SL(2,\mathbb{R})} and {SL(3,\mathbb{R})} (resp.). In fact, we already hinted that, in general, {B=B_n} is the complete flag variety of {\mathbb{R}^n}, but we will discuss the particular cases of {B_2} and {B_3} because our plan is to show Theorem 1 in the context of {SL(2,\mathbb{R})} and {SL(3,\mathbb{R})} only.

1.3.1. Poisson boundary of {SL(2,\mathbb{R})}

Consider the usual action of {G=SL(2,\mathbb{R})} on {\mathbb{R}^2} and the corresponding induced action on the projective space {\mathbb{P}^1} of directions/lines {\overline{v}=\mathbb{R}\cdot v} in {\mathbb{R}^2} associated to non-zero vectors {v\in\mathbb{R}^2}.

By definition, the subgroup {P} of upper triangular matrices in {SL(2,\mathbb{R})} is the stabilizer of the direction {\overline{e_1}=\mathbb{R}\cdot e_1} generated by the vector {e_1=(1,0)}. In particular, since {G} acts transitively on {\mathbb{P}^1}, we deduce that {B_2=G/P\simeq\mathbb{P}^1}.

Let us now try to understand how {B_2} is attached to {G=G_2=SL(2,\mathbb{R})} in terms of the measure topology described in the previous post.

By definition, an element {g\in G_2} is close to {\xi\in B_2} whenever the measure {gm_B} is close to {\delta_{\xi}}.

On the other hand, a “large” element

\displaystyle g=\left(\begin{array}{cc}a & b \\ c & d \end{array}\right)\in SL(2,\mathbb{R}),

i.e., a matrix {g} with large operator norm {\|g\|}, has at least one large column vector, either {ge_1=\left(\begin{array}{c}a \\ c\end{array}\right)} or {ge_2=\left(\begin{array}{c}b \\ d\end{array}\right)}, and, if both column vectors are large, then their directions {\overline{ge_1}} and {\overline{ge_2}} are close by unimodularity of {g} (indeed, if they are both large and they span a parallelogram of area {1}, then their angle must be small). In particular, we conclude that a large element {g\in SL(2,\mathbb{R})} sends most directions {\overline{u}} close to the larger of {\overline{ge_1}}, {\overline{ge_2}}, except when {\overline{u}} is approximately orthogonal to {{}^t ge_1=(a,b)}, {{}^t ge_2=(c,d)} (where {{}^t g} is the transpose of {g}).

In summary, we just proved the following lemma:

Lemma 11 Given {\varepsilon>0}, there exists a compact subset {C_{\varepsilon}\subset SL(2,\mathbb{R})} such that, if {g\notin C_{\varepsilon}}, then there are two intervals {I, J\subset\mathbb{P}^1} of sizes {<\varepsilon} with {g(\mathbb{P}^1-I)\subset J}.
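
Lemma 11 can be seen concretely with a diagonal element and directions parametrized by angles in {[0,\pi)} (a numerical illustration of our own, with {I} around the contracted direction {\theta=\pi/2} and {J} around {\theta=0}):

```python
import math

def proj_angle(v):
    """Angle in [0, π) representing the direction of v in P^1."""
    return math.atan2(v[1], v[0]) % math.pi

def act_on_angle(g, theta):
    """Action of a 2x2 matrix g on the direction with angle theta."""
    (a, b), (c, d) = g
    v = (a * math.cos(theta) + b * math.sin(theta),
         c * math.cos(theta) + d * math.sin(theta))
    return proj_angle(v)

t = 100.0
g = ((t, 0.0), (0.0, 1 / t))   # a large element of SL(2,R)

eps = 0.05
# directions outside the interval I of size ~2*eps around θ = π/2 ...
thetas = [k * math.pi / 1000 for k in range(1000)
          if abs(k * math.pi / 1000 - math.pi / 2) > eps]
# ... are all sent close to θ = 0, i.e., into a tiny interval J
images = [act_on_angle(g, th) for th in thetas]
max_dist = max(min(im, math.pi - im) for im in images)   # distance to θ = 0 in P^1
print(max_dist)   # of order |tan θ|_max / t^2, here well below 0.01
```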

An interesting consequence of this lemma concerns the action of large elements of {SL(2,\mathbb{R})} on non-atomic measures on {\mathbb{P}^1}.

Indeed, given {\nu} an arbitrary non-atomic probability measure on {\mathbb{P}^1}, we have that for each {\delta>0} there exists {\varepsilon>0} such that any interval {L\subset\mathbb{P}^1} of size {<\varepsilon} has {\nu}-measure {\nu(L)<\delta} (because any point has zero {\nu}-measure by assumption).

In particular, by combining this information with the previous lemma, we deduce that any large element {g\notin C_{\varepsilon}} of {SL(2,\mathbb{R})} moves most of the mass of {\nu} to a small interval. More precisely, denoting by {I} and {J} the intervals of sizes {<\varepsilon} provided by the lemma, we have that {\nu(I)<\delta}, i.e., {\nu(\mathbb{P}^1-I)>1-\delta}. Therefore, {g\nu(J)=\nu(g^{-1}(J))\geq \nu(\mathbb{P}^1-I)>1-\delta} (since {g(\mathbb{P}^1-I)\subset J}), i.e., most of the mass of {g\nu} is concentrated on {J}. In other words, we proved the following crucial lemma:

Lemma 12 Let {\nu} be a non-atomic probability measure on {\mathbb{P}^1}. Then, as {g\rightarrow\infty}, {g\nu} converges to a Dirac mass.

An alternative way of phrasing this lemma is: as {g\rightarrow\infty}, {g} approaches a definite point of {B_2}. In particular, {G_2\cup B_2} is compact, that is, we get a compactification of {G_2} by attaching {B_2}! (This result is comparable to Corollary 4 in the context of free groups.) As we will see below, this compactness property is no longer true for {G_3\cup B_3} (i.e., in the context of {SL(3,\mathbb{R})}), and this is one of the main differences between the Poisson boundaries of {SL(2,\mathbb{R})} and {SL(3,\mathbb{R})}. As the reader might imagine, this observation will be important for the proof of Theorem 1 (that is, at the moment of distinguishing between the lattices of {SL(2,\mathbb{R})} and {SL(3,\mathbb{R})}).

Note that, from this lemma, we can show directly that {(\mathbb{P}^1, m_B)} is a boundary of {(G_2,\mu)} whenever {\mu} is spherical (without appealing to Furstenberg’s theorem above). Indeed, denoting by {x_n} the random walk with respect to {\mu}, we have that, by definition, {(\mathbb{P}^1,m_B)} is a boundary of {(G_2,\mu)} if and only if {x_1\dots x_n m_B} converges to a Dirac mass with probability {1}. Since {\mathbb{P}^1} is a compact space, we know that {x_1\dots x_n m_B} converges to some measure with probability {1} (cf. Corollary 11 of the previous post) and the previous lemma says that this measure is a Dirac mass unless the products {x_1\dots x_n} stay bounded (with positive probability). Now, a random product of elements in a topological group remains bounded (with positive probability) only if the distribution is supported on a compact subgroup (by Birkhoff’s ergodic theorem and the fact that a compact subsemigroup of a group is a subgroup), but this is not the case as {\mu} is spherical (and thus absolutely continuous with respect to Haar).

Finally, once we know that {(\mathbb{P}^1,m_B)} is a boundary of {(G_2,\mu)}, i.e., {x_1\dots x_n m_B} converges to a Dirac mass {\delta_{z_1}} where {z_k} is a {\mu}-process on {\mathbb{P}^1}, we can obtain the following interesting consequences. Firstly, {x_1\dots x_n\rightarrow\infty}, i.e., the random walk is transient. Moreover, the direction of the larger of the column vectors of {x_1\dots x_n} converges with probability {1} to a direction {z_1}, and, in fact, it can be shown that both column vectors become large. In particular, by unimodularity, it follows that, with probability {1}, both column vectors of {x_1\dots x_n} converge to a random point (direction) of {\mathbb{P}^1}.

1.3.2. Poisson boundary of {SL(3,\mathbb{R})}

By Furstenberg’s theorem, {B_3=G_3/P_3} is the space underlying the Poisson boundary of {G_3=SL(3,\mathbb{R})} equipped with any spherical measure {\mu}. Here, we recall that {P_3} is the subgroup of upper triangular {3\times 3} matrices in {G_3}.

We proceed as before, i.e., let us consider the usual action of {G_3} on {\mathbb{R}^3} and the induced action on the space {F_3} of pairs {(\overline{u},\overline{v})} of points in the projective space {\mathbb{P}^2} corresponding to orthogonal directions {\overline{u}} and {\overline{v}} in {\mathbb{R}^3}.

Note that {G_3} acts naturally on {F_3} via {g(\overline{u},\overline{v})=(g\overline{u},{}^t g^{-1}\overline{v})} (our choice of {{}^t g^{-1}} is natural because the orthogonality condition is preserved and {g\mapsto {}^t g^{-1}} is an automorphism of {G_3} [in particular, our matrices are multiplied in the ``correct order'']). Moreover, this action is transitive, so that {F_3} is the quotient of {G_3} by the stabilizer of a point of {F_3}. For the sake of concreteness, let us consider the point {(\overline{e_1}, \overline{e_3})\in F_3} (where {e_1, e_2, e_3} are the vectors of the canonical basis of {\mathbb{R}^3}) and let us determine its stabilizer in {G_3}. By definition, if {g\in G_3}, say

\displaystyle g=\left(\begin{array}{ccc}g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{array} \right)

stabilizes {(\overline{e_1}, \overline{e_3})}, then {g\overline{e_1}=\overline{e_1}} and {{}^t g\overline{e_3}=\overline{e_3}}, i.e.,

\displaystyle g_{21}=g_{31}=0

and

\displaystyle ({}^t g)_{13} = g_{31} = 0 = g_{32} = ({}^t g)_{23},

that is, {g} is upper triangular. In other words, {P_3} is the stabilizer of {(\overline{e_1}, \overline{e_3})} and, thus, {F_3\simeq G_3/P_3=B_3}.

Next, let us again try to understand how {B_3} is attached to {G_3} in terms of the measure topology. Once more, by performing the random walk {x_1\dots x_n} with distribution {\mu}, one can check that the directions of the column vectors of {x_1\dots x_n} converge to a limit direction {u_1} in {\mathbb{P}^2}. In terms of the boundary {B_3}, we can recover {u_1} by letting {z_1=(u_1, v_1)} be the point ({\mu}-process) of {B_3\simeq F_3} obtained as the almost sure limit of {x_1\dots x_n}.

Of course, in this description, the role of {v_1} remains somewhat mysterious, and so let us try to uncover it now. By definition, {v_1} is the limit of the directions of the column vectors of {{}^t x_1^{-1}\dots {}^t x_n^{-1}}, i.e., the random walk obtained from {x_1\dots x_n} by applying the automorphism {g\mapsto {}^t g^{-1}}. On the other hand, if {r_n=U_n(e_1), s_n=U_n(e_2), t_n=U_n(e_3)} are the column vectors of {U_n=x_1\dots x_n}, then the vectors {s_n\times t_n}, {t_n\times r_n} and {r_n\times s_n} are the column vectors of {{}^t x_1^{-1}\dots {}^t x_n^{-1}} (where {u\times v} is the vector product of {u, v\in\mathbb{R}^3}). In particular, the fact that these vectors converge to {v_1} means that the perpendicular directions to the 3 planes spanned by pairs of the vectors {r_n, s_n, t_n} converge to {v_1}, i.e., all of these 3 planes converge to the plane perpendicular to {v_1}.

Finally, let us make more precise the observation from the previous subsubsection that the boundary behaviors of elements in {G_2=SL(2,\mathbb{R})} and {G_3=SL(3,\mathbb{R})} are different as this is one of the main points in the proof of Theorem 1.

Once more, let us point out that Lemma 12 above says that any element {g\in G_2} approaches the boundary (i.e., becomes large) by getting close to a specific point {\xi\in B_2}. On the other hand, this is no longer true for elements {g\in G_3}: for instance, the sequence

\displaystyle g_n=\left(\begin{array}{ccc}n&0&0 \\ 0&n&0 \\ 0&0&1/n^2\end{array}\right)\in SL(3,\mathbb{R})

goes to infinity without getting close to any specific point of {B_3} because its larger column vectors {g_ne_1} and {g_ne_2} do not converge together to a single direction in {\mathbb{P}^2}. Instead, it is not hard to convince oneself (by playing with the definitions) that the sequence of measures {g_n m_B} converges to a measure supported on the great circle {\{(\overline{u},\overline{e_3}): \overline{u}\perp e_3\}\subset B_3}. In general, a great circle is a fiber of one of the two natural fibrations {B_3\simeq F_3\rightarrow\mathbb{P}^2} and it is possible to show that the measures {gm_B} can approach measures supported on any given great circle.
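
This divergence of behavior is visible numerically (an illustration of our own; the matrix below plays the role of {g_n} with {n=1000}):

```python
import math, random

def normalize(v):
    """Unit vector in the direction of v."""
    s = math.sqrt(sum(x * x for x in v))
    return tuple(x / s for x in v)

n = 1000.0   # g = diag(n, n, 1/n^2) in SL(3,R)
rng = random.Random(3)
dirs = [normalize((rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)))
        for _ in range(200)]
images = [normalize((n * x, n * y, z / n ** 2)) for (x, y, z) in dirs]

# the images flatten onto the great circle of directions perpendicular to e_3 ...
max_z = max(abs(z) for (_, _, z) in images)
# ... but stay spread out along that circle: no single limit direction
xs = [x for (x, _, _) in images]
print(max_z)               # tiny
print(max(xs) - min(xs))   # large: the first coordinates remain spread out
```

So {g_n m_B} spreads over a circle measure instead of collapsing to a Dirac mass, in contrast with Lemma 12 for {SL(2,\mathbb{R})}.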

Anyhow, the basic point is that {G_2\cup B_2} is compact but {G_3\cup B_3} is not compact. For the sake of completeness, let us point out which measures we must add to {B_3} to get a compactification of {G_3}. It can be proven that any convergent sequence {g_nm_B} tends to a Dirac mass, a circle measure (i.e., a measure supported on a great circle) or an absolutely continuous measure with respect to {m_B} (this last case occurring only if {g_n} is bounded in {G_3}). Moreover, in the case of getting a circle measure as a limit, we get a very specific object: by identifying the great circle with {\mathbb{P}^1} and denoting by {m_{\mathbb{P}^1}} the Lebesgue measure, one has that all circle measures have the form {gm_{\mathbb{P}^1}} where {g\in G_2=SL(2,\mathbb{R})} (and {SL(2,\mathbb{R})} acts on its boundary {\mathbb{P}^1}).

1.3.3. Harmonic functions on {SL(2,\mathbb{R})}

Closing this quick discussion on the Poisson boundaries of {SL(n,\mathbb{R})}, let us briefly comment on the relationship between {\mu}-harmonic functions on {SL(2,\mathbb{R})} and classical harmonic functions on Poincaré’s disk.

Let {\mu} be a spherical measure on {G_2=SL(2,\mathbb{R})}. For the sake of simplicity of notation, we will say that a random walk {U_n=gx_1\dots x_n} with respect to a spherical measure is a Brownian motion. By definition of sphericity of {\mu}, the transition from {g} to {gg'} has the same probability as the transition from {gk} to {gg'}, and from {g} to {gg'k} for each {k\in K_2=SO(2,\mathbb{R})}. In particular, the Brownian motion {U_n} can be transferred to a Brownian motion {W_n=gx_1\dots x_n K_2} on the symmetric space {G_2/K_2} of {G_2}.

Now, we recall that {G_2/K_2} is Poincaré’s disk {\mathbb{D}}/hyperbolic upper-half plane {\mathbb{H}}: more concretely, by letting {G_2} act on the upper-half plane {\mathbb{H}} by Möbius transformations, i.e., isometries of {\mathbb{H}} equipped with the hyperbolic metric,

\displaystyle g(z)=(az+b)/(cz+d),

for {g=\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\in SL(2,\mathbb{R})}, we see that {K_2} is the stabilizer of {i\in\mathbb{H}}. In particular, {G_2/K_2} is naturally identified with {\mathbb{H}} via {gK_2\mapsto g(i)}, and, hence, we can also think of {G_2/K_2} as the Poincaré’s disk after considering the fractional linear transformation sending {\mathbb{H}} to {\mathbb{D}} in such a way that {i} is sent to the origin {0} and {\infty} is sent to {1}.

In summary, we can think of the Brownian motion {U_n} as a Brownian motion {W_n=gx_1\dots x_n(0)} on Poincaré’s disk. Here, it is worth pointing out that the transitions of {W_n} are not given by group multiplication as {G_2} acts on the left and the {x_j}‘s are multiplied from the right. Finally, if we transfer the measure topology on {G_2\cup B_2} to {\mathbb{D}\cup\partial\mathbb{D}=\overline{\mathbb{D}}}, we get the usual Euclidean topology on the closed unit disk {\overline{\mathbb{D}}}. Indeed, suppose that {g(0)=z} with {|z|\sim 1}. Then, {g} transfers most of the mass of {m_B} to a point of {\partial\mathbb{D}} close to {z}. In particular, if {g_n(0)=z_n\rightarrow\xi\in\partial\mathbb{D}}, then {g_nm_B\rightarrow\delta_{\xi}}, that is, {g_n\in G_2} converges to {\xi\in\mathbb{P}^1}, and, a fortiori, the Euclidean topology on {\overline{\mathbb{D}}} is the topology we get after transferring the measure topology.
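A small simulation illustrates how the walk {W_n} heads to the boundary {\partial\mathbb{D}}. The step distribution below, a random rotation, a fixed diagonal boost, and another random rotation, is only a crude stand-in for a spherical measure, and the boost size and number of steps are our own choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def spherical_step(eps=0.3):
    # K-bi-invariant step k1 * diag(e^eps, e^-eps) * k2: a crude stand-in for a
    # spherical measure on SL(2,R) (this specific model is our choice)
    def rot(t):
        return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return rot(rng.uniform(0, 2 * np.pi)) @ np.diag([np.e**eps, np.e**-eps]) @ rot(rng.uniform(0, 2 * np.pi))

def moebius(g, z):
    (a, b), (c, d) = g
    return (a * z + b) / (c * z + d)

g = np.eye(2)
for _ in range(400):
    g = g @ spherical_step()        # the x_j's are multiplied from the right, as in the text

z = moebius(g, 1j)                  # point of H reached by the walk
w = (z - 1j) / (z + 1j)             # Cayley transform H -> D, sending i to 0
print(abs(w))                       # very close to 1: the walk converges to the boundary
```

One run of the walk is enough to see the phenomenon: {|W_n|} is essentially {1} already after a few hundred steps.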

Finally, note that the value of a harmonic function {h(g)} (with respect to any spherical measure) depends only on the coset {gK_2} (by sphericity and the mean value property of harmonic functions). Thus, {h} induces a function {\widetilde{h}} on {G_2/K_2}. By Poisson’s formula (in the definition of Poisson boundary), we have that

\displaystyle \widetilde{h}(g(0)) = h(g) = \int\hat{h}(g\xi) dm_B(\xi) = \int\hat{h}(\xi)\frac{dgm_B}{dm_B}(\xi)dm_B(\xi).

On the other hand, by computing the density {dgm_B/dm_B} (using that {g} acts via Möbius transformations and {m_B} is the Lebesgue measure), we can show that

\displaystyle \frac{dgm_B}{dm_B}(\xi)=P(g(0),\xi)

where {P(z,\xi)} is the classical Poisson kernel on the unit disc. In other words, by letting {z=g(0)}, we obtain that

\displaystyle \widetilde{h}(z)=\int\hat{h}(\xi) P(z,\xi) dm_B(\xi)

and, hence, the function {\widetilde{h}} is harmonic in the classical sense. In summary, the two notions of `harmonic’ are the same. Probabilistically speaking, the formula above says that the value {\widetilde{h}(z)} is obtained by integrating the boundary values {\hat{h}(\xi)} with respect to the `hitting measure {gm_B} on the boundary starting at {z}‘ (as {gm_B} is the distribution of the limit of {W_n=gx_1\dots x_n} [because {m_B} is the distribution of the limit of {x_1\dots x_n} and by invariance of the Brownian motion under the group]).
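One can check the identity {dgm_B/dm_B(\xi)=P(g(0),\xi)} numerically. In the sketch below we take the disk automorphism {g(w)=(w+z)/(1+\overline{z}w)} sending {0} to {z} (our choice of representative; the point {z} is arbitrary) and compare the derivative of the boundary action of {g^{-1}} with the classical Poisson kernel {P(z,\xi)=(1-|z|^2)/|\xi-z|^2} (normalized so that {m_B} is {d\theta/2\pi}):

```python
import numpy as np

z = 0.5 + 0.3j                                     # an interior point z = g(0); our pick
P = lambda xi: (1 - abs(z)**2) / abs(xi - z)**2    # classical Poisson kernel P(z, xi)

# inverse of the disk automorphism g(w) = (w + z)/(1 + conj(z) w)
ginv = lambda w: (w - z) / (1 - np.conj(z) * w)

theta = np.linspace(0, 2 * np.pi, 2001)[:-1]
phi = np.unwrap(np.angle(ginv(np.exp(1j * theta))))

# density of g m_B w.r.t. m_B at xi = e^{i theta} is d(phi)/d(theta)
density = np.gradient(phi, theta)
err = np.max(np.abs(density - P(np.exp(1j * theta))))
print(err)                                         # small: the density is the Poisson kernel
```

As a byproduct, averaging {P(z,\cdot)} over the circle gives {1} (the mean value property at {z} of the constant function), which the same grid verifies to high accuracy.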

1.4. Mapping class group and Teichmüller space

As it is “customary”, the mapping class groups and Teichmüller spaces are very close to lattices in Lie groups and homogeneous spaces. In particular, this partly motivates these two articles of Kaimanovich and Masur about the Poisson boundary of the mapping class group and Teichmüller space, where it is shown that this boundary is the Thurston compactification (via projective measured foliations) equipped with a natural harmonic measure. Of course, it is out of the scope of this post to comment on this subject and we refer the curious reader to the (very well-written) papers of Kaimanovich-Masur.

2. Poisson boundary of lattices of {SL(n,\mathbb{R})}

After this series of examples of Poisson boundaries, let us come back to the proof of Theorem 1. At this stage, we know that {G_n=SL(n,\mathbb{R})} equipped with any spherical measure has Poisson boundary {(B_n, m_{B_n})} and now we want to distinguish between lattices of {G_n}.

As we mentioned in the previous post, the basic idea is that a `nice’ random walk in a lattice {\Gamma} of {G_n} should see the whole boundary of {G_n}. In fact, this statement should be compared with the results in Subsubsection 1.2.3 above where we saw that an adequate random walk in the free group {F_{\infty}} on infinitely many generators sees the whole boundary of the free group {F_2} on two generators because {F_{\infty}} behaved like a lattice in {F_2}, or, more accurately, it was a recurrence set for the symmetric random walk in {F_2}.

However, this heuristic for free groups can not be applied ipsis litteris to lattices {\Gamma} of {G_n} because a countable set {\Gamma} can not be a recurrence set for a Brownian motion in {G_n}. Nevertheless, one still has the following result:

Theorem 13 (Furstenberg) If {\Gamma} is a lattice of {G_n= SL(n,\mathbb{R})}, then there exists a probability measure {\mu} on {\Gamma} such that the Poisson boundary of {(\Gamma,\mu)} coincides with the Poisson boundary {(B_n,m_{B_n})} of {G_n} (with respect to any spherical measure).

In order to simplify the exposition, we will restrict our attention to the low dimensional cases. More concretely, we will sketch the construction of {\mu} in the case of cocompact lattices in {G_2=SL(2,\mathbb{R})} and we will show that {(B_n,m_{B_n})} is a boundary of {(\Gamma,\mu)} in the cases {n=2} and {n=3}. However, we will not enter into the details of showing that {(B_n,m_{B_n})} is the full Poisson boundary of {(\Gamma,\mu)}: instead, we refer the reader to Furstenberg’s survey for a proof in the case {n=2} (i.e., {SL(2,\mathbb{R})}) and his original article for the general case.

2.1. Construction of {\mu} in the case {n=2}

Consider again the symmetric space {G_n/K_n} associated to {G_n}. This space has a natural {G_n}-invariant metric {d(gK_n,g'K_n)} and, using this metric, we have a function

\displaystyle \Lambda(g)=d(gK,K)

measuring the distance to the origin. Note that, in the particular case {n=2}, the function {\Lambda} has a very simple expression:

\displaystyle \Lambda(g)=\log\frac{1+|g(0)|}{1-|g(0)|}

for {g\in SL(2,\mathbb{R})=G_2}.
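As a sanity check, the formula for {\Lambda} can be tested against the Cartan ({KAK}) decomposition: for {g\in SL(2,\mathbb{R})}, the hyperbolic distance {d(g(i),i)} equals {2\log\sigma_1(g)}, where {\sigma_1(g)} is the largest singular value of {g}. The sketch below (with a randomly chosen {g}; the normalization recipe is ours) compares this with {\log\frac{1+|g(0)|}{1-|g(0)|}}:

```python
import numpy as np

rng = np.random.default_rng(2)

A = rng.normal(size=(2, 2))
g = A / np.sqrt(abs(np.linalg.det(A)))   # scale so that det g = ±1
if np.linalg.det(g) < 0:
    g[0] *= -1                           # flip a row to land in SL(2,R)

a, b = g[0]; c, d = g[1]
z = (a * 1j + b) / (c * 1j + d)          # g acting on i in the upper-half plane
w = (z - 1j) / (z + 1j)                  # Cayley transform to the disk; |w| = |g(0)|
r = abs(w)

Lam = np.log((1 + r) / (1 - r))
sigma1 = np.linalg.svd(g, compute_uv=False)[0]
print(Lam, 2 * np.log(sigma1))           # the two quantities agree
```

The agreement is exact (up to floating-point error), reflecting the fact that {K_2} acts by rotations fixing {i} and the diagonal part moves {i} along a geodesic.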

Proposition 14 If {\Gamma} is a cocompact lattice of {G_n}, then there exists a probability measure {\mu} on {\Gamma} such that

  • (a) {\textrm{supp}(\mu)=\Gamma}, i.e., {\mu(\{\gamma\})>0} for all {\gamma\in\Gamma},
  • (b) {m_B=m_{B_n}} is {\mu}-stationary, i.e., {\mu\ast m_B = m_B},
  • (c) the restriction of the function {\Lambda} to {\Gamma} is {\mu}-integrable, i.e.,

    \displaystyle \sum\limits_{\gamma\in\Gamma} \mu(\gamma)\Lambda(\gamma)<\infty.

Remark 3 The condition (b) above implies that the restriction to {\Gamma} of an arbitrary harmonic function on {G_n} is {\mu}-harmonic. In particular, this means that a harmonic function on {G_n} satisfies plenty (at least one per cocompact lattice of {G_n}) of discrete mean-value equalities.

For the proof of this proposition, we will focus on the construction of a measure satisfying item (b) and then we will adjust it to satisfy items (a) and (c). Also, we will discuss only the case of cocompact lattices {\Gamma} in {G_2=SL(2,\mathbb{R})}.

In this direction, let us re-interpret item (b) in terms of the Brownian motion on Poincaré’s disk {\mathbb{D}} (that is, the symmetric space {G_2/K_2} of {G_2}). As we saw above, {gm_B} is the hitting distribution on {\partial\mathbb{D}} of the Brownian motion starting at {g(0)}.

In particular, the stationarity condition in item (b) says that the hitting probability {m_B} on {\partial\mathbb{D}} starting at the origin {0} is a convex linear combination of the hitting probabilities {\gamma m_B} on {\partial\mathbb{D}} starting at the points {\gamma(0)} (for {\gamma\in\Gamma}). This hints at how we must prove item (b): the measure {\mu} will correspond to the weights {\mu(\gamma)} used to write {m_B} as a convex linear combination of the {\gamma m_B}, {\gamma\in\Gamma}. Keeping this goal in mind, it is clear that the following lemma will help us with our task:

Lemma 15 Let {\Gamma} be a cocompact lattice of {G_2=SL(2,\mathbb{R})} and denote by {m_z=gm_B} the hitting probability on {\partial\mathbb{D}} of a Brownian motion starting at {z=g(0)}. Then, there are two constants {0<L<\infty} and {\delta>0} such that, for any {z_0\in\mathbb{D}}, one has

\displaystyle m_{z_0} = \sum\limits_{j}p_j m_{\gamma_j(0)}+\int m_{\zeta}d\lambda(\zeta)

where {p_j>0}, the points {\gamma_j(0)} and {\zeta} are within a hyperbolic distance {L} of {z_0}, and {\lambda} is a non-negative measure of total mass {<1-\delta}.

Proof: Since {\Gamma} is cocompact, it has a compact fundamental domain. In particular, we can find a large constant {L<\infty} such that the hyperbolic ball {B(z_0, L)} of radius {L} around any {z_0\in\mathbb{D}} contains in its interior at least one point of the form {\gamma(0)} with {\gamma\in\Gamma}.

For sake of simplicity of the exposition, during this proof, we will replace the discrete-time Brownian motion {gU_n=gx_1\dots x_n} by its continuous-time analog for a technical reason that we discuss now.

For each {z_0\in\mathbb{D}} and {z} in the interior of {B(z_0, L)}, the hitting measure {m_z} on {\partial\mathbb{D}} starting at {z} can be computed by noticing that a continuous-time Brownian motion emanating from {z} must hit the circle {C(z_0, L):=\partial B(z_0, L)} before heading towards {\partial\mathbb{D}}. Of course, the same is not true for the discrete-time Brownian motion (as we can jump across {C(z_0, L)}), but we could have overcome this small difficulty by considering an annulus around {\partial C(z_0, L)}. However, we will stick to the continuous-time Brownian motion to simplify matters. Anyhow, by combining this observation with the strong Markov property of the Brownian motion, one has that

\displaystyle m_z=\int_{C(z_0, L)} m_{\zeta} m(z, C(z_0,L); d\zeta)

where {m(z, C(z_0, L); d\zeta)} is the hitting distribution of {\zeta} for a Brownian motion starting at {z} (i.e., for each interval {J\subset C(z_0, L)}, the measure {m(z, C(z_0,L); J)} is the probability that a Brownian motion starting at {z} first hits {C(z_0, L)} at a point in {J}).

Next, let us consider the points of the form {\gamma_j(0)}, {\gamma_j\in\Gamma}, inside (the interior of) {B(z_0,L)}, choose positive numbers {p_j>0} and consider the measure

\displaystyle m_{z_0}-\sum p_j m_{\gamma_j(0)}

By the previous formula for the hitting measures {m_z}, we can rewrite the measure above as

\displaystyle \int_{C(z_0,L)} m_{\zeta}\left(m(z_0, C(z_0,L); d\zeta)-\sum p_j m(\gamma_j(0), C(z_0,L); d\zeta)\right) \ \ \ \ \ (2)

Pictorially, this integral represents weighted contributions from the following Brownian motions:
rw-2

At this point, we observe that the measures {m(z, C(z_0, L);.)} are absolutely continuous with respect to the Lebesgue measure on {C(z_0,L)} whenever {z} belongs to the interior of {B(z_0, L)} (as our Brownian motion is guided by a spherical measure [by definition]). Therefore, by taking {p_j} small (depending on how close {\gamma_j(0)} is to the circle {C(z_0, L)}), we can ensure that the measure

\displaystyle \lambda=m(z_0, C(z_0,L); d\zeta)-\sum p_j m(\gamma_j(0), C(z_0,L); d\zeta) \ \ \ \ \ (3)

appearing in the right-hand side of (2) is positive. Furthermore, note that this scenario is {\Gamma}-invariant: if we replace {z_0} by {\gamma z_0} for {\gamma\in\Gamma}, the circle {C(z_0, L)} is replaced by {\gamma(C(z_0,L))=C(\gamma z_0, L)} and the elements {\gamma_j\in\Gamma} are replaced by {\gamma\gamma_j}, but we can keep the same values of {p_j}.

In other words, the values of {p_j} (making the measure in (3) positive), and, a fortiori, the quantity {\sum p_j}, depend only on the coset {\Gamma g_0} where {g_0\in G_2} satisfies {g_0(0)=z_0}. Therefore, by the compactness of {G_2/\Gamma}, we can find some {\delta>0} such that, for all {z_0\in\mathbb{D}}, the values of {p_j} (making (3) positive) can be chosen so that

\displaystyle \sum p_j>\delta

In particular, it follows that {\lambda} is a positive measure of total mass {<1-\delta}.

In summary, we managed to write

\displaystyle m_{z_0}=\sum p_j m_{\gamma_j(0)} + \int m_{\zeta} d\lambda(\zeta)

where {\lambda} is a positive measure of total mass {<1-\delta}, as desired. \Box

By taking {z_0=0} in this lemma, we know that one can write {m_B=m_{0}} as a convex linear combination of a “main contribution” coming from the {m_{\gamma_j(0)}}‘s (where the distances from the {\gamma_j(0)}‘s to {0} are {<L}) and a “boundary contribution” coming from an integral of {m_{\zeta}} with respect to a measure {\lambda} of total mass {<1-\delta}.

From this point, the idea to construct a probability measure {\mu} on {\Gamma} satisfying item (b) of Proposition 14 is very simple: we repeatedly apply the lemma to the {m_{\zeta}}‘s appearing in the “boundary contribution” in order to push it away to infinity; here, the convergence of this procedure is ensured by the fact that the boundary measure {\lambda} loses a definite factor (of {1-\delta}) of its mass at each step.

More concretely, this idea can be formalized as follows. By induction, assume that, at the {n}th step, we wrote

\displaystyle m_B = m_0 = \sum p_{\gamma}^{(n)} m_{\gamma(0)} + \int m_{\eta} d\lambda^{(n)}(\eta) \ \ \ \ \ (4)

where {p_{\gamma}^{(n)}>0} only for {\gamma\in\Gamma} such that the distance {d(\gamma(0), 0)} between {\gamma(0)} and {0} is {<nL} and {\lambda^{(n)}} is a positive measure on the hyperbolic ball {B(0, nL)} (of radius {nL} and center {0}) of total mass {<(1-\delta)^n}.

By applying the lemma to {m_{\eta}}, we also have

\displaystyle m_{\eta}=\sum p_{\gamma}(\eta)m_{\gamma(0)} + \int m_{\zeta} d\lambda_{\eta}(\zeta)

where {p_{\gamma}(\eta)>0} only for {d(\gamma(0),\eta)\leq L}, and {\lambda_{\eta}} is supported in {B(\eta, L)} and has total mass {<(1-\delta)}.

By combining these equations, we deduce that

\displaystyle \begin{array}{rcl} m_0 & = & \sum\left(p_{\gamma}^{(n)}+\int p_{\gamma}(\eta)d\lambda^{(n)}(\eta)\right)m_{\gamma(0)}+\int\int m_{\zeta}d\lambda_{\eta}(\zeta)d\lambda^{(n)}(\eta) \\ & := & \sum p_{\gamma}^{(n+1)} m_{\gamma(0)} + \int m_{\eta} d\lambda^{(n+1)}(\eta) \end{array}

where {p_{\gamma}^{(n+1)}>0} only for {d(\gamma(0), 0)\leq d(\gamma(0),\eta)+d(\eta,0)<(n+1)L}, and {\lambda^{(n+1)}} is a positive measure supported on {B(0, (n+1)L)} whose total mass is

\displaystyle \int \lambda_{\eta}(\mathbb{D}) d\lambda^{(n)}(\eta)\leq (1-\delta)^{n+1}.

In particular, by setting {\mu(\gamma)=\lim\limits_{n\rightarrow\infty} p_{\gamma}^{(n)}}, we find that

\displaystyle m_B=m_0=\sum\limits_{\gamma\in\Gamma}\mu(\gamma)m_{\gamma(0)}=\mu\ast m_0=\mu\ast m_B,

that is, the stationarity condition of item (b) of Proposition 14 is proved.

Now, we claim that item (c) of Proposition 14 is satisfied by {\mu}. Indeed, by construction, for the elements {\gamma\in\Gamma} with {\Lambda(\gamma):=d(\gamma(0),0)>nL}, the quantity {\mu(\gamma)} comes from the {\lambda^{(n)}}-combination of the contributions of the measures {m_{\eta}} in the right-hand side of (4). Since the measure {\lambda^{(n)}} has total mass {<(1-\delta)^n}, we deduce that

\displaystyle \sum\limits_{\substack{\gamma\in\Gamma, \\ \Lambda(\gamma)>nL}}\mu(\gamma)<(1-\delta)^n

and, therefore,

\displaystyle \sum\limits_{n\in\mathbb{N}}\sum\limits_{\substack{\gamma\in\Gamma, \\ \Lambda(\gamma)>nL}}\mu(\gamma)<\infty.

In particular, given that any element {\gamma\in\Gamma} with {kL<\Lambda(\gamma)\leq (k+1)L} appears {k} times in the sum above, we conclude that

\displaystyle \sum\limits_{\gamma\in\Gamma}\mu(\gamma)\Lambda(\gamma)<\infty,

that is, the {\mu}-integrability condition on {\Lambda} in item (c) of Proposition 14 is verified.
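The series above is just a geometric-tail estimate: since the mass beyond the shell {\Lambda>kL} is {<(1-\delta)^k} and {\Lambda\leq (k+1)L} on the {k}th shell, the whole sum is dominated by {L\sum_{k\geq 0}(k+1)(1-\delta)^k=L/\delta^2}. A one-line numerical check (with arbitrary illustrative values of {L} and {\delta}, which are ours):

```python
# Tail bound behind item (c): mass beyond the shell {Lambda > kL} is < (1-delta)^k
# and Lambda <= (k+1)L on the k-th shell, so the full sum is dominated by
# L * sum_{k>=0} (k+1)(1-delta)^k = L / delta^2
delta, L = 0.1, 5.0
partial = L * sum((k + 1) * (1 - delta)**k for k in range(2000))
print(partial)   # ≈ L / delta**2 = 500
```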

Finally, the full support condition in item (a) of Proposition 14 might not be true for the probability measure {\mu} constructed above. However, it is not hard to fulfill this condition by slightly changing the construction above. Indeed, it suffices to add all points {\gamma(0)} at distance {<nL} from {0} at the {n}th step of the construction of {\mu} and then assign to them some tiny but positive probabilities so that the measure {\lambda^{(n)}} in the right-hand side of (4) is still positive. In this way, we are sure that, at the end of the construction of {\mu}, all {\gamma}‘s in {\Gamma} were assigned some non-trivial mass.

This completes the sketch of the proof of Proposition 14 in the case {n=2} (i.e., cocompact lattices in {SL(2,\mathbb{R})=G_2}).

After constructing {\mu}, let us show that the Poisson boundary {(B_n, m_{B_n})} of {G_n=SL(n,\mathbb{R})} equipped with a spherical measure is a boundary of {(\Gamma,\mu)}.

2.2. {(B_n,m_{B_n})} is the Poisson boundary of {(\Gamma,\mu)}

A reasonably detailed proof that {(B_n,m_{B_n})} is the Poisson boundary of {(\Gamma,\mu)} is somewhat lengthy because the verification of the maximality property (i.e., any boundary of {(\Gamma,\mu)} is an equivariant image of {(B_n, m_{B_n})}) needs a certain amount of computation (in fact, we might come back to this point in a future post, but, for now, let us skip it). In particular, we will content ourselves with checking only that {(B_n, m_{B_n})} is a boundary of {(\Gamma, \mu)} in the cases {n=2} and {3}.

As it turns out, the fact that {(B_2, m_{B_2})} is a boundary of {(\Gamma,\mu)} (in the case {n=2}) is not hard. By definition, we have to show that a stationary sequence {y_n} of independent random variables with distribution {\mu} has the property that {y_1\dots y_n m_{B_2}} converges to a Dirac mass with probability {1}. On the other hand, by Corollary 11 of the previous post and the compactness of {B_2\simeq\mathbb{P}^1}, we know that {y_1\dots y_n m_{B_2}} converges to some measure with probability {1}, and, by Lemma 12, this limit measure is a Dirac mass if the elements {y_1\dots y_n} are unbounded. Now, it is clear that these elements are unbounded because {y_n} has distribution {\mu} and, by construction (cf. item (a) of Proposition 14), {\mu} is fully supported on a lattice of {SL(2,\mathbb{R})} (and, thus, its support is not confined to a compact subgroup).

Next, let us show that {(B_3, m_{B_3})} is a boundary of {(\Gamma, \mu)} (in the case {n=3}). In this direction, we will need the following lemma playing the role of an analog of Lemma 12 in the context of {SL(3,\mathbb{R})}:

Lemma 16 Let {\mu} be a probability measure on {G_3=SL(3,\mathbb{R})} such that:

  • (i) {\mu} has a rich support: the support of {\mu} is not confined to a compact or reducible subgroup of {G_3};
  • (ii) the norm function is {\mu}-{\log}-integrable: {\int \log(\|g\|\cdot\|g^{-1}\|)d\mu(g)<\infty};
  • (iii) {m_{B_3}} is {\mu}-stationary: {\mu\ast m_{B_3}=m_{B_3}}

Then, for any stationary sequence {\{y_n\}} of independent random variables with distribution {\mu}, the sequence of measures {(y_1\dots y_n)_* m_{B_3}} converges to a Dirac mass on {B_3} with probability {1}.

Before proving this lemma, let us see how it allows us to prove that {(B_3, m_{B_3})} is a boundary of {(\Gamma,\mu)} for the measure {\mu} constructed in Proposition 14, thus completing our sketch of proof of Furstenberg’s theorem 13. By definition of boundary, it suffices to check that the measure {\mu} provided by Proposition 14 fits the assumptions of Lemma 16. Now, by item (a) of Proposition 14, {\mu} is fully supported on the lattice {\Gamma} of {G_3}. Since a lattice of {SL(n,\mathbb{R})} is Zariski dense (by Borel’s density theorem), {\textrm{supp}(\mu)=\Gamma} is neither confined to a compact subgroup nor to a reducible subgroup of {G_3}, that is, {\mu} satisfies item (i) of the lemma above. Next, we notice that a computation shows that the distance function to the origin {\Lambda(g)=d(gK_3, K_3)} in the symmetric space {G_3/K_3} satisfies {\log\|g\|=O(\Lambda(g))} and {\log\|g^{-1}\|=O(\Lambda(g))}. In particular, the integrability condition in item (c) of Proposition 14 implies that {\mu} satisfies item (ii) of the lemma above. Finally, item (b) of Proposition 14 is precisely the stationarity condition in item (iii) of the lemma above.

So, let us complete the discussion in this section by sketching the proof of Lemma 16.

Proof: By the compactness of {B_3} (and Corollary 11 of the previous post), we know that {(y_1\dots y_n)_* m_{B_3}} converges to some measure with probability {1}. This allows us to consider the sequence

\displaystyle z_k=\lim\limits_{n\rightarrow\infty} (y_k\dots y_{k+n})_* m_{B_3}

of measure-valued random variables.

Our task consists in showing that the {z_k} are Dirac masses. Keeping this goal in mind, note that {y_k z_{k+1} = z_k} and {y_k} is independent of {z_{k+j}} for {j\geq 1}. Also, let us observe that the sequence {\{y_k, z_k: k\in\mathbb{N}\}} can be extended to non-positive indices {k\leq 0} by relabeling {y_1, z_1} as {y_{-n}, z_{-n}} and shifting the remaining variables. Here, by stationarity of {\{y_k\}_{k\in\mathbb{N}}} (by definition) and {\{z_k\}_{k\in\mathbb{N}}} (by item (iii)), the variables {y_k, z_k} with positive indices {k\in\mathbb{N}} are probabilistically isomorphic to the original sequence (before shifting). In other words, we can assume that our sequences {y_k, z_k} are defined for all integer indices {k\in\mathbb{Z}}. (In terms of Dynamical Systems, this is analogous to taking the natural extension {\hat{f}:(X,\mu)^{\mathbb{Z}}\rightarrow (X,\mu)^{\mathbb{Z}}}, {\hat{f}((x_i)_{i\in\mathbb{Z}})=(x_{i+1})_{i\in\mathbb{Z}}}, of the unilateral shift {f:(X,\mu)^{\mathbb{N}}\rightarrow (X,\mu)^{\mathbb{N}}}, {f((x_i)_{i\in\mathbb{N}})=(x_{i+1})_{i\in\mathbb{N}}}.) In any case, we can write {z_{-k}=y_{-k}\dots y_{-1} z_0} where {y_{-i}} is a stationary sequence of independent random variables and {z_0} is independent of the {y_{-i}}‘s.

At this point, let us recall the discussion of Subsubsection 1.3.2 on the Poisson boundary of {G_3=SL(3,\mathbb{R})}. In this subsubsection we saw that there are only three possibilities for any limit of the measures {gm_{B_3}}, {g\in G_3} such as {z_k}: it is either a Dirac mass, a circle measure or an absolutely continuous (w.r.t. {m_{B_3}}) measure, the latter case occurring only when {g} stays bounded in {G_3}.

By ergodicity (of the shift dynamics underlying the sequence {y_k}), we have that only one of these possibilities for {z_k} can occur with positive probability.

Now, {z_k} can not be absolutely continuous w.r.t. {m_{B_3}} because this would mean that {y_1\dots y_n} is bounded (with positive probability) and, a fortiori, the {y_i} are confined to a compact subgroup of {G_3}, a contradiction with our assumption in item (i) about the distribution {\mu} of the {y_i}‘s.

Therefore, our task is reduced to showing that the {z_{-k}}‘s are not circle measures with probability {1}. For the sake of concreteness, let us fix {z_0} by assuming that {z_0} is a circle measure supported on our “preferred” circle {\{(\overline{u}, \overline{e_3}): \overline{u}\perp\overline{e_3}\}\subset B_3}. In order to show that the {z_{-k}=y_{-k}\dots y_{-1}z_0} are Dirac masses, it suffices to check that the angles between the column vectors of the matrices {y_{-k}\dots y_{-1}} converge to {0} as {k\rightarrow\infty}. In other terms, denoting by {u_k}, {v_k}, {w_k} the column vectors of {y_{-k}\dots y_{-1}}, and by noticing that {u_k}, {v_k}, {w_k} play symmetric roles, we want to check that

\displaystyle \frac{|u_k\times v_k|}{|u_k|\cdot |v_k|}\rightarrow 0

as {k\rightarrow\infty}.

The idea to show this is based on the simplicity of the Lyapunov spectra of random products of matrices in {G_3} with a distribution law {\mu} that is not confined to compact or reducible subgroups. More concretely, we will show that the column vectors {u_k} and {v_k} have a definite exponential growth

\displaystyle \log|u_k|\sim k\alpha \quad \textrm{and} \quad \log|v_k|\sim k\alpha

with {\alpha>0} (the top Lyapunov exponent) and, similarly, the column vector {u_k\times v_k} of the matrix {{}^t(y_{-k}\dots y_{-1})^{-1}} has a definite exponential growth

\displaystyle \log|u_k\times v_k|\sim k\beta

where {\beta} (the sum of the two largest Lyapunov exponents) satisfies {\beta<2\alpha} (i.e., the top Lyapunov exponent is simple, i.e., it does not coincide with the second largest Lyapunov exponent). Of course, if we show these properties, then {|u_k\times v_k|/(|u_k|\cdot|v_k|)} converges exponentially to {0} as {k\rightarrow\infty} (since {\beta<2\alpha}).

Let us briefly sketch the proof of these exponential growth properties. We write {\log|u_k|} as a “Birkhoff sum”

\displaystyle \log|u_k|=\sum\limits_{i=0}^{k-1}\log\frac{|y_{-(i+1)}y_{-i}\dots y_{-1} e_1|}{|y_{-i}\dots y_{-1} e_1|}=\sum\limits_{i=0}^{k-1}\rho(y_{-(i+1)}, \overline{t_i})

where {\rho(g,\overline{t})=\log(|gt|/|t|)} for {g\in G_3} and {\overline{t}\in\mathbb{P}^2}, and {\overline{t_i}\in\mathbb{P}^2} denotes the direction of {y_{-i}\dots y_{-1}e_1}. As it turns out, the sequence {(y_{-(i+1)}, \overline{t_i})} is not stationary, but it is almost stationary, so that Birkhoff’s ergodic theorem says that the time averages converge to the spatial average (with probability {1}):

\displaystyle \frac{1}{k}\log|u_k|=\frac{1}{k}\sum\limits_{i=0}^{k-1}\rho(y_{-(i+1)},\overline{t_i})\rightarrow \alpha = \int \rho(g,\overline{t}) d\mu(g) dm'(\overline{t})

where {m'} is the rotation-invariant distribution of the {\overline{t_i}}‘s. Of course, this application of the ergodic theorem is valid only if we check that the observable {\rho(g,\overline{t})} is integrable. However, this is not hard to verify: by definition, {|\rho(g,\overline{t})|\leq \max\{\log\|g\|, \log\|g^{-1}\|\}}, so that the desired integrability is a consequence of the integrability requirement in item (ii). A similar argument shows that

\displaystyle \frac{1}{k}\log|v_k|\sim \alpha \quad \textrm{and} \quad \frac{1}{k}\log|u_k\times v_k|\sim \beta=\int \widetilde{\rho}(g,\overline{t}) d\mu(g)dm'(\overline{t})

where {\widetilde{\rho}(g,\overline{t})=\log(|{}^tg^{-1}(t)|/|t|)}. So, it remains only to check the simplicity condition {\beta<2\alpha}. For this sake, we combine the two integrals defining {2\alpha} and {\beta}, and we transfer them from {\overline{t}\in \mathbb{P}^2} to {B_3}. In this way, we obtain:

\displaystyle 2\alpha-\beta=\int \log\frac{|gu|\cdot|gv|}{|gu\times gv|} dm_{B_3}(\overline{u}, \overline{v})d\mu(g)

On the other hand, by definition, {(\overline{u}, \overline{v})} runs over orthogonal pairs of directions in {\mathbb{P}^2}. Thus, since {\mu} is not confined to an orthogonal subgroup of {G_3}, we have that the integral in the right-hand side of the equation above is strictly positive, i.e., {\beta<2\alpha}.
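The simplicity phenomenon {\beta<2\alpha} is easy to observe numerically. In the sketch below we take i.i.d. Gaussian matrices normalized into {SL(3,\mathbb{R})} (a generic stand-in for {\mu}; this model and the number of steps are our choices) and estimate {\alpha} by iterating on vectors and {\beta} by iterating the transpose-inverse cocycle on the cross product, with per-step renormalization to avoid overflow:

```python
import numpy as np

rng = np.random.default_rng(3)

def random_sl3():
    # i.i.d. Gaussian matrices normalized into SL(3,R): a generic stand-in for mu
    A = rng.normal(size=(3, 3))
    return A / np.cbrt(np.linalg.det(A))

def renorm(x):
    n = np.linalg.norm(x)
    return x / n, np.log(n)

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
w = np.cross(u, v)                 # tracks the direction of u_k x v_k
su = sv = sw = 0.0
k = 3000
for _ in range(k):
    g = random_sl3()
    u, lu = renorm(g @ u); su += lu
    v, lv = renorm(g @ v); sv += lv
    # cross products evolve by the transpose-inverse: (gu) x (gv) = det(g) g^{-T} (u x v)
    w, lw = renorm(np.linalg.inv(g).T @ w); sw += lw

alpha, alpha2, beta = su / k, sv / k, sw / k
print(alpha, alpha2, beta)         # alpha ≈ alpha2 > 0 and beta < 2*alpha
```

In particular {\log|u_k|+\log|v_k|-\log|u_k\times v_k|\approx k(2\alpha-\beta)\rightarrow+\infty}, so the angle ratio {|u_k\times v_k|/(|u_k|\cdot|v_k|)} decays exponentially, as claimed.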

In summary, we showed that, for each fixed {z_0}, the sequence {z_{-k}} converges to a Dirac mass with probability {1}. Since {y_{-i}} are independent of {z_0}, this actually proves that {z_{-k}} converges to Dirac masses independently of {z_0}. Finally, since the sequence {\{z_k\}} is stationary, we conclude that the {z_{-k}} were Dirac masses to begin with.

This completes the proof of Lemma 16. \Box

Closing this post, we will use the fact that {(B_n, m_{B_n})} is the Poisson boundary of {(\Gamma, \mu)} (where {\mu} is the probability measure constructed in Proposition 14) to show Furstenberg’s theorem 1 that the lattices of {SL(2,\mathbb{R})} are “distinct” from the lattices of {SL(3,\mathbb{R})}.

3. End of the proof of Furstenberg’s theorem 1

In this section, we will show the following statement (providing a slightly stronger version of Theorem 1) in the case {n=3}:

Theorem 17 A cocompact lattice of {G_n=SL(n,\mathbb{R})}, {n\geq 3}, can not be isomorphic to a subgroup of {SL(2,\mathbb{R})}.

The proof of this theorem proceeds by contradiction. Suppose that {\Gamma} is isomorphic to a cocompact lattice of {G_3=SL(3,\mathbb{R})} and also to a subgroup of {G_2=SL(2,\mathbb{R})}.

By Theorem 13 (and Proposition 14), we can equip {\Gamma} with a fully supported probability measure {\mu} such that {(\Gamma, \mu)} has Poisson boundary {P(\Gamma,\mu)=(B_3, m_{B_3})}.

Let us think now of {\Gamma} as a subgroup of {SL(2,\mathbb{R})}. We claim that {\Gamma} can not be confined to a compact subgroup of {SL(2,\mathbb{R})}: indeed, if this were the case, {\Gamma} would be Abelian (as the compact subgroups of {SL(2,\mathbb{R})} are conjugate to subgroups of the rotation group {SO(2,\mathbb{R})}); however, we saw that Abelian groups have trivial Poisson boundary, while {P(\Gamma,\mu)=(B_3,m_{B_3})} is non-trivial.

Next, let us observe that {B_2} admits some {\mu}-stationary probability measure {\theta}, i.e., {\mu\ast\theta=\theta}. Indeed, this is a consequence of the Krylov-Bogolyubov argument: we fix {\theta_1} an arbitrary probability measure on the compact space {B_2}, and we extract a {\mu}-stationary probability {\theta} as a limit of some convergent subsequence of the sequence

\displaystyle \lambda_n:=\frac{1}{n}\sum\limits_{i=0}^{n-1}\mu^i \ast\theta_1 := \frac{1}{n}\sum\limits_{i=0}^{n-1}\underbrace{\mu\ast\dots\ast\mu}_{i}\ast\theta_1
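The Krylov-Bogolyubov argument is perhaps most transparent in a finite toy model: replacing the convolution operator {\theta\mapsto\mu\ast\theta} by a stochastic matrix acting on measures on a finite set (our own simplification, not part of the original argument), the Cesàro averages are stationary up to an error of order {1/n}:

```python
import numpy as np

# Krylov-Bogolyubov in a finite toy model: a 3-state Markov chain standing in
# for the convolution operator theta -> mu * theta (the matrix entries are arbitrary)
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
theta1 = np.array([1.0, 0.0, 0.0])      # arbitrary starting measure

n = 5000
avg = np.zeros(3)
cur = theta1.copy()
for _ in range(n):
    avg += cur
    cur = cur @ P                       # next Cesàro summand theta1 P^i
avg /= n

print(np.max(np.abs(avg @ P - avg)))    # ≈ 0: the Cesàro limit is stationary
```

The exact identity behind this is {\lambda_n P-\lambda_n=\frac{1}{n}(\theta_1 P^n-\theta_1)}, whose norm is at most {2/n}; compactness then extracts a genuinely stationary limit.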

We affirm that {(B_2,\theta)} is a boundary of {(\Gamma,\mu)}. In fact, given a stationary sequence {\{y_n\}} of independent random variables with distribution {\mu}, we know that the elements {y_1\dots y_n\in G_2} are unbounded as {\Gamma} is a non-compact subgroup of {SL(2,\mathbb{R})} (as we just saw) and {\mu} is fully supported on {\Gamma}. By Lemma 12, it follows that {y_1\dots y_n \theta} converges to a Dirac mass, and so, by Proposition 7 of the previous post, we deduce that {(B_2,\theta)} is a boundary of {(\Gamma,\mu)}.

By definition of Poisson boundary, the facts that {(\Gamma,\mu)} has Poisson boundary {P(\Gamma, \mu)=(B_3, m_{B_3})} and {(B_2,\theta)} is a boundary of {(\Gamma, \mu)} imply that {(B_2,\theta)} is an equivariant image of {(B_3, m_{B_3})} under some equivariant map {\rho}. We will prove that this is not possible (thus completing today’s post). The basic idea is that the sole way of going to infinity in {G_2} is to approach {B_2}, i.e., {\gamma_n\theta} converges to a Dirac mass as {\gamma_n\rightarrow\infty} in {G_2}, but, on the other hand, we can go to infinity in {G_3} without approaching {B_3}, i.e., we can let {g\rightarrow\infty} in {G_3} in such a way that {gm_{B_3}} converges to a circle measure. Then, at the end of the day, these distinct (and incompatible) boundary behaviors (Dirac mass versus circle measure) will lead to the desired contradiction.

In order to get Dirac mass behavior in the context of {G_2}, our plan is to apply Lemma 12. But, before doing so, we need to know that {\theta} is not atomic, a property that we claim to be true. Indeed, suppose to the contrary that {\theta(\{\zeta\})>0} for some point {\zeta\in B_2}, and denote by {\Delta=\rho^{-1}(\zeta)}, a set of {m_{B_3}}-measure {m_{B_3}(\Delta)=\theta(\{\zeta\})>0}. Consider the translates {\gamma\Delta} of {\Delta} under {\gamma\in\Gamma\subset G_3}. On one hand, if {\gamma\Delta} intersects {\Delta} in a subset of positive measure, then their images {\rho(\gamma\Delta)} and {\rho(\Delta)=\{\zeta\}} under {\rho} in {B_2} intersect, and, thus, by equivariance of {\rho} and the fact that {\rho(\Delta)} is a singleton, {\rho(\gamma\Delta)=\rho(\Delta)}, and, a fortiori, {\gamma\Delta=\Delta}. In particular, the property that {\gamma\Delta=\Delta} whenever {\gamma\Delta} intersects {\Delta} with positive measure implies that {\Gamma} does not act ergodically on {B_3\times B_3} (because the {\gamma\Delta}‘s, {\gamma\in\Gamma}, do not get mixed together). However, this is a contradiction with Moore’s ergodicity theorem (implying that a lattice of {G_3} acts ergodically on {B_3\times B_3}). This shows that {\theta} is non-atomic.

In particular, given two disjoint closed subsets {Q_1} and {Q_2} of {B_2}, if {\gamma_n\in\Gamma\subset G_2} is any sequence going to {\infty}, then

\displaystyle \lim\limits_{n\rightarrow\infty} \min\{\gamma_n\theta(Q_1), \gamma_n\theta(Q_2)\}=0

Indeed, this is so because Lemma 11 (and the proof of Lemma 12) says that, for each {\varepsilon>0}, the measure {\gamma_n\theta} concentrates at least {1-\varepsilon} of its mass in an interval of length {<\varepsilon} for all {n} sufficiently large. In particular, for any {\varepsilon>0} smaller than the distance separating the disjoint closed sets {Q_1} and {Q_2} (i.e., {0<\varepsilon<dist(Q_1, Q_2)}), we obtain that either {\gamma_n\theta(Q_1)\leq\varepsilon} or {\gamma_n\theta(Q_2)\leq\varepsilon} for {n} sufficiently large, as desired.
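This Dirac-mass behavior can be observed numerically in a toy model: hyperbolic elements {g_n=\textrm{diag}(e^n, e^{-n})} of {SL(2,\mathbb{R})} acting on the circle of lines {\mathbb{RP}^1}. The sketch below (with the Lebesgue measure standing in for the stationary measure {\theta} — an assumption made purely for illustration) shows the pushforward mass concentrating near the attracting fixed point as {n} grows:

```python
import numpy as np

def pushforward_angles(g, angles):
    """Push lines in RP^1 (parametrized by angles mod pi) forward by g in SL(2,R)."""
    v = np.stack([np.cos(angles), np.sin(angles)])   # unit vectors spanning the lines
    w = g @ v
    return np.arctan2(w[1], w[0]) % np.pi            # angle of the image line, mod pi

rng = np.random.default_rng(0)
theta_samples = rng.uniform(0, np.pi, 100_000)       # toy stand-in for a non-atomic measure

for n in [1, 3, 6]:
    g_n = np.array([[np.exp(n), 0.0], [0.0, np.exp(-n)]])  # hyperbolic, g_n -> infinity
    images = pushforward_angles(g_n, theta_samples)
    # distance of each image line to the attracting line (angle 0 mod pi)
    dist = np.minimum(images, np.pi - images)
    print(f"n={n}: mass within 0.05 of the attracting point = {np.mean(dist < 0.05):.3f}")
```

As {n} increases, essentially all of the mass ends up in an arbitrarily small interval, which is exactly the concentration used in the argument above.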

We can rephrase the “concentration property” of the last paragraph in terms of {\mu}-harmonic functions as follows. Let {0\leq\psi_i\leq1}, {i=1,2}, be measurable functions supported on {Q_1} and {Q_2} respectively, and consider the associated {\mu}-harmonic functions

\displaystyle h_i(\gamma)=\int_{B_2}\psi_i(\zeta)d\gamma\theta(\zeta). \ \ \ \ \ (5)

Then, the “concentration property” is

\displaystyle \lim\limits_{\gamma\rightarrow\infty}\min\{h_1(\gamma), h_2(\gamma)\}=0 \ \ \ \ \ (6)

Now, let us “transfer” this picture to the {G_3=SL(3,\mathbb{R})} context, i.e., let us think of the {\mu}-harmonic functions {h_i}, {i=1,2}, as defined on {\Gamma\subset G_3}. By the Poisson formula (and the fact that {P(\Gamma,\mu)=(B_3,m_{B_3})}), we can represent them as

\displaystyle h_i(\gamma)=\int_{B_3} \Psi_i(\xi)d\gamma m_{B_3}(\xi) \ \ \ \ \ (7)

where {\Psi_i}, {i=1, 2}, are bounded measurable functions on {B_3}. By replacing the variable {\gamma\in\Gamma} by {g\in G_3} in this formula, we see that {h_i} can be extended to harmonic functions on {G_3} (w.r.t. any spherical measure on {G_3}).

In what follows, we will try to understand the boundary behavior of the {h_i}‘s, ultimately to contradict the concentration property (6). In this direction, let us first “transfer” the concentration property (6) from {G_2} to {G_3} as follows. Observe that {\Gamma} is a cocompact lattice of {G_3}, so that we can select a bounded fundamental domain {A}, i.e., a bounded set such that any element of {G_3} has the form {\gamma g} with {\gamma\in\Gamma} and {g\in A}. Now, given {\gamma\in\Gamma}, let us compare the values of {h_i} at {\gamma} and at one of its “neighbors” {\gamma g}, {g\in A}. Here, we observe that

\displaystyle \left(\frac{d(\gamma g)_* m_{B_3}}{dm_{B_3}}(\xi)\right)\Big\slash\left(\frac{d(\gamma)_* m_{B_3}}{dm_{B_3}}(\xi)\right) = \frac{d(\gamma g)_* m_{B_3}}{d(\gamma)_*m_{B_3}}(\xi) = \frac{d(g)_* m_{B_3}}{dm_{B_3}}(\gamma^{-1}\xi)

In particular, since the right-hand side is bounded for {g} in the bounded set {A}, we conclude that the ratio between

\displaystyle \frac{d(\gamma g)_* m_{B_3}}{dm_{B_3}}(\xi)\quad \textrm{and} \quad \frac{d(\gamma)_* m_{B_3}}{dm_{B_3}}(\xi)

is uniformly bounded away from {0} and {\infty} for {\gamma\in\Gamma} and {g\in A}. Therefore, since the values of a positive {\mu}-harmonic function {h} at {\gamma} and {\gamma g} can be written as

\displaystyle h(\gamma)=\int\hat{h}(\xi)\frac{d(\gamma)_* m_{B_3}}{dm_{B_3}}(\xi)dm_{B_3}(\xi)

and

\displaystyle h(\gamma g) = \int\hat{h}(\xi)\frac{d(\gamma g)_* m_{B_3}}{dm_{B_3}}(\xi)dm_{B_3}(\xi),

we deduce that the values {h(\gamma)} and {h(\gamma g)} are uniformly comparable for {\gamma\in\Gamma} and {g\in A}, that is, there exists a constant {c>0} such that

\displaystyle h(\gamma g)<c\cdot h(\gamma)

for all {\gamma\in\Gamma} and {g\in A}. Hence, given {\widetilde{g}\in G_3}, there exists {\gamma\in\Gamma} such that

\displaystyle h(\widetilde{g})<c\cdot h(\gamma)

because {\widetilde{g}=\gamma g} for some {\gamma\in\Gamma}, {g\in A}. Furthermore, when letting {\widetilde{g}\rightarrow\infty}, we have that {\gamma\rightarrow\infty} (as {g\in A} and {A} is bounded). Thus, by combining this with the “concentration property” (6), we get the following concentration property in the context of {SL(3,\mathbb{R})}:

\displaystyle \lim\limits_{\substack{g\rightarrow\infty, \\ g\in G_3}} \min\{h_1(g), h_2(g)\}=0 \ \ \ \ \ (8)
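The key computation in this transfer was the cocycle identity for the Radon–Nikodym derivatives displayed above. As a sanity check, that identity can be verified numerically in the lower-dimensional toy case of {SL(2,\mathbb{R})} acting on {\mathbb{RP}^1}, with {m} the Lebesgue measure standing in for {m_{B_3}} (an illustrative substitution, not the actual {SL(3,\mathbb{R})} setting):

```python
import numpy as np

def act(g, phi):
    """Projective action of g in SL(2,R) on the line with angle phi (mod pi)."""
    v = np.array([np.cos(phi), np.sin(phi)])
    w = g @ v
    return np.arctan2(w[1], w[0]) % np.pi

def density(g, phi, h=1e-6):
    """Radon-Nikodym derivative d(g_* m)/dm at phi, where m is uniform on RP^1.

    This equals |d/dphi of the action of g^{-1}|, computed by a central difference."""
    ginv = np.linalg.inv(g)
    a, b = act(ginv, phi - h), act(ginv, phi + h)
    d = (b - a) % np.pi          # undo a possible mod-pi wraparound of the angles
    d = min(d, np.pi - d)
    return d / (2 * h)

gamma = np.array([[2.0, 1.0], [1.0, 1.0]])   # two elements of SL(2,R), det = 1
g = np.array([[1.0, 0.5], [0.0, 1.0]])
rng = np.random.default_rng(1)
for xi in rng.uniform(0, np.pi, 5):
    lhs = density(gamma @ g, xi) / density(gamma, xi)
    rhs = density(g, act(np.linalg.inv(gamma), xi))
    assert abs(lhs - rhs) < 1e-4 * (1 + rhs), (lhs, rhs)
print("cocycle identity holds at 5 random points")
```

The identity is just the chain rule for the action maps; boundedness of the right-hand side for {g} in a bounded set is then clear from continuity.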

Let us now try to contradict the “transferred concentration property” (8) by analyzing the values {h_i(g)} when {g\rightarrow\infty} without approaching {B_3} (something that is not possible in {G_2}!). More concretely, let us consider the sequences

\displaystyle g_n^{(A)}=\left(\begin{array}{ccc}n & 0 & 0 \\ 0 & n & 0 \\ 0 & 0 & 1/n^2\end{array}\right) \quad \textrm{and} \quad g_n^{(B)} = \left(\begin{array}{ccc}n^2 & 0 & 0 \\ 0 & 1/n & 0 \\ 0 & 0 & 1/n\end{array}\right)

We want to investigate the “boundary” values of the harmonic functions {h_1} and {h_2} along the sequences {g_n^{(A)}} and {g_n^{(B)}} (and some adequate translates). For this, we need an analog of Fatou’s theorem saying that harmonic functions have boundary values along almost all radial directions. In our context, the analog of Fatou’s theorem goes as follows.

The limits of the sequences {g_n^{(A)}m_{B_3}} and {g_n^{(B)}m_{B_3}} are circle measures {\omega^{(A)}} and {\omega^{(B)}} supported on the great circles

\displaystyle S_A=\{(\overline{u}, \overline{e_3}): \overline{u}\perp\overline{e_3}\}

and

\displaystyle S_B=\{(\overline{e_1}, \overline{v}): \overline{v}\perp\overline{e_1}\}
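The appearance of circle measures (rather than Dirac masses) can be seen numerically by letting {g_n^{(A)}} act on random lines in {\mathbb{R}^3}: the image lines collapse into the plane {\overline{e_3}^\perp}, but their angular coordinate inside that plane remains spread over a full circle. Below is a minimal sketch using lines in {\mathbb{R}^3} as a proxy for the flags of {B_3} (so it only illustrates the line component of the limiting behavior):

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.normal(size=(3, 50_000))
v /= np.linalg.norm(v, axis=0)                 # random directions (lines) in R^3

n = 1000.0
gA = np.diag([n, n, 1.0 / n**2])               # g_n^(A): goes to infinity in SL(3,R)
w = gA @ v
w /= np.linalg.norm(w, axis=0)                 # direction of the image line

print("max |third coordinate| of images:", np.abs(w[2]).max())  # essentially 0
# yet the angle within the e1-e2 plane stays spread over the whole circle:
angles = np.arctan2(w[1], w[0])
hist, _ = np.histogram(angles, bins=12, range=(-np.pi, np.pi))
print("angular histogram (roughly uniform):", hist)
```

This is precisely the contrast with {G_2}: the matrices escape to infinity, but the pushforward measure spreads along a circle instead of collapsing to a point.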

Also, it is possible to check that all circle measures have the form {k\omega^{(A)}} or {k\omega^{(B)}} for {k} in the orthogonal subgroup {K_3} of {G_3}. Now, we claim that, given a bounded measurable function {\phi} on {B_3}, the integrals

\displaystyle \Phi^{(A)}(k)=\int\phi(\xi)dk\omega^{(A)}(\xi) \quad \textrm{and} \quad \Phi^{(B)}(k)=\int\phi(\xi) dk\omega^{(B)}(\xi)

are defined for almost every {k\in K_3}. Indeed, we recall that {K_3} acts transitively on {B_3=G_3/P_3}, so that {B_3=K_3/W} where {W} is a finite group, i.e., {K_3} is a finite cover of {B_3}. The great circles {S_A} and {S_B} correspond to two {1}-parameter subgroups {T_A} and {T_B} of {K_3} (as does any great circle of {B_3} passing through the identity coset). Also, the great circles of {B_3} are just the cosets {kT_A} and {kT_B}, {k\in K_3}, modulo {W}. In particular, given a bounded measurable function {\phi} on {B_3}, we can lift it to a bounded measurable function {\widetilde{\phi}} on {K_3} and then define

\displaystyle \Phi^{(A)}(k)=\int_{T_A}\widetilde{\phi}(kt)dt \quad \textrm{and} \quad \Phi^{(B)}(k)=\int_{T_B}\widetilde{\phi}(kt) dt

For later use, we observe that

\displaystyle \int\Phi^{(A)}(k)dk=\int\Phi^{(B)}(k)dk=\int\phi(\xi)dm_{B_3}(\xi) \ \ \ \ \ (9)
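Identity (9) is essentially Fubini’s theorem combined with the {K_3}-invariance of {m_{B_3}}. It can be checked numerically in the toy model where {K_3=SO(3)} acts on the sphere {S^2} (a proxy for the finite cover {K_3\rightarrow B_3}, used here only for illustration), with {T_A} the rotations about the vertical axis and {\phi(x)=x_3^2}, whose integral over the sphere is {1/3}:

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_rotation():
    """A Haar-random element of SO(3) via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q = q * np.sign(np.diag(r))      # normalize column signs (Haar on O(3))
    if np.linalg.det(q) < 0:
        q[:, [0, 1]] = q[:, [1, 0]]  # flip orientation to land in SO(3)
    return q

def R_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

p = np.array([1.0, 0.0, 0.0])                  # base point on the orbit circle
ts = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([R_z(t) @ p for t in ts])    # the circle T_A . p, shape (64, 3)

# Phi(k) = average of phi over the rotated circle; then average Phi over Haar k
vals = []
for _ in range(4000):
    k = haar_rotation()
    imgs = k @ circle.T                        # the rotated circle k T_A p
    vals.append(np.mean(imgs[2] ** 2))         # Phi(k) for phi(x) = x_3^2
print("average of Phi over k :", np.mean(vals))
print("integral of phi on S^2:", 1.0 / 3.0)    # exact value of the full integral
```

The two printed numbers agree up to Monte Carlo error, as (9) predicts: averaging first over each circle and then over {k} recovers the integral against the invariant measure.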

Anyhow, in this setting, Fatou’s theorem implies that

\displaystyle \int\phi(k\xi)dg_n^{(A)}m_{B_3}(\xi)\rightarrow\Phi^{(A)}(k) \textrm{ and } \int\phi(k\xi)dg_n^{(B)}m_{B_3}(\xi)\rightarrow\Phi^{(B)}(k)

(at least) in measure as {n\rightarrow\infty}.

Therefore, if {h(g)=\int\phi(\xi)dgm_{B_3}(\xi)} is the harmonic function associated to {\phi}, then

\displaystyle h(kg_n^{(A)})\rightarrow\Phi^{(A)}(k) \quad \textrm{and} \quad h(kg_n^{(B)})\rightarrow\Phi^{(B)}(k) \ \ \ \ \ (10)

in measure as {n\rightarrow\infty}.

Now let us come back to the harmonic functions {h_1} and {h_2} constructed above satisfying the concentration property (8). Denoting by {\Phi_i^{(A)}} and {\Phi_i^{(B)}} the “boundary values” of {h_i} (along {kg_n^{(A)}} and {kg_n^{(B)}}), we see that (10) and the concentration property (8) imply that

\displaystyle \min\{\Phi_1^{(A)}(k), \Phi_2^{(A)}(k)\}=0 \quad \textrm{and} \quad \min\{\Phi_1^{(B)}(k), \Phi_2^{(B)}(k)\}=0 \ \ \ \ \ (11)

for almost every {k\in K_3}.

We will show that this is not possible by choosing the disjoint closed sets {Q_1} and {Q_2} of {B_2} leading to {h_1} and {h_2} in a careful way and by using the fact that the {\mu}-stationary measure {\theta} on {B_2} is non-atomic.

More concretely, since {\theta} is not atomic, we can choose a compact subset {Q_1} of {B_2} with {0<\theta(Q_1)<1} that we fix once and for all. Set {\psi_1=\chi_{Q_1}}, construct from {\psi_1} a harmonic function {h_1} as above (cf. (5)), and let {\Psi_1} be the associated function in (7). Denote by {\Phi_1^{(A)}} and {\Phi_1^{(B)}} the boundary values of {h_1} along sequences {kg_n^{(A)}} and {kg_n^{(B)}}.

Now, we consider an increasing sequence {Q_2(n)} of compact subsets exhausting {B_2-Q_1}. Again, set {\psi_2(n)=\chi_{Q_2(n)}}, construct the corresponding harmonic functions {h_2(n)} (cf. (5)), and let {\Phi_2^{(A)}(n)} and {\Phi_2^{(B)}(n)} be the boundary values of {h_2(n)} along sequences {kg_n^{(A)}} and {kg_n^{(B)}}. By construction, since {Q_2(n)} is an increasing sequence, the functions {\Phi_2^{(A)}(n)} and {\Phi_2^{(B)}(n)} form two sequences of non-negative functions uniformly bounded by {1} that do not decrease as {n} increases. It follows that we can extract limits {\Phi_2^{(A)}(\infty)=\lim\limits_{n\rightarrow\infty}\Phi_2^{(A)}(n)} and {\Phi_2^{(B)}(\infty)=\lim\limits_{n\rightarrow\infty}\Phi_2^{(B)}(n)}. Moreover, from the definition of {\Phi_2^{(A)}(n)} and {\Phi_2^{(B)}(n)}, we get

\displaystyle 0\leq\Phi_2^{(A)}(\infty), \Phi_2^{(B)}(\infty)\leq 1 \ \ \ \ \ (12)

Also, since {Q_2(n)} exhausts {B_2-Q_1}, i.e., {\lim\limits_{n\rightarrow\infty}\int_{B_2}(\psi_1(\zeta)+\psi_2(n)(\zeta))d\theta(\zeta)=1}, we obtain from (9) that

\displaystyle \int(\Phi_1^{(A)}(k)+\Phi_2^{(A)}(\infty)(k))dk=1 \textrm{ and } \int(\Phi_1^{(B)}(k)+\Phi_2^{(B)}(\infty)(k))dk=1 \ \ \ \ \ (13)

Furthermore, the concentration property (11) implies that

\displaystyle \min\{\Phi_1^{(A)}(k),\Phi_2^{(A)}(\infty)(k)\}=0 \quad \textrm{and} \quad \min\{\Phi_1^{(B)}(k),\Phi_2^{(B)}(\infty)(k)\}=0 \ \ \ \ \ (14)

for almost every {k\in K_3}.

Finally, since the harmonic functions {h_2(n)} have nice integral representations as in (7), i.e.,

\displaystyle h_2(n)(g)=\int_{B_3}\Psi_2(n)(\xi)dgm_{B_3}(\xi),

we can extract a limit {\Psi_2(\infty)} such that

\displaystyle \Phi_2^{(A)}(\infty)(k)=\int\Psi_2(\infty)(\xi)dk\omega^{(A)}(\xi), \quad \Phi_2^{(B)}(\infty)(k)=\int\Psi_2(\infty)(\xi)dk\omega^{(B)}(\xi), \ \ \ \ \ (15)

where {\omega^{(A)}} and {\omega^{(B)}} are circle measures in our preferred circles {S_A} and {S_B}.

Now, we can get a contradiction as follows. By (12), (13), and the concentration property (14), we have that

\displaystyle \Phi_1^{(A)}(k), \Phi_1^{(B)}(k), \Phi_2^{(A)}(\infty)(k), \Phi_2^{(B)}(\infty)(k)\in\{0, 1\}

for almost every {k\in K_3}. By (15), this means that the functions {\Psi_1} and {\Psi_2(\infty)} are constant ({0} or {1}) on each great circle (up to some negligible exceptional set of zero measure).

In particular, after lifting the function {\Psi_1} on {B_3} to a function {\widetilde{\Psi}_1} on {K_3}, we have that {\widetilde{\Psi}_1(k_0t)=\widetilde{\Psi}_1(k_0)} for {t\in T_A\cup T_B} (where {T_A} and {T_B} are the lifts of the great circles {S_A} and {S_B} supporting {\omega^{(A)}} and {\omega^{(B)}}), that is,

\displaystyle T_A\cup T_B\subset T:=\{t\in K_3: \widetilde{\Psi}_1(k_0t)=\widetilde{\Psi}_1(k_0) \textrm{ for almost every } k_0\in K_3\}.

Note that {T} is a subgroup of {K_3}. It follows that {T=K_3}: indeed, it is a general fact that the group generated by two distinct {1}-parameter subgroups of {K_3} (such as {T_A} and {T_B}) is the whole {K_3}. This shows that {\widetilde{\Psi}_1}, and, a fortiori, {\Psi_1} is the constant function {0} or {1}.
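The general fact invoked here — rotations about two distinct axes generate all of {SO(3)} — can be illustrated numerically: short random words in two {1}-parameter rotation subgroups already move a fixed point all over the sphere, which could not happen if the generated group were contained in a proper closed subgroup. A small sketch (with the vertical- and horizontal-axis rotation groups playing the roles of {T_A} and {T_B}):

```python
import numpy as np

rng = np.random.default_rng(4)

def R_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def R_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

# random words of length 3 in the two 1-parameter subgroups {R_z(t)} and {R_x(t)}
point = np.array([0.0, 0.0, 1.0])
images = np.empty((10_000, 3))
for i in range(10_000):
    g = np.eye(3)
    for _ in range(3):
        g = g @ R_z(rng.uniform(0, 2 * np.pi)) @ R_x(rng.uniform(0, 2 * np.pi))
    images[i] = g @ point

# the orbit of a single point already fills every latitude band of the sphere:
hist, _ = np.histogram(images[:, 2], bins=20, range=(-1, 1))
print("all latitude bins occupied:", bool(np.all(hist > 0)))
```

A single rotation group would confine the orbit of {point} to one circle; mixing the two axes spreads it over the whole sphere, consistent with the generated subgroup being all of {K_3}.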

However, {\Psi_1} can be neither {0} nor {1}: in fact, by (7) and (5), we know that

\displaystyle \int\Psi_1(\xi)dm_{B_3}(\xi)=h_1(e)=\int\psi_1(\zeta)d\theta(\zeta)=\theta(Q_1),

which, by our choice of {Q_1} with {0<\theta(Q_1)<1}, contradicts the fact that {\Psi_1\equiv 0} or {\Psi_1\equiv 1}.

This completes our sketch of the proof of Theorem 17 (and Theorem 1).

