In this previous blog post here (about this preprint joint with Alex Eskin), it was mentioned that the simplicity of Lyapunov exponents of the Kontsevich-Zorich cocycle over Teichmüller curves in moduli spaces of Abelian differentials (translation surfaces) can be determined by looking at the group of matrices coming from the associated monodromy representation thanks to a profound theorem of H. Furstenberg on the so-called Poisson boundary of certain homogeneous spaces.
In particular, this meant that, in the case of Teichmüller curves, the study of Lyapunov exponents can be performed without the construction of any particular coding (combinatorial model) of the geodesic flow, a technical difficulty present in previous papers dedicated to the simplicity of Lyapunov exponents of the Kontsevich-Zorich cocycle (such as these articles here and here).
Of course, I was happy to use Furstenberg’s result as a black box at the time Alex Eskin and I were writing our preprint, but I must confess that I was always curious to understand how Furstenberg’s theorem works. In fact, my curiosity grew even more when I discovered that Furstenberg wrote a survey article (of 63 pages) on this subject, but this survey was not easily accessible on the internet. For this reason, after consulting a copy of Furstenberg’s survey at the Institut Henri Poincaré (IHP) library, I was impressed by the high quality of the material (as expected) and I decided to buy the book containing this survey.
As the reader can imagine, I learned several theorems by reading Furstenberg’s survey and, for this reason, I thought that it could be a good idea to describe here the proof of a particular case of Furstenberg’s theorem on the Poisson boundary of lattices of $SL(n,\mathbb{R})$ (mostly for my own benefit, but also because Furstenberg’s survey is not easy to find online to the best of my knowledge).
For the sake of exposition, I will divide the discussion of Furstenberg’s survey into two posts, using Furstenberg’s survey, his original articles and A. Furman’s survey as basic references.
For this first (introductory) post, we will discuss (below the fold) some of the motivations behind Furstenberg’s investigation of Poisson boundaries of lattices of Lie groups and we will construct such boundaries for arbitrary (locally compact) groups equipped with probability measures.
1. Some Motivations
Definition 1 A lattice $\Gamma$ of a Lie group $G$ is a discrete subgroup such that the quotient $G/\Gamma$ has finite volume with respect to the natural invariant measure induced from the (left-invariant) Haar measure on $G$.
Example 1 Among the most basic examples of lattices, one has: $\mathbb{Z}^n$ is a lattice of $\mathbb{R}^n$, and $SL(n,\mathbb{Z})$ is a lattice of $SL(n,\mathbb{R})$ (cf. Zimmer’s book for instance).
Definition 2 We say that a non-compact Lie group $G$ is an envelope of $\Gamma$ whenever $\Gamma$ is isomorphic to a lattice of $G$.
Example 2 The Lie group $SL(2,\mathbb{R})$ is an envelope of the free group on two generators $F_2$. Indeed, it is possible to show that $F_2$ is isomorphic to the level two congruence subgroup of $SL(2,\mathbb{Z})$ via the isomorphism sending the generators $a$ and $b$ of $F_2$ to $\begin{pmatrix}1 & 2\\ 0 & 1\end{pmatrix}$ and $\begin{pmatrix}1 & 0\\ 2 & 1\end{pmatrix}$, respectively. Since this is a finite-index subgroup of $SL(2,\mathbb{Z})$ (the index can be computed explicitly, cf. Bergeron’s book for example), and $SL(2,\mathbb{Z})$ is a lattice of $SL(2,\mathbb{R})$ (cf. Example 1), this shows that $SL(2,\mathbb{R})$ envelopes $F_2$.
Example 3 The Lie group $PSL(2,\mathbb{R})$ envelopes the fundamental group $\pi_1(S)$ of any compact surface $S$ of genus $g\geq 2$. In fact, by fixing any Riemann surface structure on $S$, we obtain from the uniformization theorem that $S=\mathbb{H}/\Gamma$ where $\Gamma\simeq\pi_1(S)$ is a subgroup of the group $PSL(2,\mathbb{R})$ of hyperbolic isometries (Möbius transformations) of the hyperbolic plane $\mathbb{H}$. Note that $\Gamma$ is a lattice of $PSL(2,\mathbb{R})$ because $PSL(2,\mathbb{R})/\Gamma$ is naturally identified to the unit cotangent bundle of the compact surface $S$, so that, by definition, $PSL(2,\mathbb{R})$ envelopes $\pi_1(S)$.
Once we have the notion of envelope of a (discrete) group $\Gamma$, the following three questions arise naturally:
- i) Existence: What are the (discrete) groups admitting envelopes?
- ii) Uniqueness: If $\Gamma$ admits an envelope $G$, to what extent is $G$ unique?
- iii) Rigidity: Let $\Gamma_1$ and $\Gamma_2$ (resp.) be (discrete) groups enveloped by $G_1$ and $G_2$ (resp.) and assume that $\Gamma_1$ is isomorphic to $\Gamma_2$, say via an isomorphism $i:\Gamma_1\to\Gamma_2$. Is it true that this isomorphism $i$ is the restriction of an isomorphism $I:G_1\to G_2$ of the Lie groups $G_1$ and $G_2$?
Here are some partial answers to these questions.
Firstly, all these questions have affirmative answers in the context of finitely generated nilpotent groups: see these references here for more details.
On the other hand, we do not have complete answers in the other settings.
For instance, the existence question i) is open in general, but if we ask what discrete groups have semi-simple envelopes, then some necessary conditions are known. For example, H. Kesten proved in 1959 (see this paper here) that if $\Gamma$ is a finitely generated group, say that $\Gamma$ is generated by $\gamma_1,\dots,\gamma_k$, and a semi-simple Lie group envelopes $\Gamma$, then $N(n)$ grows exponentially fast as $n\to\infty$, where $N(n)$ is the number of elements of $\Gamma$ obtained as products of at most $n$ of the $\gamma_i$’s and their inverses (i.e., elements of the form $\gamma_{i_1}^{\epsilon_1}\cdots\gamma_{i_m}^{\epsilon_m}$ where $\epsilon_j\in\{-1,1\}$ for each $j$ and $m\leq n$).
Remark 1 Note that Kesten’s condition (of exponential growth of $N(n)$) is independent of the particular set $\{\gamma_1,\dots,\gamma_k\}$ of generators of $\Gamma$ used. Moreover, it is not satisfied by Abelian groups and it is satisfied by free groups $F_k$ (on any number $k\geq 2$ of generators).
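To make Kesten’s growth condition concrete, here is a minimal computational sketch (my own illustration, not taken from Furstenberg’s survey or Kesten’s paper): it counts $N(n)$ for the free group $F_2$ and for the Abelian group $\mathbb{Z}^2$ by brute-force enumeration of balls, exhibiting exponential growth in the first case and polynomial growth in the second.

```python
# Brute-force computation of N(n) (number of elements that are products of at
# most n generators and inverses) for F_2 and for Z^2.  Illustration only.

def ball_sizes_free_group(n_max):
    """N(n) for F_2 = <a, b>: elements are reduced words over a, A, b, B."""
    inverse = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
    sphere = {''}            # reduced words of length exactly 0
    ball = {''}
    sizes = [1]
    for _ in range(n_max):
        sphere = {w + s for w in sphere for s in 'aAbB'
                  if not w or w[-1] != inverse[s]}     # append without cancellation
        ball |= sphere
        sizes.append(len(ball))
    return sizes

def ball_sizes_Z2(n_max):
    """N(n) for Z^2 with the standard generators (1,0) and (0,1)."""
    gens = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    sphere = {(0, 0)}
    ball = {(0, 0)}
    sizes = [1]
    for _ in range(n_max):
        sphere = {(x + dx, y + dy) for (x, y) in sphere for (dx, dy) in gens}
        ball |= sphere
        sizes.append(len(ball))
    return sizes

print(ball_sizes_free_group(8))   # 1, 5, 17, 53, ... (roughly 2 * 3^n: exponential)
print(ball_sizes_Z2(8))           # 1, 5, 13, 25, ... (roughly 2n^2: polynomial)
```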
Also, the uniqueness question ii) is trivially false if we insist on a “simple-minded uniqueness”: indeed, from the definition, if $G$ is an envelope of $\Gamma$, then $G\times K$ is also an envelope of $\Gamma$ whenever $K$ is a compact Lie group.
So, the uniqueness question ii) has some chance of having an affirmative answer only if we ask for uniqueness of the envelope modulo compact factors. In particular, it is natural to ask whether two Lie groups $G_1$ and $G_2$ without compact factors (i.e., for $i=1,2$, one has no decomposition $G_i=H_i\times K_i$ with $K_i$ a non-trivial compact Lie group) can envelope the same discrete group.
In this direction, H. Furstenberg proved the following result:
Theorem 3 (H. Furstenberg (1967)) $SL(n,\mathbb{R})$, $n\geq 3$, can not envelope a discrete subgroup enveloped by $SL(2,\mathbb{R})$.
A direct consequence of this theorem (and Examples 1 and 2) is:
Corollary 4 The free group on two generators $F_2$ can not occur with finite index in $SL(n,\mathbb{Z})$ for $n\geq 3$.
Finally, the rigidity question iii) is known to admit affirmative or negative answers depending on the context: for example, Mostow rigidity theorem shows that the answer is affirmative for the fundamental groups of complete, finite-volume, hyperbolic manifolds of dimension $\geq 3$, while the presence of several distinct (i.e., non-biholomorphic) Riemann surface structures on a topological compact surface of genus $g\geq 2$ shows that an isomorphism between the corresponding fundamental groups does not always extend to an automorphism of $PSL(2,\mathbb{R})$.
In the current series of posts, we will discuss the general lines of the proof of Furstenberg’s Theorem 3 as an “excuse” to study the features of the Poisson boundaries of lattices of $SL(n,\mathbb{R})$. Indeed, the basic idea is that a discrete subgroup $\Gamma$ “determines” the behavior at infinity of its envelopes $G$ in the sense that the “accumulation points” of random walks on $\Gamma$ form a natural (Poisson) “boundary” coinciding with the corresponding “boundaries” of its envelopes. In particular, we will see that a discrete group $\Gamma$ can’t be enveloped by both $SL(2,\mathbb{R})$ and $SL(n,\mathbb{R})$, $n\geq 3$, at the same time because the boundaries of $SL(2,\mathbb{R})$ and $SL(n,\mathbb{R})$, $n\geq 3$, are distinct.
In other words, despite the fact that Theorem 3 is a statement about lattices of Lie groups, Furstenberg’s proof of it is mainly probabilistic (i.e., based on the nature of random walks).
2. Naive description of the boundary
Before giving the formal definition of the Poisson boundary, let us spend some time discussing a partial version of the definition in the particular case of the free group in finitely many generators where the naive idea of “taking accumulation points of random walks” works.
Let $F_k$ be the free group on $k\geq 2$ generators $a_1,\dots,a_k$. By definition, an element of $F_k$ has the form $a_{i_1}^{n_1}a_{i_2}^{n_2}\cdots a_{i_m}^{n_m}$ where $n_j\in\mathbb{Z}-\{0\}$ for each $j$, and, furthermore, this representation is unique if we require that there is no cancellation, i.e., $i_j\neq i_{j+1}$ for each $j$. In order to compactify $F_k$ by adding a boundary associated to accumulation points of random walks, we consider the set $\partial F_k$ consisting of infinite words $a_{i_1}^{n_1}a_{i_2}^{n_2}\cdots$ verifying the no-cancellation condition $i_j\neq i_{j+1}$ for all $j$. We see that the space $F_k\cup\partial F_k$ equipped with the topology of pointwise convergence is compact.
The boundary $\partial F_k$ has two important features that we will encounter later when introducing the Poisson boundary.
Firstly, the boundary $\partial F_k$ of $F_k$ was found by looking at the accumulation points of sequences in $F_k$.
Secondly, all “lattices”, i.e., finite-index subgroups, $\Gamma$ of $F_k$ have the same boundary $\partial F_k$. In fact, given $\xi\in\partial F_k$, we can find a sequence $\gamma_m\in\Gamma$ such that $\gamma_m\to\xi$ as follows. Fix $g_n\in F_k$ such that $g_n\to\xi$. Since $\Gamma$ has finite index in $F_k$, there exists a finite set $\{h_1,\dots,h_l\}\subset F_k$ such that $F_k=\bigcup_{j=1}^{l}\Gamma h_j$. In particular, there exists $j_0$ such that $g_n\in\Gamma h_{j_0}$ for infinitely many $n$, i.e., there exists a subsequence $g_{n_m}$ such that $g_{n_m}h_{j_0}^{-1}\in\Gamma$ for all $m$. On the other hand, from the definition of pointwise convergence, we have that $g_{n_m}h_{j_0}^{-1}\to\xi$ as $m\to\infty$ (because $g_{n_m}\to\xi$ as $m\to\infty$), that is, $\xi$ is accumulated by elements of $\Gamma$. Because $\xi\in\partial F_k$ was arbitrary, we deduce that $\partial F_k$ is the boundary of $\Gamma$.
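The following small simulation (my own sketch; the choice of generators and step counts is arbitrary) illustrates the previous two paragraphs: a simple random walk on $F_2$ is transient, and the initial segments of the reduced word it produces stabilize, so that the sample path converges pointwise to an infinite reduced word in $\partial F_2$.

```python
# Simple random walk on F_2: at each step multiply on the right by one of
# a, a^-1, b, b^-1 (uppercase letters denote inverses) chosen uniformly.
# Illustration of convergence to a boundary point: the prefix of the reduced
# word stabilizes as the number of steps grows.
import random

INVERSE = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def random_walk_word(steps, seed=0):
    """Reduced word reached by the walk after `steps` steps (same seed = same path)."""
    rng = random.Random(seed)
    word = []
    for _ in range(steps):
        s = rng.choice('aAbB')
        if word and word[-1] == INVERSE[s]:
            word.pop()                 # cancellation
        else:
            word.append(s)
    return ''.join(word)

for steps in [100, 1000, 10000]:
    print(steps, random_walk_word(steps)[:15])   # the printed prefixes agree
```

(With the same seed, the three runs are truncations of one sample path, so the agreement of the printed prefixes is precisely the pointwise convergence described above.)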
In general, a straightforward generalization of the boundary via taking accumulation points of sequences does not work well for arbitrary discrete groups (in the sense that, even in the favorable situations when we get a well-defined boundary from this construction, this boundary might heavily depend on the group, and this is a property that is not desirable in our context as we want all lattices of a given Lie group to share the same boundary).
Here, the basic idea is to add further structure to the discussion. More concretely, instead of trying to understand all possible ways to go to infinity, it is better to use group invariant random walks. While a more formal definition of the Poisson boundary will appear later in this post, let us for now informally discuss this random walk approach to the construction of a boundary for a couple of examples of groups.
We think of a random walk as a particle jumping from one state to the next state according to a set of probabilities describing how likely it is to jump from one state to another state. We say that a random walk on a group $G$ is group invariant if the probability of jumping from $x$ to $y$ is the same as the probability of jumping from $gx$ to $gy$ for all $g\in G$.
Example 4 The random walk on $\mathbb{Z}$ where the probabilities of jumping from $n$ to $n+1$ or $n-1$ are equal to $1/2$ is group invariant. More generally, the random walk on $\mathbb{Z}^2$ where the probabilities of jumping from $(m,n)$ to $(m+1,n)$ or $(m-1,n)$ or $(m,n+1)$ or $(m,n-1)$ are equal to $1/4$ is group invariant, as well as the natural generalization of this random walk to $\mathbb{Z}^d$.
In the context of the random walks on $\mathbb{Z}^d$ described in the previous example, it is known that there are two possible behaviors (with probability $1$): for $d=1$ or $2$, the random walk is recurrent in the sense that it will visit all states of $\mathbb{Z}^d$ infinitely often (with probability $1$), while for $d\geq 3$, the random walk is transient in the sense that $\|x_n\|\to\infty$ as $n\to\infty$ (with probability $1$).
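Here is a quick Monte Carlo sketch (mine; the number of steps is an arbitrary choice) contrasting the recurrent and transient regimes: it counts returns to the origin and the final distance from the origin for the simple random walk on $\mathbb{Z}^d$ with $d=1,2,3$.

```python
# Simple random walk on Z^d: count visits to the origin and the final distance.
# For d = 1, 2 the walk keeps returning; for d = 3 it escapes (Polya's theorem).
import random

def simulate(d, steps, seed):
    rng = random.Random(seed)
    pos = [0] * d
    returns = 0
    for _ in range(steps):
        i = rng.randrange(d)                  # pick a coordinate...
        pos[i] += rng.choice((-1, 1))         # ...and move +1 or -1 along it
        if all(x == 0 for x in pos):
            returns += 1
    return returns, sum(abs(x) for x in pos)

for d in (1, 2, 3):
    returns, distance = simulate(d, steps=200000, seed=1)
    print(f"d={d}: returns to origin = {returns}, final distance = {distance}")
```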
As we already hinted, the boundary will arise from the fine properties of the random walks in the transient case. Actually, as we will see later, the random walks on Abelian groups (such as the random walks on $\mathbb{Z}^d$ that we have just introduced) have a boring behavior at infinity (in the sense that the boundary can not be larger than one point [for abstract reasons to be detailed later]). In particular, let us change our example for the non-Abelian group $SL(2,\mathbb{R})$. In this situation, if we consider certain “nice” random walks, then we can think of them as occurring in the hyperbolic plane $\mathbb{H}$ (i.e., the symmetric space associated to $SL(2,\mathbb{R})$). In terms of the Poincaré disk model $\mathbb{D}$ of $\mathbb{H}$, the random walk is “comparable” to the Brownian motion on $\mathbb{D}$ (a continuous version of random walks) and the latter is known to be obtained from time-reparametrizations of pieces of the Brownian motion on $\mathbb{R}^2$ inside the Euclidean unit disk until they hit the boundary circle $\partial\mathbb{D}=S^1$, see the picture below.
From this picture, we see that the random walk / Brownian motion in $\mathbb{D}$ approaches exactly one point $\xi$ (with probability $1$) of the Euclidean circle $S^1=\partial\mathbb{D}$ (working as a circle at infinity for $\mathbb{D}$ equipped with the hyperbolic metric, of course). In particular, we are tempted to say that $S^1$ is the natural boundary obtained from random walks on (the symmetric space $\mathbb{H}$ of) $SL(2,\mathbb{R})$.
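The following sketch (my own; the generators $\begin{pmatrix}1&1\\0&1\end{pmatrix}$, $\begin{pmatrix}1&0\\1&1\end{pmatrix}$ and their inverses are an arbitrary choice, giving a walk supported on $SL(2,\mathbb{Z})\subset SL(2,\mathbb{R})$ which already exhibits the phenomenon) illustrates this numerically: the orbit of $i\in\mathbb{H}$ under the random products, viewed in the Poincaré disk, approaches the unit circle while its angular coordinate stabilizes, i.e., the walk selects one point of the circle at infinity.

```python
# Random walk on SL(2,Z) < SL(2,R) acting on the hyperbolic plane by Mobius
# transformations.  We track the orbit of i in the upper half-plane, map it to
# the Poincare disk by the Cayley transform, and watch it converge to a single
# boundary point.  Matrices are renormalized by a positive scalar (which does
# not change the Mobius action) to avoid overflow.
import cmath
import random

S, Sinv = ((1.0, 1.0), (0.0, 1.0)), ((1.0, -1.0), (0.0, 1.0))
T, Tinv = ((1.0, 0.0), (1.0, 1.0)), ((1.0, 0.0), (-1.0, 1.0))

def mat_mul(a, b):
    return ((a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
            (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]))

def rescale(g):
    m = max(abs(x) for row in g for x in row)
    return tuple(tuple(x / m for x in row) for row in g)

def mobius(g, z):
    (a, b), (c, d) = g
    return (a*z + b) / (c*z + d)

rng = random.Random(2024)
g = ((1.0, 0.0), (0.0, 1.0))
for n in range(1, 2001):
    g = rescale(mat_mul(g, rng.choice([S, Sinv, T, Tinv])))   # X_1 X_2 ... X_n
    if n % 400 == 0:
        w = (mobius(g, 1j) - 1j) / (mobius(g, 1j) + 1j)       # Cayley transform of g.i
        print(n, round(abs(w), 6), round(cmath.phase(w), 4))  # |w| -> 1, phase stabilizes
```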
The discussion of the previous paragraphs can be summarized as follows. We started with a group $G$ and we considered certain (“nice”) group invariant random walks $x_n$ on $G$. Then, we attached a boundary space $B$ to form a topological space $G\cup B$ in such a way that $x_n$ converges to a point in $B$ with probability $1$. Note that this permits us to define a continuous $G$-action on $B$ by setting $g\cdot\xi=\lim\limits_{n\to\infty}gx_n$ if $\xi=\lim\limits_{n\to\infty}x_n$. In the literature, one then says that $B$ is a $G$-space.
Actually, the boundary space $B$ comes with a natural probability measure $\nu$ corresponding to the distribution of $\lim\limits_{n\to\infty}x_n$. For our purposes, it is important to consider the pair $(B,\nu)$ (and not only the topological $G$-space $B$ alone) and, for this reason, what we will call Poisson boundary will be the pair $(B,\nu)$.
After this informal description of the features of the Poisson boundary, let us try to formalize this notion.
3. Formal description of the Poisson boundary
From now on, let us fix $G$ a locally compact group with a countable basis of open sets and $\mu$ a probability measure on $G$. The basic examples to keep in mind are:
- $G$ a group of matrices (e.g., $SL(n,\mathbb{R})$) and $\mu$ a probability measure on $G$ that is absolutely continuous with respect to Haar measure;
- $G$ a discrete group (e.g., $SL(n,\mathbb{Z})$) and $\mu$ a countable sequence of non-negative weights $\mu(g)\geq 0$, $g\in G$, such that $\sum_{g\in G}\mu(g)=1$.
We want to attach a $G$-space $B$ to $(G,\mu)$ in such a way that $G\cup B$ is still a $G$-space and a group invariant random walk with law $\mu$ in $G$ converges to a point in $B$ with probability $1$.
As is usual, one can get an idea of what kind of space $B$ should be by assuming that $B$ was already constructed and then by trying to extract several properties that $B$ must satisfy, hoping that this set of properties “determines” $B$.
Let us try this approach now.
3.1. Definition of a boundary of $(G,\mu)$
We start by describing the class of random walks that we want to look at in order to extract limits. Consider the product probability space
$$(\Omega,\mathbb{P}):=\prod_{n=1}^{\infty}(G,\mu).$$
We represent the points of $\Omega$ as $\omega=(\omega_1,\omega_2,\dots)$ and we observe that, by definition, the coordinate functions $X_n(\omega):=\omega_n$ are independent $G$-valued random variables with distribution $\mu$. We refer to $X_1,X_2,\dots$ as a stationary sequence of independent random variables with distribution $\mu$.
We now form the product random variables $X_1X_2\cdots X_n$. The sequence $(X_1X_2\cdots X_n)_{n\in\mathbb{N}}$ is a Markov process as the probability of a sequence of steps $X_1=g_1$, $X_1X_2=g_1g_2$, $\dots$, $X_1\cdots X_n=g_1\cdots g_n$ is the product of the probabilities of the individual transitions. Moreover, the transitions in the sequence $(X_1X_2\cdots X_n)_{n\in\mathbb{N}}$ are given by (right) group multiplication: $X_1\cdots X_{n+1}=(X_1\cdots X_n)\cdot X_{n+1}$. For this reason, we call the sequence of product random variables $(X_1X_2\cdots X_n)_{n\in\mathbb{N}}$ a random walk in $G$ with law $\mu$. By the way, note that this setting is entirely determined from the data of $G$ and $\mu$.
The scenario provided by $(X_1X_2\cdots X_n)_{n\in\mathbb{N}}$ is almost the one that we want to consider. Indeed, we said “almost” only because the group invariance is missing in the picture, that is, we want to consider all random walks $gX_1X_2\cdots X_n$ with $g\in G$ in order to get a setting that is invariant under (left) group multiplication.
In summary, given $g\in G$ and $\mu$, our group invariant random walk consists of the random variables $gX_1X_2\cdots X_n$ with $g\in G$ and $X_1,X_2,\dots$ as above.
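As a concrete (and entirely elementary) sketch of this construction, the snippet below fixes a finitely supported law $\mu$ on a few matrices of $SL(2,\mathbb{Z})$ (the support and the weights are arbitrary choices of mine), samples a stationary sequence $X_1,X_2,\dots$ and forms the group invariant random walk $gX_1X_2\cdots X_n$ by right multiplication.

```python
# Sampling the random walk g X_1 X_2 ... X_n determined by the data (G, mu):
# here mu is a finitely supported probability measure on 2x2 integer matrices.
import random

def mat_mul(a, b):
    return ((a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
            (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]))

# the law mu: weights on a finite subset of SL(2, Z)
support = [((1, 1), (0, 1)), ((1, -1), (0, 1)), ((1, 0), (1, 1)), ((1, 0), (-1, 1))]
weights = [0.25, 0.25, 0.25, 0.25]

def random_walk(g, n, rng):
    """Trajectory g, gX_1, gX_1X_2, ..., gX_1...X_n of the group invariant walk."""
    path = [g]
    for _ in range(n):
        x = rng.choices(support, weights)[0]   # X_i has distribution mu, independent
        path.append(mat_mul(path[-1], x))      # transition by right multiplication
    return path

rng = random.Random(7)
print(random_walk(((1, 0), (0, 1)), 4, rng))
```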
Next, let us suppose that we have a $G$-space $B$ such that $G\cup B$ is a $G$-space and the group invariant random walk $gX_1\cdots X_n$ converges to a point in $B$ with probability $1$. For later use, let us observe that if the random variables $X_1\cdots X_n$ converge to a $B$-valued random variable $Z$, then $gX_1\cdots X_n$ converges to $gZ$. In particular, for each integer $n\geq 0$, the sequence $X_{n+1}X_{n+2}\cdots X_m$ converges (as $m\to\infty$) to a $B$-valued random variable $Z_n$ (so that $Z_0=\lim\limits_{m\to\infty}X_1X_2\cdots X_m$) with the following properties:
- (i) $Z_n=X_{n+1}Z_{n+1}$ (by the definition of $Z_n$ and the fact that $G\cup B$ is a $G$-space);
- (ii) $Z_n$ is a function of $X_{n+1},X_{n+2},\dots$ (by definition);
- (iii) all $Z_n$’s have the same distribution (by the group invariance of the random walk);
- (iv) $Z_n$ is independent of $X_1,\dots,X_n$ (by item (ii) and the independence of the $X_m$’s);
In the literature, a sequence of random variables $(Z_n)_{n\in\mathbb{N}}$ on a $G$-space $B$ satisfying items (i), (ii), (iii) and (iv) above is called a $\mu$-process.
In this language, we just saw that any candidate for a (Poisson) boundary of $(G,\mu)$ must be a $G$-space equipped with a $\mu$-process.
Let us now investigate more closely the properties of the $\mu$-process $(Z_n)_{n\in\mathbb{N}}$ associated to a “(Poisson) boundary candidate” $B$.
Denote by $\nu$ the distribution of an arbitrary $Z_n$ (by item (iii) they all have the same distribution). In particular, the relation $Z_0=X_1Z_1$ (from item (i)) implies that the random variable $X_1Z_1$ has distribution $\nu$.
On the other hand, given a $G$-space $B$ and two independent random variables $X$ and $Z$ (taking values in $G$ and $B$, resp.) with distributions $\mu$ and $\nu$, it is not hard to check that the distribution of the $B$-valued random variable $XZ$ is the convolution measure $\mu*\nu$ defined by:
$$\mu*\nu(E):=\int_G \nu(g^{-1}E)\,d\mu(g).$$
Hence, since $Z_0=X_1Z_1$ has distribution $\nu$, we deduce from the equality above that $\mu*\nu=\nu$, or, in probabilistic nomenclature, $\nu$ is a $\mu$-stationary measure.
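To see the stationarity equation $\mu*\nu=\nu$ “in action”, here is a numerical sketch (my own; the four matrices, the sample sizes and the truncation at $60$ steps are ad hoc choices) for a random walk on $SL(2,\mathbb{R})$ acting on the projective line: we approximate $\nu$ by the empirical distribution of the limit directions $\lim\limits_{n\to\infty}X_1\cdots X_n\cdot\xi$ and check that applying one extra $\mu$-distributed step leaves this distribution essentially unchanged.

```python
# Approximate mu-stationary measure on the projective line (angles mod pi) for
# the uniform measure mu on four matrices of SL(2,R), and a numerical check of
# mu * nu = nu: if xi ~ nu and g ~ mu are independent, then g.xi ~ nu.
import math
import random

support = [((1.0, 1.0), (0.0, 1.0)), ((1.0, -1.0), (0.0, 1.0)),
           ((1.0, 0.0), (1.0, 1.0)), ((1.0, 0.0), (-1.0, 1.0))]

def apply(g, v):
    (a, b), (c, d) = g
    x, y = v
    nx, ny = a * x + b * y, c * x + d * y
    norm = math.hypot(nx, ny)
    return (nx / norm, ny / norm)

def direction(v):
    return math.atan2(v[1], v[0]) % math.pi          # point of the projective line

def limit_direction(rng, n=60):
    """Approximate lim X_1...X_n . v for one sample path of the walk."""
    v = (1.0, 0.0)
    gs = [rng.choice(support) for _ in range(n)]
    for g in reversed(gs):                           # rightmost factor acts first
        v = apply(g, v)
    return direction(v)

def histogram(xs, bins=20):
    counts = [0] * bins
    for x in xs:
        counts[min(int(x / math.pi * bins), bins - 1)] += 1
    return [c / len(xs) for c in counts]

rng = random.Random(1)
nu_sample = [limit_direction(rng) for _ in range(20000)]                    # ~ nu
pushed = [direction(apply(rng.choice(support), (math.cos(t), math.sin(t))))
          for t in nu_sample]                                               # ~ mu * nu
diffs = [abs(a - b) for a, b in zip(histogram(nu_sample), histogram(pushed))]
print(max(diffs))   # small (of the order of the sampling error): mu * nu = nu
```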
In principle, it seems that the $\mu$-process $(Z_n)_{n\in\mathbb{N}}$ is more important (in the study of Poisson boundaries) than the stationary measure $\nu$. However, the following proposition shows that one can recover the topological structure of $B$ from the knowledge of $\nu$ thanks to the martingale convergence theorem.
Proposition 5 If $(Z_n)_{n\in\mathbb{N}}$ is a $\mu$-process with distribution $\nu$, then $(X_1X_2\cdots X_n)_*\nu\to\delta_{Z_0}$ with probability $1$.
Here, $(X_1X_2\cdots X_n)_*\nu$ denotes the push-forward of $\nu$ by $X_1X_2\cdots X_n$ and the convergence of the measures is in the weak-$*$ topology.
Proof: Let $f$ be a test (bounded, continuous) function on the $G$-space $B$. By definition of weak-$*$ convergence, our task consists in showing that
$$\int_B f\,d\big((X_1\cdots X_n)_*\nu\big)\to f(Z_0)$$
as $n\to\infty$ with probability $1$.
We claim that this is a consequence of the assertion of the martingale convergence theorem that the integrable random variable $f(Z_0)$ satisfies
$$\mathbb{E}\big(f(Z_0)\,\big|\,X_1,\dots,X_n\big)\to f(Z_0)$$
as $n\to\infty$ with probability $1$, where $\mathbb{E}(\,\cdot\,|\,X_1,\dots,X_n)$ denotes the conditional expectation.
Indeed, let us recall that $Z_0=X_1\cdots X_nZ_n$ (cf. item (i) above) where $Z_n$ and $X_1,\dots,X_n$ are independent (cf. item (iv) above), and $Z_n$ has distribution $\nu$ (cf. item (iii) above). By plugging this into the definition of conditional expectation, we obtain the equality
$$\mathbb{E}\big(f(Z_0)\,\big|\,X_1,\dots,X_n\big)=\int_B f(X_1\cdots X_n\xi)\,d\nu(\xi)=\int_B f\,d\big((X_1\cdots X_n)_*\nu\big),$$
so that the desired convergence follows from the martingale convergence theorem. $\Box$
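The conclusion of Proposition 5 can also be watched numerically. In the sketch below (again my own illustration, reusing the matrices of the previous examples), we push forward a fixed finite sample of directions on the projective line by one realization of $X_1X_2\cdots X_n$; the image points collapse onto a single (random) point, i.e., the push-forwards approach a Dirac mass.

```python
# Push-forward of a spread-out sample of directions by the random products
# X_1 X_2 ... X_n.  The concentration is measured by the modulus of the mean of
# exp(2 i theta) over the image points (it tends to 1 as they collapse).
import cmath
import math
import random

support = [((1.0, 1.0), (0.0, 1.0)), ((1.0, -1.0), (0.0, 1.0)),
           ((1.0, 0.0), (1.0, 1.0)), ((1.0, 0.0), (-1.0, 1.0))]

def apply(g, v):
    (a, b), (c, d) = g
    x, y = v
    nx, ny = a * x + b * y, c * x + d * y
    norm = math.hypot(nx, ny)
    return (nx / norm, ny / norm)

def direction(v):
    return math.atan2(v[1], v[0]) % math.pi

rng = random.Random(3)
sample = [(math.cos(k * math.pi / 50), math.sin(k * math.pi / 50)) for k in range(50)]
walk = []                                   # the increments X_1, X_2, ...
for n in range(1, 31):
    walk.append(rng.choice(support))
    images = []
    for v in sample:
        w = v
        for g in reversed(walk):            # apply X_1 ... X_n (rightmost acts first)
            w = apply(g, w)
        images.append(direction(w))
    if n % 10 == 0:
        concentration = abs(sum(cmath.exp(2j * t) for t in images) / len(images))
        print(n, round(concentration, 4))   # -> 1: the push-forward approaches a Dirac mass
```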
In other words, this proposition allows us to recover the limit relation $X_1\cdots X_n\to Z_0$, i.e., the topology of $G\cup B$, from the $\mu$-stationary measure $\nu$ by identifying the points $\xi$ of $B$ with Dirac masses $\delta_\xi$ and by analyzing the convergence of the sequence of push-forwards $(X_1\cdots X_n)_*\nu$ to Dirac masses $\delta_{Z_0}$. This observation motivates the following definition. Given $B$ a $G$-space equipped with a probability measure $\nu$, the measure topology of $G\cup B$ with respect to $\nu$ is the weakest topology such that the natural inclusions $G\to G\cup B$ and $B\to G\cup B$ are homeomorphisms into their images, and the map from $G\cup B$ to the space $\mathcal{P}(B)$ of probability measures on $B$ given by $g\mapsto g_*\nu$ for $g\in G$ and $\xi\mapsto\delta_\xi$ for $\xi\in B$ is continuous.
At this point, our discussion so far can be summarized as follows. The investigation of the properties of a potential candidate to (Poisson) boundary led us to the notion of $\mu$-process $(Z_n)_{n\in\mathbb{N}}$, and, in some sense, $\mu$-processes are the right objects to look at because Proposition 5 says that a $\mu$-process on $B$ allows us to think of $B$ as a boundary after endowing $G\cup B$ with the measure topology with respect to the distribution of the $\mu$-process.
For this reason, we introduce the following definition:
Definition 6 A $G$-space $B$ equipped with a $\mu$-stationary measure $\nu$ is a boundary of $(G,\mu)$ if $\nu$ is the distribution of a $\mu$-process on $B$.
Given this scenario, it is natural to ask whether a given $G$-space $B$ admits some $\mu$-process. The next proposition says that we already know the answer to this question.
Proposition 7 Let $\nu$ be a $\mu$-stationary measure on a $G$-space $B$ (i.e., $\mu*\nu=\nu$). Then, there exists a $\mu$-process on $B$ with distribution $\nu$ if and only if the sequence $(X_1\cdots X_n)_*\nu$ converges to Dirac masses on $B$ with probability $1$.
Proof: The direct implication was already shown in Proposition 5. For the converse implication, let us set $Z_n$ to be the point of $B$ such that $(X_{n+1}\cdots X_m)_*\nu\to\delta_{Z_n}$ as $m\to\infty$. The sequence of random variables $(Z_n)$ satisfies items (i) (by continuity of the push-forward operation under continuous transformations), (ii) (by definition) and (iv) (by item (ii) and independence of the $X_m$’s). Also, item (iii) follows from the fact that the sequences $X_1,X_2,\dots$ and $X_{n+1},X_{n+2},\dots$ have the same probabilistic behavior.
In particular, it remains only to check that the distribution of $Z_0$ is $\nu$. For this sake, we take a test function $f$ and we notice that
$$\mathbb{E}(f(Z_0))=\lim_{n\to\infty}\mathbb{E}\left(\int_B f\,d\big((X_1\cdots X_n)_*\nu\big)\right)=\lim_{n\to\infty}\int_B f\,d\big(\mu^{(n)}*\nu\big).$$
Here, $\mu^{(n)}:=\mu*\dots*\mu$ ($n$ times) and, in the last equality, we used the fact that the distribution of $XZ$ is $\mu*\nu$ if $X$ and $Z$ are independent with distributions $\mu$ and $\nu$. Now, since $\nu$ is $\mu$-stationary, we deduce that $\mu^{(n)}*\nu=\nu$ for all $n$. Therefore, we showed that $\mathbb{E}(f(Z_0))=\int_B f\,d\nu$, i.e., $Z_0$ has distribution $\nu$. $\Box$
3.2. Definition of the Poisson boundary of $(G,\mu)$
For our purposes, we want the Poisson boundary of $(G,\mu)$ to be a boundary that is as “large” as possible: intuitively, the large boundary sees the fine properties of $(G,\mu)$, and, in particular, we can expect to distinguish between several groups (such as the lattices of $SL(2,\mathbb{R})$ and $SL(n,\mathbb{R})$, $n\geq 3$) by looking at their largest boundaries.
3.2.1. Equivariant images
In order to “compare” boundaries, we consider equivariant maps between them: given two boundaries $(B_1,\nu_1)$ and $(B_2,\nu_2)$ of $(G,\mu)$, we say that $(B_2,\nu_2)$ is an equivariant image of $(B_1,\nu_1)$ if there is an equivariant map $\pi:B_1\to B_2$ (i.e., a map such that $\pi(g\xi)=g\pi(\xi)$ for all $g\in G$ and $\xi\in B_1$) such that $\pi_*\nu_1=\nu_2$.
Remark 2 This definition makes sense as the notion of equivariant image preserves boundaries: in fact, if $(Z_n)_{n\in\mathbb{N}}$ is a $\mu$-process on $B_1$, then $(\pi(Z_n))_{n\in\mathbb{N}}$ is a $\mu$-process on $B_2$.
In the light of this definition, it is tempting to say that the Poisson boundary of $(G,\mu)$ is the “largest” boundary $(B,\nu)$ in the sense that all other boundaries $(B',\nu')$ can be obtained as equivariant images of $(B,\nu)$.
As it turns out, this is an almost complete description of the Poisson boundary. Indeed, before giving the complete definition, we will need to discuss $\mu$-harmonic functions on $G$ because, as we will see, they are important objects in the construction of Poisson boundaries.
Here, the basic idea is that we can “recover” a space from the knowledge of the functions on it. In the particular case that $B$ is a $G$-space such that $G\cup B$ is a $G$-space and $X_1\cdots X_n$ converges with probability $1$ (i.e., $B$ is a candidate to be a boundary), a continuous function $f$ on $G$ extending to a continuous function on $G\cup B$ has the property that $f(gX_1\cdots X_n)$ converges with probability $1$ for every $g\in G$. Thus, we can “recover” information on $B$ from the class $\mathcal{L}$ of functions $f$ on $G$ with the property that $f(gX_1\cdots X_n)$ converges with probability $1$ for every $g\in G$ (because these functions “induce” functions on $B$). Of course, the main point of the class $\mathcal{L}$ is that it is canonically attached to $(G,\mu)$ and hence it is natural to use $\mathcal{L}$ to produce reference (Poisson) boundaries. In this setting, the $\mu$-harmonic functions that we mentioned above are an interesting subclass of the class $\mathcal{L}$.
3.2.2. $\mu$-harmonic functions on $G$
A $\mu$-harmonic function is a function satisfying the following analog of the mean value property for classical harmonic functions:
Definition 8 A bounded measurable function $h$ on $G$ is $\mu$-harmonic if
$$h(g)=\int_G h(gg')\,d\mu(g')$$
for all $g\in G$. We denote by $\mathcal{H}$ the class of $\mu$-harmonic functions.
Remark 3 Since the stationary sequence of independent random variables $X_1,X_2,\dots$ has distribution $\mu$, we have that any $h\in\mathcal{H}$ satisfies $h(g)=\mathbb{E}(h(gX_n))$ for each $n\in\mathbb{N}$.
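For a concrete feeling of Definition 8, here is a Monte Carlo sketch (my own; the group, the measure and the estimator are my choices, and the probabilities are only estimated, not computed exactly): on the free group $F_2$ with $\mu$ uniform on $\{a,a^{-1},b,b^{-1}\}$, the function $h(g)=$ (probability that the limit infinite word of the walk started at $g$ begins with the letter $a$) is $\mu$-harmonic, and the code checks the mean value property $h(g)=\sum_{s}\mu(s)h(gs)$ numerically.

```python
# Monte Carlo check of the mean value property for a bounded harmonic function
# on F_2 (simple random walk): h(g) = average of h(gs) over the four steps s.
import random

INVERSE = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}
GENS = 'aAbB'

def reduce_append(word, s):
    return word[:-1] if word and word[-1] == INVERSE[s] else word + s

def h(word, rng, trials=20000, horizon=60):
    """Estimate P(limit word starts with 'a') for the walk started at `word`."""
    hits = 0
    for _ in range(trials):
        w = word
        for _ in range(horizon):       # long enough for the first letter to be stable
            w = reduce_append(w, rng.choice(GENS))
        hits += 1 if (w and w[0] == 'a') else 0
    return hits / trials

rng = random.Random(0)
for g in ['', 'a', 'bA']:
    lhs = h(g, rng)
    rhs = sum(h(reduce_append(g, s), rng) for s in GENS) / 4.0
    print(repr(g), round(lhs, 3), round(rhs, 3))   # the two columns nearly agree
```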
The following proposition says that the class $\mathcal{H}$ of $\mu$-harmonic functions is a subclass of the class $\mathcal{L}$ of bounded measurable functions $f$ such that $f(gX_1\cdots X_n)$ converges with probability $1$ (for each $g\in G$).
Proposition 9 For each $g\in G$, let $h_n:=h(gX_1\cdots X_n)$ and $h_0:=h(g)$ where $h\in\mathcal{H}$ (i.e., $h$ is $\mu$-harmonic). Then, $h_n$ converges with probability $1$ and the limit satisfies
$$h(g)=\mathbb{E}\Big(\lim_{n\to\infty}h(gX_1\cdots X_n)\Big).$$
Proof: Similarly to Proposition 5, this proposition is a consequence of the martingale convergence theorem. More concretely, the scheme of the proof is the following. We will show below that $h_n$ converges in $L^2(\Omega,\mathbb{P})$ (where $(\Omega,\mathbb{P})=\prod\limits_{n=1}^{\infty}(G,\mu)$) to a random variable $h_\infty$, and $\mathbb{E}(h_\infty\,|\,X_1,\dots,X_n)=h_n$ for each $n$. In particular, since the martingale convergence theorem ensures that $\mathbb{E}(h_\infty\,|\,X_1,\dots,X_n)$ converges to $h_\infty$ with probability $1$, the first assertion of the proposition will then follow.
Let us now show that $h_n$ converges in $L^2$. Set $d_0:=h_0$ and $d_n:=h_n-h_{n-1}$ for $n\geq 1$. We claim that the $d_n$’s are mutually $L^2$-orthogonal. Indeed, by Remark 3, $\mathbb{E}(h_{n+1}\,|\,X_1,\dots,X_n)=h_n$, so that $\mathbb{E}(d_{n+1}\,|\,X_1,\dots,X_n)=0$ for each $n$.
Now, let us use this information to compute the $L^2$-inner product $\mathbb{E}(d_md_n)$ between $d_m$ and $d_n$ for $m<n$. By letting the variables $X_1,\dots,X_{n-1}$ fixed while allowing $X_n$ to vary, we see that $d_m$ is fixed and only $d_n$ varies. In particular, by performing first the integration with respect to $X_n$ in the integral defining $\mathbb{E}(d_md_n)$, we deduce that $\mathbb{E}(d_md_n)$ is a multiple of $\mathbb{E}(d_n\,|\,X_1,\dots,X_{n-1})=0$ (by the previous paragraph), i.e., the random variables $d_n$ are mutually $L^2$-orthogonal.
From this $L^2$-orthogonality, we get
$$\mathbb{E}(h_N^2)=\sum_{n=0}^{N}\mathbb{E}(d_n^2)\leq\|h\|_{\infty}^2\quad\textrm{for all }N\in\mathbb{N}.$$
In particular, $h_N=\sum\limits_{n=0}^{N}d_n$ has a limit in $L^2$ that we denote by $h_\infty$.
As we already mentioned, the first assertion of the proposition will follow (from the martingale convergence theorem) once we show that $\mathbb{E}(h_\infty\,|\,X_1,\dots,X_n)=h_n$ for each $n$. In this direction, we observe that $\mathbb{E}(h_m\,|\,X_1,\dots,X_n)=h_n$ for all $m\geq n$ (cf. Remark 3). By putting this together with the $L^2$-orthogonality of the $d_n$’s (and the fact that $h_m=\sum\limits_{j=0}^{m}d_j$ by definition), we obtain that $\mathbb{E}(h_\infty\,|\,X_1,\dots,X_n)=h_n$ for all $n$. Therefore, by the $L^2$-convergence of $h_m$ to $h_\infty$, we conclude that
$$h(g)=h_0=\lim_{m\to\infty}\mathbb{E}(h_m)=\mathbb{E}(h_\infty)=\mathbb{E}\Big(\lim_{n\to\infty}h(gX_1\cdots X_n)\Big),$$
so that the proof of the proposition is complete. $\Box$
For later use, we will note that $\mu$-harmonic functions can be produced from prescribed boundary values (similarly to Poisson’s formula for classical harmonic functions):
Proposition 10 Let $\nu$ be a $\mu$-stationary measure on a $G$-space $B$. Then, given $\hat{h}$ a bounded measurable function on $B$, the function
$$h(g):=\int_B \hat{h}(g\xi)\,d\nu(\xi)$$
is $\mu$-harmonic.
Proof: By definition, given $g\in G$,
$$\int_G h(gg')\,d\mu(g')=\int_G\int_B \hat{h}(gg'\xi)\,d\nu(\xi)\,d\mu(g').$$
By performing the change of variables $\eta=g'\xi$ and using the definition of the convolution measure $\mu*\nu$, we see that
$$\int_G h(gg')\,d\mu(g')=\int_B \hat{h}(g\eta)\,d(\mu*\nu)(\eta).$$
Since $\nu$ is $\mu$-stationary, i.e., $\mu*\nu=\nu$, we deduce that
$$\int_G h(gg')\,d\mu(g')=\int_B \hat{h}(g\eta)\,d\nu(\eta)=h(g),$$
that is, $h$ is $\mu$-harmonic. $\Box$
Corollary 11 Let $B$ be a compact $G$-space equipped with a $\mu$-stationary probability measure $\nu$. Then, the sequence of probability measures $(X_1\cdots X_n)_*\nu$ converges in the weak-$*$ topology with probability $1$.
Proof: We want to show that the sequence $(X_1\cdots X_n)_*\nu$ converges in the weak-$*$ topology with probability $1$. For this sake, it is sufficient to check that, with probability $1$, the integrals $\int_B f\,d\big((X_1\cdots X_n)_*\nu\big)$ converge for all continuous functions $f$ on $B$.
Given a continuous function $f$ on $B$, by Proposition 10 we have that
$$\int_B f\,d\big((X_1\cdots X_n)_*\nu\big)=\int_B f(X_1\cdots X_n\xi)\,d\nu(\xi)=h(X_1\cdots X_n),$$
where the function $h(g):=\int_B f(g\xi)\,d\nu(\xi)$ is $\mu$-harmonic.
It follows from Proposition 9 that $h(X_1\cdots X_n)$ converges with probability $1$ and this almost completes the proof of the corollary.
Indeed, we showed that, for each continuous function $f$ on $B$, there exists a set $\Omega_f\subset\Omega$ with full probability such that the integrals $\int_B f\,d\big((X_1\cdots X_n)_*\nu\big)$ converge whenever $\omega\in\Omega_f$. However, the quantifiers in the last phrase do not correspond to the statement in the corollary as the latter asks for a set $\Omega^*$ of full probability working for all continuous functions $f$ on $B$ at once! Fortunately, this little technical problem is not hard to overcome: since $B$ is a compact (Hausdorff) space, the space of continuous functions on $B$ has a countable dense subset $\{f_1,f_2,f_3,\dots\}$; in particular, by setting
$$\Omega^*:=\bigcap_{k=1}^{\infty}\Omega_{f_k}$$
we get a full probability set such that, for all continuous $f$, the integrals $\int_B f\,d\big((X_1\cdots X_n)_*\nu\big)$ converge whenever $\omega\in\Omega^*$. $\Box$
At this stage, we are ready to give the definition of the Poisson boundary of $(G,\mu)$.
3.2.3. Definition of the Poisson boundary
We say that a boundary $(B,\nu)$ of $(G,\mu)$ is the Poisson boundary if:
- (a) $(B,\nu)$ is maximal: every boundary $(B',\nu')$ of $(G,\mu)$ is an equivariant image of $(B,\nu)$.
- (b) Poisson’s formula induces an isomorphism: if $h$ is a $\mu$-harmonic function on $G$, there exists a bounded measurable function $\hat{h}$ on $B$ such that
$$h(g)=\int_B \hat{h}(g\xi)\,d\nu(\xi);$$
moreover, $\hat{h}$ is unique modulo $\nu$-null functions, i.e., measurable functions vanishing $\nu$-almost everywhere.
Completing our discussion so far, we will show in the next section that the Poisson boundary always exists.
4. Construction of the Poisson boundary
The main result of this section is:
Theorem 12 (Furstenberg (1963)) Let $G$ be a locally compact group with a countable basis of open sets and let $\mu$ be a probability measure on $G$. Then, $(G,\mu)$ admits a Poisson boundary $(B,\nu)$.
Proof: The basic strategy is to construct $(B,\nu)$ using the class $\mathcal{H}$ of $\mu$-harmonic functions.
However, we will not work exclusively with $\mathcal{H}$ and, in fact, we will also use the slightly larger class $\mathcal{L}$ of bounded measurable functions $f$ on $G$ such that $\lim\limits_{n\to\infty}f(gX_1\cdots X_n)$ exists with probability $1$ (for each $g\in G$). Indeed, from the technical point of view, the main advantage of $\mathcal{L}$ over $\mathcal{H}$ is the fact that $\mathcal{L}$ is a Banach algebra (with respect to the sup-norm) while $\mathcal{H}$ is not.
Nevertheless, $\mathcal{L}$ is not “very different” from $\mathcal{H}$. More precisely, let $\mathcal{L}_0$ be the ideal of $\mathcal{L}$ consisting of the functions $f$ such that $f(X_1\cdots X_n)$ converges to zero with probability $1$.
Lemma 13 One has the decomposition $\mathcal{L}=\mathcal{H}\oplus\mathcal{L}_0$.
For the proof of this lemma (and also for later use), we will need the auxiliary class $\hat{\mathcal{L}}$ of limit functions $\hat{f}:=\lim\limits_{n\to\infty}f(X_1\cdots X_n)$, $f\in\mathcal{L}$, corresponding to the “boundary values” of functions in $\mathcal{L}$. Note that $\hat{\mathcal{L}}$ is also a Banach algebra and it is naturally identified with the quotient $\mathcal{L}/\mathcal{L}_0$.
Proof: Given $f\in\mathcal{L}$, we can produce a $\mu$-harmonic function $h$ by letting $\hat{f}:=\lim\limits_{n\to\infty}f(X_1\cdots X_n)$ and $h(g):=\mathbb{E}\big(\lim\limits_{n\to\infty}f(gX_1\cdots X_n)\big)$. Indeed, the $\mu$-harmonicity of $h$ can be checked as follows. Let $X_0$ be a random variable independent of the random variables $X_n$’s on $G$ with distribution $\mu$. Consider the expression $\mathbb{E}\big(\lim\limits_{n\to\infty}f(gX_0X_1\cdots X_n)\big)$. Since the sequences $X_0,X_1,X_2,\dots$ and $X_1,X_2,\dots$ are probabilistically equivalent, we have that
$$h(g)=\mathbb{E}\Big(\lim_{n\to\infty}f(gX_0X_1\cdots X_n)\Big)=\int_G \mathbb{E}\Big(\lim_{n\to\infty}f(gg'X_1\cdots X_n)\Big)\,d\mu(g')=\int_G h(gg')\,d\mu(g').$$
Let us now show that $f-h\in\mathcal{L}_0$ (so that $f=h+(f-h)$ with $h\in\mathcal{H}$ and $f-h\in\mathcal{L}_0$). By repeating the “shift of variables” argument of the previous paragraph, we see that
$$h(X_1\cdots X_n)=\mathbb{E}\big(\hat{f}\,\big|\,X_1,\dots,X_n\big).$$
On the other hand, the martingale convergence theorem says that $\mathbb{E}(\hat{f}\,|\,X_1,\dots,X_n)\to\hat{f}$ (with probability $1$), so that we deduce that $f(X_1\cdots X_n)-h(X_1\cdots X_n)\to 0$ (with probability $1$). In particular, this means (by definition) that $f-h\in\mathcal{L}_0$.
Completing the proof of the lemma, it remains to verify that $\mathcal{H}\cap\mathcal{L}_0=\{0\}$. This fact follows immediately from Proposition 9 saying that a $\mu$-harmonic function can be recovered from its boundary values. $\Box$
From this lemma, we have that $\mathcal{H}\simeq\mathcal{L}/\mathcal{L}_0\simeq\hat{\mathcal{L}}$. Now, note that $\hat{\mathcal{L}}$ is a commutative $C^*$-algebra, so that it has a representation as the space $C(\Pi)$ of continuous functions on a compact (Hausdorff) space $\Pi$ (called the spectrum of $\hat{\mathcal{L}}$) by Gelfand’s representation theorem.
From this, we deduce two consequences: firstly, we have a correspondence between $\mu$-harmonic functions $h$ on $G$ and continuous functions $\hat{h}$ on $\Pi$; secondly, the “evaluation at identity” functional associating to each $\mu$-harmonic function $h$ its value at the identity, i.e., $h(e)$, is a linear functional that is non-negative (that is, it takes non-negative values on non-negative elements of $C(\Pi)$) and it takes the constant function $1$ to the real value $1$, so that, by Riesz representation theorem, there exists a unique probability measure $\nu$ on $\Pi$ such that
$$h(e)=\int_{\Pi}\hat{h}\,d\nu.$$
Note that $\Pi$ is a $G$-space as the natural action of $G$ on functions via $(g\cdot f)(x):=f(gx)$ for each $g\in G$ sends $\hat{\mathcal{L}}$ into itself (and hence induces an action on the spectrum $\Pi$). Also, if a $\mu$-harmonic function $h$ corresponds to $\hat{h}\in C(\Pi)$, then $g\cdot h$ corresponds to $g\cdot\hat{h}$. In particular, the formula above for $h(e)$ gives the following Poisson formula:
$$h(g)=\int_{\Pi}\hat{h}(g\xi)\,d\nu(\xi).\qquad (1)$$
A pleasant point about the construction of $(\Pi,\nu)$ is that it is canonical (i.e., it leads to a unique object) and, in particular, it is tempting to use $(\Pi,\nu)$ as the Poisson boundary.
However, this does not work because $\Pi$ is too “large”, i.e., it might not have a countable basis of open sets. So, the notions of convergence of sequences of points or measures are not “natural” for a technical reason that we already encountered at the end of the proof of Corollary 11. Namely, when trying to prove that a sequence of measures $\nu_n$ (depending on $\omega\in\Omega$) on $\Pi$ converges with probability $1$, we will show that for each continuous function $f$ there exists a full measure set $\Omega_f$ such that the integrals $\int f\,d\nu_n$ converge for any element of $\Omega_f$, but if $\Pi$ has no countable basis, we can’t select a countable dense set $\{f_1,f_2,f_3,\dots\}$ of continuous functions and, hence, we can’t conclude that the integrals $\int f\,d\nu_n$ converge with probability $1$ via the usual argument of taking $\Omega^*=\bigcap\limits_{k=1}^{\infty}\Omega_{f_k}$.
To overcome this difficulty, we observe that $(\Omega,\mathbb{P})$ is separable, so that the spaces $L^p(\Omega,\mathbb{P})$ are separable for $1\leq p<\infty$. In particular, one can find a subalgebra $\mathcal{A}$ of $\hat{\mathcal{L}}$ possessing a countable dense set which is also dense in $\hat{\mathcal{L}}$ for all $L^p$-norms, $1\leq p<\infty$. Furthermore, since $G$ has a countable dense subset, we can choose the subalgebra $\mathcal{A}$ to be $G$-invariant (by “forcing” invariance with respect to every element of this countable dense subset). Using $\mathcal{A}$ one can define a quotient space $B$ of $\Pi$ such that $\mathcal{A}$ is isomorphic to the space of continuous functions $C(B)$. Note that $B$ comes equipped with a probability measure (still denoted $\nu$) obtained by push-forward of the measure on $\Pi$ with respect to the projection $\Pi\to B$. Moreover, since $L^1(\Pi,\nu)$ is the completion of $C(\Pi)$ with respect to the $L^1$-norm and $\mathcal{A}\simeq C(B)$ is dense in $L^1(\Pi,\nu)$, we see that the spaces $L^1(\Pi,\nu)$ and $L^1(B,\nu)$ (and hence the corresponding spaces of bounded measurable functions modulo null functions) are the same. In other words, the passage from $\Pi$ to $B$ makes the class of continuous functions smaller, but the class of bounded measurable functions stays the same.
We claim that, by replacing $(\Pi,\nu)$ by $(B,\nu)$, the technical difficulty mentioned above disappears and we get the Poisson boundary of $(G,\mu)$.
Let us start the proof of this claim by observing that the Poisson formula, i.e., item (b) in the definition of the Poisson boundary, follows immediately from the definition of $B$ and the corresponding Poisson formula (1) for $\Pi$. In particular, the Poisson formula for $B$ becomes
$$h(g)=\int_B \hat{h}(g\xi)\,d\nu(\xi)\qquad (2)$$
where $\hat{h}$ is a bounded measurable function on $B$ (corresponding to the $\mu$-harmonic function $h$).
Remark 4 In item (b) of the definition of the Poisson boundary, one also requires the uniqueness of $\hat{h}$ modulo null functions. In fact, this is not hard to show, but we will omit the details.
Next, let us check that $\nu$ is a $\mu$-stationary measure. Note that, by definition, given a continuous function $\hat{h}$ on $B$, the function $h(g)=\int_B \hat{h}(g\xi)\,d\nu(\xi)$ is $\mu$-harmonic, i.e., $h(e)=\int_G h(g')\,d\mu(g')$ (as $\nu$ is the measure representing the linear functional $h\mapsto h(e)$). In particular,
$$\int_B \hat{h}\,d\nu=h(e)=\int_G\int_B \hat{h}(g'\xi)\,d\nu(\xi)\,d\mu(g')=\int_B \hat{h}\,d(\mu*\nu),$$
that is, $\nu$ and $\mu*\nu$ define the same linear functional from $C(B)$ to $\mathbb{R}$. Therefore, $\mu*\nu=\nu$, i.e., $\nu$ is $\mu$-stationary.
Now, let us show that $(B,\nu)$ is a boundary of $(G,\mu)$. By Proposition 7, our task consists in proving that $(X_1\cdots X_n)_*\nu$ converges to a Dirac mass with probability $1$. In this direction, note that, by Corollary 11 (applied to $\Pi$ and then “transferred” to $B$), we know that $(X_1\cdots X_n)_*\nu$ converges to some probability measure $\nu_\omega$ (with probability $1$). So, it remains to show that $\nu_\omega$ is a Dirac mass with probability $1$. For this sake, let us fix $\hat{h}\in C(B)$ a test function and let us denote by $h$ the corresponding $\mu$-harmonic function. From the Poisson formula (2), we get that
$$\int_B \hat{h}\,d\nu_\omega=\lim_{n\to\infty}\int_B \hat{h}(X_1\cdots X_n\xi)\,d\nu(\xi)=\lim_{n\to\infty}h(X_1\cdots X_n).$$
On the other hand, the isomorphism between $C(B)$ and the corresponding limit functions is an algebra isomorphism. In particular,
$$\int_B \hat{h}^2\,d\nu_\omega=\lim_{n\to\infty}h_{(2)}(X_1\cdots X_n)=\Big(\lim_{n\to\infty}h(X_1\cdots X_n)\Big)^2=\Big(\int_B \hat{h}\,d\nu_\omega\Big)^2$$
(where $h_{(2)}$ denotes the $\mu$-harmonic function corresponding to $\hat{h}^2$), that is, we have equality in Cauchy-Schwarz inequality. It follows that $\hat{h}$ is $\nu_\omega$-almost everywhere constant. Since this occurs for all continuous functions $\hat{h}$, we deduce that $\nu_\omega$ is a Dirac mass.
Finally, we complete the sketch of proof of the theorem by showing that $(B,\nu)$ is maximal in the sense of item (a), i.e., any boundary $(B',\nu')$ is an equivariant image of $(B,\nu)$. Keeping this goal in mind, we will construct a natural algebra morphism from $L^{\infty}(B',\nu')$ into $L^{\infty}(B,\nu)$, i.e., a morphism which is compatible with the natural $G$-actions on both algebras and respecting the linear functionals induced by $\nu'$ and $\nu$. Let $\hat{h}'\in L^{\infty}(B',\nu')$ and consider the $\mu$-harmonic function
$$h(g):=\int_{B'}\hat{h}'(g\xi)\,d\nu'(\xi)$$
on $G$. Denote by $\hat{h}\in L^{\infty}(B,\nu)$ the limit function associated to $h$ (via the Poisson formula (2)) and let $(Z_n')_{n\in\mathbb{N}}$ be the $\mu$-process on $B'$. By Proposition 5, $(X_1\cdots X_n)_*\nu'\to\delta_{Z_0'}$, and, thus,
$$\hat{h}=\lim_{n\to\infty}h(X_1\cdots X_n)=\lim_{n\to\infty}\int_{B'}\hat{h}'(X_1\cdots X_n\xi)\,d\nu'(\xi)=\hat{h}'(Z_0').$$
This formula makes it clear that the map $\hat{h}'\mapsto\hat{h}$ is an algebra morphism from $L^{\infty}(B',\nu')$ into $L^{\infty}(B,\nu)$. Furthermore, this formula also shows that the natural actions of $G$ on these algebras are preserved, and, moreover, the linear functional induced by both $\nu'$ and $\nu$ is given by $h\mapsto h(e)$.
This completes the proof of Furstenberg’s theorem on the existence of Poisson boundaries.
Remark 5 The arguments above show that the measure-theoretical object $(B,\nu)$ is uniquely determined, despite the fact that the topological space $B$ is not unique. Nevertheless, we will not dispense with the topological structure of $B$ in the definition of the Poisson boundary because we want to think about it as a space attached to the group.
Remark 6 A “cousin” of the Poisson boundary is the so-called Martin boundary. Very roughly speaking, the Martin boundary is related to positive (not necessarily bounded) harmonic functions while the Poisson boundary is related to bounded harmonic functions. In general, the Martin boundary is a realization of the Poisson boundary, but not vice-versa. For our current purpose (namely, the proof of Theorem 3), the Poisson boundary has the advantage that it does not change too much when we change the measure $\mu$ in a reasonable way, while the same is not true for the Martin boundary.
The summary of today’s post is the following. We saw that Furstenberg’s idea for the proof of his Theorem 3 (that a lattice of $SL(2,\mathbb{R})$ can’t be realized as a lattice of $SL(n,\mathbb{R})$, $n\geq 3$) was to show that the boundary behavior of a discrete group is determined by the boundary behavior of its envelope. Of course, the formalization of this idea requires the construction of an adequate boundary and this is precisely what we did in this section.
Next time, we will discuss some examples of Poisson boundaries. After that, we will relate the Poisson boundary of a lattice of $SL(n,\mathbb{R})$ to the Poisson boundary of $SL(n,\mathbb{R})$ and, then, we will complete the proof of Theorem 3.