Posted by: matheuscmss | February 26, 2013

Eskin-Kontsevich-Zorich regularity conjecture II: three facts about SL(2,R) and a variant of Rokhlin’s disintegration theorem

As we mentioned in the first post of this series, our goal today is to discuss some elementary facts about {SL(2,\mathbb{R})} and conditional measures. To this end, we divide this post into two completely independent sections: in the next one we’ll talk exclusively about {SL(2,\mathbb{R})}, and in the final section we’ll discuss a variant of the so-called Rokhlin’s disintegration theorem.

1. Some facts about {SL(2,\mathbb{R})}

In this section, we’ll discuss three results:

  • we’ll study how the {SL(2,\mathbb{R})}-action on {\mathbb{R}^2} affects the Euclidean norms of two non-collinear vectors {v,v'\in\mathbb{R}^2} (cf. Proposition 2 below),
  • we’ll compute the Haar measure on a certain open subset {W} of {SL(2,\mathbb{R})} (cf. Proposition 3 below), and
  • we’ll estimate the amount of time that the diagonal subgroup {g_t=\textrm{diag}(e^t, e^{-t})} keeps the Euclidean norm of the vector {(-\sin\theta,\cos\theta)} below a given threshold {\exp(-T)} (cf. Proposition 4 below).

During this entire section (and, actually, in the other posts of this series also), we will use the following notation:

  • {\{e_1, e_2\}} is the standard basis of {\mathbb{R}^2} and {\|.\|} denotes the Euclidean norm of {\mathbb{R}^2};
  • {g_t:=\textrm{diag}(e^t, e^{-t})} is the (positive) diagonal subgroup of {SL(2,\mathbb{R})};
  • {R_{\theta}=\left(\begin{array}{cc} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{array}\right)\in SO(2,\mathbb{R})} is the rotation by angle {\theta};
  • {n_u=\left(\begin{array}{cc} 1 & 0 \\ u & 1 \end{array}\right)\in SL(2,\mathbb{R})} is a lower triangular (unipotent) matrix;
  • {N_{a,b}=\left(\begin{array}{cc} a & b \\ 0 & a^{-1} \end{array}\right)\in SL(2,\mathbb{R})} is an upper triangular matrix.

1.1. Euclidean norms under {SL(2,\mathbb{R})}-action on {\mathbb{R}^2}

The following lemma computing the Haar measure on {SL(2,\mathbb{R})} is well-known, see, e.g., A. Knapp’s book (especially Chapter 5 on Iwasawa decomposition and integral formulas, and Equation (10.7) in Chapter 10).

Lemma 1 The map {(\theta,a,b)\mapsto R_{\theta} N_{a,b}} from {\mathbb{R}/2\pi\mathbb{Z}\times\mathbb{R}_{>0}\times\mathbb{R}} to {SL(2,\mathbb{R})} is a diffeomorphism and the measure {d\theta\,da\,db} is sent to a Haar measure by this diffeomorphism.
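
Although Lemma 1 is classical, it is easy to check numerically. The following sketch (in Python with numpy; the helper names are our own) recovers the coordinates {(\theta,a,b)} of a matrix from its columns and verifies by finite differences that left translation by a fixed {g_0} is volume-preserving in these coordinates, which is precisely the left-invariance of {d\theta\,da\,db}:

```python
import numpy as np

def R(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def N(a, b):
    return np.array([[a, b], [0.0, 1.0 / a]])

def coords(g):
    """Inverse of (theta, a, b) -> R_theta N_{a,b}: the first column of g is
    a (cos theta, sin theta), and R_{-theta} g e_2 = (b, 1/a)."""
    a = np.hypot(g[0, 0], g[1, 0])
    theta = np.arctan2(g[1, 0], g[0, 0])
    b = (R(-theta) @ g[:, 1])[0]
    return np.array([theta, a, b])

g0 = R(0.7) @ N(1.3, -0.4)            # a fixed left translation
x0 = np.array([0.5, 1.2, 0.3])        # a base point (theta, a, b)
eps = 1e-6
J = np.zeros((3, 3))
for j in range(3):                    # finite-difference Jacobian of the map
    xp, xm = x0.copy(), x0.copy()     # (theta,a,b) -> coords(g0 R_theta N_{a,b})
    xp[j] += eps
    xm[j] -= eps
    J[:, j] = (coords(g0 @ R(xp[0]) @ N(xp[1], xp[2]))
               - coords(g0 @ R(xm[0]) @ N(xm[1], xm[2]))) / (2 * eps)
print(np.linalg.det(J))               # = 1 up to rounding: d(theta) da db is left-invariant
```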

Using this lemma, we will estimate the Haar measure of the set of elements of {SL(2,\mathbb{R})} with bounded norm keeping two fixed non-collinear vectors inside a given annulus.

Proposition 2 Let {v,v'\in\mathbb{R}^2} be vectors with {v'\neq\pm v}. Then, the Haar measure of the set

\displaystyle E(v,v',\tau)=\{g\in SL(2,\mathbb{R}): \|g\|\leq 2, \exp(-\tau)\leq\|gv\|,\|gv'\|\leq\exp(\tau)\}

is {O(\tau^{3/2})} when {\tau} is small. Here, the implied constant is uniform when {\|v\pm v'\|} is bounded away from {0}.

Remark 1 For the application we have in mind, it suffices to know that the Haar measure of {E(v,v',\tau)} is {o(\tau)}, but we prefer to state this proposition in this form in order to highlight the following phenomenon. A naive computation reveals that the Haar measure of the set of elements of {SL(2,\mathbb{R})} with bounded norm keeping a given vector {v} inside the annulus {\{\exp(-\tau)\leq\|w\|\leq\exp(\tau)\}} is {O(\tau)}. In particular, it would be natural to think that {E(v,v',\tau)} has Haar measure {O(\tau^2)}, as we are trying to confine two non-collinear vectors inside the annulus {\{\exp(-\tau)\leq\|w\|\leq\exp(\tau)\}}. However, as we’ll see during the proof of the proposition, this intuitive picture is wrong because certain singularities (critical points) of “square root” type develop (in other words, the events of confining two non-collinear vectors inside an annulus are not always independent). Fortunately, the singularity is mild and the estimate {O(\tau^{3/2})=o(\tau)} of the Haar measure of {E(v,v',\tau)} is sufficient for our future purposes.

Proof: The set {E(v,v',\tau)} is empty (for small {\tau}) unless {1/3\leq\|v\|, \|v'\|\leq 3}, so we’ll assume that {v} and {v'} satisfy these inequalities. Also, from the right-invariance of the Haar measure under {R_{\alpha}}, we can suppose that {v=(p,0)} with {1/3\leq p\leq 3}.

Now, let us write {v'=(q,r)} and {g=R_{\theta} N_{a,b}}, and let us introduce the function

\displaystyle N: g\mapsto (\|gv\|^2,\|gv'\|^2)

Clearly {N} doesn’t depend on {\theta}, so that we will write {N=N(a,b)} in the sequel.

We have {N(a,b)=(p^2 a^2, (qa+rb)^2+r^2/a^2)}. Thus, the Jacobian matrix of {N} is

\displaystyle 2\left(\begin{array}{cc}p^2 a & 0 \\ q(qa+rb)-r^2/a^3 & r(qa+rb)\end{array}\right)

Since {p\geq 1/3} and {a>0}, this matrix is invertible unless {r(qa+rb)=0}, i.e., {gv} and {gv'} are collinear ({r=0}) or orthogonal ({qa+rb=0}).

We will complete the proof of this proposition by considering the following cases:

  • Suppose that {v'=\pm(\lambda v+w)} with {0<\lambda\neq 1} and {\|w\|<(1/100)|\lambda-1|}. Then, {gv'=\pm(\lambda gv+gw)} with {\|gw\|<(2/100)|\lambda-1|} when {\|g\|\leq 2}. Since {\lambda\neq 1}, we can’t have

    \displaystyle \exp(-\tau)\leq\|gv\|\leq\exp(\tau) \quad\textrm{and}\quad \exp(-\tau)\leq\|gv'\|\leq\exp(\tau)

    at the same time if {\tau} is small enough depending only on {|\lambda-1|}, i.e., {\|v\pm v'\|}. In other words, in this situation, the set {E(v,v',\tau)} is empty for {\tau} small enough depending on {\|v\pm v'\|}.

  • Suppose that {v} and {v'} are orthogonal, i.e., {q=0}. Then, {N(a,b)=(p^2 a^2, r^2b^2+r^2/a^2)}. The condition {\exp(-\tau)\leq \|gv\|\leq \exp(\tau)} determines an interval of {a}‘s of length {O(\tau)}: indeed, it is equivalent to {\exp(-\tau)\leq pa\leq \exp(\tau)} and, since {1/3\leq p\leq3}, the interval {\frac{1}{p}\exp(-\tau)\leq a\leq \frac{1}{p}\exp(\tau)} has length {O(\tau)}. Next, we observe that, for a fixed {a} in this interval, the condition {\exp(-\tau)\leq\|gv'\|\leq\exp(\tau)} confines {b} to a set of measure {O(\tau^{1/2})} (and exactly of order {\tau^{1/2}} in the worst case {pr=1}). In fact, {\exp(-\tau)\leq \|gv'\|\leq \exp(\tau)} is equivalent to {\exp(-2\tau)\leq r^2(b^2+a^{-2})\leq \exp(2\tau)}, that is, {\frac{1}{r^2}\exp(-2\tau)-\frac{1}{a^2}\leq b^2\leq \frac{1}{r^2}\exp(2\tau)-\frac{1}{a^2}}. Since {\frac{\exp(-\tau)}{p}\leq a\leq \frac{\exp(\tau)}{p}} (as we mentioned above), we have {p^2\exp(-2\tau)\leq \frac{1}{a^2}\leq p^2\exp(2\tau)}, so that {\frac{1}{r^2}(\exp(-2\tau)-(pr)^2\exp(2\tau))\leq b^2\leq \frac{1}{r^2}(\exp(2\tau)-(pr)^2\exp(-2\tau))}, and, thus, {b} belongs to a set of measure {\leq (1/r)O(\sqrt{\tau})=O(\sqrt{\tau})} (recall that {1/3\leq |r|\leq 3}). From Lemma 1, it follows that, in this situation, the Haar measure of {E(v,v',\tau)} is {O(\tau)\cdot O(\sqrt{\tau})=O(\tau^{3/2})}. Moreover, the same estimate is true if we relax the condition {\|g\|\leq 2} to {\|g\|\leq 6} in the definition of {E(v,v',\tau)}.
  • Assume that there exists {g_0\in SL(2,\mathbb{R})} such that {\|g_0\|\leq 3} and {g_0v} is orthogonal to {g_0v'}. In this case, the fact that {E(v,v',\tau)} has Haar measure {O(\tau^{3/2})} follows from the previous item after performing a right-translation of {E(v,v',\tau)} by {g_0}.
  • Finally, assume that none of the previous assumptions holds. Then, the Jacobian matrix of {N} is invertible on the set {\{g\in SL(2,\mathbb{R}): \|g\|\leq 2\}} and the norm of its inverse is uniformly bounded by a constant depending only on how far {\|v\pm v'\|} is from {0}. In this case, from the inverse function theorem and Lemma 1, we deduce that the Haar measure of {E(v,v',\tau)} is {O(\tau^2)}, where the implied constant depends only on {\|v\pm v'\|}.

This completes the proof of the proposition. \Box
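
The square-root singularity behind Remark 1 can also be seen numerically. The Monte Carlo sketch below (an illustration of ours, not part of the argument) estimates the {(a,b)}-area cut out by the two annulus conditions in the chart of Lemma 1 for {v=(1,0)} and {v'=(0,1)} (so {p=r=1}, the worst case {pr=1}); since none of the constraints involves {\theta}, this area is proportional to the Haar measure of {E(v,v',\tau)}. Halving {\tau} should shrink the measure by a factor close to {2^{3/2}\approx 2.83}:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4_000_000
a = rng.uniform(0.4, 2.1, n)           # box containing {||g|| <= 2} in the (a,b)-chart
b = rng.uniform(-2.1, 2.1, n)
box_area = (2.1 - 0.4) * 4.2

tr = a**2 + b**2 + 1.0 / a**2          # trace of N^T N for N = N_{a,b}
opnorm2 = 0.5 * (tr + np.sqrt(tr**2 - 4.0))   # ||g||^2 = largest singular value squared

for tau in [0.08, 0.04, 0.02, 0.01]:
    lo, hi = np.exp(-tau), np.exp(tau)
    hit = ((opnorm2 <= 4.0)
           & (lo <= a) & (a <= hi)                 # ||g v|| = a      for v  = (1,0)
           & (lo**2 <= b**2 + 1.0 / a**2)          # ||g v'||^2 = b^2 + 1/a^2
           & (b**2 + 1.0 / a**2 <= hi**2))         #                  for v' = (0,1)
    area = box_area * hit.mean()
    print(tau, area, area / tau**1.5)  # last column roughly constant (up to sampling noise)
```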

1.2. The decomposition {g_t R_{\theta} n_u} and Haar measure on {SL(2,\mathbb{R})}

Let {W=\left\{\left(\begin{array}{cc} a & b \\ c & d \end{array}\right)\in SL(2,\mathbb{R}): d>0 \textrm{ and } |bd|<1/2\right\}}.

Proposition 3 The map {(t,\theta,u)\mapsto g_t R_\theta n_u} from {\mathbb{R}\times (-\pi/4,\pi/4)\times \mathbb{R}} to {SL(2,\mathbb{R})} is a diffeomorphism onto {W}. Furthermore, up to a multiplicative constant, the restriction to {W} of a Haar measure is equal to {\cos 2\theta\, dt\,d\theta\,du} in the coordinates {(t,\theta,u)}.

Proof: The vertical basis vector {e_2=(0,1)\in\mathbb{R}^2} is fixed by {n_u} and it is mapped to {(-e^t\sin\theta, e^{-t}\cos\theta)} under {g_t R_\theta}. Therefore, for {t\in\mathbb{R}}, {|\theta|<\pi/4}, {u\in\mathbb{R}}, the matrix {g_t R_{\theta} n_u=\left(\begin{array}{cc} a & b \\ c & d \end{array}\right)} satisfies {d>0} and {|bd|<1/2}, i.e., {g_t R_{\theta} n_u} belongs to {W}.

Conversely, given a vector {(b,d)} with {d>0} and {|bd|<1/2}, there exists a unique pair {(t,\theta)\in\mathbb{R}\times(-\pi/4,\pi/4)} depending smoothly on {(b,d)} with {g_t R_\theta e_2=(b,d)}. Indeed, this follows from the facts that the matrix {g_t} moves vectors {(x_0,y_0)\neq(0,0)} along hyperbolas {\{(x,y)\in\mathbb{R}^2: xy=x_0y_0\}} and, for {|\theta|<\pi/4}, the matrix {R_\theta} moves {e_2} along the arc of unit circle {\{(-\sin\theta,\cos\theta):|\theta|<\pi/4\}} containing {e_2} between the hyperbolas {\{(x,y)\in\mathbb{R}^2:xy=-1/2\}} and {\{(x,y)\in\mathbb{R}^2:xy=+1/2\}}, see the figure below:

[Figure: the arc {\{(-\sin\theta,\cos\theta):|\theta|<\pi/4\}} swept by {R_\theta e_2}, lying between the hyperbolas {\{xy=-1/2\}} and {\{xy=1/2\}}, together with the hyperbolas {\{xy=x_0y_0\}} along which {g_t} moves vectors]

This proves the first assertion of the proposition because, after writing a vector {(b,d)} with {d>0} and {|bd|<1/2} uniquely as {g_t R_\theta e_2=(b,d)} for {t\in\mathbb{R}}, {|\theta|<\pi/4}, if we are given a vector {(a,c)} such that {\left(\begin{array}{cc} a & b \\ c & d \end{array}\right)\in SL(2,\mathbb{R})}, then we can write {\left(\begin{array}{cc} a & b \\ c & d \end{array}\right)=g_t R_{\theta} n_u} by choosing the unique {u\in\mathbb{R}} such that {g_t R_{\theta} n_u e_1=(a,c)} where {e_1=(1,0)} is the horizontal basis vector of {\mathbb{R}^2}.

Now, let us write the restriction of a Haar measure to {W} as {\gamma(t,\theta,u)\, dt\,d\theta\,du} where {\gamma(t,\theta,u)} is a positive smooth density function. Since any Haar measure on {SL(2,\mathbb{R})} is left and right invariant (i.e., {SL(2,\mathbb{R})} is unimodular), and since left multiplication by {g_{t_0}} translates the coordinate {t} while right multiplication by {n_{u_0}} translates the coordinate {u}, we deduce that {\gamma(t,\theta,u)} depends only on {\theta}, i.e., {\gamma(t,\theta,u)=\gamma(\theta)}. In order to compute {\gamma(\theta)}, we will use the left-invariance of Haar measures under {R_{\theta}}. More concretely, let us fix {\theta_0\in(-\pi/4,\pi/4)}, and let us consider tiny open sets around the origin {(t,\theta,u)=(0,0,0)} and their respective images under {R_{\theta_0}}. In terms of matrices, this amounts to considering the equation:

\displaystyle R_{\theta_0}g_t R_{\theta} n_u = g_T R_{\Theta} n_U \ \ \ \ \ (1)

for {t,\theta, u} close to {0}, and {T=T_{\theta_0}(t,\theta,u)}, {\Theta=\Theta_{\theta_0}(t,\theta,u)}, {U=U_{\theta_0}(t,\theta,u)}. For the sake of simplicity, since {\theta_0} is fixed, we will omit the dependence of the functions {T, \Theta, U} on {\theta_0} in what follows. In this language, we have from the {R_{\theta_0}}-invariance of Haar measures and the change of variables formula that {\gamma(\theta_0)=\gamma(0)\cdot (1/J_{\theta_0}(0,0,0))}, where {J_{\theta_0}(0,0,0)} is the determinant of the Jacobian matrix {D(T,\Theta,U)=\frac{\partial(T,\Theta,U)}{\partial(t,\theta,u)}} at the origin {(0,0,0)}. In particular, our task reduces to showing that {J_{\theta_0}(0,0,0)=1/\cos2\theta_0}. Keeping this goal in mind, let us notice that {t=0} implies that {T=0}, {\Theta=\theta+\theta_0} and {U=u} in Equation (1). Thus, we have that

\displaystyle \frac{\partial T}{\partial \theta}(0,0,0)=\frac{\partial T}{\partial u}(0,0,0)=0,

\displaystyle \frac{\partial \Theta}{\partial \theta}(0,0,0)=1, \quad \frac{\partial \Theta}{\partial u}(0,0,0)=0

and

\displaystyle \frac{\partial U}{\partial \theta}(0,0,0)=0, \quad \frac{\partial U}{\partial u}(0,0,0)=1

In other words, the Jacobian matrix {\frac{\partial(T,\Theta,U)}{\partial(t,\theta,u)}} has the form

\displaystyle \left(\begin{array}{ccc}\frac{\partial T}{\partial t} & 0 & 0 \\ \frac{\partial \Theta}{\partial t} & 1 & 0 \\ \frac{\partial U}{\partial t} & 0 & 1\end{array}\right)

at the origin {(0,0,0)}, so that {J_{\theta_0}(0,0,0)=\frac{\partial T}{\partial t}(0,0,0)}. Hence, it remains only to show that {\frac{\partial T}{\partial t}(0,0,0)=1/\cos 2\theta_0}. In this direction, we apply both matrices in Equation (1) to the vertical basis vector {e_2} to get the equations:

\displaystyle -e^T \sin\Theta = -e^t \cos\theta_0 \sin\theta - e^{-t} \sin\theta_0 \cos\theta \ \ \ \ \ (2)

and

\displaystyle e^{-T} \cos\Theta = -e^t \sin\theta_0\sin\theta+e^{-t}\cos\theta_0\cos\theta \ \ \ \ \ (3)

By multiplying these equations, we get the relation:

\displaystyle \begin{array}{rcl} & &\frac{1}{2}\sin 2\Theta=\sin\Theta \cos\Theta = \\ & & (e^t \cos\theta_0 \sin\theta + e^{-t} \sin\theta_0 \cos\theta) (-e^t \sin\theta_0\sin\theta+e^{-t}\cos\theta_0\cos\theta) \end{array}

By taking the partial derivative with respect to {t}, we deduce that

\displaystyle \begin{array}{rcl} & &\cos 2\Theta \frac{\partial \Theta}{\partial t} = \\ & & (e^t \cos\theta_0 \sin\theta - e^{-t} \sin\theta_0 \cos\theta) (-e^t \sin\theta_0\sin\theta+e^{-t}\cos\theta_0\cos\theta) + \\ & & (e^t \cos\theta_0 \sin\theta + e^{-t} \sin\theta_0 \cos\theta) (-e^t \sin\theta_0\sin\theta-e^{-t}\cos\theta_0\cos\theta) \end{array}

Therefore, since {(T,\Theta, U)=(0,\theta_0,0)} at the origin {(t,\theta,u)=(0,0,0)}, we have that

\displaystyle \cos 2\theta_0 \frac{\partial \Theta}{\partial t} (0,0,0)= -2\sin\theta_0\cos\theta_0,

that is,

\displaystyle \frac{\partial \Theta}{\partial t}(0,0,0)=-\tan 2\theta_0 \ \ \ \ \ (4)

Now, we can plug this information into Equation (3) to calculate {\frac{\partial T}{\partial t}(0,0,0)}. Indeed, by taking the partial derivative with respect to {t} in (3), we get

\displaystyle -\frac{\partial T}{\partial t}e^{-T}\cos\Theta - e^{-T}\sin\Theta \frac{\partial \Theta}{\partial t} = -e^t\sin\theta_0\sin\theta-e^{-t}\cos\theta_0\cos\theta

Since {(T,\Theta,U)=(0,\theta_0,0)} at the origin {(t,\theta,u)=(0,0,0)}, we obtain that

\displaystyle -\frac{\partial T}{\partial t}(0,0,0)\cos\theta_0 - \sin\theta_0 \frac{\partial \Theta}{\partial t}(0,0,0) = -\cos\theta_0

By combining this relation with (4) we conclude that

\displaystyle \begin{array}{rcl} \frac{\partial T}{\partial t}(0,0,0)&=&1+\tan\theta_0\tan2\theta_0 = 1+ \frac{\sin\theta_0}{\cos\theta_0}\frac{\sin2\theta_0}{\cos2\theta_0} \\ &=& \frac{1}{\cos 2\theta_0} \left(\cos 2\theta_0 + \frac{\sin\theta_0}{\cos\theta_0}(2\sin\theta_0\cos\theta_0)\right) \\ &=& \frac{1}{\cos 2\theta_0}(\cos^2\theta_0-\sin^2\theta_0+2\sin^2\theta_0) \\ &=& \frac{1}{\cos 2\theta_0}(\cos^2\theta_0+\sin^2\theta_0) \\ &=& \frac{1}{\cos 2\theta_0} \end{array}

Of course, this proves the second assertion of the proposition. \Box
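
As a sanity check of the first assertion, the decomposition can be inverted explicitly: from the second column {(b,d)=(-e^t\sin\theta, e^{-t}\cos\theta)} one reads off {\theta} (via {bd=-\frac{1}{2}\sin2\theta}) and {t}, and then {u} from the first column. Here is a small sketch of ours in Python with numpy:

```python
import numpy as np

def R(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def g(t):
    return np.diag([np.exp(t), np.exp(-t)])

def n(u):
    return np.array([[1.0, 0.0], [u, 1.0]])

def decompose(M):
    """Inverse of (t, theta, u) -> g_t R_theta n_u on W: the second column
    (b, d) = (-e^t sin(theta), e^{-t} cos(theta)) gives bd = -(1/2) sin(2 theta)."""
    b, d = M[0, 1], M[1, 1]
    assert d > 0 and abs(b * d) < 0.5, "M must lie in W"
    theta = -0.5 * np.arcsin(2.0 * b * d)
    t = -np.log(d / np.cos(theta))
    u = (R(-theta) @ g(-t) @ M)[1, 0]  # n_u = R_{-theta} g_{-t} M; lower-left entry
    return t, theta, u

print(decompose(g(0.8) @ R(0.3) @ n(-1.7)))   # recovers (0.8, 0.3, -1.7) up to rounding
```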

Remark 2 Note that the computations above give all entries of the Jacobian matrix {\frac{\partial(T,\Theta,U)}{\partial(t,\theta,u)}} at the origin but {\frac{\partial U}{\partial t}(0,0,0)}. Evidently, the knowledge of this particular entry was irrelevant for the computation of {J_{\theta_0}(0,0,0)} in the previous proposition, but the curious reader is invited to compute this entry along the following lines. By applying both matrices in (1) to the horizontal basis vector {e_1=(1,0)}, one gets two relations:

\displaystyle \begin{array}{rcl} e^T\cos\Theta-Ue^T\sin\Theta&=&e^t\cos\theta_0\cos\theta-e^{-t}\sin\theta_0\sin\theta \\ &-&u(e^t\cos\theta_0\sin\theta+e^{-t}\sin\theta_0\cos\theta) \end{array}

and

\displaystyle \begin{array}{rcl} e^{-T}\sin\Theta+Ue^{-T}\cos\Theta&=&e^t\sin\theta_0\cos\theta + e^{-t}\cos\theta_0\sin\theta \\ &-&u(e^t\sin\theta_0\sin\theta-e^{-t}\cos\theta_0\cos\theta) \end{array}

By taking the partial derivative of the second relation above with respect to {t} at the origin and by plugging the values {\frac{\partial \Theta}{\partial t}(0,0,0)=-\tan2\theta_0} and {\frac{\partial T}{\partial t}(0,0,0)=1/\cos2\theta_0} already computed, one deduces that

\displaystyle -\frac{\sin\theta_0}{\cos2\theta_0}-\cos\theta_0\tan2\theta_0+\cos\theta_0\frac{\partial U}{\partial t}(0,0,0)=\sin\theta_0,

i.e.,

\displaystyle \begin{array}{rcl} \frac{\partial U}{\partial t}(0,0,0)&=&\tan2\theta_0+\frac{\sin\theta_0}{\cos\theta_0}\left(1+\frac{1}{\cos2\theta_0}\right) \\ &=& \tan2\theta_0+\frac{\sin\theta_0}{\cos\theta_0} \left(\frac{\cos^2\theta_0-\sin^2\theta_0+1}{\cos2\theta_0}\right) \\ &=& \tan2\theta_0+\frac{\sin\theta_0}{\cos\theta_0} \left(\frac{2\cos^2\theta_0}{\cos2\theta_0}\right) = \tan2\theta_0+\frac{2\sin\theta_0\cos\theta_0}{\cos2\theta_0} \\ &=& 2 \tan 2\theta_0 \end{array}
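
The reader who wants to double-check these derivatives can do so by central differences, reusing the decompose helper from the sketch after Proposition 3 (again, a verification of ours, not part of the argument): the three values below should match {1/\cos2\theta_0}, {-\tan2\theta_0} and {2\tan2\theta_0}.

```python
import numpy as np
# reuses R, g, n and decompose from the previous sketch

theta0 = 0.3
eps = 1e-6
F = lambda t: np.array(decompose(R(theta0) @ g(t)))   # (T, Theta, U) along theta = u = 0
dT, dTheta, dU = (F(eps) - F(-eps)) / (2.0 * eps)
print(dT,     1.0 / np.cos(2 * theta0))    # dT/dt(0,0,0)     = 1/cos(2 theta_0)
print(dTheta, -np.tan(2 * theta0))         # dTheta/dt(0,0,0) = -tan(2 theta_0)
print(dU,     2.0 * np.tan(2 * theta0))    # dU/dt(0,0,0)     = 2 tan(2 theta_0)
```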

1.3. On the action of the diagonal subgroup {g_t=\textrm{diag}(e^t,e^{-t})}

Let {R_{\theta}} be a given rotation. For {T>0}, define:

\displaystyle J(T,\theta):=\{t\in\mathbb{R}: \|g_tR_{\theta}e_2\|<\exp(-T)\}

Geometrically, {J(T,\theta)} is the set of times {t} such that, after applying {g_t} to the unit vector {R_{\theta}e_2}, the norm of the vector {g_tR_{\theta}e_2} stays below the “threshold” {\exp(-T)}; Proposition 4 below shows that this set is always an open interval (possibly empty).

Proposition 4 The set {J(T,\theta)} is empty if and only if {|\sin2\theta|\geq \exp(-2T)}. If {|\sin2\theta|<\exp(-2T)}, then, by writing {\sin2\theta=\exp(-2T)\sin\omega} with {\cos\omega>0}, one has that {J(T,\theta)} is an open interval of length {\frac{1}{2}\log\frac{1+\cos\omega}{1-\cos\omega}}.

Proof: By definition, {t\in J(T,\theta)} if and only if {\|g_t R_{\theta}e_2\|^2<\exp(-2T)}, i.e.,

\displaystyle e^{2t}\sin^2\theta+e^{-2t}\cos^2\theta<\exp(-2T)

By performing the change of variables {x=e^{2t}} or {x=e^{-2t}} depending on whether {\sin\theta\neq0} or {\cos\theta\neq0}, and by multiplying the previous inequality by {x}, we get {x^2\sin^2\theta-\exp(-2T)x+\cos^2\theta<0} or {x^2\cos^2\theta-\exp(-2T)x+\sin^2\theta<0}. This inequality has a solution if and only if the discriminant of the corresponding second degree equation is positive, i.e.,

\displaystyle \Delta=\exp(-4T)-4\sin^2\theta\cos^2\theta=\exp(-4T)-\sin^2 2\theta>0

This proves the first assertion of the proposition.

If {|\sin2\theta|<\exp(-2T)} and we write {\sin2\theta=\exp(-2T)\sin\omega} with {\cos\omega>0}, then {\Delta=\exp(-4T)(1-\sin^2\omega)=\exp(-4T)\cos^2\omega}, so that {\sqrt{\Delta}=\exp(-2T)\cos\omega}. It follows that the solutions of

\displaystyle x^2\sin^2\theta-\exp(-2T)x+\cos^2\theta<0

belong to the open interval {(x_-,x_+)} between the roots

\displaystyle x_{\pm}=\frac{\exp(-2T)\pm\sqrt{\Delta}}{2\sin^2\theta}=\frac{\exp(-2T)(1\pm\cos\omega)}{2\sin^2\theta}

of {x^2\sin^2\theta-\exp(-2T)x+\cos^2\theta=0}. By recalling the change of variables {x=e^{2t}}, we deduce that {J(T,\theta)} is an open interval {(t_-,t_+)} where {x_{\pm}=e^{2t_{\pm}}}. In particular, the length of {J(T,\theta)} is

\displaystyle t_+-t_-=\frac{1}{2}\log\frac{x_+}{x_-}=\frac{1}{2}\log\frac{1+\cos\omega}{1-\cos\omega}

This completes the proof of the second assertion of the proposition. \Box
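
Here is a quick numerical illustration of Proposition 4 (a brute-force sketch of ours): given {T} and {\omega}, we measure the set {J(T,\theta)} on a fine grid of times {t} and compare with the closed formula for its length.

```python
import numpy as np

T, omega = 1.0, 0.9                            # any T > 0 and omega in (0, pi/2)
theta = 0.5 * np.arcsin(np.exp(-2 * T) * np.sin(omega))   # sin(2 theta) = e^{-2T} sin(omega)

# brute force: measure {t : ||g_t R_theta e_2|| < e^{-T}} on a grid over [-10, 10]
t = np.linspace(-10.0, 10.0, 2_000_001)
norm2 = np.exp(2 * t) * np.sin(theta)**2 + np.exp(-2 * t) * np.cos(theta)**2
measured = (norm2 < np.exp(-2 * T)).mean() * 20.0

predicted = 0.5 * np.log((1 + np.cos(omega)) / (1 - np.cos(omega)))
print(measured, predicted)                     # the two lengths agree to ~1e-5
```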

Later in our discussion, we will combine Propositions 3 and 4 to derive some measure estimates for the sets of translation surfaces with short saddle-connections and, to this end, we will need the following equality:

Lemma 5

\displaystyle \int_0^{\pi/2}\log\frac{1+\cos\omega}{1-\cos\omega} \cos\omega \, d\omega=\pi

Proof: The change of variables {u=\tan(\omega/2)} gives

\displaystyle \int_0^{\pi/2}\log\frac{1+\cos\omega}{1-\cos\omega} \cos\omega \, d\omega = 4\int_0^1\log(1/u)\frac{1-u^2}{(1+u^2)^2}\, du. \ \ \ \ \ (5)

Since {\frac{1-u^2}{(1+u^2)^2}=\sum\limits_{n\geq0}(-1)^n(2n+1)u^{2n}} (as {1/(1+x)^2=\sum\limits_{n\geq 0} (-1)^n (n+1)x^n}) and {\int_0^1 u^n\log(1/u)\, du=1/(n+1)^2} for {n\geq 0} (by integration by parts), we deduce that

\displaystyle \int_0^1\log(1/u)\frac{1-u^2}{(1+u^2)^2}\,du=\int_0^1\log(1/u)\sum\limits_{n\geq0}(-1)^n(2n+1)u^{2n}\,du

\displaystyle =\sum\limits_{n\geq0}\frac{(-1)^n}{2n+1} \ \ \ \ \ (6)

By Leibniz’s formula, we know that {\sum\limits_{n\geq0}\frac{(-1)^n}{2n+1}=\pi/4}. By plugging this into Equations (6) and (5), we obtain the desired lemma. \Box
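
Both the integral and the Leibniz series are easy to confirm numerically; a minimal sketch (Python, using scipy for the quadrature):

```python
import numpy as np
from scipy.integrate import quad

# the integral of Lemma 5 (mild logarithmic singularity at omega = 0)
val, err = quad(lambda w: np.log((1 + np.cos(w)) / (1 - np.cos(w))) * np.cos(w),
                0.0, np.pi / 2)
print(val, np.pi)                              # 3.14159..., i.e., the integral is pi

# the Leibniz series: 4 * sum_{n >= 0} (-1)^n / (2n+1) = pi
k = np.arange(2_000_000)
print(4.0 * ((-1.0)**k / (2 * k + 1)).sum())
```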

Also for later use, we will need the following corollary of the proof of Proposition 4.

Corollary 6 Given {0<\omega_0<\pi/2}, there exists a constant {K=K(\omega_0)>0} such that, if

\displaystyle \exp(-2T)\sin\omega_0<|\sin2\theta|<\exp(-2T),

then

\displaystyle \|g_t R_{\theta} e_2\|\leq K\exp(-t)

for all {t\in J(T,\theta)}.

Proof: Recall that {\|g_t R_{\theta}e_2\|^2 = e^{2t}\sin^2\theta+e^{-2t}\cos^2\theta}, so that

\displaystyle \|g_t R_{\theta} e_2\|^2 = e^{-2t}(\cos^2\theta + e^{4t}\sin^2\theta)\leq e^{-2t}(1 + e^{4t}\sin^2\theta).

In particular, from the definition of {J(T,\theta)}, we have that {e^{2t}\sin^2\theta\leq \|g_t R_{\theta} e_2\|^2<\exp(-2T)} for {t\in J(T,\theta)}. By combining these two estimates, we deduce that

\displaystyle \|g_t R_{\theta} e_2\|^2\leq e^{-2t}(1+\exp(-2T)e^{2t})

On the other hand, from the proof of Proposition 4, we know that, by writing {\sin2\theta=\exp(-2T)\sin\omega},

\displaystyle e^{2t}\leq x_+:=\exp(-2T)\frac{1+\cos\omega}{2\sin^2\theta}

for every {t\in J(T,\theta)}.

By hypothesis, {\exp(-2T)\sin\omega_0<|\sin2\theta|<\exp(-2T)} (i.e., {\omega_0<|\omega|<\pi/2}), so that we conclude from the previous inequality that

\displaystyle \exp(-2T)e^{2t}\leq \exp(-4T)\frac{1}{\sin^2\theta}\leq \frac{4}{\sin^2\omega_0}

By plugging this into our estimate of {\|g_t R_{\theta} e_2\|^2} above, we deduce that

\displaystyle \|g_t R_{\theta} e_2\|^2\leq e^{-2t}\left(1+\frac{4}{\sin^2\omega_0}\right)=:e^{-2t} K(\omega_0)^2,

and thus the proof of the corollary is complete. \Box
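
The corollary can also be tested numerically. In the sketch below (with parameter choices of our own), the worst observed ratio {\|g_t R_{\theta} e_2\|/e^{-t}} over many samples of {\theta} stays below the constant {K(\omega_0)=\sqrt{1+4/\sin^2\omega_0}} extracted from the proof:

```python
import numpy as np

T, omega0 = 2.0, 0.3
K = np.sqrt(1.0 + 4.0 / np.sin(omega0)**2)     # the constant from the proof

rng = np.random.default_rng(2)
t = np.linspace(-5.0, 8.0, 20001)              # grid comfortably containing J(T, theta)
worst = 0.0
for _ in range(1000):
    # sample theta with exp(-2T) sin(omega_0) < sin(2 theta) < exp(-2T)
    s = rng.uniform(np.exp(-2 * T) * np.sin(omega0), np.exp(-2 * T))
    theta = 0.5 * np.arcsin(s)
    norm = np.sqrt(np.exp(2 * t) * np.sin(theta)**2
                   + np.exp(-2 * t) * np.cos(theta)**2)
    inJ = norm < np.exp(-T)                    # the times t in J(T, theta)
    if inJ.any():
        worst = max(worst, (norm[inJ] * np.exp(t[inJ])).max())
print(worst, K)                                # worst observed ratio stays below K
```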

At this point, we have all the elementary facts about {SL(2,\mathbb{R})} and its action on {\mathbb{R}^2} that we will need, and we will close this section with the following remark:

Remark 3 Historically, the proof of Proposition 4 was the starting point of our solution with A. Avila and J.-C. Yoccoz of the Eskin-Kontsevich-Zorich regularity conjecture. Interestingly enough, after Artur, Jean-Christophe and I completed our article, A. Eskin pointed out to us that this lemma was previously known to G. Margulis and, indeed, they planned to include this fact in one of their joint articles but they ended up not relying on it.

2. Rokhlin’s disintegration theorem

Very roughly speaking, given a probability measure {m} on a space {X} and a partition {\zeta} of {X}, Rokhlin’s disintegration theorem concerns the problem of writing/decomposing {m} as a “superposition” of (conditional) probability measures {m_x} supported on the elements {\zeta(x)} of the given partition.

Of course, such a decomposition might not exist in general, but Rokhlin’s disintegration theorem provides fairly general conditions ensuring the existence and uniqueness of disintegrations/conditional measures.

Theorem 7 (Rokhlin’s disintegration theorem) Let {(X,\mathcal{B}, m)} be a Lebesgue space and let {\zeta} be a measurable partition of {X}, i.e., there is a sequence {\zeta_n} of finite partitions of {X} with the following properties:

  • denoting by {\psi(x)} the element of a given partition {\psi} containing {x}, one has that {\zeta_n(x)\in\mathcal{B}} for all {x\in X} and {n\in\mathbb{N}};
  • the sequence is monotonic: for all {x\in X} and {n\in\mathbb{N}}, {\zeta_{n+1}(x)\subset\zeta_n(x)};
  • {\zeta} is the limit of {\zeta_n}: for all {x\in X}, {\zeta(x)=\cap_n\zeta_n(x)}.

Then, there exists a system of conditional measures {(m_x)_{x\in X}} for {(X,\mathcal{B},m,\zeta)}, i.e.,

  • for all {x\in X}, {m_x(\zeta(x))=1} (i.e., {m_x} is supported on {\zeta(x)});
  • for all {y\in\zeta(x)}, {m_y=m_x} (i.e., {m_x} is constant on elements of the partition {\zeta});
  • for any {B\in\mathcal{B}}, the function {x\mapsto m_x(B)} is measurable and

    \displaystyle m(B)=\int_X m_x(B) dm(x)

    (i.e., {m_x} disintegrates {m}).

Moreover, the system of conditional measures {(m_x)} is essentially unique: if {(m_x')} is another system of conditional measures, then {m_x'=m_x} for {m}-almost every {x\in X}.

Remark 4 In some sense, Rokhlin’s disintegration theorem is a sort of martingale convergence theorem: indeed, for the finite partitions {\zeta_n}, it is easy to disintegrate {m} by letting {(m_n)_x} be the normalized restriction of {m} to {\zeta_n(x)}, and one has to work to show that the desired {m_x}‘s are the limits of the {(m_n)_x}‘s. This point of view is nicely explained in these notes of M. Viana here (where a particular case of Rokhlin’s disintegration theorem is proved).
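
To make the statement concrete, here is a minimal numerical illustration (with an arbitrary density of our own choosing): on {X=[0,1]^2} partitioned into the vertical segments {\zeta((x,y))=\{x\}\times[0,1]}, the conditional measures are the normalized slices of the density of {m}, and the disintegration identity {m(B)=\int_X m_x(B)\, dm(x)} can be verified directly.

```python
import numpy as np

# X = [0,1]^2 partitioned into vertical fibers {x} x [0,1]; m given by a density F
N = 500
grid = (np.arange(N) + 0.5) / N                # midpoint grid on [0,1]
X, Y = np.meshgrid(grid, grid, indexing="ij")
F = 1.0 + X * np.sin(3.0 * Y)                  # an arbitrary positive density
F /= F.mean()                                  # normalize so that m(X) = 1

B = (X + Y < 0.8)                              # a test set B

mB = np.where(B, F, 0.0).mean()                # m(B) computed directly

# disintegration: m_x has density F(x, .) / int_0^1 F(x, y) dy on its fiber
fiber_mass = F.mean(axis=1)                    # density of the projection of m
mxB = np.where(B, F, 0.0).mean(axis=1) / fiber_mass     # x -> m_x(B)
print(mB, (mxB * fiber_mass).mean())           # the two numbers coincide
```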

For our future applications, the probability measure {m} will live on a space with a partition given by pieces of orbits of a natural action of a Lie group, and {m} will be “invariant” under this Lie group. In this context, Rokhlin’s disintegration theorem alone will not suffice for our purposes because we will want to know that the conditional measures are essentially pieces of Haar measures. Here, it is worth pointing out that similar settings were already considered by other authors, but the exact statement (below) needed in our applications in the next posts of this series doesn’t seem to appear in the literature. So, we will close today’s post with the following variant of Rokhlin’s disintegration theorem.

Let {X} be a Polish space and let {G} be a Lie group. Denote by {\nu} a left-invariant Haar measure on {G} and by {d} a left-invariant metric on {G}.

Let {G} act on {X\times G} by {g(x,h)=(x,gh)} and denote by {\pi:X\times G\rightarrow X} the canonical projection. Let {Z\subset X\times G} be an open subset and let {m} be a probability measure on {Z}.

Note that {Z} comes with a natural measurable partition into fibers {Z_{(x,h)}=Z_x:=Z\cap\pi^{-1}(\{x\})=\{x\}\times U_x}. Thus, by Rokhlin’s disintegration theorem, {m} has a system of conditional measures {(m_x)}, where we can think of {m_x} as a probability measure on the fiber {U_x} of {Z} over {x}.

We will assume that {m} is invariant, that is, for all measurable {W\subset Z} and all {g\in G} such that {g(W)\subset Z}, we have {m(g(W))=m(W)}. In this context, we can assert that the conditional measures are pieces of Haar ({\nu}-)measure on {G}.

Proposition 8 If {m} is invariant, then, for {m_1=\pi_*(m)}-almost every {x\in X}, {U_x} has finite (Haar) {\nu}-measure and

\displaystyle m_x=\frac{1}{\nu(U_x)}\nu|_{U_x}

Proof: For {m_1}-a.e. {x\in X}, the conditional measure {m_x} is “invariant”. Intuitively, this follows from the uniqueness of the system of conditional measures and the invariance of {m}, but one has to take a little bit of care about the meaning of “invariant” for {m_x}.

For our purposes, we will consider the following notion. Let {D\subset G} be a countable dense subset of {G} and let {\mathcal{R}} be the countable set of balls of the left-invariant metric {d} on {G} centered at points in {D} with positive rational radius.

Since {D} and {\mathcal{R}} are countable, we can use the uniqueness of conditional measures and the invariance of {m} (plus the fact that countable unions of sets of zero measure have zero measure) to deduce that, for {m_1}-almost every {x\in X}, if {B\in\mathcal{R}} and {g\in D} satisfy {B\subset U_x} and {g(B)\subset U_x}, then {m_x(g(B))=m_x(B)}, i.e., {m_x} is “invariant” if we test the invariance properties exclusively with elements {B\in\mathcal{R}} and {g\in D} of the countable (dense) sets {\mathcal{R}} and {D}.

In any case, the fact that {\mathcal{R}} and {D} are dense is enough to deduce from this “weak” form of invariance that the {m_x}‘s are (normalized) Haar measures. Indeed, notice that it suffices to show that the {m_x}‘s are absolutely continuous with respect to {\nu}, because the “weak” invariance property is good enough to ensure that the density {dm_x/d\nu} is constant. Now, once we have reduced our task to the absolute continuity of {m_x}, we can proceed as follows. Given {B\in\mathcal{R}} with {B\subset U_x} and radius {r>0}, by the nice (homogeneity) properties of the Haar measure on a Lie group, we can find an integer {M\geq c r^{-\textrm{dim}(G)}} (with {c=c(U_x)} independent of {B}) and some elements {g_1,\dots, g_M\in D} such that the balls {g_i(B)} are mutually disjoint and contained in {U_x}. By the “weak” invariance of {m_x}, we obtain that

\displaystyle M\cdot m_x(B) = m_x\left(\bigcup\limits_{i=1}^M g_i(B)\right)\leq m_x(U_x)=1,

so that {m_x(B)\leq (1/c) r^{\textrm{dim}(G)}}, and, hence, {m_x} is absolutely continuous with respect to {\nu}. \Box
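
As a toy illustration of Proposition 8 (an example of ours with {G=(\mathbb{R},+)}, so that {\nu} is the Lebesgue measure), take {X=[0,1]}, {Z=\{(x,h):0<h<\varphi(x)\}} for a positive function {\varphi}, and let {m} be the normalized restriction of {dx\,dh} to {Z}. This {m} is invariant in the sense above, and the proposition predicts that {m_x} is the normalized Lebesgue measure on {U_x=(0,\varphi(x))}; the sketch below checks this on a thin slab of fibers:

```python
import numpy as np

rng = np.random.default_rng(3)
phi = lambda x: 1.0 + 0.5 * np.sin(2 * np.pi * x)   # fiber length (our choice)

# sample m = normalized restriction of dx dh to Z = {(x,h) : 0 < h < phi(x)}
# by rejection from the box [0,1] x [0, 1.5]
x = rng.uniform(0.0, 1.0, 2_000_000)
h = rng.uniform(0.0, 1.5, 2_000_000)
keep = h < phi(x)
x, h = x[keep], h[keep]

# conditional distribution of h on a thin slab around x = 0.3: it should be
# (nearly) uniform on U_x = (0, phi(0.3)), i.e., normalized Haar for G = (R,+)
slab = np.abs(x - 0.3) < 0.005
counts, _ = np.histogram(h[slab], bins=10, range=(0.0, phi(0.3)))
print(counts / counts.sum())                   # each entry close to 1/10
```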

