Posted by: matheuscmss | August 3, 2016

Yoccoz's proof of Jakobson's theorem II

Last time, we introduced the notion of regular parameter {c\in[-2,0)} of the quadratic family {P_c(x) = x^2+c} and we saw that the orbits of {P_c} have a nice statistical description when {c} is regular. In particular, this reduced our initial goal (of proving Jakobson’s theorem) to showing that regular parameters are abundant near {-2}, i.e.,

\displaystyle \lim\limits_{\varepsilon\rightarrow 0} \frac{\textrm{Leb}(\{c\in[-2,-2+\varepsilon]; c \textrm{ is regular}\})}{\varepsilon} = 1 \ \ \ \ \ (1)

As it turns out, Yoccoz’s proof of (1) is indirect: first, he introduces the notion of strongly regular parameter and he proves that strongly regular parameters are a special case of regular parameters; secondly, he exploits the nice features of strong regularity to transfer some key properties from the phase space {x\in\mathbb{R}} to the parameter space {c\in[-2,0)} in order to prove that

\displaystyle \lim\limits_{\varepsilon\rightarrow 0} \frac{\textrm{Leb}(\{c\in[-2,-2+\varepsilon]; c \textrm{ is strongly regular}\})}{\varepsilon} = 1 \ \ \ \ \ (2)

Today, we shall define strong regularity and prove the regularity of such parameters (while leaving the proof of (2) for the final post of this series).

1. Some preliminaries

1.1. Quick review of the regularity property

For {c\in [-2,0)}, {P_c(x):=x^2+c} has two fixed points {\alpha=\alpha(c)} and {\beta=\beta(c)} with {-\beta<\alpha<0}. Note that the critical value {c=P_c(0)} belongs to {[-\beta,\alpha)}.
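For concreteness, the fixed points can be written down explicitly by solving {x^2+c=x}:

\displaystyle \alpha(c) = \frac{1-\sqrt{1-4c}}{2} \quad \textrm{and} \quad \beta(c) = \frac{1+\sqrt{1-4c}}{2},

so that, e.g., {\alpha(-2)=-1} and {\beta(-2)=2}; a direct computation with these formulas confirms that {-\beta<\alpha<0} and {c=P_c(0)\in[-\beta,\alpha)} for every {c\in[-2,0)}.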

In a certain sense, the key idea is to study the dynamics of {P_c} via certain intervals (“Yoccoz puzzle pieces”) bounded by points in the pre-orbits of {\pm\alpha}.

For example, the notion of regular parameter was defined with the aid of the intervals {A:=[\alpha,-\alpha]} and {\widehat{A}:=[\alpha^{(1)},-\alpha^{(1)}]} where {\alpha^{(1)}=\alpha^{(1)}(c)\in (-\beta,\alpha)} is given by {P_c(\alpha^{(1)}) = -\alpha}. Indeed, {c\in [-2,0)} is regular if there are {C>0} and {\theta>0} such that

\displaystyle \textrm{Leb}(\{x\in A: x \textrm{ is not } n\textrm{-regular}\})\leq C e^{-\theta n}

for all {n\in\mathbb{N}}. Here, {x} is called {n}-regular if there are {0<m\leq n} and an interval {\widehat{J}\ni x} such that {P_c^m} sends {\widehat{J}} diffeomorphically onto {\widehat{A}} in such a way that {P_c^m(x)\in A}. For later use, we denote by {g_J} the corresponding inverse branch of {P_c^m}, i.e., the inverse of the restriction of {P_c^m} to {\widehat{J}} (so that {g_J} maps {\widehat{A}} diffeomorphically onto {\widehat{J}}).

In general, any {n}-regular point belongs to a regular interval of order {m\leq n}, that is, an interval {J\subset [-\beta,\beta]} possessing an open neighborhood {\widehat{J}\supset J} such that {P_c^m} sends {\widehat{J}} diffeomorphically onto {\widehat{A}} in such a way that {P_c^m(J)=A}. In other words, the set of {n}-regular points is the union of regular intervals of orders {\leq n}.

It is easy to describe regular intervals (“Yoccoz puzzle pieces”) in terms of the pre-orbits of {\pm\alpha}. In fact, denote by {\Delta_n=\Delta_n(c):=P_c^{-n-1}(\{\alpha\})} (so that {\Delta_0=\{\alpha,-\alpha\}} and {\Delta_1=\{\pm\alpha,\pm\alpha^{(1)}\}}). It is not difficult to check that if {J=[\gamma^-,\gamma^+]} is a regular interval of order {m} and {\widehat{J}=(\widehat{\gamma}^-,\widehat{\gamma}^+)} is the associated neighborhood, then {\gamma^-<\gamma^+} are consecutive points of {\Delta_m} and {\widehat{\gamma}^-<\gamma^-<\gamma^+<\widehat{\gamma}^+} are consecutive points of {\Delta_{m+1}}.
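Indeed, since {P_c^m(\gamma^{\pm})\in\{\alpha,-\alpha\}=P_c^{-1}(\{\alpha\})}, we have {\gamma^{\pm}\in P_c^{-m-1}(\{\alpha\})=\Delta_m}; similarly, {P_c^m(\widehat{\gamma}^{\pm})\in\{\alpha^{(1)},-\alpha^{(1)}\}\subset P_c^{-2}(\{\alpha\})}, so that {\widehat{\gamma}^{\pm}\in P_c^{-m-2}(\{\alpha\})=\Delta_{m+1}} (the fact that these points are consecutive in {\Delta_m}, resp. {\Delta_{m+1}}, reflects the injectivity of {P_c^m} on {\widehat{J}}).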

1.2. Dynamically meaningful partition of the parameter space

For later use, we organize the parameter space {[-2,0)} as follows. For each {M\in\mathbb{N}}, we consider the maximal open interval {(c^{(M)}, c^{(M-1)})} of parameters such that, for all {c\in(c^{(M)}, c^{(M-1)})}, the first return of the orbit of {0} to {A} under {P_c} occurs at time {M}, i.e., {P_c^M(0)} is the first return of {0} to {A}.

In analytical terms, we can describe the sequence {c^{(m)}} as follows. For {c\in [-2,0)}, set {\alpha^{(0)} = \alpha^{(0)}(c):=\alpha} and, for {m>0}, define recursively {\alpha^{(m)}=\alpha^{(m)}(c)} by

\displaystyle P_c(\alpha^{(m)})=-\alpha^{(m-1)}, \quad \alpha^{(m)} < 0

In these terms, {c^{(m)}} is the solution of the equation {c=P_c(0)=\alpha^{(m-1)}(c)}.

Remark 1 From this analytical definition of {c^{(m)}}, one can show inductively that {P_c^M(0)\in A} for {c\in (c^{(M)},c^{(M-1)})} along the following lines. By definition, {\frac{\partial\alpha^{(m)}}{\partial c} = -\frac{1}{2\alpha^{(m)}} (1+\frac{\partial\alpha^{(m-1)}}{\partial c})}. This inductive relation can be exploited to give that {1/3\leq \frac{\partial\alpha^{(m)}(c)}{\partial c}\leq 1/2} for all {m\in\mathbb{N}} and {c\in [-2,-3/2]}.
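For instance, the inductive step can be checked along these lines: for {m\geq 1} and {c\in[-2,-3/2]} one has {3/2\leq|\alpha^{(m)}|<\beta\leq 2} (indeed {|\alpha^{(1)}|^2=-\alpha-c\geq 9/4} and {|\alpha^{(m)}|} increases with {m} while staying below {\beta}), so that

\displaystyle \frac{1}{4}\leq -\frac{1}{2\alpha^{(m)}}\leq \frac{1}{3},

and hence {1/3\leq \frac{\partial\alpha^{(m)}}{\partial c}\leq 1/2} as soon as {1/3\leq \frac{\partial\alpha^{(m-1)}}{\partial c}\leq 1/2}; the base case follows from {\frac{\partial\alpha}{\partial c}=\frac{1}{1-2\alpha}\in[1/3,1/2]} for {c\in[-2,-3/2]}.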

This estimate allows us to show that the function {c\mapsto P_c(0) - \alpha^{(m-1)}(c)} has derivative between {1/2} and {2/3} for {c\in[-2,-3/2]}. Since this function takes a negative value {-2-\alpha^{(m-1)}(-2)<0} at {c=-2} and a positive value {\alpha^{(m-2)}(c^{(m-1)})-\alpha^{(m-1)}(c^{(m-1)})>0} at {c=c^{(m-1)}}, we see that this function has a unique (simple) zero {c^{(m)}\in (-2, c^{(m-1)})}, and {P_c(0)\in [-2,\alpha^{(m-1)}(c)]} for {c\in [-2,c^{(m)}]}, as desired.

Remark 2 Note that {c^{(m)}} is a decreasing sequence such that {\frac{1}{C 4^m}\leq c^{(m)}+2\leq \frac{C}{4^m}} for some universal constant {C>0}. Indeed, the function {c\mapsto P_c(0)-\alpha^{(m-1)}(c)} takes the value {-2-\alpha^{(m-1)}(-2)=-4\sin^2\frac{\pi}{3 2^m}} at {c=-2} (cf. Subsection 4.2 of the previous post), it vanishes at {c=c^{(m)}}, and it has derivative between {1/2} and {2/3}, so that {1/1000\leq 4^m(c^{(m)}+2)\leq 1000}.
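In fact, keeping track of the constants in this argument, one gets

\displaystyle c^{(m)}+2\in\left[6\sin^2\frac{\pi}{3\cdot 2^m},\, 8\sin^2\frac{\pi}{3\cdot 2^m}\right]\subset\left[\frac{8}{3}\cdot 4^{-m},\, \frac{8\pi^2}{9}\cdot 4^{-m}\right],

since the zero {c^{(m)}} of a function with derivative in {[1/2,2/3]} and value {-4\sin^2\frac{\pi}{3\cdot 2^m}} at {c=-2} lies at distance between {\frac{3}{2}} and {2} times {4\sin^2\frac{\pi}{3\cdot 2^m}} from {-2} (here we used {\frac{2x}{\pi}\leq\sin x\leq x} for {x\in[0,\pi/2]}).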

From now on, we think of {c\in (c^{(M)}, c^{(M-1)})} where {M} is a large integer.

2. Strong regularity

Given {c\in (c^{(M)}, c^{(M-1)})}, let {\mathcal{J}} be the collection of maximal regular intervals of positive order contained in {A} and consider

\displaystyle W := \bigcup\limits_{J\in\mathcal{J}} \textrm{int}(J),

{N:W\rightarrow\mathbb{N}} the function given by {N(x)=\textrm{order}(J)} for {x\in J}, and {T:W\rightarrow A} the map {T(x)=P_c^{N(x)}(x)} ({=P_c^{\textrm{order of } J}(x)} for {x\in J}): cf. Subsection 4.3.3 of the previous post.

Remark 3 Even though {0} is not contained in any element of {\mathcal{J}}, we set {N(0)=M} and {T(0)=P_c^M(0)} for {c\in (c^{(M)}, c^{(M-1)})}.

The elements of {\mathcal{J}} of “small” orders are not hard to determine. Given {1\leq n < M}, define {\widetilde{\alpha}^{(n)}\in\Delta_n} by:

\displaystyle P_c(\widetilde{\alpha}^{(n)}) = \alpha^{(n-1)}, \quad \widetilde{\alpha}^{(n)} < 0

It is not difficult to check that the sole elements of {\mathcal{J}} of order {1<n<M-1} are the intervals

\displaystyle C_n^+ := [\widetilde{\alpha}^{(n-1)}, \widetilde{\alpha}^{(n)}] \quad \textrm{and} \quad C_n^- := [-\widetilde{\alpha}^{(n)}, -\widetilde{\alpha}^{(n-1)}]

and, furthermore, any other element of {\mathcal{J}} has order {>M}.

The intervals {C_n^{\pm}}, {1<n<M-1}, are called simple regular intervals: this terminology reflects the fact that they are the most “basic” type of regular intervals.
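As a quick sanity check that {C_n^+} has order {n}, note that the relations {P_c(\alpha^{(m)})=-\alpha^{(m-1)}}, {P_c(-y)=P_c(y)} and {P_c(-\alpha)=\alpha} give {P_c^{m}(\alpha^{(m)})=-\alpha} for {m\geq 1} and {P_c^{m+1}(\alpha^{(m)})=\alpha} for {m\geq 0}, so that

\displaystyle P_c^n(\widetilde{\alpha}^{(n)})=P_c^{n-1}(\alpha^{(n-1)})=-\alpha \quad \textrm{and} \quad P_c^n(\widetilde{\alpha}^{(n-1)})=P_c^{n-1}(\alpha^{(n-2)})=\alpha,

i.e., {P_c^n} sends the endpoints of {C_n^+} to the endpoints of {A} (of course, one still has to check that {P_c^n} extends to a diffeomorphism from a neighborhood of {C_n^+} onto {\widehat{A}}, which we will not detail here).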

In this setting, a parameter {c} is strongly regular if “most” of the returns of {\{P_c^j(0)\}_{j\in\mathbb{N}}} to {W} occur on simple regular intervals:

Definition 1 We say that {c\in (c^{(M)}, c^{(M-1)})} is strongly regular up to level {K} if {P_c^M(0):=T(0)\in \bigcap\limits_{k=0}^{K-1} T^{-k}(W)} and, for each {1\leq k\leq K}, one has

\displaystyle \sum\limits_{\substack{0<\ell\leq k \\ N(T^{\ell}(0))>M}} N(T^{\ell}(0)) \leq 2^{-\sqrt{M}} \sum\limits_{\ell=1}^k N(T^{\ell}(0)) \ \ \ \ \ (3)

A parameter {c} is called strongly regular if it is strongly regular up to level {K} for all {K\in\mathbb{N}}.

Remark 4 Let {c\in (c^{(M)}, c^{(M-1)})} be a strongly regular parameter. It takes a while before {\{P_c^j(0)\}_{j\in\mathbb{N}}} encounters a non-simple regular interval: if {N(T^k(0))>M} (or, equivalently, {T^k(0)\in (\widetilde{\alpha}^{(M-2)}, -\widetilde{\alpha}^{(M-2)})}), then (3) implies that

\displaystyle N(T^k(0))\leq \frac{2^{-\sqrt{M}}}{1-2^{-\sqrt{M}}}(N_k-M)

where {N_{k+1}:=\sum\limits_{\ell=0}^{k} N(T^{\ell}(0)) = M + \sum\limits_{\ell=1}^k N(T^{\ell}(0))}. In particular, {N_{k+1}\geq 2^{\sqrt{M}} M}, so that the first {2^{\sqrt{M}} M} iterates of {0} encounter {A} exclusively at simple regular intervals.
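Indeed, for any such {k}, combining {N(T^k(0))>M} with the displayed inequality gives

\displaystyle M< \frac{2^{-\sqrt{M}}}{1-2^{-\sqrt{M}}}(N_k-M), \quad \textrm{i.e.,} \quad N_k>M+(2^{\sqrt{M}}-1)M=2^{\sqrt{M}}M,

so that a return of the critical orbit through a non-simple regular interval can only occur after time {2^{\sqrt{M}}M}.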

3. Regularity of strongly regular parameters

Let us now outline the proof of the fact that strongly regular parameters are regular.

3.1. Singular intervals

Given {n>1}, we say that an interval {J\subset A} is {n}-singular if its boundary {\partial J} consists of two consecutive points of {\Delta_n}, but {J} is not contained in a regular interval of order {\leq n}. The collection of {n}-singular intervals is denoted by {\mathcal{E}(n)}.

Remark 5 For {c\in (c^{(M)}, c^{(M-1)})} and {2\leq n\leq M-2}, there is only one {n}-singular interval, namely {[\widetilde{\alpha}^{(n)}, -\widetilde{\alpha}^{(n)}]}. For {n=M-1} and {M}, there are exactly three {n}-singular intervals, namely {C_{M-1}^{\pm}} and {[\widetilde{\alpha}^{(M-1)}, -\widetilde{\alpha}^{(M-1)}]}.
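Part of this remark can be seen directly: a regular interval of order {m} never contains the critical point {0} (because {P_c^m} is a diffeomorphism on a neighborhood of it), so the component of {A-\Delta_n} containing {0} is always {n}-singular; moreover, for {2\leq n\leq M-2}, an interval bounded by consecutive points of {\Delta_n} and disjoint from {(\widetilde{\alpha}^{(n)}, -\widetilde{\alpha}^{(n)})} cannot cross the points {\pm\widetilde{\alpha}^{(j)}\in\Delta_j\subset\Delta_n}, {2\leq j\leq n}, so it is contained in one of the simple regular intervals {C_j^{\pm}} and, hence, it is not {n}-singular.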

By definition, {\{x\in A: x \textrm{ is not } n\textrm{-regular}\} = \bigcup\limits_{J\in\mathcal{E}(n)} J}.

For later reference, denote {E(n):=\textrm{Leb}(\{x\in A: x \textrm{ is not } n\textrm{-regular}\}) = \textrm{Leb}(\bigcup\limits_{J\in\mathcal{E}(n)} J)}. In these terms, {c} is regular whenever there are {C, \theta>0} such that

\displaystyle E(n)\leq C e^{-\theta n} \quad \forall \, n\in\mathbb{N}

As a “warm-up”, let us show the following elementary fact:

Proposition 2 One has

\displaystyle E(n)\leq 3\cdot 2^{-n/2}

for all {c\in (c^{(M)}, c^{(M-1)})} and {2\leq n\leq 2M-9}.

Proof: For {2\leq n\leq M-2}, we have that {\mathcal{E}(n)=\{[\widetilde{\alpha}^{(n)}, -\widetilde{\alpha}^{(n)}]\}} is a singleton (cf. Remark 5). Moreover,

\displaystyle E(n) = 4\sin\frac{\pi}{3\cdot 2^n}\leq \frac{4\pi}{3}\cdot 2^{-n}\leq 3\cdot 2^{-n/2}

when {c=-2} (cf. Subsection 4.2 of the previous post). Since the function {c\mapsto P_c(0)-\alpha^{(n-1)}(c)} is increasing on {[-2,-3/2]}, we get the desired estimate for {2\leq n\leq M-2}.
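More precisely, for {c\in(c^{(M)}, c^{(M-1)})} and {2\leq n\leq M-2} one has

\displaystyle E(n)=2|\widetilde{\alpha}^{(n)}|=2\sqrt{\alpha^{(n-1)}(c)-c},

and, since {c\mapsto\alpha^{(n-1)}(c)-c} is decreasing (its derivative lies in {[-2/3,-1/2]} by Remark 1), the quantity {\alpha^{(n-1)}(c)-c} is maximal at {c=-2}, where it equals {4\sin^2\frac{\pi}{3\cdot 2^n}}; this yields the bound above for every such {c}.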

On the other hand, if {M-1\leq n\leq 2M-9}, then

\displaystyle E(n)\leq E(M-2)\leq \frac{16\pi}{3}\cdot 2^{-M}\leq 2^{9/2-M}\leq 3\cdot 2^{-n/2}

(here the first inequality holds because {E} is non-increasing and the last one because {n\leq 2M-9}). This completes the proof of the proposition. \Box

3.2. Central, peripheral and lateral intervals

The analysis of {E(n)} for {n>2M-9} requires the introduction of certain (combinatorially defined) neighborhoods of the critical point {0} and the critical value {c=P_c(0)}.

Assume that {T(0)\in\bigcap\limits_{0\leq k<K} T^{-k}(W)} for some {K\in\mathbb{N}}. For each {1\leq k\leq K}, let {J(k)\in\mathcal{J}} be the element such that {T^k(0)\in \textrm{int} \, J(k)}.

Denote by {B(k)} the decreasing sequence of regular intervals containing the critical value defined recursively as follows: {B(1):=[\alpha^{(M-1)}, \alpha^{(M-2)}]} is a regular interval of order {M-1} and {B(k+1)} is the regular interval of order {N_{k+1}-1} (where {N_{k+1}:= \sum\limits_{\ell=0}^{k} N(T^{\ell}(0))}) determined by its inverse branch

\displaystyle g_{B(k+1)} = g_{B(k)}\circ g_{J(k)}

Also, let us consider {A(k):=P_c^{-1}(B(k))} and {\breve{A}(k):=P_c^{-1}(\breve{B}(k))}. Here, if {J} is a regular interval, then {\breve{J} := g_J(\breve{A})}, where {A\subset\breve{A}:= (\widehat{\alpha}^{(3)}, -\widehat{\alpha}^{(3)})\subset\widehat{A}} and {\widehat{\alpha}^{(0)} = -\alpha}, {\widehat{\alpha}^{(1)} = \alpha^{(1)}}, {\widehat{\alpha}^{(2)} = \widetilde{\alpha}^{(2)}} and, in general, {\widehat{\alpha}^{(n+1)}<0} is determined by {P_c(\widehat{\alpha}^{(n+1)})=\widehat{\alpha}^{(n)}}.

Remark 6 By definition, if {J} is a regular interval of order {n}, then the endpoints of {\breve{J}} are the points of {\Delta_{n+3}} immediately adjacent to the endpoints of {J}.
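For instance, the membership of the endpoints of {\breve{J}} in {\Delta_{n+3}} can be checked as follows: {P_c^3(\widehat{\alpha}^{(3)})=\widehat{\alpha}^{(0)}=-\alpha} and {P_c^4(\pm\widehat{\alpha}^{(3)})=P_c(-\alpha)=\alpha}, so {\pm\widehat{\alpha}^{(3)}\in P_c^{-4}(\{\alpha\})=\Delta_3}; since {g_J} is a branch of {P_c^{-n}}, the endpoints {g_J(\pm\widehat{\alpha}^{(3)})} of {\breve{J}=g_J(\breve{A})} belong to {P_c^{-n-4}(\{\alpha\})=\Delta_{n+3}}.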

Note that {A(k)\in \mathcal{E}(N_k)} is the connected component of {A-\Delta_{N_k}} containing the critical point {0}, while the endpoints of {\breve{A}(k)} are adjacent in {\Delta_{N_k+3}} to the endpoints of {A(k)}. Here (and in the sequel), {N_k:=\sum\limits_{\ell=0}^{k-1} N(T^{\ell}(0))}.

We say that an interval {J\in\mathcal{E}(n)} is central, lateral or peripheral depending on its relative position with respect to the intervals {\breve{A}(K)} and {\breve{A}(1)}:

Definition 3 Let {c\in (c^{(M)}, c^{(M-1)})} be a strongly regular parameter up to a level {K} such that {N_K+3\leq n < N_{K+1}+3}. An interval {J\in\mathcal{E}(n)} is called:

  • central whenever {J\subset\breve{A}(K)};
  • lateral if {J\not\subset\breve{A}(K)} but {\textrm{int}\, J \cap\textrm{int}\, \breve{A}(1)\neq\emptyset};
  • peripheral if {\textrm{int}\, J \cap\textrm{int}\, \breve{A}(1)=\emptyset}.

3.3. Measure estimate for central intervals

We shall control the total measure of central intervals by estimating the Lebesgue measure {|\breve{A}(k)|} of {\breve{A}(k)}:

Proposition 4 Let {c\in (c^{(M)}, c^{(M-1)})} be a strongly regular parameter up to level {K}. Then,

\displaystyle \left|\log|\breve{A}(k)|+\frac{1}{2}(N_k+M)\log 2\right| \leq C N_k/M

for all {1\leq k\leq K}.

Proof: {\breve{A}(k)} is the neighborhood of the critical point {0} of the quadratic map {P_c} defined by {\breve{A}(k) = P_c^{-1}(\breve{B}(k))}. Therefore, {|\breve{A}(k)|} is comparable to {|\breve{B}(k)|^{1/2}}:

\displaystyle C^{-1} |\breve{B}(k)|^{1/2} \leq |\breve{A}(k)| \leq C |\breve{B}(k)|^{1/2}

By the usual distortion estimates (cf. Subsection 4.3 of the previous post), it is possible to check that {|\breve{B}(k)|} is comparable to {|B(k)|} (the reverse inequality {|B(k)|\leq|\breve{B}(k)|} being obvious from {B(k)\subset\breve{B}(k)}):

\displaystyle |\breve{B}(k)|\leq C |B(k)|

This reduces our task to estimating {|B(k)|}. Since the interval {A} has a fixed size and {|B(k)|/|A| = |Dg_{B(k)}(x)|} for some {x\in A} (as {g_{B(k)}(A)=B(k)}), it suffices to control {|Dg_{B(k)}(x)|} for {x\in A}. For this sake, we recall that the derivative of {P_c} is not far from a “coboundary”:

\displaystyle |DP_c(x)| = 2\frac{h_c(x)}{h_c(P_c(x))}\sqrt{\frac{1}{1+(\beta+c)/x^2}}

where {h_c(x) = 1/\sqrt{\beta^2-x^2}} (see Proposition 6 of the previous post for a motivation of {h_c} in the case {c=-2}). In particular,

\displaystyle \left|\log |DP_c^n(x)| -\log\left(2^n\frac{h_c(x)}{h_c(P_c^n(x))}\right)\right| \leq \left\{ \begin{array}{cc} C n 4^{-M} & \textrm{for } n>0, x\in[\alpha^{(n)}, \alpha^{(n-1)}] \\ C 4^{n-M} & \textrm{for }1<n<M-1, x\in C_n^{\pm}\end{array} \right.
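For the reader's convenience, let us sketch why the “coboundary” identity above holds: from the fixed point relation {\beta^2+c=\beta}, one gets

\displaystyle \beta-P_c(x)=\beta-c-x^2=\beta^2-x^2 \quad \textrm{and} \quad \beta+P_c(x)=x^2\left(1+\frac{\beta+c}{x^2}\right),

so that

\displaystyle \frac{h_c(x)}{h_c(P_c(x))}=\frac{\sqrt{\beta^2-P_c(x)^2}}{\sqrt{\beta^2-x^2}}=|x|\sqrt{1+\frac{\beta+c}{x^2}},

which is precisely the identity {|DP_c(x)|=2|x|} in disguise. Moreover, {\beta+c} is comparable to {c+2\sim 4^{-M}} for {c\in(c^{(M)}, c^{(M-1)})} (cf. Remark 2), so that iterating this identity along orbits produces error terms of size {O(4^{-M})} per iterate (as long as the orbit stays away from {0}), which is the content of the estimate displayed above.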

By exploiting this estimate, one can show (with a one-page long argument) that the strong regularity up to level {K} of {c\in (c^{(M)}, c^{(M-1)})} implies a (strong form of) Collet-Eckmann condition:

\displaystyle \left|\log|Dg_{B(k)}(x)| + (N_k-1)\log 2 +\log\frac{h_c(g_{B(k)}(x))}{h_c(x)} \right|\leq C N_k/M

for all {1\leq k\leq K+1} and {x\in A}. Because {C^{-1}\leq h_c(x)\leq C} for {x\in A} and {C^{-1} 2^M\leq h_c(y)\leq C 2^M} for {y\in B(k)}, this gives {\left|\log|Dg_{B(k)}(x)|+(N_k+M)\log 2\right|\leq C N_k/M} (recall that {N_k\geq M}), so that {|B(k)|=|A|\cdot|Dg_{B(k)}(x_0)|} for some {x_0\in A} and {|\breve{A}(k)|\asymp |B(k)|^{1/2}} have the sizes announced in the statement of the proposition. \Box

Corollary 5 If {c\in (c^{(M)}, c^{(M-1)})} is strongly regular up to level {K} and {N_K+3\leq n<N_{K+1}+3}, then

\displaystyle E_{cent}(n):=\textrm{Leb}\left(\bigcup\limits_{\substack{J\in\mathcal{E}(n) \\ J \textrm{ central }}} J\right) \leq |\breve{A}(K)| \leq 2^{-(N_K+M)/2} \cdot 2^{C N_K/M}

3.4. Measure estimates for peripheral intervals

We control the total measure of peripheral intervals by relating them to singular intervals of lower order. More concretely, a (half-page long) combinatorial argument provides the following structure result for the generation of peripheral intervals:

Proposition 6 (Structure of peripheral intervals) Let {c\in (c^{(M)}, c^{(M-1)})} be strongly regular up to level {K} and consider {N_K+3\leq n\leq N_{K+1}+3}. If {J\in\mathcal{E}(n)} is a peripheral interval, then:

  • either {J^+:=P_c(J)} has the form {J^+=g_{B_0}(J^*)} for some {J^*\in\mathcal{E}(n-M+1)},
  • or {J^+=P_c(J)} has the form {J^+=g_{B_0}(g_{C_2^-}(J^*))} for some {J^*\in\mathcal{E}(n-M-1)},

where {B_0:=[\alpha^{(M-2)}, \alpha^{(M-3)}]} is a regular interval of order {M-2}.

Corollary 7 Let {c\in (c^{(M)}, c^{(M-1)})} be strongly regular up to level {K} and fix {N_K+3\leq n < N_{K+1}+3}. Then, the total measure of peripheral {n}-singular intervals is

\displaystyle E_{periph}(n):= \textrm{Leb}\left(\bigcup\limits_{\substack{J\in\mathcal{E}(n) \\ J \textrm{ peripheral }}} J \right) \leq C 2^{-M} E(n-M-1)

Proof: A point {x\in J} in a peripheral interval {J} is not close to {0}: indeed, {|x|\geq |\breve{A}(1)|/2} (by definition) and {|\breve{A}(1)|\geq C^{-1} 2^{-M}} (by Proposition 4 with {k=1}, since {N_1=M}). Hence,

\displaystyle |J^+|=\int_J 2|x| dx \geq C^{-1} 2^{-M} |J|

The previous proposition says that if {J} is a peripheral interval, then {J^+=P_c(J)} has the form {g_{B_0}(J^*)} with {J^*\in\mathcal{E}(n-M+1)} or {g_{B_0}(g_{C_2^-}(J^*))} with {J^*\in\mathcal{E}(n-M-1)}. From the fact that the derivative of {P_c} is an “almost coboundary” (cf. the proof of Proposition 4 above), one can show that:

  • {|Dg_{B_0}(P_c^{M-2}(x))|\leq C 4^{-M}} when {x\in J^+=g_{B_0}(J^*)};
  • {|D(g_{B_0}\circ g_{C_2^-})(P_c^M(x))|\leq C 4^{-M}} when {x\in J^+=g_{B_0}(g_{C_2^-}(J^*))}.

Therefore, {|J^+|\leq C 4^{-M}|J^*|} for some {J^*\in\mathcal{E}(n-M+1)} or {\mathcal{E}(n-M-1)} and, a fortiori,

\displaystyle |J|\leq C 2^M |J^+|\leq C 2^{-M}|J^*|

for some {J^*\in\mathcal{E}(n-M+1)} or {\mathcal{E}(n-M-1)}.

It follows (using that {E} is non-increasing) that

\displaystyle E_{periph}(n)\leq C 2^{-M} (E(n-M+1)+E(n-M-1))\leq C 2^{-M} E(n-M-1),

so that the proof of the corollary is complete. \Box

3.5. Measure estimates for lateral intervals

The analysis of lateral intervals is combinatorially more involved. For this reason, we subdivide the class of lateral intervals into stationary and non-stationary:

Definition 8 Let {c\in (c^{(M)}, c^{(M-1)})} be strongly regular up to level {K}, fix {N_K+3\leq n < N_{K+1}+3} and consider {J\in\mathcal{E}(n)} a lateral interval. For each {1\leq k\leq K}, either {J\subset\breve{A}(k)} or {J\cap\textrm{int}\,\breve{A}(k)=\emptyset} (because {\partial\breve{A}(k)\subset \Delta_{N_k+3}}). The level {k=k(J)} of {J} is the largest integer such that {J\subset\breve{A}(k)}. (Note that {1\leq k<K} and {J\cap\textrm{int}\,\breve{A}(k+1)=\emptyset}.)

We say that the level {k} is stationary if {A(k)=A(k+1)}.

The strategy to control the total measure of lateral intervals is similar to the argument used for peripheral intervals: we want to exploit structure results describing the construction of lateral intervals out of singular intervals of lower orders. As it turns out, the case of lateral intervals with stationary level is somewhat easier (from the combinatorial point of view) and, for this reason, we start by treating this case.

For later use, we denote

\displaystyle E_{lat}(n,k):=\textrm{Leb}\left(\bigcup\limits_{\substack{J\in\mathcal{E}(n) \\ J \textrm{ lateral of level } k}} J\right)

3.5.1 Lateral intervals with stationary levels

Let {D^+:=[\widehat{\alpha}^{(3)}, \alpha]} (a regular interval of order {3}) and {D^-:=-D^+}. The structure of lateral intervals with stationary levels is given by the following proposition:

Proposition 9 Let {J\in\mathcal{E}(n)} be a lateral interval with stationary level {k}. Suppose that {g_{B(k)}} reverses the orientation. Then, {J^+:=P_c(J)} has the form {g_{B(k)}\circ g_{D^+}(J^*)} for some {J^*\in\mathcal{E}(n-N_k-3)} contained in {[\widetilde{\alpha}^{(2)}, -\alpha]}. A similar statement (with {D^-} replacing {D^+}) holds when {g_{B(k)}} preserves orientation.

The proof of this proposition is short, but we omit it for the sake of discussing the total measure of lateral intervals with stationary level.

Corollary 10 Assume that {k} is a stationary level. Then,

\displaystyle E_{lat}(n,k)\leq C 2^{-(N_k+M)/2} 2^{C N_k/M} E(n-N_k-3)

Proof: The argument is very close to the case of peripheral intervals (i.e., Corollary 7). In fact, if {J\in\mathcal{E}(n)} is a lateral interval with stationary level {k} and {g_{B(k)}} reverses the orientation, then {P_c(J) = g_{B(k)}\circ g_{D^+}(J^*)} for some {J^*\in\mathcal{E}(n-N_k-3)} with {J^*\subset [\widetilde{\alpha}^{(2)}, -\alpha]}. Note that {J^*\subset [\widetilde{\alpha}^{(2)}, -\alpha]} implies that {g_{D^+}(J^*)\subset [\widehat{\alpha}^{(3)}, \widehat{\alpha}^{(5)}]}. Since {P_c^{N_k}(0)\in [\alpha,\widetilde{\alpha}^{(2)}]}, the usual bounded distortion properties say that

\displaystyle x+\beta\geq x-P_c(0)\geq C^{-1}|B(k)|

for any {x\in P_c(J)} and, a fortiori,

\displaystyle |J|\leq C|B(k)|^{1/2}|J^*|
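Roughly speaking, this last bound is obtained as follows: one has {|Dg_{D^+}|\leq C} and, by bounded distortion, {|Dg_{B(k)}|\leq C|B(k)|/|A|} on the relevant region, so that {|P_c(J)|\leq C|B(k)|\,|J^*|}; on the other hand, for {x\in J} the previous inequality gives {2|x|=2\sqrt{P_c(x)-P_c(0)}\geq C^{-1}|B(k)|^{1/2}}, so that

\displaystyle |J|\leq C\frac{|P_c(J)|}{|B(k)|^{1/2}}\leq C|B(k)|^{1/2}|J^*|.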

This completes the proof of the corollary because {|B(k)|\leq C 2^{-(N_k+M)} 2^{C N_k/M}} (cf. the proof of Proposition 4). \Box

3.5.2 Lateral intervals with non-stationary levels

The structure of lateral intervals with non-stationary levels is the following:

Proposition 11 Let {J\in\mathcal{E}(n)} be a lateral interval with non-stationary level {k}. Suppose that {g_{B(k)}} reverses the orientation. Then,

  • either {P_c(J) = g_{B(k)}(J^*)} for some {J^*\in\mathcal{E}(n-N_k)},
  • or {P_c(J) = g_{B(k)}\circ g_{J_0}(J^*)} where {J_0} is a regular interval of order {2\leq n_0\leq \textrm{order}(J(k))+1} (with right endpoint immediately to the left of {P_c^{N_k}(0)} in {\Delta_{n_0}}) and {J^*\in\mathcal{E}(n-N_k-n_0)}.

A similar statement holds when {g_{B(k)}} preserves orientation.

By exploiting this structure result in a similar way to the arguments in Corollaries 7 and 10, one can show (with a half-page long proof) the following estimate:

Corollary 12 Assume that {k} is a non-stationary level. Then,

\displaystyle E_{lat}(n,k)\leq C\cdot\textrm{order}(J(k)) \cdot 2^{\textrm{order}(J(k))} 2^{-(N_k+M)/2} 2^{C N_k/M} \cdot E(n-N_{k+1}-1)

Moreover, if {\textrm{order}(J(k))<M}, then one has a better estimate:

\displaystyle E_{lat}(n,k)\leq C\cdot 2^{-\frac{N_k+M}{2}+C \frac{N_k}{M}} \cdot \left(2^{\frac{\textrm{order}(J(k))}{2}} E(n-N_k) + 2^{-\frac{\textrm{order}(J(k))}{2}}E(n-N_{k+1}-1)\right)

3.6. Proof of regularity of strongly regular parameters

The measure estimates developed in the last three subsections allow us to establish the main result of this post, namely:

Theorem 13 Fix {0<\theta<1/2}. If {M} is large enough depending on {\theta} (i.e., {M\geq M_0(\theta)}), then for any {c\in (c^{(M)}, c^{(M-1)})} strongly regular up to level {K} and any {1<n<N_{K+1}+3} we have

\displaystyle \textrm{Leb}(\{x\in A: x \textrm{ is not } n\textrm{-regular}\}):=E(n)\leq 2^{-\theta n}

In particular, any strongly regular parameter {c} is regular.

Proof: We will prove this theorem by induction on {n}. The initial cases {1<n<2M-8} were already established in Proposition 2.

Suppose that {2M-8\leq n<N_{K+1}+3} and {E(m)\leq 2^{-\theta m}} for all {1<m<n}. Replacing {K} by a smaller integer (if necessary), we can assume that {N_K+3\leq n<N_{K+1}+3}.

By definition of central, lateral and peripheral intervals,

\displaystyle E(n) = E_{cent}(n) + E_{periph}(n) + \sum\limits_{k \textrm{ stationary level}} E_{lat}(n,k) + \sum\limits_{k \textrm{ non-stationary level}} E_{lat}(n,k) \ \ \ \ \ (4)

By Corollary 5, the first term satisfies:

\displaystyle E_{cent}(n)\leq |\breve{A}(K)|\leq 2^{-(N_K+M)/2} 2^{C N_K/M}

The right hand side is at most {\frac{1}{4} 2^{-\theta(N_{K+1}+2)}\leq \frac{1}{4} 2^{-\theta n}} whenever

\displaystyle 2+\theta(N_{K+1}+2)\leq \frac{N_K+M}{2} - C \frac{N_K}{M},

i.e.,

\displaystyle \theta(N_{K+1}-N_K)\leq \left(\frac{1}{2}-\theta-\frac{C}{M}\right) N_K + \left(\frac{M}{2} - 2 - 2\theta\right)

We have two possibilities (recall from Section 2 that every element of {\mathcal{J}} has order at most {M-2} or larger than {M}, so these two cases are exhaustive):

  • if {N_{K+1}-N_K\leq M-2}, then {\theta(N_{K+1}-N_K)\leq \frac{M}{2}-2-2\theta} for {M} large enough (e.g., {M\geq 2/(1/2-\theta)});
  • if {N_{K+1}-N_K>M}, we have {\theta(N_{K+1}-N_K)\leq \left(\frac{1}{2}-\theta-\frac{C}{M}\right) N_K} for {M} large enough thanks to the strong regularity assumption (cf. Remark 4).

In any event, we showed that

\displaystyle E_{cent}(n)\leq \frac{1}{4} 2^{-\theta n} \ \ \ \ \ (5)

when {M} is large enough.

The contribution of peripheral intervals is controlled by induction hypothesis. More precisely, by Corollary 7, one has

\displaystyle E_{periph}(n)\leq C 2^{-M} E(n-M-1)

By induction hypothesis, we conclude that

\displaystyle E_{periph}(n)\leq C 2^{-M} 2^{-\theta(n-M-1)}\leq \frac{1}{4} 2^{-\theta n} \ \ \ \ \ (6)

for {M} large enough.

The contribution of lateral intervals is estimated as follows. Fix {1\leq k<K}. The bounds on {E_{lat}(n,k)} in the case of a non-stationary level {k} are worse than in the case of a stationary level {k}: compare Corollaries 10 and 12. For this reason, we will use only the bounds coming from the non-stationary situation in the sequel.

If {J(k)} is not simple, i.e., its order is {>M}, then the first part of Corollary 12 and the induction hypothesis (applied to {E(n-N_{k+1}-1)}) imply that

\displaystyle 2^{\theta n} E_{lat}(n,k)\leq C 2^{\theta} 2^{-\frac{M}{2}}\textrm{order}(J(k))\, 2^{(1+\theta)\textrm{order}(J(k))}\, 2^{(\theta+\frac{C}{M}-\frac{1}{2})N_k}

Since {\theta+\frac{C}{M}-\frac{1}{2}<\frac{1}{2}(\theta-\frac{1}{2})} for {M} large enough and {\textrm{order}(J(k))\leq 2^{-\sqrt{M}}(1-2^{-\sqrt{M}})^{-1}N_k} (cf. Remark 4), we conclude that

\displaystyle \sum\limits_{\textrm{order}(J(k))>M} E_{lat}(n,k)\leq \frac{1}{4} 2^{-\theta n}

for {M} large enough.

If {J(k)} is simple (i.e., {\textrm{order}(J(k))<M}), then we use the second part of Corollary 12 and the induction hypothesis to obtain:

\displaystyle \begin{array}{rcl} E_{lat}(n,k)&\leq& C\cdot 2^{-\frac{N_k+M}{2}+C \frac{N_k}{M}} \cdot \left(2^{\frac{\textrm{order}(J(k))}{2}} E(n-N_k) + 2^{-\frac{\textrm{order}(J(k))}{2}}E(n-N_{k+1}-1)\right) \\ &\leq& C\cdot 2^{-\frac{N_k+M}{2}+C \frac{N_k}{M}} \cdot (3\cdot 2^{-\theta n} 2^{\frac{M}{2}} 2^{\theta N_k}) \\ &\leq& C 2^{(\theta+\frac{C}{M}-\frac{1}{2})N_k}\cdot 2^{-\theta n} \end{array}

Thus,

\displaystyle \sum\limits_{\textrm{order}(J(k))<M} E_{lat}(n,k)\leq \frac{1}{4} 2^{-\theta n}

for {M} large enough.

The last two inequalities together imply that

\displaystyle \sum\limits_{k \textrm{ stationary level}} E_{lat}(n,k) + \sum\limits_{k \textrm{ non-stationary level}} E_{lat}(n,k)\leq \frac{1}{2} 2^{-\theta n} \ \ \ \ \ (7)

for {M} large enough.

Finally, by plugging the estimates (5), (6) and (7) into (4), we deduce that

\displaystyle E(n)\leq \frac{1}{4} 2^{-\theta n} + \frac{1}{4} 2^{-\theta n} + \frac{1}{2} 2^{-\theta n} = 2^{-\theta n}

for {M} large enough. This proves the desired theorem. \Box

