Entropy bounds for equilibrium states

Let {X} be a compact metric space and {f\colon X\rightarrow X} a homeomorphism. Recall that an equilibrium state for a continuous potential function {\varphi\colon X\rightarrow {\mathbb R}} is an {f}-invariant Borel probability measure on {X} maximizing the quantity {h_\mu(f) + \int\varphi\,d\mu} over all invariant probabilities; the topological pressure {P(\varphi)} is the value of this maximum.

A classical result on existence and uniqueness of equilibrium states is due to Bowen, who proved that if {f} is expansive and has specification, and {\varphi} has a bounded distortion property (the `Bowen property’), then there is a unique equilibrium state {\mu_\varphi}. In particular, this applies when {f} is Anosov and {\varphi} is Hölder.

It seems to be well-known among experts that under Bowen’s hypotheses, {\mu_\varphi} must have positive entropy (equivalently, {P(\varphi) > \sup_\mu \int\varphi\,d\mu}), but I do not know of an explicit reference. In this post I provide a proof of this fact, which also gives reasonably concrete bounds on the entropy of {\mu_\varphi}; equivalently, a bound on the size of the gap {P(\varphi) - \sup_\mu \int\varphi\,d\mu}.

1. Definitions and result

First, let’s recall the definitions in the form that I will need them. Given {x\in X}, {n\in {\mathbb N}}, and {\varepsilon>0}, the Bowen ball around {x} of order {n} and radius {\varepsilon} is the set

\displaystyle  B_n(x,\varepsilon) := \{y\in X : d(f^kx, f^ky) < \varepsilon \text{ for all } 0\leq k < n\}.

The map {f} has specification if for every {\varepsilon>0} there is {\tau=\tau(\varepsilon)\in {\mathbb N}} such that for every {x_1,\dots, x_k\in X} and {n_1,\dots, n_k\in {\mathbb N}}, there is {x\in X} such that

\displaystyle  x\in B_{n_1}(x_1,\varepsilon),\qquad f^{n_1 + \tau}(x)\in B_{n_2}(x_2,\varepsilon),

and in general

\displaystyle  f^{n_1 + \tau + \cdots + n_{i-1} + \tau}(x) \in B_{n_i}(x_i,\varepsilon)

for every {1\leq i\leq k}. We refer to {\tau} as the “gluing time”; one could also consider a weaker property where the gluing times are allowed to vary but must be bounded above by {\tau}; this makes the estimates below more complicated, so for simplicity we will stick with the stronger version.

A function {\varphi\colon X\rightarrow {\mathbb R}} has the Bowen property at scale {\varepsilon} with distortion constant {V} if {V\in {\mathbb R}} is such that

\displaystyle  |S_n\varphi(x) - S_n\varphi(y)| \leq V \text{ for all } x\in X\text{ and } y\in B_n(x,\varepsilon),

where {S_n\varphi(x) := \sum_{k=0}^{n-1} \varphi(f^k x)}. We write

\displaystyle  \Lambda_n(\varphi,\varepsilon) := \sup_{E\in \mathcal{E}_{n,\varepsilon}} \sum_{x\in E} e^{S_n\varphi(x)},

where {\mathcal{E}_{n,\varepsilon}} is the collection of {(n,\varepsilon)}-separated subsets of {X} (those sets {E\subset X} for which {y\notin B_n(x,\varepsilon)} whenever {x,y\in E}, {x\neq y}). The topological pressure is {P(\varphi) = \lim_{\varepsilon\rightarrow 0} P(\varphi,\varepsilon)}, where

\displaystyle  P(\varphi,\varepsilon) = \limsup_{n\rightarrow\infty} \frac 1n \log \Lambda_n(\varphi,\varepsilon).
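
As a quick sanity check on these definitions (not part of the argument below), consider the full shift on two symbols with a potential depending only on the zeroth coordinate, say taking the values {a} and {b}. For {\varepsilon} small enough the words of length {n} index a maximal {(n,\varepsilon)}-separated set, so {\Lambda_n(\varphi,\varepsilon) = (e^a + e^b)^n} and {P(\varphi) = \log(e^a+e^b)}. The following Python snippet (with arbitrary illustrative values of {a,b}) verifies this numerically:

```python
import math

# Full two-shift; the potential takes the value a on the cylinder [0]
# and b on the cylinder [1].  Values are arbitrary illustrations.
a, b = 0.3, -1.1

def log_Lambda(n):
    # Sum exp(S_n phi) over all 2^n words of length n.
    total = 0.0
    for w in range(2 ** n):
        S = sum(a if (w >> k) & 1 == 0 else b for k in range(n))
        total += math.exp(S)
    return math.log(total)

pressure = math.log(math.exp(a) + math.exp(b))
for n in (4, 8, 12):
    print(n, log_Lambda(n) / n, pressure)  # the two columns agree
```

Here {\frac1n\log\Lambda_n} agrees with {\log(e^a+e^b)} for every {n}, since {\Lambda_n} is exactly {(e^a+e^b)^n}.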

Theorem 1 Let {X} be a compact metric space with diameter {>6\varepsilon}, {f\colon X\rightarrow X} a homeomorphism with specification at scale {\varepsilon} with gluing time {\tau}, and {\varphi\colon X\rightarrow {\mathbb R}} a potential with the Bowen property at scale {\varepsilon} with distortion constant {V}. Let

\displaystyle  \Delta = (\log 2) \min \big( e^{-(V + 2(2\tau+1)\|\varphi\|)},\ \tfrac{1}{2\tau} \big),

where {\|\varphi\| = \sup_{x\in X} |\varphi(x)|}. Then we have

\displaystyle  P(\varphi) \geq P(\varphi,\varepsilon) \geq \Big( \sup_\mu \int\varphi\,d\mu\Big) + \Delta. \ \ \ \ \ (1)

In particular, if {\mu} is an equilibrium state for {\varphi}, then we have {h_\mu(f) \geq \Delta > 0}.
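
To get a feel for the size of {\Delta}, here is a small illustrative computation in Python (the values of {V}, {\tau}, {\|\varphi\|} below are arbitrary, not derived from any particular system). Which branch of the minimum is active depends on the size of {Q = V + 2(2\tau+1)\|\varphi\|} relative to {\log(2\tau)}:

```python
import math

def entropy_gap(V, tau, phi_sup):
    # Delta from Theorem 1: (log 2) * min(exp(-Q), 1/(2 tau)),
    # where Q = V + 2(2 tau + 1) * sup|phi|.
    Q = V + 2 * (2 * tau + 1) * phi_sup
    return math.log(2) * min(math.exp(-Q), 1 / (2 * tau))

# Q = 2.4 > log(6), so the exp(-Q) branch of the minimum is active:
print(entropy_gap(V=1.0, tau=3, phi_sup=0.1))
# Q = 1.0 < log(6), so the 1/(2 tau) branch is active:
print(entropy_gap(V=1.0, tau=3, phi_sup=0.0))
```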

2. Consequence for Anosov diffeomorphisms

Before proving the theorem we point out a useful corollary. If {M} is a compact manifold and {f\colon M\rightarrow M} is a topologically mixing {C^1} Anosov diffeomorphism, then {f} has specification at every scale (similar results apply in the Axiom A case). Moreover, every Hölder continuous potential has the Bowen property, and thus Theorem 1 applies.

For an Anosov diffeo, the constants {V} and {\tau} appearing in Theorem 1 can be controlled in terms of the following data (here we fix a small {\varepsilon>0}):

  1. the rate of expansion and contraction along the stable and unstable directions, given in terms of {C,\lambda>0} such that {\|Df^n_x(v^s)\| \leq C e^{-\lambda n}} for all {n\geq 0} and {v^s\in E^s}, and similarly for {v^u\in E^u} and {n\leq 0};
  2. how quickly unstable manifolds become dense, in other words, the value of {R>0} such that {W_R^u(x)} is {\varepsilon}-dense for every choice of {x};
  3. the angle between stable and unstable directions, which controls the local product structure, in particular via a constant {K>0} such that {d(x,y) < \varepsilon} implies that {W_{K\varepsilon}^s(x)} intersects {W_{K\varepsilon}^u(y)} in a unique point {z}, and the leafwise distances from {x,y} to {z} are at most {K d(x,y)};
  4. the Hölder exponent ({\beta}) and constant ({|\varphi|_\beta}) for the potential {\varphi}.

For the specification property for an Anosov diffeo, {\tau =\tau(\varepsilon)} is determined by the condition that {C^{-1}e^{\lambda\tau}(\varepsilon/K) > R}, so that small pieces of unstable manifold expand to become {\varepsilon}-dense within {\tau} iterates; thus we have

\displaystyle  \tau(\varepsilon) \approx \lambda^{-1} \log(R(\varepsilon) KC\varepsilon^{-1}).

For the Bowen property, one compares {S_n\varphi(x)} and {S_n\varphi(y)} by comparing each to {S_n\varphi(z)}, where {z} is the (Smale bracket) intersection point coming from the local product structure. Standard estimates give {d(f^j x, f^jz) \leq CK\varepsilon e^{-\lambda j}}, so the Hölder property gives

\displaystyle  \begin{aligned} |S_n\varphi(x) - S_n\varphi(z)| &\leq \sum_{j=0}^{n-1} |\varphi(f^j x) - \varphi(f^j z)| \leq \sum_{j=0}^{n-1} |\varphi|_\beta d(f^jx,f^jz)^\beta \\ &\leq |\varphi|_\beta \sum_{j=0}^\infty (CK\varepsilon)^\beta e^{-\lambda\beta j} = |\varphi|_\beta (CK\varepsilon)^\beta (1-e^{-\lambda\beta})^{-1}. \end{aligned}

A similar estimate for {|S_n\varphi(y) - S_n\varphi(z)|} gives

\displaystyle  V = 2(CK\varepsilon)^\beta(1- e^{-\lambda\beta})^{-1} |\varphi|_\beta.

Thus Theorem 1 has the following consequence for Anosov diffeomorphisms.

Corollary 2 Let {f} be a topologically mixing Anosov diffeomorphism on {M} and {C,\lambda,\varepsilon,R,K} the quantities above. Let

\displaystyle  \delta = \frac{(\log 2)\,\lambda}{2\log(RKC\varepsilon^{-1})}.

Given a {\beta}-Hölder potential {\varphi\colon M\rightarrow {\mathbb R}}, consider the quantity

\displaystyle  \Delta(\varphi) := 2(CK\varepsilon)^\beta (1-e^{-\lambda\beta})^{-1} |\varphi|_\beta + 5\lambda^{-1} \log(RKC\varepsilon^{-1})\|\varphi\|.

Then we have

\displaystyle  P(\varphi) \geq P(\varphi,\varepsilon) \geq \Big(\sup_\mu \int\varphi\,d\mu\Big) + \min(\delta, (\log 2)e^{-\Delta(\varphi)}),

so that in particular, if {\mu} is an equilibrium state for {\varphi}, then

\displaystyle  h_\mu(f) \geq \min(\delta, (\log 2) e^{-\Delta(\varphi)}) > 0.

Finally, note that since shifting the value of {\varphi} by a constant does not change its equilibrium states, we can assume without loss of generality that {\|\varphi\| \leq (\mathrm{diam}\, M)^\beta |\varphi|_\beta} and write the following consequence of the above, which is somewhat simpler in appearance.

Corollary 3 Let {M} be a compact manifold and {f\colon M\rightarrow M} a topologically mixing Anosov diffeomorphism. For every {\beta>0} there are constants {\delta = \delta(M,f)>0} and {Q = Q(M,f,\beta)} such that for every {\beta}-Hölder potential {\varphi}, we have

\displaystyle  P(\varphi) \geq \Big(\sup_\mu \int\varphi\,d\mu\Big) + \min(\delta, (\log 2) Q^{-|\varphi|_\beta}),

so that as before, if {\mu} is an equilibrium state for {\varphi}, we have

\displaystyle  h_\mu(f) \geq \min(\delta, (\log 2) Q^{-|\varphi|_\beta}) > 0.

This corollary gives a precise bound on how the entropy of a family of equilibrium states can decay as the Hölder semi-norms {|\varphi|_\beta} of the corresponding potentials become large. To put it another way, given any threshold {h_0>0}, this gives an estimate on how large {|\varphi|_\beta} must be before {\varphi} can have an equilibrium state with entropy below {h_0}.
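
One can invert the bound in Corollary 3 numerically: for a threshold {h_0 < \delta}, the quantity {\min(\delta, (\log 2)Q^{-|\varphi|_\beta})} stays above {h_0} as long as {|\varphi|_\beta \leq \log((\log 2)/h_0)/\log Q}. A short Python sketch (the values of {Q} and {\delta} are arbitrary illustrations, not computed from any particular Anosov map):

```python
import math

def seminorm_threshold(h0, Q, delta):
    # Largest Hoelder seminorm s for which min(delta, log(2) * Q**(-s))
    # is still >= h0; assumes Q > 1 and 0 < h0 < log(2).
    if h0 >= delta:
        return 0.0  # the bound can already be below h0 at seminorm 0
    return math.log(math.log(2) / h0) / math.log(Q)

Q, delta = 5.0, 0.2  # illustrative values only
for h0 in (0.1, 0.01, 0.001):
    s = seminorm_threshold(h0, Q, delta)
    print(h0, s)  # at seminorm s the decaying branch equals h0 exactly
```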

3. Proof of the theorem

We spend the rest of the post proving Theorem 1. Fix {x\in X} and consider for each {n\in {\mathbb N}} the orbit segment {x, f(x), \dots, f^{n-1}(x)}. Fix {\alpha \in (0, \frac 1{2\tau+1}]}; note that since {\tau\geq 1}, this gives {\alpha \leq \frac13 < \frac 12}. Let {m_n = \lceil \alpha n \rceil}, and let

\displaystyle  \mathcal{I}_n = \{ \vec{k} = (k_1, \dots, k_{m_n}) \in {\mathbb N}^{m_n} : 0 < k_1 < k_2 < \cdots < k_{m_n} < n\}.

Write {k_0 = 0} and {k_{m_n + 1} = n}. The idea is that for each {\vec k\in \mathcal{I}_n}, we will use the specification property to construct a point {\pi(\vec k) \in X} whose orbit shadows the orbit of {x} from time {0} to time {n}, except for the times {k_i}, at which it deviates briefly; thus the points {\pi(\vec k)} will be {(n,\varepsilon)}-separated on the one hand, and on the other hand will have ergodic averages close to that of {x}.

First we estimate {\#\mathcal{I}_n = {n\choose m_n}}. Integrating {\log t} over {[1,k]} and {[1,k+1]} gives

\displaystyle  k\log k - k + 1 \leq \log(k!) \leq k\log k - k + 1 + \log(k+1),

and thus we have

\displaystyle  \begin{aligned} \log{n\choose \ell} &= \log(n!) - \log(\ell!) - \log((n-\ell)!) \\ &\geq n\log n - \ell\log\ell - (n-\ell)\log(n-\ell) - 1 - \log((\ell+1)(n-\ell+1)) \\ &\geq h\big( \tfrac\ell n\big) n - 2\log n, \end{aligned}

where {h(\delta) = -\delta\log\delta - (1-\delta)\log(1-\delta)}. This function is increasing on {(0,\frac12)}, so

\displaystyle  \log\#\mathcal{I}_n = \log{n\choose m_n} \geq h(\tfrac {m_n} n) n - 2\log n \geq h(\alpha) n - 2\log n. \ \ \ \ \ (2)
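
The estimate {\log{n\choose \ell} \geq h(\frac\ell n)n - 2\log n} underlying (2) is easy to check numerically; here is a quick Python verification (illustrative only) over a range of {n} and {\ell\leq n/2}:

```python
import math

def h(d):
    # Binary entropy with natural logarithm.
    return -d * math.log(d) - (1 - d) * math.log(1 - d)

def log_binom(n, l):
    # log C(n, l) via log-Gamma, to avoid huge integers.
    return math.lgamma(n + 1) - math.lgamma(l + 1) - math.lgamma(n - l + 1)

# Margin in the bound log C(n, l) >= h(l/n) n - 2 log n; it should be
# non-negative for every 1 <= l <= n/2.
margin = min(
    log_binom(n, l) - (h(l / n) * n - 2 * math.log(n))
    for n in (10, 50, 200, 1000)
    for l in range(1, n // 2 + 1)
)
print(margin)  # non-negative
```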

Given {k\in \{0, \dots, n-1\}}, let {y_k \in X} be any point with {d(f^k(x),y_k) > 3\varepsilon} (using the assumption on the diameter of {X}). Now for every {\vec{k}\in \mathcal{I}_n}, the specification property guarantees the existence of a point {\pi(\vec{k})\in X} with the property that

\displaystyle  \begin{aligned} \pi(\vec{k}) &\in B_{k_1-\tau}(x,\varepsilon), \\ \qquad f^{k_1}(\pi(\vec{k})) &\in B(y_{k_1},\varepsilon), \\ \qquad f^{k_1+\tau+1}(\pi(\vec{k})) &\in B_{k_2 - k_1 - 2\tau - 1}(f^{k_1 + \tau+1}(x),\varepsilon), \end{aligned}

and so on, so that in general for any {0\leq i \leq m_n} we have

\displaystyle  \begin{aligned} f^{k_i + \tau + 1}(\pi(\vec{k})) &\in B_{k_{i+1} - k_i - 2\tau - 1}(f^{k_i + \tau + 1}(x),\varepsilon), \\ f^{k_{i+1}}(\pi(\vec{k})) &\in B(y_{k_{i+1}},\varepsilon). \end{aligned} \ \ \ \ \ (3)

Write {j_i = k_{i+1} - k_i - 2\tau - 1}; then the first inclusion in (3), together with the Bowen property, gives

\displaystyle  |S_{j_i} \varphi(f^{k_i + \tau + 1}(x)) - S_{j_i} \varphi(f^{k_i + \tau + 1} (\pi (\vec{k})))| \leq V.

Now observe that for any {y\in X} we have

\displaystyle  \bigg| S_n \varphi(y) - \sum_{i=0}^{m_n} S_{k_{i+1} - k_i - 2\tau-1}\varphi(f^{k_i + \tau + 1} y) \bigg| \leq (2\tau + 1)m_n \|\varphi\|.

We conclude that

\displaystyle  |S_n\varphi(\pi(\vec{k})) - S_n\varphi(x)| \leq m_n(V + 2(2\tau+1)\|\varphi\|). \ \ \ \ \ (4)

Consider the set {\pi(\mathcal{I}_n) \subset X}. The second inclusion in (3) guarantees that this set is {(n,\varepsilon)}-separated; indeed, given any {\vec{k} \neq \vec{k}' \in \mathcal{I}_n}, there is {0\leq j < n} such that {f^j(\pi(\vec{k})) \in B(y_j, \varepsilon)} and {f^j(\pi(\vec{k}')) \in B(f^j(x),\varepsilon)}; since {d(y_j,f^j(x)) > 3\varepsilon} this guarantees that {\pi(\vec{k}') \notin B_n(\pi(\vec{k}),\varepsilon)}.

Using this fact and the bounds in (4) and (2), we conclude that

\displaystyle  \begin{aligned} \Lambda_n(\varphi,\varepsilon) &\geq \sum_{\vec{k} \in \mathcal{I}_n} e^{S_n\varphi(\pi(\vec{k}))} \\ &\geq (\#\mathcal{I}_n) \exp\big(S_n\varphi(x) - m_n(V+2(2\tau+1)\|\varphi\|)\big) \\ &\geq n^{-2} \exp\big(S_n \varphi(x) + h(\alpha) n - (\alpha n + 1)(V+2(2\tau+1)\|\varphi\|)\big). \end{aligned}

Taking logs, dividing by {n}, and sending {n\rightarrow\infty} gives

\displaystyle  P(\varphi,\varepsilon) \geq \Big(\limsup_{n\rightarrow\infty} \frac 1n S_n\varphi(x) \Big) + h(\alpha) - \alpha(V+2(2\tau+1)\|\varphi\|).

Given any ergodic {\mu}, we can take a generic point {x} for {\mu} and conclude that the lim sup in the above expression is equal to {\int\varphi\,d\mu}. Thus to bound the difference {P(\varphi,\varepsilon) - \int\varphi\,d\mu}, we want to choose the value of {\alpha} that maximizes {h(\alpha) - \alpha Q}, where {Q=V+2(2\tau+1)\|\varphi\|}. Note that we must have {\alpha \in (0,\frac 1{2\tau+1}]} in order for our construction to work.

A straightforward differentiation and some routine algebra shows that {\frac d{d\alpha} (h(\alpha) - \alpha Q) = 0} occurs when {\alpha = (1+e^Q)^{-1}}, at which point we have {h(\alpha) - \alpha Q = \log(1+e^{-Q}) \geq (\log 2) e^{-Q}}.

For this value of {\alpha} to lie in {(0,\frac 1{2\tau+1}]}, we must have {2\tau \leq e^Q}. If {2\tau>e^Q}, then {h(\alpha) - \alpha Q} achieves its maximum when {\alpha = \frac 1{2\tau+1}}, and again some routine algebra shows that at this point we have {h(\alpha) - \alpha Q \geq \log(1+\frac 1{2\tau}) \geq \frac{\log 2}{2\tau}}. Taken together, these two estimates prove Theorem 1.
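
The routine algebra in the last two paragraphs is easy to double-check numerically. The following Python snippet (illustrative only) verifies that {\alpha = (1+e^Q)^{-1}} gives the value {\log(1+e^{-Q})}, that nearby values of {\alpha} do no better, and that the final estimate {\log(1+x)\geq x\log 2} holds on {[0,1]}:

```python
import math

def h(a):
    # Binary entropy with natural logarithm.
    return -a * math.log(a) - (1 - a) * math.log(1 - a)

def g(a, Q):
    # The quantity being maximized over alpha.
    return h(a) - a * Q

for Q in (0.5, 1.0, 3.0, 10.0):
    a_star = 1 / (1 + math.exp(Q))
    val = math.log(1 + math.exp(-Q))
    assert abs(g(a_star, Q) - val) < 1e-12
    # nearby values of alpha do no better:
    assert g(0.9 * a_star, Q) <= g(a_star, Q)
    assert g(1.1 * a_star, Q) <= g(a_star, Q)
    # the estimate log(1 + x) >= x log 2 for x = e^{-Q} in (0, 1]:
    assert val >= math.log(2) * math.exp(-Q)
print("all checks pass")
```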


Exponential decay of correlations

Let {(X,\mu)} be a probability space and {f\colon X\rightarrow X} a measure-preserving transformation. Let {B_1,B_2} be Banach spaces of measurable functions on {X}, and {\|\cdot\|_i} the corresponding norms. Given {\phi_i\in B_i} and {n\in {\mathbb N}}, the corresponding {n}th correlation is

\displaystyle C_n(\phi_1,\phi_2) := \int \phi_1(x) \phi_2(f^nx) \,d\mu(x) - \int \phi_1\,d\mu \int\phi_2\,d\mu.

We say that {(X,f,\mu)} has exponential decay of correlations with respect to observables in {B_1,B_2} if for any {\phi_i\in B_i}, we have

\displaystyle \limsup_{n\rightarrow\infty} \frac 1n \log |C_n(\phi_1,\phi_2)| < 0. \ \ \ \ \ (1)


Equivalently, for every {\phi_1\in B_1} and {\phi_2\in B_2}, there are {\lambda=\lambda(\phi_1,\phi_2)>0} and {K=K(\phi_1,\phi_2)>0} such that

\displaystyle |C_n(\phi_1,\phi_2)| \leq K(\phi_1,\phi_2) e^{-\lambda(\phi_1,\phi_2) n} \text{ for all } n. \ \ \ \ \ (2)
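
As a toy example of (2) with explicit constants (my illustration, not part of the argument below), consider observables depending only on the zeroth coordinate of a stationary two-state Markov chain, viewed as a shift-invariant measure on trajectories. The correlations are computed from powers of the transition matrix and decay at the rate of its second eigenvalue:

```python
# Stationary two-state Markov chain; for observables phi, psi of the zeroth
# coordinate the correlations are
#   C_n(phi, psi) = sum_{i,j} pi_i phi_i (P^n)_{ij} psi_j - (pi.phi)(pi.psi),
# which decays like the second eigenvalue of P (here exactly 0.7).
P = [[0.9, 0.1], [0.2, 0.8]]
pi = [2 / 3, 1 / 3]                      # stationary distribution
phi, psi = [1.0, 0.0], [0.0, 1.0]        # indicator observables

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def C(n):
    Pn = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        Pn = mat_mul(Pn, P)
    mean_phi = sum(pi[i] * phi[i] for i in range(2))
    mean_psi = sum(pi[i] * psi[i] for i in range(2))
    corr = sum(pi[i] * phi[i] * Pn[i][j] * psi[j]
               for i in range(2) for j in range(2))
    return corr - mean_phi * mean_psi

for n in range(1, 6):
    print(n, C(n), C(n) / C(n - 1))  # the ratio is the second eigenvalue 0.7
```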


One sometimes sees the statement of exponential decay given in the following form, which is formally stronger than (2): there are constants {L>0} and {\lambda>0}, independent of {\phi_1,\phi_2}, such that

\displaystyle |C_n(\phi_1,\phi_2)| \leq L\|\phi_1\|_1 \|\phi_2\|_2 e^{-\lambda n} \text{ for all } n,\phi_1,\phi_2. \ \ \ \ \ (3)


In fact, using the Baire category theorem, one can prove that (2) implies (3) under mild conditions on the Banach spaces {B_i}; this is the goal of this post: to show that {\lambda} can be chosen uniformly over all {\phi_i}, and that {K} can be chosen to have the form {K(\phi_1,\phi_2) = L \|\phi_1\|_1\|\phi_2\|_2}. This seems like the sort of thing that is likely known to experts, but I am not aware of a reference in the literature. (I would be happy to learn of one!)

Proposition 1 Let {(X,f,\mu)} be a probability measure-preserving transformation and {B_1,B_2} Banach spaces of measurable functions on {X} with the following properties:

  • given any {\phi_i\in B_i}, we have {\phi_i\in L^1(\mu)} and {\phi_1\phi_2\in L^1(\mu)};
  • the inclusions {(B_i,\|\cdot\|_i) \rightarrow (L^1,\|\cdot\|_{L^1})} are continuous;
  • for every {\phi_1\in B_1}, the map {(B_2,\|\cdot\|_2) \rightarrow (L^1,\|\cdot\|_{L^1})} given by {\phi_2 \mapsto \phi_1\phi_2} is continuous, and similarly for the map {\phi_1\mapsto \phi_1\phi_2} when {\phi_2} is fixed;
  • the map {f_* \colon B_2 \rightarrow B_2} given by {f_* \phi = \phi\circ f} is bounded w.r.t. {\|\cdot\|_2}.

Under these assumptions, if {(X,f,\mu)} has exponential decay of correlations w.r.t. observables in {B_1,B_2} in the sense of (2), then it also satisfies (3).

To prove the proposition, start by fixing {\phi_1\in B_1} and {\lambda>0}, and consider the function {K_\lambda^{\phi_1} \colon B_2 \rightarrow [0,\infty]} given by

\displaystyle K_\lambda^{\phi_1}(\phi_2) = \sup_n |C_n(\phi_1,\phi_2)| e^{\lambda n}.

Notice that for each {n}, the correlation function {C_n} is bilinear in {\phi_1,\phi_2}, and thus for every {\phi_2,\psi_2\in B_2} and {a,b\in {\mathbb R}}, we have

\displaystyle \begin{aligned} K_\lambda^{\phi_1}(a\phi_2 + b\psi_2) &\leq \sup_n \big( |aC_n(\phi_1,\phi_2)| + |bC_n(\phi_1,\psi_2)| \big) e^{\lambda n} \\ &\leq |a| K_\lambda^{\phi_1}(\phi_2) + |b| K_\lambda^{\phi_1}(\psi_2). \end{aligned} \ \ \ \ \ (4)


Consider the following subsets of {B_2}:

\displaystyle V_{\lambda,K}^{\phi_1} := \{\phi_2\in B_2 : K_\lambda^{\phi_1}(\phi_2) \leq K \}.

It follows from (2) that

\displaystyle \bigcup_{\lambda,K>0} V_{\lambda,K}^{\phi_1} = B_2. \ \ \ \ \ (5)


Moreover, the sets {V_{\lambda,K}^{\phi_1}} are nested (smaller {\lambda} gives a bigger set, larger {K} gives a bigger set) and so it suffices to take the union over rational values of {\lambda,K}, meaning that we can treat (5) as a countable union. In particular, by the Baire category theorem there are {\lambda,K>0} such that the closure of {V_{\lambda,K}^{\phi_1}} has non-empty interior. The next step is to show that

  • {V_{\lambda,K}^{\phi_1}} is closed, so it itself has non-empty interior;
  • in fact, {V_{\lambda,K}^{\phi_1}} contains a neighbourhood of the origin.

For the first of these, observe that by the assumptions we placed on the Banach spaces {B_1,B_2}, there is a constant {Q=Q(\phi_1)} such that

\displaystyle \|\phi_2\|_{L^1} \leq Q \|\phi_2\|_2 \text{ and } \|\phi_1 \phi_2\|_{L^1} \leq Q \|\phi_2\|_2

for every {\phi_2\in B_2}. In particular,

\displaystyle \begin{aligned} |C_n(\phi_1,\phi_2)| &\leq Q(\phi_1) \|\phi_2\circ f^n\|_2 + Q\|\phi_1\|_{L^1} \|\phi_2\|_2 \\ &\leq (Q(\phi_1) \|f_*\|^n + Q\|\phi_1\|_{L^1}) \|\phi_2\|_2. \end{aligned}

Given a sequence {(\phi_2^{(k)})_k \subset V_{\lambda,K}^{\phi_1}} such that {\phi_2^{(k)} \rightarrow \phi_2 \in B_2} w.r.t. {\|\cdot\|_2} as {k\rightarrow\infty}, it follows that

\displaystyle |C_n(\phi_1,\phi_2)| \leq \limsup_{k\rightarrow\infty} |C_n(\phi_1,\phi_2^{(k)})| \leq K e^{-\lambda n},

and we conclude that {\phi_2\in V_{\lambda,K}^{\phi_1}}, so this set is closed. In particular, there are {\phi_2\in V_{\lambda,K}^{\phi_1}} and {\varepsilon>0} such that if {\|\psi_2\|_2 \leq \varepsilon}, then {\phi_2 + \psi_2\in V_{\lambda,K}^{\phi_1}}. By the same token {\phi_2 - \psi_2\in V_{\lambda,K}^{\phi_1}}, and now the sublinearity property (4) gives

\displaystyle K_\lambda^{\phi_1}(\psi_2) \leq \tfrac 12 \big(K_\lambda^{\phi_1}(\phi_2 + \psi_2) + K_\lambda^{\phi_1}(\phi_2 - \psi_2)\big) \leq K,

and so {\psi_2\in V_{\lambda,K}^{\phi_1}}. This shows that {V_{\lambda,K}^{\phi_1}} contains a neighbourhood of 0, and writing {L = K/\varepsilon}, we see that for every non-zero {\psi_2\in B_2} we have {\varepsilon\frac{\psi_2}{\|\psi_2\|_2} \in V_{\lambda,K}^{\phi_1}}, and so

\displaystyle K_\lambda^{\phi_1}(\psi_2) = \varepsilon^{-1} \|\psi_2\|_2 K_\lambda^{\phi_1} \Big(\varepsilon\frac{\psi_2}{\|\psi_2\|_2}\Big) \leq \varepsilon^{-1}\|\psi_2\|_2 K = L \|\psi_2\|_2.

Thus we conclude that

\displaystyle |C_n(\phi_1,\phi_2)| \leq L(\phi_1) \|\phi_2\|_2 e^{-\lambda(\phi_1)n} \text{ for all } n.

To complete the proof of the proposition, it suffices to apply the same argument once more. Writing

\displaystyle W_{\lambda,K} = \{ \phi_1 \in B_1 : \lambda(\phi_1) \geq \lambda \text{ and } L(\phi_1) \leq K\}

we see from the bound just established that {B_1 = \bigcup_{\lambda,K>0} W_{\lambda,K}}, and so once again there are {\lambda,K>0} such that the closure of {W_{\lambda,K}} has non-empty interior. Given a sequence {\phi_1^{(k)} \in W_{\lambda,K}} with {\|\phi_1^{(k)} - \phi_1\|_1 \rightarrow 0}, we have {C_n(\phi_1^{(k)},\phi_2) \rightarrow C_n(\phi_1,\phi_2)} for all {n} and all {\phi_2}, and so {\phi_1\in W_{\lambda,K}}, demonstrating that this set is closed.

Thus there are {\phi_1\in W_{\lambda,K}} and {\varepsilon>0} such that {\|\psi_1\|_1\leq \varepsilon} implies {\phi_1+\psi_1\in W_{\lambda,K}}. In particular this gives {\phi_1 \pm \psi_1\in W_{\lambda,K}}, and so for every {n} and every {\phi_2} we have

\displaystyle \begin{aligned} |C_n(\psi_1,\phi_2)| &= \tfrac 12 |C_n(\phi_1 + \psi_1,\phi_2) - C_n(\phi_1 - \psi_1,\phi_2)| \\ & \leq \tfrac 12 |C_n(\phi_1 + \psi_1,\phi_2)| + \tfrac 12|C_n(\phi_1 - \psi_1,\phi_2)| \\ &\leq K \|\phi_2\|_2 e^{-\lambda n}. \end{aligned}

But then for every non-zero {\phi_1\in B_1} we can consider {\psi_1 = \varepsilon \phi_1 / \|\phi_1\|_1}, which has {\|\psi_1\|_1 = \varepsilon}, and so

\displaystyle |C_n(\phi_1,\phi_2)| = \|\phi_1\|_1 \varepsilon^{-1} |C_n(\psi_1,\phi_2)| \leq \|\phi_1\|_1 \varepsilon^{-1} K \|\phi_2\|_2 e^{-\lambda n},

which proves (3) and the proposition.


Lebesgue probability spaces, part II

This is a continuation of the previous post on the classification of complete probability spaces. Last time we set up the basic terminology and notation, and saw that for a complete probability space {(X,\mathcal{A},\mu)}, the {\sigma}-algebra {\mathcal{A}} is countably generated mod 0 (which we denoted (CG0)) if and only if the pseudo-metric {\rho(A,B) = \mu(A\Delta B)} makes {\hat{\mathcal{A}} = \mathcal{A}/{\sim}} into a separable metric space. We also considered {(\hat{\mathcal{A}},\mu)} as a measured abstract {\sigma}-algebra with no non-trivial null sets; this is the point of view we will continue with in this post.

Our goal is to present the result from [HvN] and [Ro] (see references from last time) that classifies such measured abstract {\sigma}-algebras up to {\sigma}-algebra isomorphism. To this end, let {(\Sigma,\mu)} be a separable measured abstract {\sigma}-algebra with no non-trivial null sets; let {X} be the maximal element of {\Sigma}, and suppose that {\mu(X)=1}. Note that {X'} is the minimal element of {\Sigma}, which would correspond to {\emptyset} if {\Sigma} were a collection of subsets of some ambient space. In the abstract setting, we will write {0=X'} for this minimal element.

An element {E\in \Sigma} is an atom if it has no proper non-trivial subelement; that is, if {F\in \Sigma, F\leq E} implies {F=0} or {F=E}. By the assumption that {\Sigma} has no non-trivial null sets, we have {\mu(E)>0} for every atom. Note that for any two atoms {E\neq F} we have {E\wedge F = 0}, and so {\mu(E\vee F) = \mu(E) + \mu(F)}.

Let {\Sigma_a \subset \Sigma} be the set of atoms; then we have {\sum_{E\in \Sigma_a} \mu(E) \leq 1}, so {\Sigma_a} is (at most) countable. Let

\displaystyle  \Sigma_{na} = \{ E\in \Sigma \mid E \wedge F = 0 \text{ for all } F\in \Sigma_a\},

then {\Sigma_{na}} is non-atomic; it contains no atoms. Thus {\Sigma} can be decomposed as a non-atomic part together with a countable collection of atoms. In particular, to classify {(\Sigma,\mu)} we may assume without loss of generality that {\Sigma} is non-atomic.

Consider the unit interval {[0,1]} with Lebesgue measure; let {\mathop{\mathcal L}} denote the set of equivalence classes (mod 0) of measurable subsets of {[0,1]}, and {m} denote Lebesgue measure, so {(\mathop{\mathcal L},m)} is a measured abstract {\sigma}-algebra. Moreover, {(\mathop{\mathcal L},m)} is separable, non-atomic, has no non-trivial null sets, and has total weight 1.

Theorem 1 Let {(\Sigma,\mu)} be a separable non-atomic measured abstract {\sigma}-algebra with total weight 1 and no non-trivial null sets. Then {(\Sigma,\mu)} is isomorphic to {(\mathop{\mathcal L},m)}.

The meaning of “isomorphism” here is that there is a bijection {T\colon \Sigma \rightarrow \mathop{\mathcal L}} that preserves the Boolean algebra structure and carries {\mu} to {m}. That is: {A\leq B} iff {T(A)\leq T(B)}; {T(A\vee B) = T(A) \vee T(B)}; {T(A\wedge B) = T(A)\wedge T(B)}; {T(A')=T(A)'}; {T(0)=0}; and finally, {m(T(A)) = \mu(A)}. We are most used to obtaining morphisms between {\sigma}-algebras as a byproduct of having a measurable map between the ambient sets; that is, given measurable spaces {(X,\mathcal{A})} and {(Y,\mathcal{B})}, a measurable map {f\colon X\rightarrow Y} gives a morphism {f^{-1} \colon \mathcal{B} \rightarrow \mathcal{A}}. However, in the abstract setting we currently work in, there is no ambient space, so we cannot yet interpret {T} this way. Eventually we will go from {(\Sigma,\mu)} and {(\mathop{\mathcal L},m)} back to the measure spaces that induced them, and then we will give conditions for {T} to be induced by a function between those spaces, but for now we stick with the abstract viewpoint.

We sketch a proof of Theorem 1 that roughly follows [HvN, Theorem 1]; see also [Ro, §1.3]. Given a measurable set {A\subset [0,1]}, we write {[A]\in \mathop{\mathcal L}} for the equivalence class of {A}; in particular, we write {[[a,b]]\in \mathop{\mathcal L}} for the equivalence class of the interval {[a,b]}.

Observe that if {Q\subset [0,1]} is dense, then {\mathop{\mathcal L}} is generated by {\{[[0,q]] : q\in Q\}}; thus we can describe {T} by finding {\{B_q \in \Sigma \mid q\in Q\}} such that

  • {\mu(B_q) = q = m[[0,q]]},
  • {B_q \leq B_{q'}} whenever {q\leq q'},
  • {\{B_q\}_{q\in Q}} generates {\Sigma},

and then putting {T(B_q) = [[0,q]]}. This is where separability comes in. Let {A_1,A_2,\dots \in \Sigma} be a countable collection that generates {\Sigma}. Put {T(A_1) = [[0,\mu(A_1)]]}, so {A_1 = B_q} for {q=\mu(A_1)}. Now what about {A_2}? We can use {A_2} to define two more of the sets {B_q} by noting that

\displaystyle  A_1 \wedge A_2 \leq A_1 \leq A_1 \vee A_2, \ \ \ \ \ (1)

and so we may reasonably set {T(E) = [[0,\mu(E)]]} for {E\in \{A_1 \wedge A_2,\ A_1,\ A_1 \vee A_2\}}.

To extend this further it is helpful to rephrase the last step. Consider

\displaystyle \begin{aligned} C_{00} &= A_1 \wedge A_2, \\ C_{01} &= A_1 \wedge A_2', \\ C_{10} &= A_1' \wedge A_2, \\ C_{11} &= A_1' \wedge A_2', \end{aligned}

and observe that (1) can be rewritten as

\displaystyle  C_{00} \leq (C_{00} \vee C_{01}) \leq (C_{00} \vee C_{01} \vee C_{10}).

Writing {\preceq} for the lexicographic order on {\{0,1\}^2}, we observe that each of these three is of the form {D_x = \bigvee_{y\preceq x} C_y} for some {x\in \{0,1\}^2}.

Each {C_x} above is determined as follows: {x_1} tells us whether to use {A_1} or {A_1'}, and {x_2} tells us whether to use {A_2} or {A_2'}. This generalises very naturally to {n\geq 2}: given {x\in \{0,1\}^n}, let

\displaystyle \begin{aligned} A_k^x &= \begin{cases} A_k & x_k = 0, \\ A_k' & x_k = 1. \end{cases}\\ C_x &= \bigwedge_{k=1}^n A_k^x. \end{aligned}

Now put {D_x = \bigvee_{y\preceq x} C_y}; this gives {2^n} elements {D_x} (the last one is {X = \bigvee_{y\in \{0,1\}^n} C_y}, and was omitted from our earlier bookkeeping). These have the property that {D_x \leq D_w} whenever {x\preceq w}. Moreover, writing {\{0,1\}^* = \bigcup_{n\in {\mathbb N}} \{0,1\}^n}, the collection {\{D_x \mid x\in \{0,1\}^*\}} generates {(\Sigma,\mu)} because {A_1,A_2,\dots} does. Let {T(D_x) = [[0,\mu(D_x)]]}; now we have all the pieces of the construction that we asked for earlier.
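
To illustrate the bookkeeping with the atoms {C_x} and the elements {D_x}, here is a toy finite example in Python (a 12-point space with counting measure standing in for {(\Sigma,\mu)}; the sets {A_1,A_2,A_3} are chosen, purely for illustration, so that every atom is non-empty):

```python
from itertools import product

# Toy model of the construction: X is a 12-point set, and A1, A2, A3
# generate the algebra.  For each x in {0,1}^3 the atom C_x intersects
# each A_k or its complement according to the bit x_k, and
# D_x = union of C_y over y lexicographically <= x.
X = frozenset(range(12))
A = [frozenset({0, 1, 2, 3, 4}),      # A1
     frozenset({0, 1, 5, 6, 7}),      # A2
     frozenset({0, 2, 5, 8, 9})]      # A3

def C(x):
    atom = X
    for k, bit in enumerate(x):
        atom &= A[k] if bit == 0 else X - A[k]
    return atom

def D(x):
    # Tuples compare lexicographically in Python, matching the order above.
    return frozenset().union(*(C(y) for y in product((0, 1), repeat=len(x))
                               if y <= x))

words = list(product((0, 1), repeat=3))
for x, w in zip(words, words[1:]):
    assert D(x) <= D(w)               # D is monotone in the lex order
print([len(D(x)) / len(X) for x in words])  # increasing "measures"
```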

Putting it all together, let {Q = \{ \mu(D_x) \mid x\in \{0,1\}^* \}}. The following are now relatively straightforward exercises.

  • {Q} is dense in {[0,1]} since {(\Sigma,\mu)} is non-atomic.
  • For each {q\in Q} there is {w\in \{0,1\}^*} with {\mu(D_w) = q}; if {w,x\in \{0,1\}^*} are such that this holds, then either {w=x} or {w = x11\cdots 1} (or vice versa). For this step we actually need to choose the {A_n} a little more carefully to guarantee that each {C_x} is non-trivial.
  • The previous step guarantees that {T} is a bijection between {\Sigma} and {\mathop{\mathcal L}}.
  • Since {T} preserves the order {\leq} on the generating collections, it preserves the {\sigma}-algebra structure as well.
  • Since {T} carries {\mu} to {m} on the generating collections, it carries {\mu} to {m} on the whole {\sigma}-algebra.

Thus we have produced a {\sigma}-algebra isomorphism {T}, proving Theorem 1. Next time we will discuss conditions under which this can be extended to an isomorphism of the measure spaces themselves, and not just their abstract measured {\sigma}-algebras.


Lebesgue probability spaces, part I

In various areas of mathematics, classification theorems give a more or less complete understanding of what kinds of behaviour are possible. For example, in linear algebra we learn that up to isomorphism, {{\mathbb R}^n} is the only real vector space with dimension {n}, and every linear operator on a finite-dimensional vector space can be put into Jordan normal form via a change of coordinates; this means that many questions in linear algebra can be answered by understanding properties of Jordan normal form. A similar classification result is available in measure theory, but the preliminaries are a little more involved. In this and the next post I will describe the classification result for complete probability spaces, which gives conditions under which such a space is equivalent to the unit interval with Lebesgue measure.

The main original references for these results are a 1942 paper by Halmos and von Neumann [“Operator methods in classical mechanics. II”, Ann. of Math. (2) 43 (1942), 332–350] and a 1949 paper by Rokhlin [“On the fundamental ideas of measure theory”, Mat. Sbornik N.S. 25(67) (1949), 107–150; English translation in Amer. Math. Soc. Translation 1952 (1952), no. 71, 55 pp.]. I will refer to these as [HvN] and [Ro], respectively.

1. Equivalence of measure spaces

First we must establish exactly which class of measure spaces we work with, and under what conditions two measure spaces will be thought of as equivalent. Let {I} be the unit interval and {\mathop{\mathcal B},\mathop{\mathcal L}} the Borel and Lebesgue {\sigma}-algebras, respectively; let {m} be Lebesgue measure (on either of these). To avoid having to distinguish between {(I,\mathop{\mathcal B},m)} and {(I,\mathop{\mathcal L},m)}, let us agree to only work with complete measure spaces; this is no great loss, since given an arbitrary measure space {(X,\mathcal{A},\mu)} we can pass to its completion {(X,\overline{\mathcal{A}},\overline\mu)}.

The most obvious notion of isomorphism is that two complete measure spaces {(X,\mathcal{A},\mu)} and {(X',\mathcal{A}',\mu')} are isomorphic if there is a bijection {f\colon X\rightarrow X'} such that {f,f^{-1}} are measurable and {f_*\mu = \mu'}; that is, given {A\subset X} we have {A\in \mathcal{A}} if and only if {f(A) \in \mathcal{A}'}, and in this case {\mu(A) = \mu'(f(A))}.

In the end we want to loosen this definition a little bit. For example, consider the space {X = \{0,1\}^{\mathbb N}} of all infinite binary sequences, equipped with the Borel {\sigma}-algebra {\mathop{\mathcal B}} associated to the product topology (or if you prefer, the metric {d(x,y) = e^{-\min\{n \mid x_n \neq y_n\}}}). Let {\mu} be the {(\frac 12,\frac 12)}-Bernoulli measure on {(X,\mathop{\mathcal B})}; that is, for each {w\in \{0,1\}^n} the cylinder {[w] = \{x\in X \mid x_1 \cdots x_n = w\}} gets weight {\mu[w] = 2^{-n}}. Then there is a natural correspondence between the completion {(X,\overline{\mathop{\mathcal B}},\overline\mu)} and {(I,\mathop{\mathcal L},m)} given by

\displaystyle  \begin{aligned} f\colon X&\rightarrow I \\ x &\mapsto \sum_{n=1}^\infty x_n 2^{-n}. \end{aligned}

By looking at dyadic intervals {[\frac k{2^n}, \frac{k+1}{2^n}] \subset I} one can readily verify that {f_* \overline\mu = m}; however, {f} is not a bijection because for every {w\in \{0,1\}^n} we have {f(w01^\infty) = f(w10^\infty)}.
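
Both claims are easy to check in a few lines of Python (an illustration only, with truncated sequences standing in for infinite ones): cylinders of length {n} map onto dyadic intervals of length {2^{-n}}, and the two binary expansions of a dyadic rational have the same image under {f}.

```python
from itertools import product

# Each cylinder [w] (mu-measure 2^{-n}) maps under f(x) = sum x_k 2^{-k}
# onto the dyadic interval [k/2^n, (k+1)/2^n], where k has binary digits w.
n = 6
for w in product((0, 1), repeat=n):
    k = int("".join(map(str, w)), 2)
    lo = sum(b * 2 ** -(i + 1) for i, b in enumerate(w))  # w followed by 0s
    hi = lo + 2 ** -n                                     # w followed by 1s
    assert lo == k / 2 ** n and hi == (k + 1) / 2 ** n

# Non-injectivity at dyadic rationals: 0 1 1 1 ... and 1 0 0 0 ... have the
# same image 1/2 (here approximated by truncating at length 40).
s1 = sum(b * 2 ** -(i + 1) for i, b in enumerate((0,) + (1,) * 39))
s2 = 2 ** -1
print(abs(s1 - s2))  # tiny (2^{-40})
```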

The points at which {f} is non-injective form a {\mu}-null set (since there are only countably many of them), so from the point of view of measure theory, it is natural to disregard them. This motivates the following definition.

Definition 1 Two measure spaces {(X,\mathcal{A},\mu)} and {(X',\mathcal{A}',\mu')} are isomorphic mod 0 if there are measurable sets {E \subset X} and {E'\subset X'} such that {\mu(X\setminus E) = \mu'(X'\setminus E') = 0}, together with a bijection {f\colon E\rightarrow E'} such that {f,f^{-1}} are measurable and {f_*(\mu|_E) = \mu'|_{E'}}.

From now on we will be interested in the question of classifying complete measure spaces up to isomorphism mod 0. The example above suggests that {(I,\mathop{\mathcal L},m)} is a reasonable candidate for a `canonical’ complete measure space that many others are equivalent to, and we will see that this is indeed the case.

Notice that the total measure {\mu(X)} is clearly an invariant of isomorphism mod 0, and hence we restrict our attention to probability spaces, for which {\mu(X)=1}.

2. Separability, etc.

Let {(X,\mathcal{A},\mu)} be a probability space. We describe several related conditions that all give senses in which {\mathcal{A}} can be understood via countable objects.

The {\sigma}-algebra {\mathcal{A}} carries a natural pseudo-metric given by {\rho(A,B) = \mu(A\Delta B)}. Write {A\sim B} if {\rho(A,B)=0}; this is an equivalence relation on {\mathcal{A}}, and we write {\hat{\mathcal{A}}} for the space of equivalence classes. The function {\rho} induces a metric {\hat\rho} on {\hat{\mathcal{A}}} in the natural way, and we say that {(X,\mathcal{A},\mu)} is separable if the metric space {(\hat{\mathcal{A}},\hat\rho)} is separable; that is, if it has a countable dense subset.
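To see that {\rho} really is only a pseudo-metric in general, one can check the axioms exhaustively on a toy finite measure space; the following Python sketch (a hypothetical three-point model, nothing canonical about it) verifies symmetry and the triangle inequality, and exhibits two distinct sets at distance zero:

```python
from itertools import chain, combinations

# toy model: X = {0,1,2}, where the point 2 is a mu-null atom
X = [0, 1, 2]
mass = {0: 0.5, 1: 0.5, 2: 0.0}

def mu(A):
    return sum(mass[x] for x in A)

def rho(A, B):
    return mu(A ^ B)      # mu of the symmetric difference

subsets = [frozenset(s) for s in
           chain.from_iterable(combinations(X, r) for r in range(4))]

for A in subsets:
    for B in subsets:
        assert abs(rho(A, B) - rho(B, A)) < 1e-12          # symmetry
        for C in subsets:
            # triangle inequality: A ^ C is contained in (A ^ B) u (B ^ C)
            assert rho(A, C) <= rho(A, B) + rho(B, C) + 1e-12

# only a pseudo-metric: two distinct sets at distance zero
assert rho(frozenset({0}), frozenset({0, 2})) == 0.0
```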

Another countability condition is this: call {\mathcal{A}} “countably generated” if there is a countable subset {\Gamma \subset \mathcal{A}} such that {\mathcal{A} = \sigma(\Gamma)} is the smallest {\sigma}-algebra containing {\Gamma}. We write (CG) for this property; for example, the Borel {\sigma}-algebra in {[0,1]} satisfies (CG) because we can take {\Gamma} to be the set of all intervals with rational endpoints. (In [HvN], such an {\mathcal{A}} is called “strictly separable”, but we avoid the word “separable” as we have already used it in connection with the metric space {(\hat{\mathcal{A}},\hat\rho)}.)

In and of itself, (CG) is not quite the right sort of property for our current discussion, because it does not hold when we pass to the completion; the Lebesgue {\sigma}-algebra {\mathop{\mathcal L}} is not countably generated (one can prove this using cardinality estimates). Let us say that {\mathcal{A}} satisfies property (CG0) (for “countably generated mod 0”) if there is a countably generated {\sigma}-algebra {\Sigma \subset \mathcal{A}} with the property that for every {E\in \mathcal{A}}, there is {F\in \Sigma} with {\mu(E\Delta F) = 0}. In other words, we have {\hat{\mathcal{A}} = \hat\Sigma}. Note that {\mathop{\mathcal L}} is countably generated mod 0 by taking {\Sigma = \mathop{\mathcal B}}. (In [HvN], such an {\mathcal{A}} is called “separable”; the same property is used in §2.1 of [Ro] with the label {(L')}, rendered in a font that I will not attempt to duplicate here.)

In fact, the approximation of {\mathop{\mathcal L}} by {\mathop{\mathcal B}} satisfies an extra condition. Let us write (CG0+) for the following condition on {\mathcal{A}}: there is a countably generated {\Sigma \subset \mathcal{A}} such that for every {E\in \mathcal{A}}, there is {F\in \Sigma} with {E\subset F} and {\mu(F\setminus E)=0}. This is satisfied for {\mathop{\mathcal L}} with {\Sigma = \mathop{\mathcal B}}: every Lebesgue set is contained in a Borel set of the same measure. (In [HvN], such an {\mathcal{A}} is called “properly separable”; the same property is used in §2.1 of [Ro] with the label {(L)}.)

The four properties introduced above are related as follows.

\displaystyle  \textbf{(CG)} \Rightarrow \textbf{(CG0+)} \Rightarrow \textbf{(CG0)} \Leftrightarrow \text{separable}

The first two implications are immediate, and their converses fail in general:

  • The Lebesgue {\sigma}-algebra {\mathop{\mathcal L}} satisfies (CG0+) but not (CG).
  • Let {\mathcal{A} = \{ A \subset [0,1] \mid A\in \mathop{\mathcal L}, \mu(A)=0 \text{ or } 1\}}. Then {\mathcal{A}} satisfies (CG0) but not (CG0+).

Now we prove that (CG0) and separability are equivalent. First note that if {\Gamma \subset \mathcal{A}} is a countable subset, then the algebra {\mathop{\mathcal F}} generated by {\Gamma} is also countable; in particular, {(X,\mathcal{A},\mu)} is separable if and only if there is a countable algebra {\mathop{\mathcal F}\subset \mathcal{A}} that is dense with respect to {\rho}, and similarly in the definition of (CG0) the generating set can be taken to be an algebra. To show equivalence of (CG0) and separability it suffices to show that given an algebra {\mathop{\mathcal F} \subset \mathcal{A}} and {E\in \mathcal{A}}, we have

\displaystyle  (E\in \overline{\mathop{\mathcal F}}) \Leftrightarrow (\text{there is } A\in \sigma(\mathop{\mathcal F}) \text{ with } \rho(E,A) = \mu(E\Delta A) = 0). \ \ \ \ \ (1)

First we prove {(\Leftarrow)} by proving that {\overline{\mathop{\mathcal F}}} is a {\sigma}-algebra, and hence contains {\sigma(\mathop{\mathcal F})}; this will show that (CG0) implies separability.

  • Closure under {{}^c}: if {E\in \overline{\mathop{\mathcal F}}} then there are {A_n\in \mathop{\mathcal F}} such that {\rho(A_n,E) \rightarrow 0}. Since {\rho(A_n^c, E^c) = \rho(A_n, E)} and {A_n^c\in \mathop{\mathcal F}} (since it is an algebra), this gives {E^c\in \overline{\mathop{\mathcal F}}}.
  • Closure under {\cup}: given {E_1,E_2,\dots \in \overline{\mathop{\mathcal F}}}, let {E = \bigcup_n E_n}. To show that {E \in \overline{\mathop{\mathcal F}}}, note that given any {\epsilon>0}, there are {A_n\in \mathop{\mathcal F}} such that {\rho(A_n,E_n) < \epsilon 2^{-n}}. Let {F_N = \bigcup_{n=1}^N E_n} and {B_N = \bigcup_{n=1}^N A_n}; note that

    \displaystyle  \rho(F_N,B_N) \leq \sum_{n=1}^N \rho(E_n,A_n) < \epsilon.

    Moreover by continuity from below we have {\lim \mu(F_N) = \mu(E)}, so {\lim \rho(E,F_N)=0}, and thus for sufficiently large {N} we have {\rho(E,B_N) < \rho(E,F_N) + \rho(F_N,B_N) < 2\epsilon}. This holds for all {\epsilon>0}, so {E\in \overline{\mathop{\mathcal F}}}.

Now we prove {(\Rightarrow)}, thus proving that {\overline{\mathop{\mathcal F}}} is “large enough” that separability implies (CG0). Given any {E\in \overline{\mathop{\mathcal F}}}, there are {A_n\in \mathop{\mathcal F}} such that {\rho(A_n,E)\leq 2^{-n}}. Let { A = \bigcap_{N\in {\mathbb N}} \bigcup_{n\geq N} A_n \in \sigma(\mathop{\mathcal F}). } We get

\displaystyle  \begin{aligned} \mu(A \cap E^c) &= \mu\big(\bigcap_N \bigcup_{n\geq N} (A_n \cap E^c)\big) = \lim_{N\rightarrow\infty} \mu\big( \bigcup_{n\geq N} (A_n \cap E^c) \big) \\ &\leq \lim_{N\rightarrow\infty} \sum_{n\geq N} \mu(A_n \cap E^c) \leq \lim_{N\rightarrow\infty} 2^{1-N} = 0, \end{aligned}

and similarly, {A^c \cap E = \bigcup_N \bigcap_{n\geq N} (A_n^c \cap E)}, which gives

\displaystyle  \mu(A^c \cap E) \leq \sum_N \mu\big( \bigcap_{n\geq N} A_n^c \cap E\big) \leq \sum_N \limsup_{n\rightarrow\infty} \mu(A_n^c \cap E) = 0.

Then {\rho(A,E) = 0}, which completes the proof of {(\Rightarrow)}.

The first half of the argument above (the {\Leftarrow} direction) appears in this MathOverflow answer to a question discussing the relationship between different notions of separability, which ultimately inspired this post. That answer (by Joel David Hamkins) also suggests one further notion of “countably generated”, distinct from all of the above; say that {(X,\mathcal{A},\mu)} satisfies (CCG) (for “completion of countably generated”) if there is a countably generated {\sigma}-algebra {\Sigma \subset \mathcal{A}} such that {\mathcal{A} \subset \overline{\Sigma}}, where {\overline{\Sigma}} is the completion of {\Sigma} with respect to the measure {\mu}. One quickly sees that

\displaystyle  \textbf{(CG)} \Rightarrow \textbf{(CCG)} \Rightarrow \textbf{(CG0)}.

Both reverse implications fail; the Lebesgue {\sigma}-algebra satisfies (CCG) but not {\textbf{(CG)}}, and an example satisfying separability (and hence (CG0)) but not (CCG) was given in that same MathOverflow answer (the example involves ordinal numbers and something called the “club filter”, which I will not go into here).

3. Abstract {\sigma}-algebras

It is worth looking at some of the previous arguments through a different lens, that will also appear next time when we discuss the classification problem.

Recall the space of equivalence classes {\hat{\mathcal{A}}} from earlier, where {A\sim B} means that {\mu(A\Delta B) = 0}. Although elements of {\hat{\mathcal{A}}} are not subsets of {X}, we can still speak of the “union” of two such elements by choosing representatives from the respective equivalence classes; that is, given {\hat A, \hat B\in \hat{\mathcal{A}}}, we choose representatives {A\in \hat A} and {B\in \hat B} (so {A,B\in \mathcal{A}}), and consider the “union” of {\hat A} and {\hat B } to be the equivalence class of {A\cup B}; write this as {\hat A\vee \hat B}. One can easily check that this is well-defined; if {A_1\sim A_2} and {B_1\sim B_2}, then {(A_1\cup B_1) \sim (A_2 \cup B_2)}.

This shows that {\cup} induces a binary operation {\vee} on the space {\hat{\mathcal{A}}}; similarly, {\cap} induces a binary operation {\wedge}, complementation {A\mapsto A^c} induces an operation {\hat A \mapsto \hat A'}, and set inclusion {A\subset B} induces a partial order {\hat A \leq \hat B}. These give {\hat{\mathcal{A}}} the structure of a Boolean algebra; say that {\Sigma} is an abstract Boolean algebra if it has a partial order {\leq}, binary operations {\vee}, {\wedge}, and a unary operation {'}, satisfying the same rules as inclusion, union, intersection, and complementation:

  1. {A \vee B} is the join of {A} and {B} (the minimal element such that {A,B \leq A\vee B}), and {A\wedge B} is the meet of {A} and {B} (the maximal element such that {A,B \geq A\wedge B});
  2. the distributive laws {A\vee (B\wedge C) = (A\vee B) \wedge (A\vee C)} and {A\wedge (B\vee C) = (A\wedge B) \vee (A\wedge C)} hold;
  3. there is a maximal element {X} whose complement {X'} is the minimal element;
  4. {A\wedge A' = X'} and {A\vee A' = X}.

For the form of this list I have followed this blog post by Terry Tao, which gives a good in-depth discussion of some other issues relating to concrete and abstract Boolean algebras and {\sigma}-algebras.

Exercise 1 Using the four axioms above, prove the following properties:

  • {A'} is the unique element satisfying (4) — that is, if {A\vee B =X} and {A\wedge B = X'}, then {B=A'};
  • {(A')' = A};
  • de Morgan’s laws: {(A\vee B)' = A' \wedge B'} and {(A\wedge B)' = A' \vee B'}.

If you get stuck, see Chapter IV, Lemma 1.2 in A Course in Universal Algebra by Burris and Sankappanavar.
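Since any power set with the usual operations is a concrete Boolean algebra, the properties in the exercise can at least be spot-checked by brute force; here is a short Python verification over {2^{\{0,1,2\}}} (a finite check, of course, not a proof):

```python
from itertools import chain, combinations

# the concrete Boolean algebra 2^X with X = {0,1,2}
X = frozenset({0, 1, 2})
algebra = [frozenset(s) for s in
           chain.from_iterable(combinations(sorted(X), r) for r in range(4))]

def comp(A):
    return X - A

for A in algebra:
    assert comp(comp(A)) == A                      # (A')' = A
    for B in algebra:
        # uniqueness of complements: A v B = X and A ^ B = X' forces B = A'
        if (A | B) == X and (A & B) == frozenset():
            assert B == comp(A)
        # de Morgan's laws
        assert comp(A | B) == comp(A) & comp(B)
        assert comp(A & B) == comp(A) | comp(B)
```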

In fact {\hat{\mathcal{A}}} inherits just a little bit more, since {\vee} (and hence {\wedge}) can be iterated countably many times. We add this as a fifth axiom, and say that an abstract Boolean algebra {\Sigma} is an abstract {\sigma}-algebra if in addition to (1)–(4) it satisfies

  5. any countable family {A_1,A_2,\dots \in \Sigma} has a least upper bound {\bigvee_n A_n} and a greatest lower bound {\bigwedge_n A_n}.

A measured abstract {\sigma}-algebra is a pair {(\Sigma,\mu)}, where {\Sigma} is an abstract {\sigma}-algebra and {\mu\colon \Sigma\rightarrow [0,\infty]} is a function satisfying the usual properties: {\mu(X')=0} and {\mu(\bigvee_n A_n) = \sum_n \mu(A_n)} whenever {A_i \wedge A_j = X'} for all {i\neq j}. (Note that {X'} is playing the role of {\emptyset}, but we avoid the latter notation to remind ourselves that elements of {\Sigma} do not need to be represented as subsets of some ambient space.)

The operations {\vee,\wedge,'} induce a binary operator {\Delta} on {\Sigma} by

\displaystyle  A \Delta B = (A\wedge B') \vee (B \wedge A'),

which is the abstract analogue of the symmetric difference, and so a measured abstract {\sigma}-algebra carries a pseudo-metric {\rho} defined by

\displaystyle  \rho(A,B) = \mu(A \Delta B).

If {(\Sigma,\mu)} has the property that {\mu(A)>0} for all {A\neq X'}, then this becomes a genuine metric.
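Here is a quick Python check that the abstract definition of {\Delta} reproduces the usual symmetric difference in a concrete power set, and that {\rho} is a genuine metric when the measure (normalized counting measure, chosen just for illustration) has no non-trivial null sets:

```python
from itertools import chain, combinations

X = frozenset(range(4))

def comp(A):
    return X - A

def delta(A, B):
    # abstract definition: A Delta B = (A ^ B') v (B ^ A')
    return (A & comp(B)) | (B & comp(A))

def mu(A):
    return len(A) / len(X)       # normalized counting measure: no null sets

def rho(A, B):
    return mu(delta(A, B))

subsets = [frozenset(s) for s in
           chain.from_iterable(combinations(sorted(X), r) for r in range(5))]

for A in subsets:
    for B in subsets:
        assert delta(A, B) == A ^ B           # matches symmetric difference
        assert (rho(A, B) == 0) == (A == B)   # rho is a genuine metric here
```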

In particular, if {(X,\mathcal{A},\mu)} is a measure space and {\Sigma = \hat{\mathcal{A}}} is the space of equivalence classes modulo {\sim} (equivalence mod 0), then {\mu} induces a function {\hat{\mathcal{A}} \rightarrow [0,\infty]}, which we continue to denote by {\mu}, such that {(\hat{\mathcal{A}},\mu)} is a measured abstract {\sigma}-algebra; this has the property that {\mu(A)>0} for all non-trivial {A\in \hat{\mathcal{A}}}, and so it defines a metric {\rho} as above.

Given an abstract {\sigma}-algebra {\Sigma} and a subset {\Gamma \subset \Sigma}, the algebra ({\sigma}-algebra) generated by {\Gamma\subset \Sigma} is the smallest algebra ({\sigma}-algebra) in {\Sigma} that contains {\Gamma}. Now we can interpret the equivalence (1) from the previous section (which drove the correspondence between (CG0) and separability) in terms of the measured abstract {\sigma}-algebra {\hat{\mathcal{A}}}.

Proposition 2 Let {(\Sigma,\mu)} be a measured abstract {\sigma}-algebra with no non-trivial null sets. Then for any algebra {\mathop{\mathcal F} \subset \Sigma}, we have {\overline{\mathop{\mathcal F}} = \sigma(\mathop{\mathcal F})}; that is, the {\rho}-closure of {\mathop{\mathcal F}} is equal to the {\sigma}-algebra generated by {\mathop{\mathcal F}}.

Next time we will see how separability (or equivalently, (CG0)) can be used to give a classification result for abstract measured {\sigma}-algebras, which at first requires us to take the abstract point of view introduced in this section. Finally, we will see what is needed to go from there to a similar result for probability spaces.


Unique MMEs with specification – an alternate proof

The variational principle for topological entropy says that if {X} is a compact metric space and {f\colon X\rightarrow X} is a continuous map, then {h_{\mathrm{top}}(f) = \sup_\mu h_\mu(f)}, where {h_{\mathrm{top}}} is the topological entropy, and the supremum is taken over all {f}-invariant Borel probability measures. A measure achieving this supremum is called a measure of maximal entropy (MME for short), and it is interesting to understand when a system has a unique MME.

Let’s look at this question in the setting of symbolic dynamics. Let {A} be a finite set, which we call an alphabet, and let {A^{\mathbb N}} be the set of all infinite sequences of symbols in {A}. This is a compact metric space in a standard way, and the shift map {\sigma} defined by {(\sigma(x))_n = x_{n+1}} is continuous. We consider a compact {\sigma}-invariant set {X\subset A^{\mathbb N}} and ask whether or not {(X,\sigma)} has a unique MME.

When {X} is a mixing subshift of finite type (SFT), this was answered in the 1960’s by Parry; there is a unique MME, and it can be obtained by considering the transition matrix for {X} and using some tools from linear algebra. A different proof was given by Bowen in 1974 using the specification property; this holds for all mixing SFTs, but also for a more general class of shift spaces.
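To illustrate the linear-algebra route in the simplest case, here is a Python sketch for the golden mean shift (the SFT on {\{0,1\}} forbidding the word {11}). It builds the Markov measure from the Perron eigendata of the transition matrix and checks that its entropy equals {\log\lambda}; this is only a numerical sanity check for one example, not Parry's argument:

```python
import math

# golden mean SFT: states {0,1}, the transition 1 -> 1 is forbidden
A = [[1, 1],
     [1, 0]]
lam = (1 + math.sqrt(5)) / 2       # Perron eigenvalue of A
r = [lam, 1.0]                     # right Perron eigenvector: A r = lam r

# Parry-type Markov measure: P_ij = A_ij * r_j / (lam * r_i)
P = [[A[i][j] * r[j] / (lam * r[i]) for j in range(2)] for i in range(2)]
assert all(abs(sum(row) - 1) < 1e-9 for row in P)        # P is stochastic

# stationary vector (solved by hand for two states): pi P = pi
pi = [lam / (2 * lam - 1), 1 - lam / (2 * lam - 1)]
assert all(abs(sum(pi[i] * P[i][j] for i in range(2)) - pi[j]) < 1e-9
           for j in range(2))

# the entropy of this Markov chain equals the topological entropy log(lam)
h = -sum(pi[i] * P[i][j] * math.log(P[i][j])
         for i in range(2) for j in range(2) if P[i][j] > 0)
assert abs(h - math.log(lam)) < 1e-9
```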

The purpose of this note is to describe a variant of Bowen’s proof; roughly speaking, we follow Bowen’s proof for the first half of the argument, then give a shorter version of the second half of the argument, which follows comments made in conversation with Dan Thompson and Mike Hochman.

1. The strategy

The language of the shift space {X} is the set of all finite words that appear in some element of {X}. That is, given {n\in {\mathbb N}} we write {\mathcal{L}_n = \{w\in A^n \mid [w]\neq\emptyset\}}, where {[w] = \{x\in X \mid x_1\cdots x_n = w\}}. Then we write {\mathcal{L} = \bigcup_n \mathcal{L}_n}. Write {|w|} for the length of {w}.

In the setting of symbolic dynamics, the topological transitivity property takes the following form: {X} is transitive if and only if for every {u,v\in \mathcal{L}} there is {w\in \mathcal{L}} such that {uwv\in \mathcal{L}}. Transitivity by itself gives no control over the length of {w}. We say that a shift space {X} has specification if there is {\tau\in {\mathbb N}} such that for every {u,v\in \mathcal{L}} there is {w\in \mathcal{L}} with {|w|=\tau} such that {uwv\in \mathcal{L}}; thus specification can be thought of as a uniform transitivity property.
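For a concrete instance, the golden mean shift (forbid the word {11}) has specification with {\tau=1}, since the symbol {0} can always be inserted between two admissible words; the following Python sketch verifies this by brute force on short words (helper names are mine):

```python
from itertools import product

def language(n):
    """Words of length n in the golden mean shift (no factor '11')."""
    return ["".join(w) for w in product("01", repeat=n)
            if "11" not in "".join(w)]

def glue(u, v, tau):
    """Search for a connecting word of length exactly tau."""
    for w in ("".join(t) for t in product("01", repeat=tau)):
        if "11" not in u + w + v:
            return w
    return None

# tau = 1 works for every pair: the gluing word '0' never creates '11'
for u in language(4):
    for v in language(4):
        assert glue(u, v, 1) is not None
```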

Let {h = h_{\mathrm{top}}(\sigma) = \lim_{n\rightarrow\infty} \frac 1n \log \#\mathcal{L}_n} be the topological entropy of {(X,\sigma)}. Bowen’s proof of uniqueness has the following structure:

  1. Show that there is {C>0} such that {e^{nh} \leq \#\mathcal{L}_n \leq C e^{nh}} for every {n}.
  2. Prove that for every {\alpha>0} there is {\beta>0} such that if {\mu} is an MME and {\mathcal{D}_n \subset \mathcal{L}_n} is a collection of words with {\mu(\mathcal{D}_n) \geq \alpha}, then we have {\#\mathcal{D}_n \geq \beta e^{nh}}. This can be thought of as a uniform version of the Katok entropy formula.
  3. Follow the proof of the variational principle to explicitly construct an MME {\mu}; then use the specification property and the counting estimates on {\#\mathcal{L}_n} to show that {\mu} has the Gibbs property and is ergodic.
  4. Show that if {\nu} is another ergodic MME, then the uniform Katok entropy formula for {\nu} and the Gibbs property for {\mu} cannot hold simultaneously; this contradiction proves uniqueness.

The proof we give here follows the above structure exactly for steps 1 and 2. Instead of steps 3 and 4, though, we give the following argument; if {\nu\neq \mu} are two distinct ergodic MMEs, then we can use the uniform Katok entropy formula for {\nu,\mu} together with the specification property to create more entropy in the system, a contradiction. In the next section we make all of this precise.

The advantage of this proof is that it allows us to replace step 3 of Bowen’s proof with a different argument that may be easier and less technical (depending on one’s taste). The disadvantage is that it does not include a proof of the Gibbs property for {\mu}, which is useful to know about in various settings.

2. The proof

2.1. Counting estimates

Write {\mathcal{L}_{m} \mathcal{L}_n} for the set of all words of the form {vw}, where {v\in \mathcal{L}_m} and {w\in \mathcal{L}_n}. Then it is clear that {\mathcal{L}_{m+n} \subset \mathcal{L}_m \mathcal{L}_n}, so {\#\mathcal{L}_{m+n} \leq (\#\mathcal{L}_m)(\#\mathcal{L}_n)}. In particular, {\log \#\mathcal{L}_n} is subadditive, so by Fekete’s lemma {h = \lim \frac 1n \log \#\mathcal{L}_n = \inf_n \frac 1n \log \#\mathcal{L}_n}, and we get {\#\mathcal{L}_n \geq e^{nh}} for every {n}.

The upper bound requires the specification property. Define a map {(\mathcal{L}_n)^k \rightarrow \mathcal{L}_{k(n+\tau)}} by sending {w_1,\dots,w_k} to {w_1 u_1 w_2 u_2 \cdots w_k u_k}, where {u_i \in \mathcal{L}_\tau} are provided by specification. This map is 1-1 so {\#\mathcal{L}_{k(n+\tau)} \geq (\#\mathcal{L}_n)^k}. Taking logs gives

\displaystyle  \frac 1{k(n+\tau)} \log\#\mathcal{L}_{k(n+\tau)} \geq \frac 1{n+\tau} \log \#\mathcal{L}_n,

and sending {k\rightarrow\infty} takes the left-hand side to {h}, so {\#\mathcal{L}_n \leq e^{(n+\tau)h}}; this gives the counting bounds we claimed earlier, with {C= e^{\tau h}}.
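For the golden mean shift one can test both counting bounds numerically: there {\#\mathcal{L}_n} is the Fibonacci number {F_{n+2}}, {h = \log\varphi} with {\varphi} the golden ratio, and {\tau = 1}, so {C = e^{\tau h} = \varphi}. A brute-force Python check (illustration only):

```python
import math
from itertools import product

phi = (1 + math.sqrt(5)) / 2      # e^h for the golden mean shift; tau = 1

def count(n):
    """#L_n for the golden mean shift (the Fibonacci number F_{n+2})."""
    return sum(1 for w in product("01", repeat=n) if "11" not in "".join(w))

for n in range(1, 16):
    c = count(n)
    # e^{nh} <= #L_n <= C e^{nh} with C = e^{tau h} = phi
    assert phi ** n <= c <= phi ** (n + 1)
```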

The gluing construction in the previous paragraph will be used later on when we need to derive a contradiction by producing extra entropy.

2.2. Uniform Katok formula

Now suppose {\mu} is any MME, and given {w\in \mathcal{L}} write {\mu(w) = \mu([w])}. Similarly write {\mu(\mathcal{D}_n) = \mu(\bigcup_{w\in \mathcal{D}_n} [w])}. The sequence {H_n(\mu) = \sum_{w\in \mathcal{L}_n} -\mu(w) \log \mu(w)} is subadditive, so Fekete’s lemma gives {h_\mu(\sigma) = \lim_n \frac 1n H_n(\mu) = \inf_n \frac 1n H_n(\mu)}; since {\mu} is an MME we have {h_\mu(\sigma) = h}, and hence {nh \leq H_n(\mu)} for every {n}.

Given {\mathcal{D}_n} with {\mu(\mathcal{D}_n)\geq \alpha}, we have

\displaystyle  \begin{aligned} nh &\leq \sum_{w\in \mathcal{D}_n} - \mu(w)\log \mu(w) + \sum_{w\in \mathcal{D}_n^c} -\mu(w)\log\mu(w) \\ &= \mu(\mathcal{D}_n)\sum_{w\in \mathcal{D}_n} - \frac{\mu(w)}{\mu(\mathcal{D}_n)} \log \frac{\mu(w)}{\mu(\mathcal{D}_n)} + \mu(\mathcal{D}_n^c)\sum_{w\in \mathcal{D}_n^c} - \frac{\mu(w)}{\mu(\mathcal{D}_n^c)} \log \frac{\mu(w)}{\mu(\mathcal{D}_n^c)} \\ &\qquad - \mu(\mathcal{D}_n)\log\mu(\mathcal{D}_n) - \mu(\mathcal{D}_n^c)\log\mu(\mathcal{D}_n^c) \\ &\leq \mu(\mathcal{D}_n) \log\#\mathcal{D}_n + \mu(\mathcal{D}_n^c) \log \#\mathcal{D}_n^c + \log 2 \\ &\leq \mu(\mathcal{D}_n) \log\#\mathcal{D}_n + (1-\mu(\mathcal{D}_n)) (nh + \log C) + \log 2 \\ &= \mu(\mathcal{D}_n) (\log \#\mathcal{D}_n - nh - \log C) + nh + \log (2C). \end{aligned}

Solving for {\#\mathcal{D}_n} gives

\displaystyle  \log\#\mathcal{D}_n \geq nh + \log C - \frac{\log(2C)}{\mu(\mathcal{D}_n)} \geq nh + \log C - \frac{\log(2C)}{\alpha},

so taking {\beta} with {\log\beta = \log C - \log(2C)/\alpha} gives {\#\mathcal{D}_n \geq \beta e^{nh}}, verifying the uniform Katok formula.

Note that the argument here does not rely directly on the specification property; it only uses the fact that {\mu} is an MME and that we have the uniform upper bound {\#\mathcal{L}_n \leq Ce^{nh}}. In fact, if one is a little more careful with the computations it is possible to show that we can take {\beta = \beta(\alpha) = C^{1-\frac 1\alpha} e^{-\frac{h(\alpha)}\alpha}}, where {h(\alpha) = -\alpha\log\alpha - (1-\alpha)\log(1-\alpha)}. This has the pleasing property that {\beta\rightarrow 1} as {\alpha\rightarrow 1}, so the lower bound for {\#\mathcal{D}_n} converges to the lower bound for {\#\mathcal{L}_n}.
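The workhorse in the computation above is the elementary splitting bound {H_n(\mu) \leq \mu(\mathcal{D}_n)\log\#\mathcal{D}_n + \mu(\mathcal{D}_n^c)\log\#\mathcal{D}_n^c + \log 2}, valid for any splitting of a finite distribution into two blocks. Here is a Python sketch testing it on random synthetic distributions (synthetic data, not an actual MME):

```python
import math, random

random.seed(0)

def H(ps):
    """Entropy of a finite probability vector."""
    return -sum(p * math.log(p) for p in ps if p > 0)

# splitting bound: H(p) <= q*log|D| + (1-q)*log|D^c| + log 2, where q = p(D)
for _ in range(100):
    k = random.randint(2, 50)
    p = [random.random() for _ in range(k)]
    s = sum(p)
    p = [x / s for x in p]
    m = random.randint(1, k - 1)      # D = first m indices, D^c = the rest
    q = sum(p[:m])
    bound = q * math.log(m) + (1 - q) * math.log(k - m) + math.log(2)
    assert H(p) <= bound + 1e-9
```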

2.3. Producing more entropy

Now suppose {\mu,\nu} are two distinct ergodic MMEs. Then {\mu\perp \nu} so there are disjoint sets {Y,Z\subset X} with {\mu(Y) = 1} and {\nu(Z)=1}. Fix {\delta>0} and let {Y'\subset Y} and {Z'\subset Z} be compact sets such that {\mu(Y')> 1-\delta} and {\nu(Z')>1-\delta}. Then the distance between {Y',Z'} is positive, so there is {N\in {\mathbb N}} such that for every {n\geq N-\tau}, no {n}-cylinder intersects both {Y',Z'}. In particular, putting {\mathcal{Y}_n = \{w\in \mathcal{L}_n \mid [w] \cap Y' \neq\emptyset\}}, and similarly for {\mathcal{Z}_n} with {Z'}, we have

  • {\mathcal{Y}_n \cap \mathcal{Z}_n = \emptyset} for every {n\geq N-\tau}, and
  • {\mu(\mathcal{Y}_n) \geq 1-\delta} and {\nu(\mathcal{Z}_n) \geq 1-\delta}, so by the uniform Katok formula we have {\#\mathcal{Y}_n,\#\mathcal{Z}_n \geq \beta e^{nh}}, where {\beta = \beta(1-\delta) > 0}.

Up to now all of the arguments that we have made appear in Bowen’s proof; the approximation argument just given appears in step 4 of his proof, where one uses the Gibbs property of the constructed MME to derive a contradiction with the uniform Katok formula. It is at this point that our argument diverges from Bowen’s, since we have not proved a Gibbs property. Instead, we apply the specification property to the collections of words {\mathcal{Y},\mathcal{Z} \subset \mathcal{L}} to get a lower bound on {\#\mathcal{L}_n} that grows more quickly than {e^{nh}}, a contradiction.

Before giving the correct argument, let me describe three incorrect arguments that illustrate the main ideas and also show why certain technical points arise in the final argument; in particular this will describe the process of arriving at the proof, which I think is worthwhile.

First wrong argument

Here is a natural way to proceed. With {N,\mathcal{Y},\mathcal{Z}} as above, consider for each {k\in {\mathbb N}} and each {\omega\in \{0,1\}^k} the set of all words

\displaystyle  w^1 u^1 w^2 u^2 \cdots w^k u^k,

where {w^i \in \mathcal{Y}_{N-\tau}} if {\omega_i = 0} and {\mathcal{Z}_{N-\tau}} if {\omega_i=1}, and {u^i \in \mathcal{L}_\tau} are the gluing words provided by specification. Note that each choice of {\omega} produces at least {(\beta e^{(N-\tau)h})^k} words in {\mathcal{L}_{kN}}. Moreover, the collections of words produced by different choices of {\omega} are disjoint, because {\mathcal{Y}_{N-\tau}} and {\mathcal{Z}_{N-\tau}} are disjoint. We conclude that

\displaystyle  \#\mathcal{L}_{kN} \geq 2^k \beta^k e^{k(N-\tau)h},

and hence

\displaystyle  \tfrac 1{kN}\log \#\mathcal{L}_{kN} \geq h + \tfrac 1N(\log(2\beta) - \tau h).

If {\log(2\beta) > \tau h} then this would be enough to show that {h_\mathrm{top} > h}, a contradiction. Unfortunately, since {\beta< 1} we have {\log(2\beta) < \log 2}, so this argument cannot work whenever {\tau h \geq \log 2}, and we must try again…

Second wrong argument

Looking at the above attempted proof, one may describe the problem as follows: each of the words {u^i} makes us lose a factor of {e^{-\tau h}} from our estimate on {\#\mathcal{L}_{kN}}. If {\omega_i = \omega_{i+1}}, then instead of letting {w^i} and {w^{i+1}} range over {\mathcal{Y}_{N-\tau}} and then gluing them, we could replace the words {w^i u^i w^{i+1}} with the words {v\in \mathcal{Y}_{2N-\tau}}. In particular, this would replace the estimate {\beta^2 e^{2(N-\tau)h}} with the estimate {\beta e^{(2N-\tau)h}}.

This suggests that given {\omega\in \{0,1\}^k}, we should only keep track of the set {J \subset \{1,\dots, k\}} of indices {j} for which {\omega_{j+1} \neq \omega_j}, since if {\omega_{j+1} = \omega_j} we can avoid losing the factor of {e^{-\tau h}} by avoiding the gluing process.

So, let’s try it. Given {J = \{j_1 < \cdots < j_\ell \} \subset \{1, \dots, k\}}, let {m_i = j_{i+1} - j_i} (with {m_0=j_1} and {m_\ell = k - j_\ell}), and consider the map

\displaystyle  \pi_J \colon \mathcal{Y}_{m_0 N - \tau} \times \mathcal{Z}_{m_1 N - \tau} \times \mathcal{Y}_{m_2 N - \tau} \times \cdots \times \mathcal{Y}_{m_{\ell-1}N - \tau} \times \mathcal{Z}_{m_\ell N} \rightarrow \mathcal{L}_{kN}

given by specification (whether the product ends with {\mathcal{Y}} or {\mathcal{Z}} depends on the parity of {\ell}).

Let {\mathcal{D}^J_{kN}} be the image of {\pi_J}, and note that

\displaystyle  \#\mathcal{D}_{kN}^J \geq \prod_{i=0}^\ell \beta e^{(m_iN - \tau) h} \geq \beta^{\ell + 1} e^{(kN - \ell\tau)h}. \ \ \ \ \ (1)

If the collections {\mathcal{D}_{kN}^J}, {\mathcal{D}_{kN}^{J'}} were disjoint for different choices of {J,J'}, then we could fix {\gamma>0} and sum (1) over all {J} with {\#J \leq \gamma k} to get

\displaystyle  \#\mathcal{L}_{kN} \geq \left(\sum_{\ell = 0}^{\gamma k} {k \choose \ell} \right) \beta^{\gamma k} e^{(kN - \gamma k \tau)h} \geq e^{(-\gamma \log \gamma)k} e^{k(\gamma\log\beta + Nh - \gamma\tau h)},

where we use Stirling’s approximation for the last inequality. Taking logs and dividing by {kN} gives

\displaystyle  \tfrac 1{kN} \log \#\mathcal{L}_{kN} \geq h + \tfrac \gamma N ( -\log \gamma + \log\beta - \tau h).

For {\gamma>0} sufficiently small this is {> h}, which gives {h_\mathrm{top} > h}, a contradiction.
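The combinatorial input here is the lower bound {\sum_{\ell \leq \gamma k} {k\choose\ell} \geq e^{(-\gamma\log\gamma)k}}, which holds once {k} is reasonably large; a quick Python check for moderate {k} (numerical evidence only — the argument uses the bound asymptotically as {k\rightarrow\infty}):

```python
import math

def binom_sum(k, m):
    """Sum of binomial coefficients C(k, ell) for ell = 0, ..., m."""
    return sum(math.comb(k, ell) for ell in range(m + 1))

gamma = 0.3
for k in (10, 20, 50, 100, 200):
    lhs = binom_sum(k, round(gamma * k))
    rhs = math.exp(-gamma * math.log(gamma) * k)   # e^{(-gamma log gamma) k}
    assert lhs >= rhs
```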

The problem with this argument is that the collections {\mathcal{D}_{kN}^J} need not be disjoint for different choices of {J}; this is because {w\in \mathcal{Y}_{mN}} may have {w_{[jN, (j+1)N)} \in \mathcal{Z}} for some value of {j}, so that we cannot necessarily recover {J} uniquely from knowing {w\in \mathcal{D}_{kN}^J}.

Third wrong argument

Let’s try to address the issue just raised, that we cannot recover {J} uniquely from {w\in \mathcal{D}_{kN}^J} because {w\in \mathcal{Y}_{mN}} may have subwords in {\mathcal{Z}}. We address this by only using words {w} where there are `not many’ such subwords. More precisely, given {w\in \mathcal{Y}_{n}} for {n\geq N-\tau}, let

\displaystyle  B(w) = \{ 0 \leq j < \tfrac{n+\tau}N \mid w_{[jN, (j+1)N-\tau)}\in \mathcal{Z}_{N-\tau} \}

be the set of `bad’ times, and similarly with the roles of {\mathcal{Y}} and {\mathcal{Z}} reversed. Let {b(w) = \#B(w)}, and observe that every cylinder {[v]} with {v\in \mathcal{Z}_{N-\tau}} is disjoint from {Y'}, so {\mu(\bigcup_{v\in \mathcal{Z}_{N-\tau}} [v]) \leq 1-\mu(Y') < \delta}; invariance of {\mu} then gives

\displaystyle  \mu\{w\in \mathcal{Y}_{n} \mid j\in B(w)\} \leq \delta

for every {0\leq j < \frac{n+\tau}N}. We conclude that

\displaystyle  \sum_{w\in \mathcal{Y}_{n}} b(w)\mu(w) \leq \sum_{j=0}^{\lfloor\frac{n+\tau}N\rfloor-1} \mu\{w\in \mathcal{Y}_{n} \mid j\in B(w)\} \leq \delta \tfrac{n+\tau}N.

Let {\mathcal{Y}'_{n} = \{w\in \mathcal{Y}_{n} \mid b(w) \leq 2\delta \frac{n+\tau}N\}}, and note that

\displaystyle  \delta \tfrac{n+\tau}N \geq \sum_{w\in \mathcal{Y}_{n} \setminus \mathcal{Y}_{n}'} \mu(w) 2\delta \tfrac{n+\tau}N,

and hence

\displaystyle  \tfrac 12 \geq \mu(\mathcal{Y}_n) - \mu(\mathcal{Y}_n') \geq 1-\delta - \mu(\mathcal{Y}_n').

We conclude that

\displaystyle  \mu(\mathcal{Y}_{n}') \geq \tfrac 12 - \delta,

so taking {\beta = \beta(\frac 12 - \delta)}, the uniform Katok estimate gives {\#\mathcal{Y}_{n}' \geq \beta e^{nh}}. A similar definition and argument gives {\mathcal{Z}_{n}'}.
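The passage from {\mathcal{Y}_n} to {\mathcal{Y}_n'} is just Markov’s inequality applied to the function {b(w)}. Here is a Python sketch of that step with synthetic weights standing in for {\mu} (nothing here is the actual measure; it only illustrates the inequality):

```python
import random

random.seed(1)

# synthetic weights standing in for mu, and synthetic bad-time counts b(w);
# the values 0 and 20 make the tail event non-trivial
k = 1000
mu = [random.random() for _ in range(k)]
s = sum(mu)
mu = [x / s for x in mu]
b = [random.choice([0, 0, 0, 20]) for _ in range(k)]

avg = sum(bi * mi for bi, mi in zip(b, mu))   # plays the role of delta*(n+tau)/N
# Markov's inequality: the mu-weight of {w : b(w) > 2*avg} is at most 1/2
heavy = sum(mi for bi, mi in zip(b, mu) if bi > 2 * avg)
assert heavy <= 0.5
```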

Now we repeat the argument from the previous section (the second wrong argument) using {\mathcal{Y}',\mathcal{Z}'} in place of {\mathcal{Y},\mathcal{Z}}. Given {J\subset \{1,\dots, k\}}, let {\mathcal{D}_{kN}^J} be the image of the map {\pi_J} with the corresponding restricted domain, and note that the estimate (1) still holds (with the new value of {\beta}).

The final piece of the puzzle is to take {v \in \mathcal{D}_{kN}^J} and estimate how many other collections {\mathcal{D}_{kN}^{J'}} can contain it; that is, how many possibilities there are for {J} once we know {v}. We would like to do the following: write {v = w^1 u^1 w^2 u^2 \cdots w^\ell} and then argue that {J'} must be contained in the union of the sets of times corresponding to the {B(w^i)}. The problem is that this only works when there is a disagreement between which of the sets {\mathcal{Y}',\mathcal{Z}'} the maps {\pi_J,\pi_{J'}} are trying to use, and so I cannot quite make this give us the bounds we want.

The correct argument

To fix this final issue we change the construction slightly; instead of letting {J} mark the times where we transition between {\mathcal{Y}',\mathcal{Z}'}, we do a construction where at each {j\in J} we transition from {\mathcal{Y}'} to {\mathcal{Z}} and then immediately back to {\mathcal{Y}'}. Then knowing that {v \in \mathcal{D}_{kN}^J \cap \mathcal{D}_{kN}^{J'}} will impose strong conditions on the set {J'}.

Let’s make everything precise and give the complete argument. As in the very beginning of this section we fix {\delta>0} and let {Y',Z'} be disjoint compact sets with {\mu(Y'),\nu(Z')>1-\delta}, where {\mu,\nu} are distinct ergodic MMEs. Let {N} be such that for every {n\geq N-\tau}, no {n}-cylinder intersects both {Y',Z'}.

Let {\mathcal{Y}_n = \{w\in \mathcal{L}_n \mid [w] \cap Y' \neq \emptyset\}}, and similarly for {\mathcal{Z}_n}. We repeat verbatim part of the argument from the last section; given {w\in \mathcal{Y}_{n}} for {n\geq N-\tau}, let

\displaystyle  B(w) = \{ 0 \leq j < \tfrac{n+\tau}N \mid w_{[jN, (j+1)N-\tau)}\in \mathcal{Z}_{N-\tau} \}

be the set of `bad’ times. Let {b(w) = \#B(w)}, and observe that every cylinder {[v]} with {v\in \mathcal{Z}_{N-\tau}} is disjoint from {Y'}, so {\mu(\bigcup_{v\in \mathcal{Z}_{N-\tau}} [v]) \leq 1-\mu(Y') < \delta}; invariance of {\mu} then gives

\displaystyle  \mu\{w\in \mathcal{Y}_{n} \mid j\in B(w)\} \leq \delta

for every {0\leq j < \frac{n+\tau}N}. We conclude that

\displaystyle  \sum_{w\in \mathcal{Y}_{n}} b(w)\mu(w) \leq \sum_{j=0}^{\lfloor\frac{n+\tau}N\rfloor-1} \mu\{w\in \mathcal{Y}_{n} \mid j\in B(w)\} \leq \delta \tfrac{n+\tau}N.

Let {\mathcal{Y}'_{n} = \{w\in \mathcal{Y}_{n} \mid b(w) \leq 2\delta \frac{n+\tau}N\}}, and note that

\displaystyle  \delta \tfrac{n+\tau}N \geq \sum_{w\in \mathcal{Y}_{n} \setminus \mathcal{Y}_{n}'} \mu(w) 2\delta \tfrac{n+\tau}N,

and hence

\displaystyle  \tfrac 12 \geq \mu(\mathcal{Y}_n) - \mu(\mathcal{Y}_n') \geq 1-\delta - \mu(\mathcal{Y}_n').

We conclude that

\displaystyle  \mu(\mathcal{Y}_{n}') \geq \tfrac 12 - \delta,

so taking {\beta = \beta(\frac 12 - \delta)}, the uniform Katok estimate gives {\#\mathcal{Y}_{n}' \geq \beta e^{nh}}.

Now given {k\in {\mathbb N}} and {J = \{j_1 < j_2< \cdots < j_\ell\} \subset \{1,\dots,k\}}, let {m_i = j_{i+1} - j_i} for {0\leq i\leq \ell} (putting {j_0=0} and {j_{\ell+1} = k}) and define a map

\displaystyle  \pi_J \colon \left( \prod_{i=0}^{\ell-1} \mathcal{Y}_{(m_i-1)N - \tau}' \times \mathcal{Z}_{N-\tau} \right) \times \mathcal{Y}_{(m_\ell-1)N - \tau}' \times \mathcal{Z}_{N} \rightarrow \mathcal{L}_{kN}

by the specification property, so that for {v^i \in \mathcal{Y}'_{(m_i-1)N-\tau}} and {w^i\in \mathcal{Z}_{N-\tau}} (with the final {w^\ell \in \mathcal{Z}_N}, so that the glued word has length exactly {kN}) we have

\displaystyle  \pi_J(v^0,w^0,\dots,v^\ell,w^\ell) = v^0 u^0 w^0 \hat u^0 v^1 \cdots v^\ell u^\ell w^\ell,

where {u^i,\hat u^i} have length {\tau} and are provided by specification. Let {\mathcal{D}_{kN}^J} be the image of {\pi_J} and note that

\displaystyle  \#\mathcal{D}_{kN}^J \geq \beta^{2\ell+2} e^{(kN - (2\ell +1)\tau)h}. \ \ \ \ \ (2)

Given {J} and {x\in \mathcal{D}_{kN}^J}, let {\mathcal{J}(x,J)} denote the set of {J' \subset \{1,\dots, k\}} such that {x\in \mathcal{D}_{kN}^{J'}}. Writing {x = v^0 u^0 w^0 \hat u^0 \cdots v^\ell u^\ell w^\ell}, note that {x\in \mathcal{D}_{kN}^{J'}} implies that for each {j\in J'}, either {j\in J} or there are consecutive elements {j_i < j_{i+1}} of {J} such that {j_i < j < j_{i+1}}, and in this latter case we have that {v^i_{[(j - j_i)N, (j-j_i + 1)N-\tau)} \in \mathcal{Z}_{N-\tau}}, so {j-j_i \in B(v^i)}. We conclude that

\displaystyle  J' \subset J \cup \left(\bigcup_{i=0}^\ell j_i + B(v^i)\right).

By our choice of {\mathcal{Y}'}, for each {x} the set on the right has at most {\ell + 2\delta k} elements. In particular, we have

\displaystyle  \#\mathcal{J}(x,J) \leq 2^{\ell + 2\delta k}.

This bounds the number of {\mathcal{D}_{kN}^J} that can contain a given {x\in \mathcal{L}_{kN}}, and since there are {\geq e^{(-\gamma\log\gamma)k}} distinct choices of {J} with {\#J \leq \gamma k}, the bound in (2) gives

\displaystyle  \#\mathcal{L}_{kN} \geq 2^{-(\gamma + 2\delta)k} e^{(-\gamma\log\gamma)k}\beta^{2\gamma k+2} e^{(kN - (2\gamma k+1)\tau)h}.

Taking logs gives

\displaystyle  \log \#\mathcal{L}_{kN} \geq kNh -(\gamma + 2\delta)k\log 2 - (\gamma\log\gamma)k + (2\gamma k + 2)\log\beta - (2\gamma k + 1)\tau h.

Dividing by {kN} and sending {k\rightarrow\infty} gives

\displaystyle  h_\mathrm{top} \geq h + \tfrac 1N(-\gamma\log\gamma - (\gamma + 2\delta)\log 2 + 2\gamma\log\beta - 2\gamma\tau h).

Putting {\gamma = \delta} this gives

\displaystyle  h_\mathrm{top} \geq h + \tfrac \delta N(-\log\delta - \log 8 + 2\log\beta - 2\tau h).

Thus it suffices to make the appropriate choice of {\delta} at the beginning of the proof. More precisely, let {\beta' = \beta(\frac 14)} be as in the uniform Katok lemma, and let {\delta\in (0,\frac 14)} be small enough that {-\log\delta > \log 8 - 2\log\beta' + 2\tau h}. Then {\beta(\frac 12-\delta) \geq \beta'} and so the estimate above gives

\displaystyle  h_\mathrm{top} \geq h + \tfrac\delta N(-\log\delta - \log 8 + 2\log\beta' - 2\tau h) > h,

which contradicts our original assumption that {h} was the topological entropy. This contradiction shows that there is a unique measure of maximal entropy.
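To see that the final choice of {\delta} really does produce a positive gap, here is a tiny numerical sketch in Python. All parameter values ({\beta'}, {\tau}, {h}, {N}) are made up purely for illustration and do not come from any specific system.

```python
from math import exp, log

# Illustrative values only -- none of these come from a specific system:
beta_p = 0.1   # beta' = beta(1/4) from the uniform Katok lemma
tau = 3        # gluing time from specification
h = 0.5        # entropy of the MME in the proof by contradiction
N = 20         # scale at which no cylinder meets both Y' and Z'

# The proof requires delta in (0, 1/4) with
# -log(delta) > log 8 - 2 log beta' + 2 tau h:
threshold = log(8) - 2 * log(beta_p) + 2 * tau * h
delta = min(0.25, exp(-threshold)) / 2  # safely inside the admissible range

# Resulting lower bound on the entropy gap h_top - h:
gap = (delta / N) * (-log(delta) - log(8) + 2 * log(beta_p) - 2 * tau * h)
```

With this choice of {\delta}, the quantity `gap` is positive, giving the contradiction {h_\mathrm{top} \geq h + \mathrm{gap} > h}.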


De Bruijn graphs and entropy at finite scales

Let {\mathcal{A}} be a finite set, which we call an alphabet, and let {x \in \mathcal{A}^{\mathbb N}} be an infinite sequence of letters from {\mathcal{A}}. It is natural to ask how complex the sequence {x} is: for example, if the alphabet is {\{H,T\}}, then we expect a typical sequence produced by flipping a coin to be in some sense more complex than the sequence {x=HHHHH\dots}.

One important way of making this notion precise is the entropy of the shift space generated by {x}, a notion coming from symbolic dynamics. Let {a_k(x)} be the number of words of length {k} (that is, elements of {\mathcal{A}^k}) that appear as subwords of {x}. Clearly we have {1\leq a_k(x) \leq (\#\mathcal{A})^k}. Roughly speaking, the entropy of {x} is the exponential growth rate of {a_k(x)}. More precisely, we write

\displaystyle  h(x) := \varlimsup_{k\rightarrow\infty} \tfrac 1k \log a_k(x).

Of course, in practice it is often the case that one does not have an infinite sequence, but merely a very long one. For example, it has been suggested that entropy (and a related quantity, the topological pressure) can play a role in the analysis of DNA sequences; see [D. Koslicki, “Topological entropy of DNA sequences”, Bioinformatics 27(8), 2011, p. 1061–1067] and [D. Koslicki and D.J. Thompson, “Coding sequence density estimation via topological pressure”, J. Math. Biol. 70, 2015, p. 45–69]. In this case we have {\mathcal{A} = \{A,C,G,T\}} and are dealing with sequences {x} whose length is large, but finite.

Given a sequence {x} with length {n}, one can try to get some reasonable `entropy-like’ quantity by fixing some {k} and putting {h(x) = \frac 1k\log a_k(x)}. But what should we take {k} to be? If we take {k} to be too small we will get an overestimate (with {k=1} we will probably just find out that {x} contains every letter of {\mathcal{A}}), but if we take {k} too small we get an underestimate (with {k=n} we have {a_k(x)=1} so {h=0}).
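In code, this finite-scale estimator is only a couple of lines. The sketch below (Python, with function names of my choosing) counts the distinct length-{k} subwords of a finite string and returns {\frac 1k\log a_k(x)}.

```python
from math import log

def subword_count(x, k):
    """a_k(x): the number of distinct length-k subwords of the finite string x."""
    return len({x[i:i + k] for i in range(len(x) - k + 1)})

def h_est(x, k):
    """The finite-scale entropy estimate (1/k) log a_k(x)."""
    return log(subword_count(x, k)) / k
```

For {x = HHHHH\dots} this returns {0} at every scale, while a string containing every word of length {k} returns the maximum value {\log\#\mathcal{A}}.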

The convention proposed by Koslicki in the first paper above is to let {k} be the largest number such that there is some word of length {n} that contains every word of length {k}. If this actually happens, then {h(x)} achieves its maximum value {\log \#\mathcal{A}}; if some words do not appear, then {h(x)<\log \#\mathcal{A}}.

What is the relationship between {n} and {k} that guarantees existence of a word of length {n} containing every word of length {k}? Let {p=\#\mathcal{A}} and note that there are {p^k} words of length {k}; if {w\in \mathcal{A}^n} contains every such word then we must have {n\geq p^k + k-1}, since the length-{k} subwords of {w} are precisely {(w_1\cdots w_k)}, {(w_2\cdots w_{k+1})}, \dots, {(w_{n-k+1} \cdots w_n)}, so we must have {n-k+1 \geq p^k}.

The converse implication is a little harder, though. Given {k}, let {n=p^k + k-1}. Is it necessarily true that there is a word {w\in \mathcal{A}^n} that contains every subword of length {k}? After all, once {w_1\cdots w_k} is determined, there are not many possibilities for the word {w_2\cdots w_{k+1}}; can we navigate these restrictions successfully?

It is useful to rephrase the problem in the language of graph theory (what follows can be found in the proof of Lemma 1 in Koslicki’s paper). Let {G_k} be the directed graph defined as follows:

  • the vertex set is {\mathcal{A}^k}, so each vertex corresponds to a word of length {k};
  • there is an edge from {u\in \mathcal{A}^k} to {v\in \mathcal{A}^k} if and only if {u_1 v = uv_k}, that is, if {u_2\cdots u_k = v_1\cdots v_{k-1}}.

The graph {G_k} is the {k}-dimensional De Bruijn graph of {p} symbols. Recall that a Hamiltonian path in a graph is a path that visits each vertex exactly once. Thus the question above, regarding existence of a word in {\mathcal{A}^n} that contains every word in {\mathcal{A}^k}, where {n=p^k + k-1}, is equivalent to asking for the existence of a Hamiltonian path in the De Bruijn graph.

There is a correspondence between {G_k} and {G_{k-1}}; vertices in {G_k} correspond to edges in {G_{k-1}} (since both are labeled by elements of {\mathcal{A}^k}). Thus a Hamiltonian path in {G_k} corresponds to an Eulerian path in {G_{k-1}}; that is, a path that visits every edge exactly once.

This correspondence is very useful, since in general the problem of determining whether a Hamiltonian path exists is hard (NP-complete), while it is easy to check existence of an Eulerian path in a directed graph: for a connected directed graph, a sufficient condition is that every vertex have in-degree equal to its out-degree (and a slight weakening of this condition is both necessary and sufficient). This is the case for De Bruijn graphs, which are strongly connected and in which every vertex has {p=\#\mathcal{A}} edges coming in and {p} going out. Thus {G_{k-1}} has an Eulerian path, which corresponds to a Hamiltonian path for {G_k}. This answers the original question, demonstrating that for every {k}, there is a word {w} of length {n=p^k+k-1} such that {w} contains every word of length {k} as a subword.
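The Eulerian-path construction above is easy to carry out in practice. Here is a rough Python sketch (names are mine): it builds the outgoing edges of {G_{k-1}}, extracts an Eulerian circuit with Hierholzer's algorithm, and reads off a word of length {p^k+k-1} containing every word of length {k} exactly once.

```python
from itertools import product

def de_bruijn_word(alphabet, k):
    """Return a word of length p**k + k - 1 over `alphabet` (a string)
    containing every length-k word exactly once, by extracting an Eulerian
    circuit of the De Bruijn graph G_{k-1} via Hierholzer's algorithm."""
    if k == 1:
        return "".join(alphabet)
    # Each vertex is a (k-1)-word; its p outgoing edges are labeled by letters.
    edges = {"".join(v): list(alphabet) for v in product(alphabet, repeat=k - 1)}
    stack, circuit = [alphabet[0] * (k - 1)], []
    while stack:
        v = stack[-1]
        if edges[v]:
            a = edges[v].pop()
            stack.append(v[1:] + a)  # traverse the edge labeled a
        else:
            circuit.append(stack.pop())
    circuit.reverse()
    # Read off the word: first vertex, then the last letter of each later vertex.
    return circuit[0] + "".join(v[-1] for v in circuit[1:])
```

For instance, `de_bruijn_word("ACGT", 2)` produces a DNA-style word of length {4^2+1=17} whose sixteen length-2 subwords are exactly the elements of {\mathcal{A}^2}.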


Entropy of S-gap shifts

1. S-gap shifts

S-gap shifts are a useful example for studying dynamics of shift spaces that are not subshifts of finite type but still exhibit some strong mixing properties. They are defined as follows: given {S\subset \{0,1,2,\dots,\}}, let {G\subset \{0,1\}^*} be the set of all words on two symbols of the form {0^n1} — that is, {n} 0s followed by a single 1. (Edit 8/2/15: As Steve Kass pointed out in his comment, we need to specify here that {0^n1\in G \Leftrightarrow n\in S}.) Then let {G^{{\mathbb Z}}} be the set of all bi-infinite sequences of 0s and 1s that can be written as an infinite concatenation of words in {G}, and let {X\subset \{0,1\}^{\mathbb Z}} be the smallest closed shift-invariant set containing {G^{\mathbb Z}}.

Equivalently, {X} is the collection of bi-infinite sequences {x\in \{0,1\}^{\mathbb Z}} for which every subword of the form {10^n1} has {n\in S}. If {S} is finite then {X} is a shift of finite type. We are usually most interested in the case where {S} is infinite — for example, in this paper (arXiv) where Dan Thompson and I considered questions of uniqueness of the measure of maximal entropy. For purposes of this post, {S} may be finite or infinite; it will not matter.

Recall that if {\mathcal{L}_n} denotes the set of words of length {n} that appear somewhere in the shift {X}, then the topological entropy of {X} is {h(X) = \lim \frac 1n \log \#\mathcal{L}_n}. The following result is well-known and often quoted.

Theorem 1 Given {S\subset \{0,1,2,\dots\}}, the topological entropy of the corresponding {S}-gap shift is {h(X) = \log\lambda}, where {\lambda>1} is the unique solution to {\sum_{n\in S} \lambda^{-(n+1)} = 1}.

Note that when {S=\{0,1,2,\dots\}}, the {S}-gap shift is the full shift on two symbols, and the equation has solution {\lambda=2}.
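Since {\lambda \mapsto \sum_{n\in S}\lambda^{-(n+1)}} is strictly decreasing on {(1,\infty)}, the root in Theorem 1 is easy to compute by bisection. Here is a quick numerical sketch (the function name is mine; when {S} is infinite one first truncates it, which perturbs the root only slightly):

```python
def sgap_entropy_root(S, tol=1e-12):
    """Bisection for the lambda in (1, 2] solving sum_{n in S} lam**-(n+1) = 1.
    S: finite collection of nonnegative integers with at least two elements
    (truncate S first when it is infinite)."""
    f = lambda lam: sum(lam ** -(n + 1) for n in S) - 1
    lo, hi = 1.0 + 1e-9, 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:  # f is strictly decreasing, so the root lies to the right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

As a sanity check: a long truncation of {S=\{0,1,2,\dots\}} gives {\lambda\approx 2}, and for {S=\{0,2,4,\dots\}} the equation reduces to {\lambda^2-\lambda-1=0}, so the root is the golden ratio.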

Despite the fact that Theorem 1 is well-known, I am not aware of a complete proof written anywhere in the literature. In a slightly different language, this result already appears in B. Weiss’ 1970 paper “Intrinsically ergodic systems” [Bull. AMS 76, 1266–1269] as example 3.(3), but no details of the proof are given. It is exercise 4.3.7 in Lind and Marcus’ book “Symbolic dynamics and coding”, and the section preceding it gives ideas as to how the proof may proceed. Finally, a more detailed proof appears in Spandl’s “Computing the topological entropy of subshifts” [Math. Log. Quart. 53 (2007), 493–510], but there is a gap in the proof. The goal of this post is to explain where the gap in Spandl’s proof is, and then to give two other proofs of Theorem 1, one more combinatorial, the other more ergodic theoretic.

2. An incomplete proof

The idea of Spandl’s proof is this. Given an {S}-gap shift, let {\mathcal{R}_n} be the set of words of length {n} that end in the symbol {1}. Every such word is either {0^{n-1}1} or is of the form {w0^k1}, where {w\in \mathcal{R}_{n-(k+1)}} and {k\in S}. Thus we have

\displaystyle \#\mathcal{R}_n = 1 + \sum_{k=0}^{n-1} {\mathbf{1}}_S(k) \#\mathcal{R}_{n-(k+1)}. \ \ \ \ \ (1)

Moreover, every {w\in \mathcal{L}_n} is either {0^n} or is of the form {v0^{n-k}} for some {v\in \mathcal{R}_k}, so {\#\mathcal{L}_n = 1 + \sum_{k=1}^n \#\mathcal{R}_k}. With a little more work, one can use this together with (1) to get

\displaystyle \#\mathcal{L}_n = c_n + \sum_{k=0}^{n-1} {\mathbf{1}}_S(k) \#\mathcal{L}_{n-(k+1)},

where {c_n\in [0,n+2]} for all {n}. Dividing through by {\#\mathcal{L}_n} gives

\displaystyle 1 = \frac{c_n}{\#\mathcal{L}_n} + \sum_{k=0}^{n-1} {\mathbf{1}}_S(k)\frac{\#\mathcal{L}_{n-(k+1)}}{\#\mathcal{L}_n} \ \ \ \ \ (2)

Writing {\lambda = e^{h(X)}}, Spandl now says that {\#\mathcal{L}_n} is asymptotically proportional to {\lambda^n}, and so for each {k} the ratio inside the sum converges to {\lambda^{-(k+1)}} as {n\rightarrow\infty}. Since {c_n} is subexponential, this would prove Theorem 1.

The problem is that the ratio {\frac{\#\mathcal{L}_{n-(k+1)}}{\#\mathcal{L}_n}} may not converge as {n\rightarrow\infty}. Indeed, taking {S = \{1,3,5,7,\dots\}} it is not hard to show that when {k} is even, the limit taken along odd values of {n} differs from the limit taken along even values of {n}.
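This oscillation can be seen concretely by iterating the recursions for {\#\mathcal{R}_n} and {\#\mathcal{L}_n}. The sketch below (Python, names mine, with {S} truncated, which does not affect the counts up to the truncation length) computes the ratio {\#\mathcal{L}_{n-1}/\#\mathcal{L}_n} for {S=\{1,3,5,\dots\}}; numerically the two subsequential limits are {3/4} (even {n}) and {2/3} (odd {n}).

```python
def sgap_word_counts(S, nmax):
    """Iterate the recursions: R[n] = 1 + sum_{k in S, k <= n-1} R[n-k-1]
    (with R[0] = 0), and L[n] = 1 + R[1] + ... + R[n]."""
    Sset = set(S)
    R = [0] * (nmax + 1)
    for n in range(1, nmax + 1):
        R[n] = 1 + sum(R[n - k - 1] for k in Sset if k <= n - 1)
    L = [1] * (nmax + 1)
    for n in range(1, nmax + 1):
        L[n] = L[n - 1] + R[n]
    return R, L

# S = {1, 3, 5, ...}, truncated (harmless for n <= nmax)
R, L = sgap_word_counts(range(1, 61, 2), 60)
ratio_even = L[49] / L[50]  # n = 50 even
ratio_odd = L[50] / L[51]   # n = 51 odd
```

Note that the geometric mean of the two limits is {\sqrt{(3/4)(2/3)} = 1/\sqrt 2 = \lambda^{-1}}, consistent with {h(X)=\log\sqrt 2} for this {S}.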

One might observe that in this specific example, the terms where {k} is even do not contribute to the sum in (2), since {S} contains no even numbers. Thus it is plausible to make the following conjecture.

Conjecture. For any {S}, let {d = \mathrm{gcd}(S+1)} and let {h = \log\lambda} be the topological entropy of the corresponding {S}-gap shift. Then for every {j=0,1,\dots,d-1} the limit

\displaystyle a_j := \lim_{n\rightarrow\infty} \frac{\#\mathcal{L}_{j+dn}}{\lambda^{j+dn}}

exists. In particular, if {k\in S} then

\displaystyle \frac{\#\mathcal{L}_{n-(k+1)}}{\#\mathcal{L}_n} = \lambda^{-(k+1)}\frac{\#\mathcal{L}_{n-(k+1)}}{\lambda^{n-(k+1)}} \frac{\lambda^n}{\#\mathcal{L}_n} \rightarrow \lambda^{-(k+1)} \text{ as }n\rightarrow\infty.

If the conjecture is true, this would complete Spandl’s proof of Theorem 1. I expect that the conjecture is true but do not know how to prove it. In general, any shift space with entropy {h=\log \lambda} has the property that {\#\mathcal{L}_n\geq \lambda^n}. There are examples of shift spaces where {\#\mathcal{L}_n /\lambda^n} is not bounded above; however, it can be shown that every {S}-gap shift admits an upper bound, so that {\#\mathcal{L}_n/\lambda^n} is bounded away from 0 and {\infty} (this is done in my paper with Dan Thompson). I don’t see how those techniques can extend to a proof of the conjecture.

So instead of patching the hole in this proof, we investigate two others.

3. A combinatorial proof

Given a shift space {X} with language {\mathcal{L}}, let {a_n = \#\mathcal{L}_n} be the number of words of length {n}. Consider the following generating function:

\displaystyle H(z) = \sum_{n=1}^\infty a_n z^n.

(This is similar to the dynamical zeta function but is somewhat different since we consider all words of length {n} and not just ones that represent periodic orbits of period {n}.) Observe that the radius of convergence of {H} is {\xi := e^{-h(X)}}. Indeed, for {|z|<\xi} we can put {\beta = |z|/\xi < 1} and observe that {\#\mathcal{L}_n/e^{nh(X)}} grows subexponentially, so the quantity {|a_n z^n| = \frac{\#\mathcal{L}_n}{e^{nh(X)}} \beta^n} decays exponentially; similarly, for {|z|>\xi} the terms in the sum grow exponentially.

Now fix an {S}-gap shift and consider the function

\displaystyle F_1(z) = \sum_{n\in S} z^{n+1}.

Our goal is to find a relationship between {H} and {F_1} allowing us to show that the radius of convergence of {H} is given by the positive solution to {F_1(z)=1}.

First recall the set {G = \{0^n1\mid n\in S\}}. Given integers {k,n}, let {A_n^k} be the number of words in {\{0,1\}^n} that can be written as a concatenation of exactly {k} words from {G}. Note that {A_n^1 = {\mathbf{1}}_S(n-1)}, so that

\displaystyle F_1(z) = \sum_{n=1}^\infty A_n^1 z^n.

More generally, we consider the power series

\displaystyle F_k(z) = \sum_{n=1}^\infty A_n^k z^n.

A little thought reveals that {F_k(z)F_\ell(z) = F_{k+\ell}(z)}, since the coefficient of {z^n} in {F_k(z)F_\ell(z)} is given by

\displaystyle \sum_{m=1}^n A_{n-m}^k A_m^\ell,

which is equal to {A_n^{k+\ell}} (here {m} is the combined length of the last {\ell} words from {G}, so that the {k}th word from {G} ends at position {n-m}). In particular, we get

\displaystyle F_k(z) = (F_1(z))^k.
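The identity {F_k = F_1^k} can be checked on small examples by convolving truncated coefficient sequences. Here is a sketch with a made-up example {S=\{1,3\}}, so that {G=\{01,0001\}}:

```python
def convolve(a, b, nmax):
    """Coefficients of the product of two power series, truncated at degree nmax."""
    c = [0] * (nmax + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and i + j <= nmax:
                c[i + j] += ai * bj
    return c

nmax = 12
S = {1, 3}  # made-up example, so G = {01, 0001}
# Coefficients of F_1: A^1_n = 1_S(n - 1)
A1 = [1 if n - 1 in S else 0 for n in range(nmax + 1)]
# Coefficients of F_2 = F_1^2, i.e. the counts A^2_n
A2 = convolve(A1, A1, nmax)
```

Indeed, the concatenations of exactly two words from {G} are {0101}, {010001}, {000101}, and {00010001}, matching the computed coefficients {A^2_4 = 1}, {A^2_6 = 2}, {A^2_8 = 1}.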

At this point, the natural thing to do is to say that {\#\mathcal{L}_n = \sum_{k\geq 1} A_n^k} and hence {H(z) = \sum_{k\geq 1} F_k(z)}. However, this is not quite correct because {\mathcal{L}_n} includes words that are not complete concatenations of words from {G} and so are not counted by any {A_n^k}. We return to this in a moment, but first point out that if this were true then we would have {H(z) = \sum_{k\geq 1} F_1(z)^k}, and so {H(z)} converges if {F_1(z)<1} and diverges if {F_1(z)>1}, which was our goal.

To make this precise, we observe that every word in {\mathcal{L}_n} is either {0^n} or is of the form {0^i w 0^j} where {w\in A_{n-(i+j)}^k} for some {k}. Thus we have the bounds

\displaystyle \sum_{k\geq 1} A_n^k \leq \#\mathcal{L}_n \leq 1 + \sum_{k\geq 1} \sum_{i\geq 0} \sum_{j\geq 0} A_{n-(i+j)}^k.

In particular, for all {z\geq 0} we get

\displaystyle \sum_{k\geq 1} F_k (z) \leq H(z) \leq \sum_{n\geq 1} z^n + \sum_{k\geq 1} F_k (z) \left(\sum_{i\geq 0} z^i \right) \left(\sum_{j\geq 0} z^j\right) \ \ \ \ \ (3)

Writing {\hat H(z) = \sum_k F_k(z) = \sum_k F_1(z)^k}, we note that for every {z\in [0,1)} we have {\sum_n z^n < \infty} and so {\hat H(z)} converges if and only if {H(z)} converges. Thus {\hat H} and {H} have the same radius of convergence; in particular, the radius of convergence of {H(z)} is the unique {z>0} such that {F_1(z) = \sum_{n\in S} z^{n+1} = 1}, and by the earlier discussion we have {h(X) = \log(1/z)}, proving Theorem 1.

4. An ergodic theory proof

Now we sketch a proof that goes via the variational principle, relating an {S}-gap shift (on the finite alphabet {\{0,1\}}) to a full shift on a (possibly countable) alphabet. The combinatorial proof above is elementary and requires little advanced machinery; this proof, on the other hand, requires a number of (rather deep) results from ergodic theory and thermodynamic formalism, but has the advantage of illuminating various aspects of the structure of {S}-gap shifts.

Let {X} be an {S}-gap shift and let {Y} be the set of sequences {x\in X} which contain the symbol 1 infinitely often both forwards and backwards. By the Poincaré recurrence theorem, {\mu(Y)=1} for every ergodic measure other than the delta-measure on the fixed point {0^\infty}. Note that {Y} is {\sigma}-invariant but not compact (unless {S} is finite).

Let {Z = Y \cap [1]}, and let {F\colon Z\rightarrow Z} be the first return map. Thus {F(x) = \sigma^{n+1}(x)} for all {x\in [10^n 1]}. Note that {F} is topologically conjugate to the full shift {S^{\mathbb Z}} on the alphabet {S}, which we allow to be finite or countable. The conjugacy is given by {\pi\colon S^{\mathbb Z}\rightarrow Z} that takes {\vec n = \cdots n_{-1}.n_0n_1\cdots} to {\cdots 10^{n_{-1}}.10^{n_0}10^{n_1}1\cdots}.

Given an ergodic {\sigma}-invariant probability measure {\mu} on {X} with {\mu(Y)=1}, let {\mu_Z} be the induced {F}-invariant measure on {Z}. Then by Abramov’s formula, we have

\displaystyle h_\mu(X,\sigma) = \mu(Z) h_{\mu_Z}(Z,F).

Associate to each {\mu} the shift-invariant measure {\nu = \pi^{-1}_*\mu_Z} on {S^{\mathbb Z}}. Then we have

\displaystyle h_\mu(X,\sigma) = \mu(Z) h_{\nu}(S^{\mathbb Z},\sigma).

Our goal is to relate the topological entropy on {(X,\sigma)} to the topological pressure of a suitable function on {(S^{\mathbb Z},\sigma)}. Let {\varphi\colon S^{\mathbb Z}\rightarrow {\mathbb R}} be the function taking {\vec n} to {n_0 + 1}, and observe that if {\mu} and {\nu} are identified as above, we have {\mu(Z) = \mu([1]) = 1/\int \varphi\,d\nu}, so that

\displaystyle h_\nu = \left(\int \varphi\,d\nu \right) h_\mu.

At this point we elide some technical details regarding Gurevich pressure, etc., and simply remark that for {t>0} we have

\displaystyle \begin{aligned} P(S^{\mathbb Z},-t\varphi) &= \lim_{k\rightarrow\infty} \frac 1k \log \sum_{(n_0,\dots,n_{k-1})\in S^k} \sup_{\vec m\in [n_0\cdots n_{k-1}]} e^{-t \sum_{i=0}^{k-1} \varphi(\sigma^i\vec m)} \\ &= \lim_{k\rightarrow\infty} \frac 1k \log \left(\sum_{n_0,\dots,n_{k-1}} e^{-t\sum_{i=0}^{k-1} (n_i + 1)} \right) \\ &= \lim_{k\rightarrow\infty} \frac 1k \log \left( \sum_{n\in S} e^{-t(n+1)} \right)^k = \log \sum_{n\in S} e^{-t(n+1)}, \end{aligned}

while by the variational principle

\displaystyle P(S^{\mathbb Z},-t\varphi) = \sup_\nu \left(h_\nu - t \int\varphi\,d\nu\right) = \sup_\nu \left(\int\varphi\,d\nu\right) (h_\mu - t).

We conclude that

\displaystyle \sup_\mu (h_\mu - t) \mu[1]^{-1} = \log \sum_{n\in S} e^{-t(n+1)}. \ \ \ \ \ (4)

Let {t} be such that the right-hand side is equal to 0; to prove Theorem 1 we need to prove that {h(X,\sigma) = t}. First observe that for every {\mu} we have {h_\mu - t \leq 0}, thus {h(X,\sigma) \leq t}. For the other inequality, let {\nu} be the Bernoulli measure on {S^{\mathbb Z}} that assigns weight {e^{-t(n+1)}} to the symbol {n}. Then {\nu} is an equilibrium state for {-t\varphi} on {S^{\mathbb Z}}, and by our choice of {t}, we have

\displaystyle h_\nu - t\int \varphi\,d\nu = 0,

so that the quantity {(h_\mu - t)\mu[1]^{-1}} in (4) vanishes for the measure {\mu} that corresponds to {\nu}; hence {h_\mu = t}. This shows that {h(X,\sigma) = t} and completes the proof of the theorem.
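For finite {S}, all of this can be verified directly: solve {\sum_{n\in S} e^{-t(n+1)} = 1} numerically, form the Bernoulli weights, and check the identity {h_\nu = t\int\varphi\,d\nu}, from which Abramov's formula gives {h_\mu = h_\nu/\int\varphi\,d\nu = t}. A sketch (the function name and the example {S} are mine):

```python
from math import exp, log

def pressure_root(S, tol=1e-12):
    """Bisection for the t > 0 with sum_{n in S} exp(-t(n+1)) = 1;
    the sum is strictly decreasing in t.  S: finite, at least 2 elements."""
    g = lambda t: sum(exp(-t * (n + 1)) for n in S) - 1
    lo, hi = 1e-9, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

S = [0, 2]  # made-up finite example
t = pressure_root(S)
p = {n: exp(-t * (n + 1)) for n in S}             # Bernoulli weights on S
h_nu = -sum(q * log(q) for q in p.values())       # entropy of the Bernoulli measure
int_phi = sum((n + 1) * q for n, q in p.items())  # expected return time
h_mu = h_nu / int_phi                             # Abramov: should equal t
```

For this {S} the value {\lambda = e^t} solves {\lambda^{-1}+\lambda^{-3}=1}, i.e. {\lambda^3 = \lambda^2 + 1}, and the identity {h_\nu = t\int\varphi\,d\nu} holds because {-\log p_n = t(n+1)} exactly.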

We note that this proof has the advantage of giving an explicit description of the MME. With a little more work it can be used to show that the MME is unique and has good statistical properties (central limit theorem, etc.).
