
Invariant measures for random piecewise convex maps

Tomoki INOUE Graduate School of Science and Engineering
Ehime University
3, Bunkyo-cho, Matsuyama, Ehime
790-8577, JAPAN
[email protected]
 and  Hisayoshi TOYOKAWA Faculty of Engineering
Kitami Institute of Technology
165 Koen-cho, Kitami, Hokkaido
090-8507, JAPAN
[email protected]
Abstract.

We show the existence of Lebesgue-equivalent conservative and ergodic $\sigma$-finite invariant measures for a wide class of one-dimensional random maps consisting of piecewise convex maps. We also estimate the size of the invariant measures around a small neighborhood of a fixed point where the invariant density functions may diverge. Applications cover random intermittent maps with critical points or flat points. We also illustrate that the size of the invariant measures tends to infinity for random maps whose right branches exhibit a strongly contracting property on average, so that they have a strong recurrence to a fixed point.

Key words and phrases:
Invariant Measures; Infinite Invariant Measures; Random Dynamical Systems; Piecewise Convex Maps; Random Piecewise Convex Maps
2020 Mathematics Subject Classification:
Primary 37A40; Secondary 37H12, 37A05

1. Introduction

For a given non-singular map on a probability space, whether an invariant measure that is absolutely continuous with respect to the reference measure exists is one of the fundamental problems in ergodic theory. The same question for a random map (in both the annealed and the quenched sense) also makes sense and is important, and can be approached through ergodic theory for Markov operators, skew-product transformations or Markov operator cocycles. See [4, 13, 22, 21] and references therein. On the one hand, if a system, whether deterministic or random, admits an absolutely continuous finite invariant measure, classical ergodic theorems such as the Birkhoff ergodic theorem are applicable, and some limit theorems such as the central limit theorem may be expected [3, 10, 11]. On the other hand, if a system possesses only an absolutely continuous $\sigma$-finite and infinite invariant measure, it falls within the scope of infinite ergodic theory, which has attracted attention in recent decades [2, 25, 28]. Typical examples of such systems have indifferent or neutral, but weakly repelling, fixed points and are known as intermittent models, as well as models of non-uniformly hyperbolic systems. The existence of absolutely continuous $\sigma$-finite invariant measures and several statistical properties for random versions of intermittent maps have thus been studied intensively in recent years [5, 6, 7, 18, 29, 30].

The subject of the present paper is a certain class of one-dimensional random dynamical systems, called random piecewise convex maps, in the annealed (or i.i.d.) sense (see the conditions (0)–(2) and (A) or (B) for the precise formulation). The existence of invariant measures and the ergodic properties of deterministic piecewise convex maps were first studied by Lasota and Yorke in [23] for the case when the maps are uniformly expanding on the first branch and the other branches have positive derivative, and then by Inoue in [15, 16] for more general cases. The aim of this paper is to generalize these results by demonstrating that random piecewise convex maps admit Lebesgue-equivalent ergodic $\sigma$-finite invariant measures. (For some interesting studies of random generalizations of piecewise convex maps with ``position dependent'' probability measures, we refer to [8] (cf. [20]); we do not deal with position-dependent random maps, but we do handle random maps consisting of potentially uncountably many maps with infinite invariant measures.) We also estimate the size of the invariant measures, which reveals whether the $\sigma$-finite invariant measures for random piecewise convex maps are finite or infinite. The phenomenon that an invariant measure varies from finite to infinite as a parameter of the system varies is well known for (deterministic) intermittent maps, as employed by Thaler in [27] or by Liverani–Saussol–Vaienti (see (1.1) below) in [24]. Although our random piecewise convex maps also admit both finite and $\sigma$-finite infinite invariant measures depending on the parameters and on the probabilities of the choice of maps, some examples (e.g., Example 4.1) have neither a common indifferent fixed point nor a common critical point, which is very different from the deterministic case. The mechanism is, roughly speaking, derived from a strongly contracting property on average, which never occurs for deterministic systems and is unique to random dynamical systems.

We then briefly review the LSV map, named after Liverani–Saussol–Vaienti [24], which has been analyzed as a simple model of intermittency, and we will compare it (and known random versions of it, see also [5, 6, 7]) with our random piecewise convex maps. For $\alpha>0$, the LSV map $T_{\alpha}:[0,1]\to[0,1]$ is defined by

T_{\alpha}x=\begin{cases}x\left(1+2^{\alpha}x^{\alpha}\right)&x\in[0,\frac{1}{2}],\\ 2x-1&x\in(\frac{1}{2},1]\end{cases} \qquad (1.1)

which has an indifferent fixed point at 0, around a small neighborhood of which Lebesgue-typical orbits are trapped for a long time. For this map $T_{\alpha}$, it is well known that a Lebesgue-equivalent ergodic invariant measure exists and that the invariant density function is of order $x^{-\alpha}$ as $x\to 0$ [24, 27, 32]. Thus $T_{\alpha}$ possesses an equivalent finite invariant measure for $0<\alpha<1$ and an equivalent $\sigma$-finite and infinite invariant measure for $\alpha\geq 1$. The order of the invariant measure radically affects the statistical properties, such as the central limit theorem or the mixing rate in the finite measure-preserving case (cf. [7, 14, 26]) and the wandering rate in the Darling–Kac theorem or the arcsine law in the infinite measure-preserving case (cf. [2, 25, 28]). Therefore it is certainly worth establishing invariant measures and analyzing their asymptotics for given systems. The asymptotics of the invariant density for $T_{\alpha}$ is also tightly related to the decay of the inverse images of the discontinuity point $\frac{1}{2}$ under the left branch. If we set $x_{1}(\alpha)=\frac{1}{2}$ and $x_{n}(\alpha)\in[0,\frac{1}{2})$ for $n\geq 2$ such that $T_{\alpha}(x_{n+1}(\alpha))=x_{n}(\alpha)$ for $n\geq 1$, then it follows from the results in [32, 27] that

x_{n}(\alpha)\sim Cn^{-\frac{1}{\alpha}}, \qquad (1.2)

for some $C>0$, where $a_{n}\sim b_{n}$ for positive sequences $\{a_{n}\}_{n=1}^{\infty},\{b_{n}\}_{n=1}^{\infty}\subset\mathbb{R}$ stands for $\lim_{n\to\infty}\frac{a_{n}}{b_{n}}=1$. In this paper, for random piecewise convex maps, we establish the existence of Lebesgue-equivalent, conservative and ergodic $\sigma$-finite invariant measures and evaluate the asymptotics of the invariant measures, which generalizes the results for random LSV maps in [5, 6]. The advantage of our results is that we do not require the constituent maps of the random dynamical system to be at most countably many, nor to be expanding on average outside of a small neighborhood of the common indifferent fixed point. As an application, we can modify a random LSV map to admit uniformly contracting branches and, moreover, to admit a critical or flat point around the inverse image of an indifferent fixed point (see Examples 4.3–4.6). The key point in the estimate of the invariant measures for random piecewise convex maps is, in contrast to the LSV maps, the decay of the random inverse images of the discontinuity point under the right branches (see Definition 2.1, Theorem 3.2 and Theorem 3.3 for the precise statements). That is, one needs to take the contracting effect of the right branches into account. Indeed, the induced (random) map, or the first return (random) map, for (random) LSV maps satisfies a uniformly expanding property (on average), whereas that for our random piecewise convex maps does not in general. Hence we can no longer expect the so-called spectral decomposition method based on a Lasota–Yorke type inequality to apply. We refer to [30] for similar arguments and some background.
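
The decay rate (1.2) can be illustrated numerically. The following Python sketch (for intuition only, and not used anywhere in the arguments below) computes the preimages $x_{n}(\alpha)$ of $\frac{1}{2}$ under the left branch of the LSV map by bisection and prints $x_{n}(\alpha)\,n^{\frac{1}{\alpha}}$, which should settle near the constant $C$ of (1.2); the parameter value $\alpha=0.75$ and the bisection tolerance are arbitrary choices.

```python
# Numerical illustration of (1.2): preimages of 1/2 under the left branch of the
# LSV map, computed by bisection.  For intuition only; not used in any proof.

def lsv_left(x, alpha):
    """Left branch of the LSV map T_alpha on [0, 1/2]."""
    return x * (1.0 + (2.0 * x) ** alpha)

def preimage(y, alpha, tol=1e-15):
    """Solve lsv_left(x, alpha) = y for x in [0, 1/2] by bisection."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lsv_left(mid, alpha) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = 0.75
x = 0.5                          # x_1(alpha) = 1/2
for n in range(1, 20001):
    if n % 5000 == 0:
        # x_n(alpha) * n^(1/alpha) should settle near the constant C of (1.2)
        print(n, x, x * n ** (1.0 / alpha))
    x = preimage(x, alpha)       # x_{n+1}(alpha)
```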

1.1. Notation

Throughout the paper, all sets and functions mentioned are measurable and any difference on null sets with respect to a measure under consideration is ignored. As usual, L1(X,λ)L^{1}(X,\lambda), for a set XX with measurable structure and a measure λ\lambda over XX, stands for the set of all λ\lambda-integrable functions over XX where functions differing only on λ\lambda-null sets are identified. For a measurable set AA, 1A1_{A} always denotes the indicator function on AA.

Let $\{a_{n}\}_{n=1}^{\infty},\{b_{n}\}_{n=1}^{\infty}\subset\mathbb{R}$ be positive sequences. The notation $a_{n}\sim b_{n}$ is explained below (1.2). As a further notational convention, we write $a_{n}\gtrapprox b_{n}$, or equivalently $b_{n}\lessapprox a_{n}$, to mean that there exists a constant $M>0$ independent of $n\in\mathbb{N}$ such that $a_{n}\geq Mb_{n}$ holds. We write $a_{n}\approx b_{n}$ when both $a_{n}\gtrapprox b_{n}$ and $a_{n}\lessapprox b_{n}$ hold.

1.2. Organization

The present paper is organized as follows. In §2, we give the necessary preliminaries and define random piecewise convex maps. §3 is devoted to our main results: we establish in Theorem 3.1 the existence of Lebesgue-equivalent, conservative and ergodic $\sigma$-finite invariant measures for random piecewise convex maps, while Theorem 3.2 and Theorem 3.3 give the asymptotics of the invariant measures obtained in Theorem 3.1. We illustrate in §4 several examples of random piecewise convex maps; §4 also provides a counterexample which possesses an infinite derivative.

2. The model of random piecewise convex maps

In this section, we define random piecewise convex maps. Let $X=[0,1]$ be the unit interval equipped with the Lebesgue measure $\lambda$ over the Borel $\sigma$-algebra $\mathcal{B}$, and let $\mathbb{A}$ and $\mathbb{B}$ be parameter regions with some measurable structure (usually they are subspaces of $\mathbb{N}$ or $\mathbb{R}$). For each $\alpha\in\mathbb{A}$ and $\beta\in\mathbb{B}$, we define a non-singular map $T_{\alpha,\beta}$ on $X$, i.e., $\lambda(T_{\alpha,\beta}^{-1}N)=0$ whenever $\lambda(N)=0$, by

T_{\alpha,\beta}x=\begin{cases}\tau_{\alpha}(x)&x\in[0,\frac{1}{2}],\\ S_{\beta}(x)&x\in(\frac{1}{2},1]\end{cases}

where $\tau_{\alpha}:[0,\frac{1}{2}]\to X$ and $S_{\beta}:(\frac{1}{2},1]\to X$ for $\alpha\in\mathbb{A}$ and $\beta\in\mathbb{B}$ are injective and continuous maps satisfying certain conditions (see the conditions (0)–(2), (A) and (B) below). The standing assumption on $\{\tau_{\alpha}:\alpha\in\mathbb{A}\}$ and $\{S_{\beta}:\beta\in\mathbb{B}\}$ is the following:

  1. (0)

    The map T:𝔸×𝔹×XXT:\mathbb{A}\times\mathbb{B}\times X\to X; (α,β,x)Tα,β(x)(\alpha,\beta,x)\mapsto T_{\alpha,\beta}(x) is measurable with respect to each variable.

Note that the above condition (0) is fulfilled if $\mathbb{A}$ and $\mathbb{B}$ are topological spaces endowed with their Borel structures and the maps $\mathbb{A}\ni\alpha\mapsto\tau_{\alpha}$ and $\mathbb{B}\ni\beta\mapsto S_{\beta}$ are continuous.

Our random dynamical systems are defined as random compositions of the maps $\{T_{\alpha,\beta}:\alpha\in\mathbb{A},\ \beta\in\mathbb{B}\}$ satisfying the condition (0), in the annealed sense. In order to define them, we fix probability measures $\nu_{\mathbb{A}}$ and $\nu_{\mathbb{B}}$ on $\mathbb{A}$ and $\mathbb{B}$, respectively, and $\nu_{\mathbb{A}}^{\infty}$ denotes the infinite product of the probability measure $\nu_{\mathbb{A}}$ over $\mathbb{A}^{\mathbb{N}}$. Then, for the family of maps $\{T_{\alpha,\beta}:\alpha\in\mathbb{A},\ \beta\in\mathbb{B}\}$ and the probability measures $\nu_{\mathbb{A}}$ and $\nu_{\mathbb{B}}$ over the parameter spaces $\mathbb{A}$ and $\mathbb{B}$, we consider the following transition function

\mathbb{P}\left(x,A\right)=\int_{\mathbb{A}\times\mathbb{B}}1_{A}\left(T_{\alpha,\beta}x\right)d\nu_{\mathbb{A}}(\alpha)\,d\nu_{\mathbb{B}}(\beta) \qquad (2.1)

for each $A\in\mathcal{B}$ and $\lambda$-almost every $x\in X$. By the condition (0) and the non-singularity of each $T_{\alpha,\beta}$ with respect to $\lambda$, it is straightforward to see that this transition function is null-preserving, i.e., $\lambda(N)=0$ implies $\mathbb{P}(x,N)=0$ for $\lambda$-almost every $x\in X$. Thus, we can define the corresponding Markov operator $P:L^{1}(X,\lambda)\to L^{1}(X,\lambda)$ (i.e., $Pf\geq 0$ and $\lVert Pf\rVert_{L^{1}}=\lVert f\rVert_{L^{1}}$ for each non-negative $f\in L^{1}(X,\lambda)$) by

\int_{A}Pf\,d\lambda=\int_{X}f\cdot\mathbb{P}(\,\cdot\,,A)\,d\lambda

for each fL1(X,λ)f\in L^{1}(X,\lambda) and AA\in\mathcal{B}. The adjoint operator of PP acting on L(X,λ)L^{\infty}(X,\lambda) is denoted by PP^{*} which is characterized by

\int_{X}Pf\cdot g\,d\lambda=\int_{X}f\cdot P^{*}g\,d\lambda

for each $f\in L^{1}(X,\lambda)$ and $g\in L^{\infty}(X,\lambda)$.
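
For intuition, the annealed dynamics behind the transition function (2.1) can be simulated by drawing a fresh pair $(\alpha,\beta)$ independently at every step. The following Python sketch estimates $\mathbb{P}(x,A)$ for an interval $A$ by Monte Carlo; the concrete branch families and the measures $\nu_{\mathbb{A}}$, $\nu_{\mathbb{B}}$ appearing in it are illustrative placeholders chosen only to have the shape required by the conditions of this section, not part of the results.

```python
# Monte Carlo sketch of the annealed transition function (2.1).
# The branch families and the measures nu_A, nu_B below are illustrative
# placeholders with the shape required in Section 2, not the general setting.
import random

def T(x, alpha, beta):
    """One random piecewise convex map T_{alpha,beta} on [0,1]."""
    if x <= 0.5:
        return x * (1.0 + (2.0 * x) ** alpha)   # left branch tau_alpha
    return 2.0 ** (-beta) * (2.0 * x - 1.0)     # right branch S_beta

def transition_prob(x, A, n_samples=100_000):
    """Estimate P(x, A) = int 1_A(T_{alpha,beta} x) d nu_A(alpha) d nu_B(beta)."""
    a, b = A
    hits = 0
    for _ in range(n_samples):
        alpha = random.uniform(0.5, 1.5)        # nu_A: uniform on [0.5, 1.5]
        beta = random.choice([1, 2, 3])         # nu_B: uniform on {1, 2, 3}
        hits += (a <= T(x, alpha, beta) <= b)
    return hits / n_samples

print(transition_prob(0.4, (0.7, 0.75)))        # mass sent from x = 0.4 into [0.7, 0.75]
```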

In order to make more precise assumptions on random piecewise convex maps, we introduce some notation. As in the previous section (note that $\tau_{\alpha}$ is not necessarily the same as the left branch $T_{\alpha}|_{[0,\frac{1}{2}]}$ of the LSV map from §1), for $\bm{\alpha}=(\alpha_{1},\alpha_{2},\dots)\in\mathbb{A}^{\mathbb{N}}$, let $x_{1}^{\bm{\alpha}}=x_{1}\coloneqq\frac{1}{2}$ and $x_{n+1}^{\bm{\alpha}}\coloneqq\tau_{\alpha_{n}}^{-1}\circ\cdots\circ\tau_{\alpha_{1}}^{-1}(x_{1}^{\bm{\alpha}})$ for $n\geq 1$. For simplicity, let $x_{0}^{\bm{\alpha}}=x_{0}\coloneqq 1$ and set $X_{n}^{\bm{\alpha}}\coloneqq(x_{n+1}^{\bm{\alpha}},x_{n}^{\bm{\alpha}}]$ for $\bm{\alpha}\in\mathbb{A}^{\mathbb{N}}$ and $n\geq 0$.

For considering the inverses by the right branch of xn𝜶x_{n}^{\bm{\alpha}}'s as well, we need the following definition:

Definition 2.1.

The map $\eta:\mathbb{A}^{\mathbb{N}}\times\mathbb{B}\to\mathbb{N}\cup\{0\}$ is defined by the requirement $S_{\beta}(1)\in X_{\eta(\bm{\alpha},\beta)}^{\bm{\alpha}}$.

We always assume that $\eta$ is measurable as a standing hypothesis. Then, for $\bm{\alpha}\in\mathbb{A}^{\mathbb{N}}$ and $\beta\in\mathbb{B}$, let $y_{1}^{\bm{\alpha},\beta}=y_{1}\coloneqq 1$ and let $y_{n+1}^{\bm{\alpha},\beta}$ be the inverse image of $x_{\eta(\bm{\alpha},\beta)+n}^{\bm{\alpha}}$ under the right branch of $T_{\alpha,\beta}$, namely,

y_{n+1}^{\bm{\alpha},\beta}\coloneqq S_{\beta}^{-1}\left(x_{\eta(\bm{\alpha},\beta)+n}^{\bm{\alpha}}\right)\quad\text{for } n\geq 1.

We set $Y_{n}^{\bm{\alpha},\beta}\coloneqq(y_{n+1}^{\bm{\alpha},\beta},y_{n}^{\bm{\alpha},\beta}]$ for $n\geq 1$ and $Y\coloneqq[\frac{1}{2},1]$.

Throughout the paper we assume, together with the condition (0), that the family of maps $\{T_{\alpha,\beta}:\alpha\in\mathbb{A},\ \beta\in\mathbb{B}\}$ satisfies the following conditions (piecewise convex property, see also Figure 1): for $\nu_{\mathbb{A}}$-almost every $\alpha\in\mathbb{A}$ and $\nu_{\mathbb{B}}$-almost every $\beta\in\mathbb{B}$,

  1. (1)

    τα\tau_{\alpha} and SβS_{\beta} are C1C^{1}-functions and SβS_{\beta} can be extended to a continuous function on YY (the extension is also denoted by the same symbol SβS_{\beta}) with τα(0)=0\tau_{\alpha}(0)=0, τα(12)=1\tau_{\alpha}(\frac{1}{2})=1 and Sβ(12)=0S_{\beta}(\frac{1}{2})=0;

  2. (2)

    τα\tau_{\alpha}^{\prime} and SβS_{\beta}^{\prime} are non-decreasing on (0,12)(0,\frac{1}{2}) and (12,1)(\frac{1}{2},1), respectively, with τα(0)1\tau_{\alpha}^{\prime}(0)\geq 1, τα(x)>1\tau_{\alpha}^{\prime}(x)>1 for x(0,12)x\in(0,\frac{1}{2}), Sβ(12)0S_{\beta}^{\prime}(\frac{1}{2})\geq 0 and Sβ(x)>0S_{\beta}^{\prime}(x)>0 for x(12,1)x\in(\frac{1}{2},1), where τα(0)\tau_{\alpha}^{\prime}(0) and Sβ(12)S_{\beta}^{\prime}(\frac{1}{2}) are taken as the right derivatives.

By our assumptions (1) and (2), for ν𝔸\nu_{\mathbb{A}}^{\infty}-almost every 𝜶=(α1,α2,)𝔸\bm{\alpha}=(\alpha_{1},\alpha_{2},\dots)\in\mathbb{A}^{\mathbb{N}} and ν𝔹\nu_{\mathbb{B}}-almost every β𝔹\beta\in\mathbb{B}, we have Tαn,βXn𝜶=Xn1𝜶T_{\alpha_{n},\beta}X_{n}^{\bm{\alpha}}=X_{n-1}^{\bm{\alpha}} and Tα,βYn+1𝜶,β=Xη(𝜶,β)+n𝜶T_{\alpha,\beta}Y_{n+1}^{\bm{\alpha},\beta}=X_{\eta(\bm{\alpha},\beta)+n}^{\bm{\alpha}} for any n1n\geq 1 and Tα0,βY1𝜶,β=(xη(𝜶,β)+1𝜶,Sβ(1)]Xη(𝜶,β)𝜶T_{\alpha_{0},\beta}Y_{1}^{\bm{\alpha},\beta}=(x_{\eta(\bm{\alpha},\beta)+1}^{\bm{\alpha}},S_{\beta}(1)]\subset X_{\eta(\bm{\alpha},\beta)}^{\bm{\alpha}}, where α0𝔸\alpha_{0}\in\mathbb{A} is arbitrary.
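As a reading aid for these definitions, the following Python sketch computes the points $x_{n}^{\bm{\alpha}}$, the index $\eta(\bm{\alpha},\beta)$ of Definition 2.1 and the points $y_{n+1}^{\bm{\alpha},\beta}$ for one illustrative family (LSV-type left branches with a sampled parameter sequence and linear contracting right branches as in §4.1 below); it is not part of the results.

```python
# Sketch: the points x_n^alpha, the index eta(alpha, beta) of Definition 2.1 and the
# points y_{n+1}^{alpha,beta}, computed for an illustrative family (LSV-type left
# branches with a sampled parameter sequence, linear right branches).
import random

def tau(x, a):                       # left branch tau_alpha on [0, 1/2]
    return x * (1.0 + (2.0 * x) ** a)

def tau_inv(y, a, tol=1e-14):        # inverse of tau_alpha by bisection
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if tau(mid, a) < y else (lo, mid)
    return 0.5 * (lo + hi)

def S(x, beta):                      # right branch S_beta on (1/2, 1], S_beta(1/2) = 0
    return 2.0 ** (-beta) * (2.0 * x - 1.0)

def S_inv(y, beta):
    return 0.5 * (2.0 ** beta * y + 1.0)

alphas = [random.uniform(0.5, 1.5) for _ in range(60)]   # a sample of bold alpha
beta = 2

# x_1 = 1/2 and x_{n+1} = tau_{alpha_n}^{-1}(x_n), so that T_{alpha_n,beta} X_n = X_{n-1}
xs = [1.0, 0.5]                                          # x_0 = 1, x_1 = 1/2
for a in alphas:
    xs.append(tau_inv(xs[-1], a))

# eta(alpha, beta): the index n with S_beta(1) in X_n = (x_{n+1}, x_n]
s1 = S(1.0, beta)
eta = next(n for n in range(len(xs) - 1) if xs[n + 1] < s1 <= xs[n])

# y_1 = 1 and y_{n+1} = S_beta^{-1}(x_{eta+n}) for n >= 1
ys = [1.0] + [S_inv(xs[eta + n], beta) for n in range(1, 20)]
print("eta =", eta, " first y's:", [round(y, 5) for y in ys[:6]])
```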

Remark 2.1.

(I) The phase space $X$ is of course not necessarily $[0,1]$; it just needs to be a bounded interval in $\mathbb{R}$. Similarly, the choice of the discontinuity point $\frac{1}{2}$ is just for simplicity; we can take an arbitrary $c\in(0,1)$ instead of $\frac{1}{2}$, so that $\tau_{\alpha}$ and $S_{\beta}$ are defined on $[0,c]$ and $(c,1]$ respectively. Other similar generalizations, such as increasing the number of branches to more than two or allowing the $\tau_{\alpha}$'s to be non-surjective, can also be handled without much difficulty. For instance, if we decompose $X$ into $\{X_{i}\}_{i=0}^{n}$ with $X_{i}=[a_{i},a_{i+1})$ and $0=a_{0}<a_{1}<a_{2}<\cdots<a_{n}<a_{n+1}=1$ for some $n\geq 2$, where the maps on $X_{0}$ satisfy the conditions on $\tau_{\alpha}$ and the maps on $X_{i}$ for $i=1,\dots,n$ satisfy the conditions on $S_{\beta}$ from (1) and (2), then the strongest contracting property among $\{X_{i}\}_{i=1}^{n}$ would dominate the statistical laws of the random system.

(II) In the condition (1), the assumption that $\tau_{\alpha}$ and $S_{\beta}$ are $C^{1}$ can be relaxed as follows: there are families of countably many open subintervals $\{I_{n}^{L}\}_{n}$ and $\{I_{n}^{R}\}_{n}$, with the closure of their union being $X$, such that, for $\nu_{\mathbb{A}}$-almost every $\alpha\in\mathbb{A}$ and $\nu_{\mathbb{B}}$-almost every $\beta\in\mathbb{B}$, $\tau_{\alpha}$ and $S_{\beta}$ are $C^{1}$ on each $I_{n}^{L}$ and $I_{n}^{R}$, respectively. Hence some (but not all) examples from [30] are also within the scope of the present paper.

(III) In the above conditions (1) and (2), we exclude neither $\tau_{\alpha}^{\prime}(0)=1$ nor $S_{\beta}^{\prime}(\frac{1}{2})=0$ for $\alpha\in\mathbb{A}$ and $\beta\in\mathbb{B}$. Furthermore, we will consider a random map with a common indifferent fixed point and a common flat point, i.e., $\tau_{\alpha}^{\prime}(0)=1$ and $S_{\beta}^{(n)}(\frac{1}{2})=0$ for any $\alpha\in\mathbb{A}$, $\beta\in\mathbb{B}$ and $n\geq 1$ (see Examples 4.4 and 4.5).

Figure 1. A graph of a possible $T_{\alpha,\beta}$: in this case $\eta(\bm{\alpha},\beta)=1$, where $\bm{\alpha}=(\alpha,\alpha,\dots)$.

Recall that for a Markov operator PP, a measure μ\mu over (X,)(X,\mathcal{B}) is called an absolutely continuous (resp. equivalent) σ\sigma-finite invariant measure if μ\mu is a σ\sigma-finite measure which is absolutely continuous (resp. equivalent) with respect to λ\lambda and its Radon–Nikodým derivative dμdλ\frac{d\mu}{d\lambda} is a (non-trivial) fixed point of PP. Note that by positivity of Markov operators the domain of any Markov operator can be naturally extended to the set of non-negative and locally integrable functions and hence the definition of absolutely continuous σ\sigma-finite infinite invariant measures makes sense.

We then consider the following technical conditions on our random dynamical systems, which are important in establishing the existence of equivalent σ\sigma-finite invariant measures.

  1. (A)

\operatorname*{ess\,sup}_{\alpha\in\mathbb{A}}\tau^{\prime}_{\alpha}\left(\tfrac{1}{2}\right)<\infty;

  2. (B)

\int_{\mathbb{A}}\frac{1}{x_{1}-x_{2}^{\bm{\alpha}}}d\nu_{\mathbb{A}}(\alpha)<\infty.

Obviously the condition (A) implies the condition (B): indeed, since $\tau_{\alpha}(x_{2}^{\bm{\alpha}})=\frac{1}{2}$, $\tau_{\alpha}(x_{1})=1$ and $\tau_{\alpha}^{\prime}$ is non-decreasing, we have $\frac{1}{2}\leq\tau_{\alpha}^{\prime}(\frac{1}{2})(x_{1}-x_{2}^{\bm{\alpha}})$, that is, $\frac{1}{x_{1}-x_{2}^{\bm{\alpha}}}\leq 2\tau_{\alpha}^{\prime}(\frac{1}{2})$.

Lemma 2.1.

Under the assumptions (0)–(2), the condition (B) implies the following: for any $\delta>0$, there exists $N_{0}\geq 1$ such that

𝔸×𝔹yN0+1𝜶,β12x1𝜶x2𝜶𝑑ν𝔸(𝜶)ν𝔹(β)<δ.\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}\frac{y_{N_{0}+1}^{\bm{\alpha},\beta}-\frac{1}{2}}{x_{1}^{\bm{\alpha}}-x_{2}^{\bm{\alpha}}}d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})\nu_{\mathbb{B}}(\beta)<\delta.
Proof.

It follows from (1) and (2) that yN+1𝜶,β120y_{N+1}^{\bm{\alpha},\beta}-\frac{1}{2}\to 0 as NN\to\infty for ν𝔸\nu_{\mathbb{A}}^{\infty}-almost every 𝜶𝔸\bm{\alpha}\in\mathbb{A}^{\mathbb{N}} and ν𝔹\nu_{\mathbb{B}}-almost every β𝔹\beta\in\mathbb{B}. Then we have

limN𝔸×𝔹yN+1𝜶,β12x1𝜶x2𝜶𝑑ν𝔸(𝜶)ν𝔹(β)\displaystyle\lim_{N\to\infty}\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}\frac{y_{N+1}^{\bm{\alpha},\beta}-\frac{1}{2}}{x_{1}^{\bm{\alpha}}-x_{2}^{\bm{\alpha}}}d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})\nu_{\mathbb{B}}(\beta) =𝔸×𝔹limNyN+1𝜶,β12x1𝜶x2𝜶dν𝔸(𝜶)ν𝔹(β)\displaystyle=\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}\lim_{N\to\infty}\frac{y_{N+1}^{\bm{\alpha},\beta}-\frac{1}{2}}{x_{1}^{\bm{\alpha}}-x_{2}^{\bm{\alpha}}}d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})\nu_{\mathbb{B}}(\beta)
=0\displaystyle=0

from the Lebesgue dominated convergence theorem, whose use is justified since the integrand is dominated by $\frac{1/2}{x_{1}^{\bm{\alpha}}-x_{2}^{\bm{\alpha}}}$, which is integrable by the condition (B). This proves the lemma. ∎

In what follows, $\{T_{\alpha,\beta};\nu_{\mathbb{A}},\nu_{\mathbb{B}}:\alpha\in\mathbb{A},\ \beta\in\mathbb{B}\}$ denotes the random dynamical system given by the transition function (2.1) under the conditions (0)–(2), and it is referred to as a random piecewise convex map. In the next section, we prove the existence and uniqueness of equivalent $\sigma$-finite invariant measures for random piecewise convex maps. Furthermore, we show the asymptotics of the invariant measures.

3. Equivalent σ\sigma-finite invariant measures

Before stating the main results of the paper, we list some basic definitions. Let $\mu$ be a measure absolutely continuous with respect to $\lambda$. Recall that an invariant set for a Markov operator $P$ is a measurable set $E\in\mathcal{B}$ with the property $P^{*}1_{E}=1_{E}$ $\lambda$-almost everywhere, and $P$ is called ergodic with respect to $\mu$ if each invariant set $E$ satisfies either $\mu(E)=0$ or $\mu(X\setminus E)=0$. $P$ is called conservative with respect to $\mu$ if any function $h$ supported on $\operatorname*{supp}\mu$ with $h\geq P^{*}h$ satisfies $h=P^{*}h$. Other equivalent characterizations of conservativity can be found in [22, §3.1].

The following main theorem establishes the existence of Lebesgue-equivalent, conservative and ergodic σ\sigma-finite invariant measures for random piecewise convex maps defined in the previous section.

Theorem 3.1.

Let $\{T_{\alpha,\beta};\nu_{\mathbb{A}},\nu_{\mathbb{B}}:\alpha\in\mathbb{A},\ \beta\in\mathbb{B}\}$ be a random piecewise convex map satisfying the conditions (0)–(2) and (B) in §2. Then, for this random piecewise convex map, there exists a conservative and ergodic $\sigma$-finite invariant measure $\mu$ which is equivalent to the Lebesgue measure $\lambda$. Moreover, the invariant density function $\frac{d\mu}{d\lambda}$ of $\mu$ satisfies

  1. (D)

    dμdλ\frac{d\mu}{d\lambda} restricted on (0,12)(0,\frac{1}{2}) is non-increasing λ\lambda-almost everywhere, and

  2. (U)

    for any 0<ε<120<\varepsilon<\frac{1}{2}, there is a constant C=C(ε)>0C=C(\varepsilon)>0 such that dμdλC\frac{d\mu}{d\lambda}\leq C, λ\lambda-almost everywhere on X[0,ε)X\setminus[0,\varepsilon).

If we suppose (A) (and hence (B) is automatically fulfilled), then the following also holds:

  1. (L)

    there is a constant c>0c>0 such that dμdλc\frac{d\mu}{d\lambda}\geq c, λ\lambda-almost everywhere on XX.

Remark 3.1.

(I) Since the equivalent $\sigma$-finite measure in Theorem 3.1 is conservative and ergodic, it is unique (up to a multiplicative constant) by [12, Theorem A in Chapter VI].

(II) When we do not assume the condition (A), a lower bound for $\frac{d\mu}{d\lambda}$ as in the condition (L) no longer exists in general. See Example 4.7 for a counterexample.

We further state two theorems complementing Theorem 3.1 which tell us when the invariant measure becomes an infinite measure. The first one deals with the specific case when $\mathbb{A}$ is a singleton, from which we can deduce the general case as in Theorem 3.3.

Theorem 3.2.

Let {Tα,β;ν𝔸,ν𝔹:α𝔸,β𝔹}\{T_{\alpha,\beta};\nu_{\mathbb{A}},\nu_{\mathbb{B}}:\alpha\in\mathbb{A},\ \beta\in\mathbb{B}\} be as in Theorem 3.1 and assume (A). Suppose 𝔸={α}\mathbb{A}=\{\alpha\} is a singleton, and set Xn=Xn𝛂X_{n}=X_{n}^{\bm{\alpha}} and η(β)=η(𝛂,β)\eta(\beta)=\eta(\bm{\alpha},\beta) where 𝛂=(α,α,)\bm{\alpha}=(\alpha,\alpha,\dots). Then the asymptotics of the invariant measure μ\mu given in Theorem 3.1 is of order

μ(Xn){β𝔹:η(β)<n}(ynη(β)β12)𝑑ν𝔹(β)+ν𝔹{β𝔹:η(β)n}\mu\left(X_{n}\right)\approx\int_{\{\beta\in\mathbb{B}:\eta(\beta)<n\}}\left(y_{n-\eta(\beta)}^{\beta}-\tfrac{1}{2}\right)d\nu_{\mathbb{B}}(\beta)+\nu_{\mathbb{B}}\left\{\beta\in\mathbb{B}:\eta(\beta)\geq n\right\}

for nn large enough.

Remark 3.2.

In Theorem 3.2, if SβS_{\beta} is surjective for ν𝔹\nu_{\mathbb{B}}-almost every β𝔹\beta\in\mathbb{B} then by the definition of η\eta we have η(β)=0\eta(\beta)=0 and the invariant measure μ\mu is of order

μ(Xn)𝔹(yn+1β12)𝑑ν𝔹(β).\mu\left(X_{n}\right)\approx\int_{\mathbb{B}}\left(y_{n+1}^{\beta}-\tfrac{1}{2}\right)d\nu_{\mathbb{B}}(\beta).

Moreover, the second term $\nu_{\mathbb{B}}\{\beta\in\mathbb{B}:\eta(\beta)\geq n\}$ is negligible when $\operatorname*{ess\,inf}_{\beta\in\mathbb{B}}S_{\beta}(1)>0$ and hence $\operatorname*{ess\,sup}_{\beta\in\mathbb{B}}\eta(\beta)<\infty$ (e.g., when $\#\mathbb{B}<\infty$).

When $\mathbb{A}$ is an uncountable set, the form of the invariant density is complicated in general. However, combining Theorem 3.2 with the comparison theorem from [19], we can estimate the size of the $\sigma$-finite invariant measure $\mu$ in Theorem 3.1 even when $\mathbb{A}$ is not a singleton, by reducing to the singleton case. In order to clarify our statement, we need to introduce the following condition. A random piecewise convex map $\{T_{\alpha,\beta};\nu_{\mathbb{A}},\nu_{\mathbb{B}}:\alpha\in\mathbb{A},\ \beta\in\mathbb{B}\}$ is said to satisfy the condition ($\dagger$) if there are some $c\in(0,\frac{1}{2})$ and $\alpha_{1},\alpha_{2}\in\mathbb{A}$ such that

ν𝔸{α𝔸:τα(0,ε)τα1(0,ε) for any ε(0,c)}=1 and\displaystyle\nu_{\mathbb{A}}\left\{\alpha\in\mathbb{A}:\tau_{\alpha}(0,\varepsilon)\subset\tau_{\alpha_{1}}(0,\varepsilon)\text{ for any }\varepsilon\in\left(0,c\right)\right\}=1\text{ and}
ν𝔸{α𝔸:τα(0,ε)τα2(0,ε) for any ε(0,c)}>0.\displaystyle\nu_{\mathbb{A}}\left\{\alpha\in\mathbb{A}:\tau_{\alpha}(0,\varepsilon)\supset\tau_{\alpha_{2}}(0,\varepsilon)\text{ for any }\varepsilon\in\left(0,c\right)\right\}>0.

These conditions are of course equivalent to requiring that

ν𝔸{α𝔸:τατα1 on (0,c)}=1 and\displaystyle\nu_{\mathbb{A}}\left\{\alpha\in\mathbb{A}:\tau_{\alpha}\leq\tau_{\alpha_{1}}\text{ on }\left(0,c\right)\right\}=1\text{ and}
ν𝔸{α𝔸:τατα2 on (0,c)}>0.\displaystyle\nu_{\mathbb{A}}\left\{\alpha\in\mathbb{A}:\tau_{\alpha}\geq\tau_{\alpha_{2}}\text{ on }\left(0,c\right)\right\}>0.

With some abuse of notation, for a fixed α¯𝔸\bar{\alpha}\in\mathbb{A}, {Tα¯,β;ν𝔹:β𝔹}\{T_{\bar{\alpha},\beta};\nu_{\mathbb{B}}:\beta\in\mathbb{B}\} denotes a random piecewise convex map {Tα,β;να¯,ν𝔹:α{α¯},β𝔹}\{T_{\alpha,\beta};\nu_{\bar{\alpha}},\nu_{\mathbb{B}}:\alpha\in\{\bar{\alpha}\},\beta\in\mathbb{B}\} where να¯\nu_{\bar{\alpha}} is the Dirac measure on α¯\bar{\alpha}.

Theorem 3.3.

Let {Tα,β;ν𝔸,ν𝔹:α𝔸,β𝔹}\{T_{\alpha,\beta};\nu_{\mathbb{A}},\nu_{\mathbb{B}}:\alpha\in\mathbb{A},\ \beta\in\mathbb{B}\} and μ\mu be as in Theorem 3.1 with the assumption (A) and satisfy the condition ()(\dagger) with some α1,α2𝔸\alpha_{1},\alpha_{2}\in\mathbb{A}. Let μi\mu_{i}'s be σ\sigma-finite invariant measures for random piecewise convex maps {Tαi,β;ν𝔹:β𝔹}\{T_{\alpha_{i},\beta};\nu_{\mathbb{B}}:\beta\in\mathbb{B}\} (i=1,2)(i=1,2) given in Theorem 3.2. Then there is a constant M>0M>0 such that for any aa and bb with 0a<b120\leq a<b\leq\frac{1}{2}

M1μ1([a,b])μ([a,b])Mμ2([a,b]).M^{-1}\mu_{1}\left([a,b]\right)\leq\mu\left([a,b]\right)\leq M\mu_{2}\left([a,b]\right).

Consequently, if μ1(X)=\mu_{1}(X)=\infty then μ(X)=\mu(X)=\infty, and if μ2(X)<\mu_{2}(X)<\infty then μ(X)<\mu(X)<\infty.

Remark 3.3.

(I) In Theorem 3.3, $\alpha_{1}$ is chosen to be a parameter for which $\tau_{\alpha_{1}}$ dominates every other $\tau_{\alpha}$, $\alpha\in\mathbb{A}$, from above, and $\alpha_{2}$ should be chosen so that $\tau_{\alpha_{2}}$ is as close to $\tau_{\alpha_{1}}$ as possible, which makes the inequality sharper. For instance, see Example 4.2 for the choice of parameters.

(II) If $\#\mathbb{A}<\infty$ and $\nu_{\mathbb{A}}(\alpha)>0$ for all $\alpha\in\mathbb{A}$, then one can take $\alpha_{1}=\alpha_{2}$ in Theorem 3.3. Similarly, if there is a parameter $\alpha^{\prime}\in\mathbb{A}$ such that $\tau_{\alpha}\leq\tau_{\alpha^{\prime}}$ on $(0,\frac{1}{2})$ for $\nu_{\mathbb{A}}$-almost every $\alpha\in\mathbb{A}$ and $\nu_{\mathbb{A}}(\{\alpha^{\prime}\})>0$, then both $\alpha_{1}$ and $\alpha_{2}$ in Theorem 3.3 can be taken to be $\alpha^{\prime}$. That is, the invariant measure $\mu$ in Theorem 3.3 is of the same order as $\mu_{\alpha^{\prime}}$, where $\mu_{\alpha^{\prime}}$ is the invariant measure for $\{T_{\alpha^{\prime},\beta};\nu_{\mathbb{B}}:\beta\in\mathbb{B}\}$.

Before proving Theorem 3.1, we recall the key tool, called the induced operator (or the first return map in the sense of [18]), to construct an absolutely continuous σ\sigma-finite invariant measure and we also prepare lemmas.

As in the previous section, we let Y=[12,1]Y=[\frac{1}{2},1] and recall (see also [13, 29]) that the induced operator (on YY) PYP_{Y} is defined by

PY=IYPn=0(IYcP)n\displaystyle P_{Y}=I_{Y}P\sum_{n=0}^{\infty}\left(I_{Y^{c}}P\right)^{n} (3.1)

where IYI_{Y} and IYcI_{Y^{c}} are the restriction operators on YY (i.e., IYf=1YfI_{Y}f=1_{Y}f for each measurable function ff) and YcY^{c}, respectively. The operator PYP_{Y} is a well-defined Markov operator over L1(X,λ)L^{1}(X,\lambda) since YY is a PP-sweep-out set with respect to λ\lambda (see Lemma 4.7 in [29] precisely). The induced operator for a Markov operator is a generalization of the induced map for a non-singular map.

For (𝜶,β)𝔸×𝔹(\bm{\alpha},\beta)\in\mathbb{A}^{\mathbb{N}}\times\mathbb{B}, TY(𝜶,β)\mathcal{L}_{T_{Y}^{(\bm{\alpha},\beta)}} denotes the Perron–Frobenius operator associated with the induced (random) map TY(𝜶,β)xτα1ταn(x)SβxT_{Y}^{(\bm{\alpha},\beta)}x\coloneqq\tau_{\alpha_{1}}\circ\cdots\circ\tau_{\alpha_{n(x)}}\circ S_{\beta}x where n(x)1n(x)\geq 1 is the minimum number satisfying τα1ταn(x)SβxY\tau_{\alpha_{1}}\circ\cdots\circ\tau_{\alpha_{n(x)}}\circ S_{\beta}x\in Y (such n(x)n(x) exists for xY{12}x\in Y\setminus\{\frac{1}{2}\}).
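For concreteness, $T_{Y}^{(\bm{\alpha},\beta)}$ can be evaluated pointwise as in the following Python sketch, which applies $S_{\beta}$ once and then left branches until the orbit returns to $Y$, respecting the composition order $\tau_{\alpha_{1}}\circ\cdots\circ\tau_{\alpha_{n(x)}}\circ S_{\beta}$ (so that $\tau_{\alpha_{n(x)}}$ acts first). The branch families used are illustrative placeholders, and the sketch plays no role in the proofs.

```python
# Sketch of the induced (first return) random map T_Y^{(alpha,beta)} on Y = [1/2, 1]:
# T_Y^{(alpha,beta)} x = tau_{alpha_1} o ... o tau_{alpha_{n(x)}} o S_beta(x), where
# tau_{alpha_{n(x)}} acts first and n(x) is minimal so that the orbit is back in Y.
# The branch families are illustrative placeholders, not the general setting.
import random

def tau(x, a):                                 # left branch tau_alpha on [0, 1/2]
    return x * (1.0 + (2.0 * x) ** a)

def S(x, beta):                                # right branch S_beta on (1/2, 1]
    return 2.0 ** (-beta) * (2.0 * x - 1.0)

def induced_map(x, alphas, beta):
    """Return (T_Y^{(alpha,beta)} x, number of left branches applied)."""
    z = S(x, beta)
    if z >= 0.5:
        return z, 0                            # S_beta(x) already lies in Y (eta = 0)
    for n in range(1, len(alphas) + 1):
        w = z
        for a in reversed(alphas[:n]):         # apply tau_{alpha_n} first, tau_{alpha_1} last
            w = tau(w, a)
        if w >= 0.5:
            return w, n
    raise RuntimeError("increase the length of the sampled alpha sequence")

alphas = [random.uniform(0.5, 1.5) for _ in range(2000)]
for x in (0.6, 0.75, 0.99):
    print(x, induced_map(x, alphas, beta=1))   # return point in Y and return time
```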

Lemma 3.1.

The induced operator PYP_{Y} defined by the equation (3.1) satisfies

PYf=𝔸×𝔹TY(𝜶,β)f𝑑ν𝔸(𝜶)ν𝔹(β)P_{Y}f=\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}\mathcal{L}_{T_{Y}^{(\bm{\alpha},\beta)}}fd\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})\nu_{\mathbb{B}}(\beta)

for each fL1(Y,λ)f\in L^{1}(Y,\lambda) with f=0f=0 λ\lambda-almost everywhere on YcY^{c}.

Proof.

As in the equality (3.1), the induced operator on YY and its adjoint operator are defined by

PY=IYPn=0(IYCP)nandPY=n=0(PIYc)n(PIY).P_{Y}=I_{Y}P\sum_{n=0}^{\infty}(I_{Y^{C}}P)^{n}\quad\text{and}\quad P^{*}_{Y}=\sum_{n=0}^{\infty}\left({P^{*}}I_{Y^{c}}\right)^{n}\left(P^{*}I_{Y}\right).

On the other hand, by Proposition 4.1 (iv) of [18], $P^{*}_{Y}1_{A}(x)$ equals the transition function from $x$ into $A$ which defines the induced map on $Y$ of the original random map. ∎

We then prove the following key lemma.

Lemma 3.2.

Suppose the condition (B). If ff is non-negative, bounded and non-increasing on YY and satisfies f=0f=0 λ\lambda-almost everywhere on YcY^{c}, then so is PYfP_{Y}f. Moreover, if fL11\lVert f\rVert_{L^{1}}\leq 1 then there is some positive constant K>0K>0, independent of ff, such that for any δ(0,1)\delta\in(0,1) and λ\lambda-almost every xYx\in Y,

PYf(x)<δf(12)+K.\displaystyle P_{Y}f(x)<\delta f\left(\tfrac{1}{2}\right)+K. (3.2)
Proof.

We follow the proof of Proposition 5.1 in [16]. Let Xn𝜶(xn+1𝜶,xn𝜶]X_{n}^{\bm{\alpha}}\coloneqq(x_{n+1}^{\bm{\alpha}},x_{n}^{\bm{\alpha}}] and Yn𝜶,β(yn+1𝜶,β,yn𝜶,β]Y_{n}^{\bm{\alpha},\beta}\coloneqq(y_{n+1}^{\bm{\alpha},\beta},y_{n}^{\bm{\alpha},\beta}] as before. Then for each (𝜶,β)𝔸×𝔹(\bm{\alpha},\beta)\in\mathbb{A}^{\mathbb{N}}\times\mathbb{B}, the induced map TY(𝜶,β)T_{Y}^{(\bm{\alpha},\beta)} is piecewise convex such that TY(𝜶,β)|Y1𝜶,β=τα1ταη(𝜶,β)Sβ|Y1𝜶,βT_{Y}^{(\bm{\alpha},\beta)}|_{Y_{1}^{\bm{\alpha},\beta}}=\tau_{\alpha_{1}}\circ\cdots\circ\tau_{\alpha_{\eta(\bm{\alpha},\beta)}}\circ S_{\beta}|_{Y_{1}^{\bm{\alpha},\beta}} maps from Y1𝜶,βY_{1}^{\bm{\alpha},\beta} onto [12,TY(𝜶,β)(1)]Y[\frac{1}{2},T_{Y}^{(\bm{\alpha},\beta)}(1)]\subset Y and TY(𝜶,β)|Yn+1𝜶,β=τα1τη(𝜶,β)+nSβ|Yn+1𝜶,βT_{Y}^{(\bm{\alpha},\beta)}|_{Y_{n+1}^{\bm{\alpha},\beta}}=\tau_{\alpha_{1}}\circ\cdots\circ\tau_{\eta(\bm{\alpha},\beta)+n}\circ S_{\beta}|_{Y_{n+1}^{\bm{\alpha},\beta}} maps from Yn+1𝜶,βY_{n+1}^{\bm{\alpha},\beta} onto YY for n1n\geq 1 by construction.

If we set

φ1(𝜶,β)(x)={1(TY(𝜶,β))(TY(𝜶,β)|Y1𝜶,β)1(x)(xTY(𝜶,β)(Y1𝜶,β)),0(otherwise)\varphi_{1}^{(\bm{\alpha},\beta)}(x)=\begin{cases}\dfrac{1}{\left(T_{Y}^{(\bm{\alpha},\beta)}\right)^{\prime}\circ\left(T_{Y}^{(\bm{\alpha},\beta)}\big{|}_{Y_{1}^{\bm{\alpha},\beta}}\right)^{-1}(x)}&\left(x\in T_{Y}^{(\bm{\alpha},\beta)}\left(Y_{1}^{\bm{\alpha},\beta}\right)\right),\\ 0&\left(\text{otherwise}\right)\end{cases}

and

φn+1(𝜶,β)(x)=1(TY(𝜶,β))(TY(𝜶,β)|Yn+1𝜶,β)1(x)(xX)\varphi_{n+1}^{(\bm{\alpha},\beta)}(x)=\frac{1}{\left(T_{Y}^{(\bm{\alpha},\beta)}\right)^{\prime}\circ\left(T_{Y}^{(\bm{\alpha},\beta)}\big{|}_{Y_{n+1}^{\bm{\alpha},\beta}}\right)^{-1}(x)}\quad(x\in X)

for (𝜶,β)𝔸×𝔹(\bm{\alpha},\beta)\in\mathbb{A}^{\mathbb{N}}\times\mathbb{B} and n1n\geq 1, then φn(𝜶,β)\varphi_{n}^{(\bm{\alpha},\beta)} is non-increasing on YY for each (𝜶,β)𝔸×𝔹(\bm{\alpha},\beta)\in\mathbb{A}^{\mathbb{N}}\times\mathbb{B} and n1n\geq 1. Since for any non-negative and non-increasing function ff on YY we have

PYf\displaystyle P_{Y}f =𝔸×𝔹(n2φn(𝜶,β)f(TY(𝜶,β)|Yn𝜶,β)1\displaystyle=\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}\Bigg{(}\sum_{n\geq 2}\varphi_{n}^{(\bm{\alpha},\beta)}f\circ\left(T_{Y}^{(\bm{\alpha},\beta)}\big{|}_{Y_{n}^{\bm{\alpha},\beta}}\right)^{-1}
+φ1(𝜶,β)f(TY(𝜶,β)|Y1𝜶,β)11TY(𝜶,β)(Y1𝜶,β))dν𝔸(𝜶)ν𝔹(β)\displaystyle\qquad\qquad\qquad+\varphi_{1}^{(\bm{\alpha},\beta)}f\circ\left(T_{Y}^{(\bm{\alpha},\beta)}\big{|}_{Y_{1}^{\bm{\alpha},\beta}}\right)^{-1}1_{T_{Y}^{(\bm{\alpha},\beta)}\left(Y_{1}^{\bm{\alpha},\beta}\right)}\Bigg{)}d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})\nu_{\mathbb{B}}(\beta)

from Lemma 3.1, PYfP_{Y}f is also non-negative and non-increasing and the former part of the lemma is proven.

Now from the convexity of Tα,βT_{\alpha,\beta} we can easily see that

Tαη(𝜶,β)+n,β|Yn𝜶,βλ(Xη(𝜶,β)+n𝜶)λ(Yn+1𝜶,β)andTαn,β|Xn𝜶λ(Xn𝜶)λ(Xn+1𝜶)\displaystyle T_{\alpha_{\eta(\bm{\alpha},\beta)+n},\beta}^{\prime}\big{|}_{Y_{n}^{\bm{\alpha},\beta}}\geq\frac{\lambda\left(X_{\eta(\bm{\alpha},\beta)+n}^{\bm{\alpha}}\right)}{\lambda\left(Y_{n+1}^{\bm{\alpha},\beta}\right)}\quad\text{and}\quad T_{\alpha_{n},\beta}^{\prime}\big{|}_{X_{n}^{\bm{\alpha}}}\geq\frac{\lambda\left(X_{n}^{\bm{\alpha}}\right)}{\lambda\left(X_{n+1}^{\bm{\alpha}}\right)} (3.3)

for any (𝜶,β)𝔸×𝔹(\bm{\alpha},\beta)\in\mathbb{A}^{\mathbb{N}}\times\mathbb{B} and n1n\geq 1. Thus it follows from (3.3) that for any (𝜶,β)𝔸×𝔹(\bm{\alpha},\beta)\in\mathbb{A}^{\mathbb{N}}\times\mathbb{B} and n1n\geq 1

(TY(𝜶,β))|Yn𝜶,β\displaystyle\left(T_{Y}^{(\bm{\alpha},\beta)}\right)^{\prime}\bigg{|}_{Y_{n}^{\bm{\alpha},\beta}} =Tαη(𝜶,β)+n,β|Yn𝜶,βk=1η(𝜶,β)+n1Tαη(𝜶,β)+nk,β|Xη(𝜶,β)+nk\displaystyle=T^{\prime}_{\alpha_{\eta(\bm{\alpha},\beta)+n},\beta}\big{|}_{Y_{n}^{\bm{\alpha},\beta}}\prod_{k=1}^{\eta(\bm{\alpha},\beta)+n-1}T^{\prime}_{\alpha_{\eta(\bm{\alpha},\beta)+n-k},\beta}\big{|}_{X_{\eta(\bm{\alpha},\beta)+n-k}}
Tαη(𝜶,β)+nk+1,βTαη(𝜶,β)+n1,β\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\circ T_{\alpha_{\eta(\bm{\alpha},\beta)+n-k+1},\beta}\circ\cdots\circ T_{\alpha_{\eta(\bm{\alpha},\beta)+n-1},\beta}
λ(X1𝜶)λ(Yn+1𝜶,β).\displaystyle\geq\frac{\lambda\left(X_{1}^{\bm{\alpha}}\right)}{\lambda\left(Y_{n+1}^{\bm{\alpha},\beta}\right)}.

Then it holds for each N1N\geq 1 that

n=Nφn(𝜶,β)(12)\displaystyle\sum_{n=N}^{\infty}\varphi_{n}^{(\bm{\alpha},\beta)}\left(\tfrac{1}{2}\right) =n=N1(TY(𝜶,β))(yn+1𝜶,β)\displaystyle=\sum_{n=N}^{\infty}\frac{1}{\left(T_{Y}^{(\bm{\alpha},\beta)}\right)^{\prime}\left(y_{n+1}^{\bm{\alpha},\beta}\right)}
n=Nλ(Yn+1𝜶,β)λ(X1𝜶)\displaystyle\leq\sum_{n=N}^{\infty}\frac{\lambda\left(Y_{n+1}^{\bm{\alpha},\beta}\right)}{\lambda\left(X_{1}^{\bm{\alpha}}\right)}
=yN+1𝜶,β12x1𝜶x2𝜶.\displaystyle=\frac{y_{N+1}^{\bm{\alpha},\beta}-\frac{1}{2}}{x_{1}^{\bm{\alpha}}-x_{2}^{\bm{\alpha}}}.

By Lemma 2.1, for any fixed 0<δ<10<\delta<1 there exists N01N_{0}\geq 1 such that we have

𝔸×𝔹n=N0φn(𝜶,β)(12)dν𝔸(𝜶)ν𝔹(β)<δ.\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}\sum_{n=N_{0}}^{\infty}\varphi_{n}^{(\bm{\alpha},\beta)}\left(\tfrac{1}{2}\right)d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})\nu_{\mathbb{B}}(\beta)<\delta.

Since any non-increasing density function on YY cannot exceed (x12)1(x-\frac{1}{2})^{-1} (see also [23, Step III in Proof of Theorem 4]), it holds that, for any non-negative, bounded and non-increasing function ff on YY with fL11\lVert f\rVert_{L^{1}}\leq 1,

PYf(x)\displaystyle P_{Y}f(x) PYf(12)\displaystyle\leq P_{Y}f\left(\tfrac{1}{2}\right)
=𝔸×𝔹(n=N0φn(𝜶,β)(12)f(yn+1𝜶,β)+n=1N01φn(𝜶,β)(12)f(yn+1𝜶,β))𝑑ν𝔸(𝜶)𝑑ν𝔹(β)\displaystyle=\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}\left(\sum_{n=N_{0}}^{\infty}\varphi_{n}^{(\bm{\alpha},\beta)}\left(\tfrac{1}{2}\right)f\left(y_{n+1}^{\bm{\alpha},\beta}\right)+\sum_{n=1}^{N_{0}-1}\varphi_{n}^{(\bm{\alpha},\beta)}\left(\tfrac{1}{2}\right)f\left(y_{n+1}^{\bm{\alpha},\beta}\right)\right)d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})d\nu_{\mathbb{B}}(\beta)
<δf(12)+𝔸×𝔹n=1N01φn(𝜶,β)(12)yn+1𝜶,β12dν𝔸(𝜶)dν𝔹(β)\displaystyle<\delta f\left(\tfrac{1}{2}\right)+\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}\sum_{n=1}^{N_{0}-1}\frac{\varphi_{n}^{(\bm{\alpha},\beta)}\left(\frac{1}{2}\right)}{y_{n+1}^{\bm{\alpha},\beta}-\tfrac{1}{2}}d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})d\nu_{\mathbb{B}}(\beta)

for λ\lambda-almost every xYx\in Y. Therefore, putting K=𝔸×𝔹n=1N01φn(𝜶,β)(12)yn+1𝜶,β12dν𝔸(𝜶)dν𝔹(β)<K=\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}\sum_{n=1}^{N_{0}-1}\frac{\varphi_{n}^{(\bm{\alpha},\beta)}(\frac{1}{2})}{y_{n+1}^{\bm{\alpha},\beta}-\frac{1}{2}}d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})d\nu_{\mathbb{B}}(\beta)<\infty, we have obtained the inequality (3.2). ∎

We now emphasize that each left branch $\tau_{\alpha}$ maps $[0,\frac{1}{2}]$ surjectively onto $[0,1]$. This, together with the condition (B), guarantees that an invariant density for the induced operator $P_{Y}$ is fully supported on $Y$ as well as bounded above. Furthermore, (A) ensures that the invariant density is bounded away from zero on $Y$. Henceforth $\lambda|_{Y}$ denotes the measure $\lambda$ restricted to $Y$.

Lemma 3.3.

Under the assumption (B), the induced operator PYP_{Y} is ergodic with respect to the Lebesgue measure λ\lambda and admits a unique λ|Y\lambda|_{Y}-equivalent invariant probability measure whose density is non-increasing and bounded above on YY. Moreover, if we assume (A) then the density function is bounded away from zero on YY.

Proof.

First of all, ergodicity follows from the following argument. For each (𝜶,β)𝔸×𝔹(\bm{\alpha},\beta)\in\mathbb{A}^{\mathbb{N}}\times\mathbb{B}, the map TY(𝜶,β)T_{Y}^{(\bm{\alpha},\beta)} satisfies the conditions in [16, Proposition 5.1] and is ergodic (or moreover exact) with respect to λ|Y\lambda|_{Y}. If DD is an invariant set for PYP_{Y}, then

PY1D=𝔸×𝔹1DTY(𝜶,β)𝑑ν𝔸(𝜶)𝑑ν𝔹(β)=1D.P^{*}_{Y}1_{D}=\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}1_{D}\circ T_{Y}^{(\bm{\alpha},\beta)}d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})d\nu_{\mathbb{B}}(\beta)=1_{D}.

Thus DD is a TY(𝜶,β)T_{Y}^{(\bm{\alpha},\beta)}-invariant set for ν𝔸×ν𝔹\nu_{\mathbb{A}}^{\infty}\times\nu_{\mathbb{B}}-almost everywhere. Now it is straightforward to see that D=D=\emptyset or X(modλ)X\pmod{\lambda} since each TY(𝜶,β)T_{Y}^{(\bm{\alpha},\beta)} is ergodic.

From Lemma 3.2, PYn1YP_{Y}^{n}1_{Y} is non-increasing for n0n\geq 0 and we apply (3.2) repeatedly to get for any fixed δ(0,1)\delta\in(0,1) and for xYx\in Y

PYn1Y(x)<δPYn11Y(12)+K<δ2PYn21Y(12)+δK+KP_{Y}^{n}1_{Y}(x)<\delta P_{Y}^{n-1}1_{Y}\left(\tfrac{1}{2}\right)+K<\delta^{2}P_{Y}^{n-2}1_{Y}\left(\tfrac{1}{2}\right)+\delta K+K

and so on. Eventually, we have for n1n\geq 1

PYn1Y<δn+K1δ,\displaystyle P_{Y}^{n}1_{Y}<\delta^{n}+\frac{K}{1-\delta}, (3.4)

that is, PYn1YP_{Y}^{n}1_{Y} is bounded above by C01+K(1δ)1<C_{0}\coloneqq 1+K(1-\delta)^{-1}<\infty for any n1n\geq 1. Therefore, by [29, Theorem 3.1 and Proposition 3.9], the limiting point

h01λ(Y)limn1ni=0n1PYi1Y=2limn1ni=0n1PYi1Yh_{0}\coloneqq\frac{1}{\lambda(Y)}\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}P_{Y}^{i}1_{Y}=2\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}P_{Y}^{i}1_{Y}

exists and is an invariant density of PYP_{Y}, which is conservative and ergodic implying uniqueness of the invariant density.

We then show this h0h_{0} satisfies the conditions in the statement of the lemma. From Lemma 3.2 and (3.4), h0h_{0} is non-increasing and bounded above by C12C0C_{1}\coloneqq 2C_{0} on YY. For the lower bound, notice that from the fact that PYn1YP_{Y}^{n}1_{Y} is non-increasing and the above inequality (3.4)

PYn1Y1P_{Y}^{n}1_{Y}\geq 1

on [12,12+1C1][\frac{1}{2},\frac{1}{2}+\frac{1}{C_{1}}] for n1n\geq 1 so that

h02on [12,12+1C1].\displaystyle h_{0}\geq 2\quad\text{on }\left[\tfrac{1}{2},\tfrac{1}{2}+\tfrac{1}{C_{1}}\right]. (3.5)

On the other hand, it follows from the Lebesgue dominated convergence theorem (see also the proof of Lemma 2.1) that there exists $N_{0}\geq 2$ such that

𝔸×𝔹(yN0𝜶,β12)𝑑ν𝔸(𝜶)𝑑ν𝔹(β)<1C1.\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}\left(y_{N_{0}}^{\bm{\alpha},\beta}-\tfrac{1}{2}\right)d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})d\nu_{\mathbb{B}}(\beta)<\frac{1}{C_{1}}.

We define

E{(𝜶,β)𝔸×𝔹:n=N0Yn𝜶,β[12,12+1C1]}.E\coloneqq\left\{(\bm{\alpha},\beta)\in\mathbb{A}^{\mathbb{N}}\times\mathbb{B}:\bigcup_{n=N_{0}}^{\infty}Y_{n}^{\bm{\alpha},\beta}\subset\left[\frac{1}{2},\frac{1}{2}+\frac{1}{C_{1}}\right]\right\}.

Then since it holds that

yN0𝜶,β12=λ(n=N0Yn𝜶,β),y_{N_{0}}^{\bm{\alpha},\beta}-\frac{1}{2}=\lambda\left(\bigcup_{n=N_{0}}^{\infty}Y_{n}^{\bm{\alpha},\beta}\right),

we have ν𝔸×ν𝔹(E)>0\nu_{\mathbb{A}}^{\infty}\times\nu_{\mathbb{B}}(E)>0. Combining (3.5) with the above argument, we have

h02n=N01Yn𝜶,βfor any (𝜶,β)E.h_{0}\geq 2\sum_{n=N_{0}}^{\infty}1_{Y_{n}^{\bm{\alpha},\beta}}\quad\text{for any }(\bm{\alpha},\beta)\in E.

Thus, taking φn\varphi_{n} defined in the proof to Lemma 3.2 into account, we have by Lemma 3.1

h0\displaystyle h_{0} =PYh0=𝔸×𝔹TY(𝜶,β)h0𝑑ν𝔸(𝜶)𝑑ν𝔹(β)\displaystyle=P_{Y}h_{0}=\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}\mathcal{L}_{T_{Y}^{(\bm{\alpha},\beta)}}h_{0}d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})d\nu_{\mathbb{B}}(\beta)
2En=N0TY(𝜶,β)1Yn(𝜶,β)dν𝔸(𝜶)dν𝔹(β)\displaystyle\geq 2\int_{E}\sum_{n=N_{0}}^{\infty}\mathcal{L}_{T_{Y}^{(\bm{\alpha},\beta)}}1_{Y_{n}^{(\bm{\alpha},\beta)}}d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})d\nu_{\mathbb{B}}(\beta)
=2En=N0φn(𝜶,β)dν𝔸(𝜶)dν𝔹(β)\displaystyle=2\int_{E}\sum_{n=N_{0}}^{\infty}\varphi_{n}^{(\bm{\alpha},\beta)}d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})d\nu_{\mathbb{B}}(\beta)

on YY since TY(𝜶,β)|Yn(𝜶,β)T_{Y}^{(\bm{\alpha},\beta)}|_{Y_{n}^{(\bm{\alpha},\beta)}} is surjective for each n2n\geq 2. The conditions (1) and (2) imply that

C2(x)2En=N0φn(𝜶,β)(x)dν𝔸(𝜶)dν𝔹(β)>0C_{2}(x)\coloneqq 2\int_{E}\sum_{n=N_{0}}^{\infty}\varphi_{n}^{(\bm{\alpha},\beta)}(x)d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})d\nu_{\mathbb{B}}(\beta)>0

λ|Y\lambda|_{Y}-almost every xYx\in Y. Therefore, we conclude C1h0(x)C2(x)C_{1}\geq h_{0}(x)\geq C_{2}(x) on YY. Moreover, if we assume (A) then essinfxYC2(x)>0\operatorname*{ess\,inf}_{x\in Y}C_{2}(x)>0. The proof is completed. ∎

Proof of Theorem 3.1.

The well-known formula of invariant measures via the induced operators (see Proposition 4.14 in [29] for example) shows that

h=n=0(IYcP)nh0=h0+IYcn=0(PIYc)nPh0.h=\sum_{n=0}^{\infty}(I_{Y^{c}}P)^{n}h_{0}=h_{0}+I_{Y^{c}}\sum_{n=0}^{\infty}(PI_{Y^{c}})^{n}Ph_{0}.

gives an invariant density function of an absolutely continuous σ\sigma-finite invariant measure μ\mu for PP where h0h_{0} is the invariant density of PYP_{Y} obtained in Lemma 3.3. Then it follows from the fact that h0h_{0} is supported on YY that

h=h0+𝔸×𝔹n=0IYcτα0ταn1Sβh0dν𝔸(α0,α1,)dν𝔹(β).\displaystyle h=h_{0}+\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}\sum_{n=0}^{\infty}I_{Y^{c}}\mathcal{L}_{\tau_{\alpha_{0}}}\cdots\mathcal{L}_{\tau_{\alpha_{n-1}}}\mathcal{L}_{S_{\beta}}h_{0}d\nu_{\mathbb{A}}^{\infty}(\alpha_{0},\alpha_{1},\dots)d\nu_{\mathbb{B}}(\beta). (3.6)

Since $\tau_{\alpha}$ is surjective for each $\alpha\in\mathbb{A}$ and the support of $h_{0}$ is $Y$ up to $\lambda$-null sets, $h$ is evidently fully supported on $X$, and thus the invariant measure is equivalent to $\lambda$. Since $h_{0}$ is non-increasing, and so is $(PI_{Y^{c}})^{n}h_{0}$ for $n\geq 0$ by an argument similar to the proof of Lemma 3.2 together with the assumption (2), we obtain (D). Then (U) follows from (D) and the fact that $\mu$ is $\sigma$-finite.

If we assume (A), then we have C11Yh0C1YC^{-1}1_{Y}\leq h_{0}\leq C1_{Y} for some C1C\geq 1 by Lemma 3.3. Or equivalently, for each 𝜶𝔸\bm{\alpha}\in\mathbb{A}^{\mathbb{N}} and β𝔹\beta\in\mathbb{B}

C1n=11Yn𝜶,βh0Cn=11Yn𝜶,β.C^{-1}\sum_{n=1}^{\infty}1_{Y_{n}^{\bm{\alpha},\beta}}\leq h_{0}\leq C\sum_{n=1}^{\infty}1_{Y_{n}^{\bm{\alpha},\beta}}.

Note that for the bound above (or the desired consequence (U)), we only need the condition (B).

We first observe that for n2n\geq 2 and β𝔹\beta\in\mathbb{B},

Tα,β1Yn𝜶,β\displaystyle\mathcal{L}_{T_{\alpha,\beta}}1_{Y_{n}^{\bm{\alpha},\beta}} =1Yn𝜶,βSβ|Yn𝜶,β1SβSβ|Yn𝜶,β1λ(Yn+1𝜶,β)λ(Xη(𝜶,β)+n)1Xη(𝜶,β)+n1, and\displaystyle=\frac{1_{Y_{n}^{\bm{\alpha},\beta}}\circ S_{\beta}|_{Y_{n}^{\bm{\alpha},\beta}}^{-1}}{S_{\beta}^{\prime}\circ S_{\beta}|_{Y_{n}^{\bm{\alpha},\beta}}^{-1}}\leq\dfrac{\lambda\left(Y_{n+1}^{\bm{\alpha},\beta}\right)}{\lambda\left(X_{\eta(\bm{\alpha},\beta)+n}\right)}1_{X_{\eta(\bm{\alpha},\beta)+n-1}},\text{ and}
Tα,β1Yn𝜶,β\displaystyle\mathcal{L}_{T_{\alpha,\beta}}1_{Y_{n}^{\bm{\alpha},\beta}} λ(Yn1𝜶,β)λ(Xη(𝜶,β)+n2)1Xη(𝜶,β)+n1\displaystyle\geq\dfrac{\lambda\left(Y_{n-1}^{\bm{\alpha},\beta}\right)}{\lambda\left(X_{\eta(\bm{\alpha},\beta)+n-2}\right)}1_{X_{\eta(\bm{\alpha},\beta)+n-1}}

by the convexity of SβS_{\beta}. Hence, taking it into account that

n=η(𝜶,β)+1Xn𝜶SβYn=η(𝜶,β)Xn𝜶,\bigcup_{n=\eta(\bm{\alpha},\beta)+1}^{\infty}X_{n}^{\bm{\alpha}}\subset S_{\beta}Y\subset\bigcup_{n=\eta(\bm{\alpha},\beta)}^{\infty}X_{n}^{\bm{\alpha}},

for each 𝜶𝔸\bm{\alpha}\in\mathbb{A}^{\mathbb{N}} we have

Ph0\displaystyle Ph_{0} =𝔹Tα,βh0𝑑ν𝔹(β)C𝔹n=1λ(Yn+1𝜶,β)λ(Xη(𝜶,β)+n𝜶)1Xη(𝜶,β)+n1𝜶dν𝔹(β), and\displaystyle=\int_{\mathbb{B}}\mathcal{L}_{T_{\alpha,\beta}}h_{0}d\nu_{\mathbb{B}}(\beta)\leq C\int_{\mathbb{B}}\sum_{n=1}^{\infty}\frac{\lambda\left(Y_{n+1}^{\bm{\alpha},\beta}\right)}{\lambda\left(X_{\eta(\bm{\alpha},\beta)+n}^{\bm{\alpha}}\right)}1_{X_{\eta(\bm{\alpha},\beta)+n-1}^{\bm{\alpha}}}d\nu_{\mathbb{B}}(\beta),\text{ and} (3.7)
Ph0\displaystyle Ph_{0} C1𝔹n=2λ(Yn1𝜶,β)λ(Xη(𝜶,β)+n2𝜶)1Xη(𝜶,β)+n1𝜶dν𝔹(β)\displaystyle\geq C^{-1}\int_{\mathbb{B}}\sum_{n=2}^{\infty}\frac{\lambda\left(Y_{n-1}^{\bm{\alpha},\beta}\right)}{\lambda\left(X_{\eta(\bm{\alpha},\beta)+n-2}^{\bm{\alpha}}\right)}1_{X_{\eta(\bm{\alpha},\beta)+n-1}^{\bm{\alpha}}}d\nu_{\mathbb{B}}(\beta) (3.8)

where we define X0𝜶YX_{0}^{\bm{\alpha}}\coloneqq Y. Note that for each n2n\geq 2, τ𝜶kταnk+1ταnk+2ταn|Xn𝜶:Xn𝜶Xnk𝜶\tau_{\bm{\alpha}}^{k}\coloneqq\tau_{\alpha_{n-k+1}}\circ\tau_{\alpha_{n-k+2}}\circ\cdots\circ\tau_{\alpha_{n}}|_{X_{n}^{\bm{\alpha}}}:X_{n}^{\bm{\alpha}}\to X_{n-k}^{\bm{\alpha}} is convex for 1kn11\leq k\leq n-1 where 𝜶=(α1,α2,)\bm{\alpha}=(\alpha_{1},\alpha_{2},\dots). Thus we have

λ(Xn1𝜶)λ(Xn1k𝜶)1Xnk𝜶τ𝜶k1Xn𝜶λ(Xn+1𝜶)λ(Xn+1k𝜶)1Xnk𝜶\displaystyle\frac{\lambda\left(X_{n-1}^{\bm{\alpha}}\right)}{\lambda\left(X_{n-1-k}^{\bm{\alpha}}\right)}1_{X_{n-k}^{\bm{\alpha}}}\leq\mathcal{L}_{\tau_{\bm{\alpha}}^{k}}1_{X_{n}^{\bm{\alpha}}}\leq\frac{\lambda\left(X_{n+1}^{\bm{\alpha}}\right)}{\lambda\left(X_{n+1-k}^{\bm{\alpha}}\right)}1_{X_{n-k}^{\bm{\alpha}}} (3.9)

for 1kn21\leq k\leq n-2 and

1τα1(12)λ(Xn1𝜶)λ(X1𝜶)1X1𝜶τ𝜶n11Xn𝜶λ(Xn+1𝜶)λ(X1𝜶)1X1𝜶.\displaystyle\frac{1}{\tau_{\alpha_{1}}^{\prime}\left(\frac{1}{2}\right)}\cdot\frac{\lambda\left(X_{n-1}^{\bm{\alpha}}\right)}{\lambda\left(X_{1}^{\bm{\alpha}}\right)}1_{X_{1}^{\bm{\alpha}}}\leq\mathcal{L}_{\tau_{\bm{\alpha}}^{n-1}}1_{X_{n}^{\bm{\alpha}}}\leq\frac{\lambda\left(X_{n+1}^{\bm{\alpha}}\right)}{\lambda\left(X_{1}^{\bm{\alpha}}\right)}1_{X_{1}^{\bm{\alpha}}}. (3.10)

Note that hh0h-h_{0} is supported on Yc=n=1Xn𝜶(modλ)Y^{c}=\bigcup_{n=1}^{\infty}X_{n}^{\bm{\alpha}}\pmod{\lambda} for any 𝜶𝔸\bm{\alpha}\in\mathbb{A}^{\mathbb{N}}, where hh is a PP-invariant locally integrable function given in (3.6). Then by combining the inequality (3.7) and (3.8) with (3.9) and (3.10), it also follows from IYcτα1X1𝜶=0I_{Y^{c}}\mathcal{L}_{\tau_{\alpha}1_{X_{1}^{\bm{\alpha}}}}=0 for any α𝔸\alpha\in\mathbb{A} that for each N1N\geq 1

hh0\displaystyle h-h_{0} C𝔸×𝔹n=1+δ0,η(𝜶,β)k=0η(𝜶,β)+n2λ(Yn+1𝜶,β)λ(Xη(𝜶,β)+nk𝜶)1Xη(𝜶,β)+nk1𝜶dν𝔸(𝜶)dν𝔹(β), and\displaystyle\leq C\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}{\sum_{n=1+\delta_{0,\eta(\bm{\alpha},\beta)}}^{\infty}}\sum_{k=0}^{\eta(\bm{\alpha},\beta)+n-2}\frac{\lambda\left(Y_{n+1}^{\bm{\alpha},\beta}\right)}{\lambda\left(X_{\eta(\bm{\alpha},\beta)+n-k}^{\bm{\alpha}}\right)}1_{X_{\eta(\bm{\alpha},\beta)+n-k-1}^{\bm{\alpha}}}d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})d\nu_{\mathbb{B}}(\beta),\text{ and}
hh0\displaystyle h-h_{0} C1𝔸×𝔹n=2k=0η(𝜶,β)+n21τα1(12)λ(Yn1𝜶,β)λ(Xη(𝜶,β)+nk1𝜶)1Xη(𝜶,β)+nk1𝜶dν𝔸(𝜶)dν𝔹(β)\displaystyle\geq C^{-1}\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}{\sum_{n=2}^{\infty}}\sum_{k=0}^{\eta(\bm{\alpha},\beta)+n-2}\frac{1}{\tau_{\alpha_{1}}^{\prime}\left(\frac{1}{2}\right)}\cdot\frac{\lambda\left(Y_{n-1}^{\bm{\alpha},\beta}\right)}{\lambda\left(X_{\eta(\bm{\alpha},\beta)+n-k-1}^{\bm{\alpha}}\right)}1_{X_{\eta(\bm{\alpha},\beta)+n-k-1}^{\bm{\alpha}}}d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})d\nu_{\mathbb{B}}(\beta)

where δ0,η(𝜶,β)\delta_{0,\eta(\bm{\alpha},\beta)} is the Dirac delta function. Here nn for the summand of the upper bound for hh0h-h_{0} runs from 1+δ0,η(𝜶,β)1+\delta_{0,\eta(\bm{\alpha},\beta)} in order that the union of Xη(𝜶,β)+nk1𝜶X_{\eta(\bm{\alpha},\beta)+n-k-1}^{\bm{\alpha}}'s coincides with YcY^{c}. Comparing the coefficients of 1Xm1_{X_{m}} above, we have

hh0\displaystyle h-h_{0} C𝔸×𝔹m=1n1+δ0,η(𝜶,β)η(𝜶,β)+nm+1λ(Yn+1𝜶,β)λ(Xm+1𝜶)1Xm𝜶dν𝔸(𝜶)dν𝔹(β), and\displaystyle\leq C\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}\sum_{m=1}^{\infty}\underset{\eta(\bm{\alpha},\beta)+n\geq m+1}{\sum_{n\geq 1+\delta_{0,\eta(\bm{\alpha},\beta)}}}\frac{\lambda\left(Y_{n+1}^{\bm{\alpha},\beta}\right)}{\lambda\left(X_{m+1}^{\bm{\alpha}}\right)}1_{X_{m}^{\bm{\alpha}}}d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})d\nu_{\mathbb{B}}(\beta),\text{ and}
hh0\displaystyle h-h_{0} C1𝔸×𝔹m=1n2η(𝜶,β)+nm1τα1(12)λ(Yn1𝜶,β)λ(Xm𝜶)1Xm𝜶dν𝔸(𝜶)dν𝔹(β).\displaystyle\geq C^{-1}\int_{\mathbb{A}^{\mathbb{N}}\times\mathbb{B}}\sum_{m=1}^{\infty}\underset{\eta(\bm{\alpha},\beta)+n\geq m}{\sum_{n\geq 2}}\frac{1}{\tau_{\alpha_{1}}^{\prime}\left(\frac{1}{2}\right)}\cdot\frac{\lambda\left(Y_{n-1}^{\bm{\alpha},\beta}\right)}{\lambda\left(X_{m}^{\bm{\alpha}}\right)}1_{X_{m}^{\bm{\alpha}}}d\nu_{\mathbb{A}}^{\infty}(\bm{\alpha})d\nu_{\mathbb{B}}(\beta).

For fixed m1m\geq 1, we have that

n2η(𝜶,β)+nmλ(Yn1𝜶,β)λ(Xm𝜶)=n=max{2,mη(𝜶,β)}λ(Yn1𝜶,β)λ(Xm𝜶)=ymax{1,mη(𝜶,β)1}𝜶,β12λ(Xm𝜶).\underset{\eta(\bm{\alpha},\beta)+n\geq m}{\sum_{n\geq 2}}\frac{\lambda\left(Y_{n-1}^{\bm{\alpha},\beta}\right)}{\lambda\left(X_{m}^{\bm{\alpha}}\right)}=\sum_{n=\max\{2,m-\eta(\bm{\alpha},\beta)\}}^{\infty}\frac{\lambda\left(Y_{n-1}^{\bm{\alpha},\beta}\right)}{\lambda\left(X_{m}^{\bm{\alpha}}\right)}=\frac{y_{\max\{1,m-\eta(\bm{\alpha},\beta)-1\}}^{\bm{\alpha},\beta}-\frac{1}{2}}{\lambda\left(X_{m}^{\bm{\alpha}}\right)}.

Note that

ymax{1,mη(𝜶,β)1}𝜶,β={1if η(𝜶,β)+2>m,Sβ1(xm1𝜶)if m2+η(𝜶,β).y_{\max\{1,m-\eta(\bm{\alpha},\beta)-1\}}^{\bm{\alpha},\beta}=\begin{cases}1&\text{if }\eta(\bm{\alpha},\beta)+2>m,\\ S_{\beta}^{-1}\left(x_{m-1}^{\bm{\alpha}}\right)&\text{if }m\geq 2+\eta(\bm{\alpha},\beta).\end{cases}

If η(𝜶,β)+2>m\eta(\bm{\alpha},\beta)+2>m then

n2η(𝜶,β)+nmλ(Yn1𝜶,β)λ(Xm𝜶)=12λ(Xm𝜶)12\underset{\eta(\bm{\alpha},\beta)+n\geq m}{\sum_{n\geq 2}}\frac{\lambda\left(Y_{n-1}^{\bm{\alpha},\beta}\right)}{\lambda\left(X_{m}^{\bm{\alpha}}\right)}=\frac{1}{2\lambda\left(X_{m}^{\bm{\alpha}}\right)}\geq\frac{1}{2}

and if mη(𝜶,β)+2m\geq\eta(\bm{\alpha},\beta)+2 then

n2η(𝜶,β)+nmλ(Yn1𝜶,β)λ(Xm𝜶)\displaystyle\underset{\eta(\bm{\alpha},\beta)+n\geq m}{\sum_{n\geq 2}}\frac{\lambda\left(Y_{n-1}^{\bm{\alpha},\beta}\right)}{\lambda\left(X_{m}^{\bm{\alpha}}\right)} =Sβ1(xm1𝜶)Sβ1(0)λ(Xm𝜶)\displaystyle=\frac{S_{\beta}^{-1}\left(x_{m-1}^{\bm{\alpha}}\right)-S_{\beta}^{-1}(0)}{\lambda\left(X_{m}^{\bm{\alpha}}\right)}
xm1𝜶02(xm𝜶xm+1𝜶)xm1𝜶2xm𝜶12\displaystyle\geq\dfrac{x_{m-1}^{\bm{\alpha}}-0}{2\left(x_{m}^{\bm{\alpha}}-x_{m+1}^{\bm{\alpha}}\right)}\geq\dfrac{x_{m-1}^{\bm{\alpha}}}{2x_{m}^{\bm{\alpha}}}\geq\dfrac{1}{2}

again from the convexity of SβS_{\beta}. Under the assumption (A), we also have a lower bound 12C\frac{1}{2C} for hh. Therefore, we conclude hh is bounded above on the complement of each small neighborhood of 0 and, under the assumption (A), bounded away from zero on XX as well.

The conservativity of μ\mu follows from [29, Remark 12] and ergodicity follows from Lemma 3.3 and [30, Proposition 2.1]. ∎

Proof of Theorem 3.2.

In this proof, since 𝔸\mathbb{A} is a singleton, we write XnX_{n} and YnβY_{n}^{\beta} instead of Xn𝜶X_{n}^{\bm{\alpha}} and Yn𝜶,βY_{n}^{\bm{\alpha},\beta}. As shown in the proof of Theorem 3.1, the invariant density function of μ\mu satisfies

hh0\displaystyle h-h_{0} C𝔹m=1n1+δ0,η(β)η(β)+nm+1λ(Yn+1β)λ(Xm+1)1Xmdν𝔹(β), and\displaystyle\leq C\int_{\mathbb{B}}\sum_{m=1}^{\infty}\underset{\eta(\beta)+n\geq m+1}{\sum_{n\geq 1+\delta_{0,\eta(\beta)}}}\frac{\lambda\left(Y_{n+1}^{\beta}\right)}{\lambda\left(X_{m+1}\right)}1_{X_{m}}d\nu_{\mathbb{B}}(\beta),\text{ and}
hh0\displaystyle h-h_{0} C1𝔹m=1n2η(β)+nmλ(Yn1β)λ(Xm)1Xmdν𝔹(β)\displaystyle\geq C^{-1}\int_{\mathbb{B}}\sum_{m=1}^{\infty}\underset{\eta(\beta)+n\geq m}{\sum_{n\geq 2}}\frac{\lambda\left(Y_{n-1}^{\beta}\right)}{\lambda\left(X_{m}\right)}1_{X_{m}}d\nu_{\mathbb{B}}(\beta)

for some C>0C>0. Therefore, integrating the above inequalities over XmX_{m}, for m1m\geq 1 large enough, we have

μ(Xm)\displaystyle\mu(X_{m}) 𝔹n2η(β)+nmλ(Yn1β)𝑑ν𝔹(β)\displaystyle\gtrapprox\int_{\mathbb{B}}\underset{\eta(\beta)+n\geq m}{\sum_{n\geq 2}}\lambda\left(Y_{n-1}^{\beta}\right)d\nu_{\mathbb{B}}(\beta)
{β𝔹:η(β)<m}(ymη(β)β12)𝑑ν𝔹(β)+ν𝔹{β𝔹:η(β)m}.\displaystyle\gtrapprox\int_{\left\{\beta\in\mathbb{B}:\eta(\beta)<m\right\}}\left(y_{m-\eta(\beta)}^{\beta}-\tfrac{1}{2}\right)d\nu_{\mathbb{B}}(\beta)+\nu_{\mathbb{B}}\left\{\beta\in\mathbb{B}:\eta(\beta)\geq m\right\}.

The upper estimate of the asymptotics of $\mu(X_{m})$ is obtained in almost the same way and is omitted. The proof is completed. ∎

Proof of Theorem 3.3.

Since the density functions of μ\mu, μ1\mu_{1} and μ2\mu_{2} restricted on YY are all bounded above and away from zero, the assumptions of comparison theorems (Theorem 6.2 and Theorem 6.5) in [19] are fulfilled. ∎

4. Examples

In this section, we apply our results to several random piecewise convex maps.

4.1. Random piecewise linear maps with low slopes

Let $\mathbb{B}\subset\mathbb{N}$ and let $p_{\beta}\coloneqq\nu_{\mathbb{B}}(\{\beta\})$ denote the point mass of $\nu_{\mathbb{B}}$ at $\beta$. We define, for $\beta\in\mathbb{B}$,

Tβx={2xx[0,12],2β(2x1)x(12,1].\displaystyle T_{\beta}x=\begin{cases}2x&x\in\left[0,\frac{1}{2}\right],\\ 2^{-\beta}(2x-1)&x\in\left(\frac{1}{2},1\right].\end{cases} (4.1)

This map obviously satisfies (2)–(2) and (A). Note that the left branch Tβx=2xT_{\beta}x=2x does not vary with β\beta, so 𝔸\mathbb{A} is interpreted as a singleton. By the definition of TβT_{\beta}, we have xn=12nx_{n}=\frac{1}{2^{n}} and Xn=(12n+1,12n]X_{n}=(\tfrac{1}{2^{n+1}},\tfrac{1}{2^{n}}] for n1n\geq 1. Thus η(β)=β\eta(\beta)=\beta and

yn+1β=12+12n+1.y_{n+1}^{\beta}=\frac{1}{2}+\frac{1}{2^{n+1}}.

Then we can apply Theorem 3.1 and Theorem 3.2 to get

Proposition 4.1.

The random piecewise convex map given by (4.1) admits a λ\lambda-equivalent, conservative and ergodic σ\sigma-finite invariant measure μ\mu such that

μ((12n+1,12n])β𝔹:β<n2βpβ2n+β𝔹:βnpβ\mu\left(\left(\tfrac{1}{2^{n+1}},\tfrac{1}{2^{n}}\right]\right)\approx\frac{\sum_{\beta\in\mathbb{B}:\beta<n}2^{\beta}p_{\beta}}{2^{n}}+\sum_{\beta\in\mathbb{B}:\beta\geq n}p_{\beta}

for nn large enough.
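To illustrate Proposition 4.1 numerically, the following Python sketch (not part of the paper; the set B={1,2,3} and the weights below are a hypothetical choice of nu_B) iterates the random map (4.1) along one random orbit and records visits to the intervals X_n=(2^{-(n+1)},2^{-n}]. The empirical occupation ratios should then agree, up to a roughly constant factor and for moderately large n, with the asymptotics of Proposition 4.1.

import random
from collections import Counter
# Monte Carlo sketch for the random map (4.1); the parameter set B = {1, 2, 3}
# and the weights p_beta below are a hypothetical example, not taken from the paper.
def T(x, beta):
    return 2.0 * x if x <= 0.5 else 2.0 ** (-beta) * (2.0 * x - 1.0)
def dyadic_index(x):
    # the n >= 0 with x in (2^{-(n+1)}, 2^{-n}]; n = 0 corresponds to Y = (1/2, 1]
    n = 0
    while n < 60 and x <= 2.0 ** (-(n + 1)):
        n += 1
    return n
betas, weights = [1, 2, 3], [0.5, 0.3, 0.2]
rng = random.Random(0)
x, visits = 0.3, Counter()
for _ in range(10 ** 6):
    visits[dyadic_index(x)] += 1
    x = T(x, rng.choices(betas, weights)[0])   # draw beta i.i.d. with law nu_B
# Compare occupation ratios with the prediction of Proposition 4.1; the two
# columns should agree up to a roughly constant factor once n is moderately large.
for n in range(1, 9):
    predicted = sum(2.0 ** b * w for b, w in zip(betas, weights) if b < n) / 2.0 ** n
    predicted += sum(w for b, w in zip(betas, weights) if b >= n)
    print(n, visits[n] / visits[1], predicted)

For this particular choice of weights the measure μ is finite, so the time averages converge; Remark 4.1 below exhibits a choice of (𝔹, p_β) for which μ is infinite.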

Remark 4.1.

By the above proposition, for example, when 𝔹={[kn]:n1}\mathbb{B}=\{[k^{n}]:n\geq 1\} and p[kn]=2np_{[k^{n}]}=2^{-n} for some k2k\geq 2, where [][\,\cdot\,] denotes the integer part, the measure μ\mu is infinite. Indeed, in this setting, we can write

μ((12n+1,12n])\displaystyle\mu\left(\left(\tfrac{1}{2^{n+1}},\tfrac{1}{2^{n}}\right]\right) 2n1s<logkn2[ks]2s+slogkn2s\displaystyle\approx 2^{-n}\sum_{1\leq s<\log_{k}n}2^{[k^{s}]}2^{-s}+\sum_{s\geq\log_{k}n}2^{-s}
n1\displaystyle\gtrapprox n^{-1}

and this shows the desired conclusion.
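The divergence can also be seen by evaluating the right-hand side of the display directly. The short Python sketch below (an illustration only) takes k=2, so that [k^s]=2^s and p_{2^s}=2^{-s}, and prints n times the displayed quantity; the values stay bounded away from zero, consistent with μ(X_n) ⪆ n^{-1} and hence with Σ_n μ(X_n)=∞.

import math
# Numerical illustration of Remark 4.1 for k = 2: evaluate the displayed bound
# for mu(X_n) and check that n times it stays bounded away from zero.
def rhs(n):
    m = math.ceil(math.log2(n))            # smallest integer s with s >= log_2 n
    first = 0.0
    for s in range(1, m):                  # the sum over 1 <= s < log_2 n
        e = 2 ** s - s - n                 # exponent of 2 in the s-th summand
        if e > -1074:                      # smaller terms underflow to zero anyway
            first += 2.0 ** e
    tail = 2.0 ** (1 - m)                  # closed form of sum_{s >= log_2 n} 2^{-s}
    return first + tail
for n in (10, 100, 1000, 10000):
    print(n, n * rhs(n))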

4.2. Random weakly expanding map with positive derivative

We first note that this example contains random LSV maps. Let 𝔸=[α0,α1]\mathbb{A}=[\alpha_{0},\alpha_{1}] for some 0<α0<α1<0<\alpha_{0}<\alpha_{1}<\infty and 𝔹\mathbb{B} be some parameter space. We set probability measures ν𝔸\nu_{\mathbb{A}} and ν𝔹\nu_{\mathbb{B}} on the parameter spaces 𝔸\mathbb{A} and 𝔹\mathbb{B} respectively. For α𝔸\alpha\in\mathbb{A} and β𝔹\beta\in\mathbb{B}, we define

Tα,βx={x(1+2αxα)x[0,12],Sβxx(12,1]\displaystyle T_{\alpha,\beta}x=\begin{cases}x\left(1+2^{\alpha}x^{\alpha}\right)&x\in\left[0,\frac{1}{2}\right],\\ S_{\beta}x&x\in\left(\frac{1}{2},1\right]\end{cases} (4.2)

where we assume the conditions (2)–(2). The condition (A) holds since α1<\alpha_{1}<\infty. Suppose further that there exists γ>0\gamma>0 such that Sβ(x)>γS_{\beta}^{\prime}(x)>\gamma holds for ν𝔹\nu_{\mathbb{B}}-almost every β𝔹\beta\in\mathbb{B}. This implies esssup𝔹η(β)<\operatorname*{ess\,sup}_{\mathbb{B}}\eta(\beta)<\infty. Moreover, since γ(x12)Tα,β|(12,1](x)2(x12)\gamma(x-\frac{1}{2})\leq T_{\alpha,\beta}|_{(\frac{1}{2},1]}(x)\leq 2(x-\frac{1}{2}) by the convexity of SβS_{\beta}, for each 𝜶=(α,α,)𝔸\bm{\alpha}=(\alpha,\alpha,\dots)\in\mathbb{A}^{\mathbb{N}} we have

xn𝜶2yn+1𝜶,β12xn𝜶γ\frac{x_{n}^{\bm{\alpha}}}{2}\leq y_{n+1}^{\bm{\alpha},\beta}-\frac{1}{2}\leq\frac{x_{n}^{\bm{\alpha}}}{\gamma}

for large n1n\geq 1. According to the asymptotic approximation (1.2), we have

yn+1𝜶,β12n1αy_{n+1}^{\bm{\alpha},\beta}-\frac{1}{2}\approx n^{-\frac{1}{\alpha}}

for each 𝜶=(α,α,)𝔸\bm{\alpha}=(\alpha,\alpha,\dots)\in\mathbb{A}^{\mathbb{N}} and β𝔹\beta\in\mathbb{B}. Note that if α¯α\bar{\alpha}\geq\alpha then Tα,β(x)Tα¯,β(x)T_{\alpha,\beta}(x)\geq T_{\bar{\alpha},\beta}(x) for any x[0,12]x\in[0,\frac{1}{2}] and β𝔹\beta\in\mathbb{B}. Then applying Theorem 3.3 to this model, we have the following.

Proposition 4.2.

The random piecewise convex map derived from (4.2) admits a λ\lambda-equivalent, conservative and ergodic σ\sigma-finite invariant measure μ\mu such that for any α𝔸\alpha^{\prime}\in\mathbb{A} with ν𝔸{α𝔸:αα}>0\nu_{\mathbb{A}}\{\alpha\in\mathbb{A}:\alpha^{\prime}\geq\alpha\}>0

n1α0μ(Xn𝜶𝟎) and μ(Xn𝜶)n1αn^{-\frac{1}{\alpha_{0}}}\lessapprox\mu\left(X_{n}^{\bm{\alpha_{0}}}\right)\text{ and }\mu\left(X_{n}^{\bm{\alpha^{\prime}}}\right)\lessapprox n^{-\frac{1}{\alpha^{\prime}}}

for nn large enough, where 𝛂0=(α0,α0,)\bm{\alpha}_{0}=(\alpha_{0},\alpha_{0},\dots) and 𝛂=(α,α,)\bm{\alpha^{\prime}}=(\alpha^{\prime},\alpha^{\prime},\dots).

As a consequence of Proposition 4.2, we have μ([0,1])=\mu([0,1])=\infty if α01\alpha_{0}\geq 1. Also, if there is some α[α0,1)\alpha^{\prime}\in[\alpha_{0},1) such that ν𝔸{α𝔸:αα}>0\nu_{\mathbb{A}}\{\alpha\in\mathbb{A}:\alpha^{\prime}\geq\alpha\}>0, then μ([0,1])<\mu([0,1])<\infty.
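The relation x_n^𝜶 ≈ n^{-1/α} entering through (1.2) can also be checked numerically. The Python sketch below (an illustration only, using the convention x_1=1/2 as in (4.1) and a hypothetical value of α) computes the preimages of 1/2 under the left branch of (4.2) by bisection and prints x_n n^{1/α}, which should settle near a positive constant.

# Numerical sketch (not from the paper): the preimage sequence of the left
# branch f_alpha(x) = x(1 + 2^alpha x^alpha), defined by f_alpha(x_{n+1}) = x_n
# with x_1 = 1/2, should satisfy x_n ~ n^{-1/alpha} up to a constant factor.
def left_branch(x, alpha):
    return x * (1.0 + (2.0 * x) ** alpha)
def inverse_left(y, alpha, iters=80):
    lo, hi = 0.0, y                  # f_alpha is increasing and f_alpha(y) >= y
    for _ in range(iters):           # plain bisection
        mid = 0.5 * (lo + hi)
        if left_branch(mid, alpha) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
alpha, x = 0.75, 0.5                 # hypothetical parameter; x starts at x_1 = 1/2
for n in range(2, 2001):
    x = inverse_left(x, alpha)       # now x equals x_n
    if n % 500 == 0:
        print(n, x, x * n ** (1.0 / alpha))   # the last column should stabilize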

4.3. Random weakly expanding maps with uniformly contracting branches

Let 𝔸=[α0,α1]\mathbb{A}=[\alpha_{0},\alpha_{1}] for some 0<α0<α1<0<\alpha_{0}<\alpha_{1}<\infty and 𝔹=[0,1]\mathbb{B}=[0,1]. We set probability measures ν𝔸\nu_{\mathbb{A}} and ν𝔹\nu_{\mathbb{B}} on the parameter spaces 𝔸\mathbb{A} and 𝔹\mathbb{B} respectively. For α𝔸\alpha\in\mathbb{A} and β𝔹\beta\in\mathbb{B}, we define

Tα,βx={x(1+2αxα)x[0,12],β(x12)x(12,1].\displaystyle T_{\alpha,\beta}x=\begin{cases}x\left(1+2^{\alpha}x^{\alpha}\right)&x\in\left[0,\frac{1}{2}\right],\\ \beta\left(x-\frac{1}{2}\right)&x\in\left(\frac{1}{2},1\right].\end{cases} (4.3)

Then (2)–(2) and (A) are satisfied. If we set 𝔹k(𝜶){β𝔹:η(𝜶,β)=k}\mathbb{B}_{k}(\bm{\alpha})\coloneqq\{\beta\in\mathbb{B}:\eta(\bm{\alpha},\beta)=k\} for 𝜶=(α,α,)𝔸\bm{\alpha}=(\alpha,\alpha,\dots)\in\mathbb{A}^{\mathbb{N}}, then 𝔹=k=1𝔹k(𝜶)\mathbb{B}=\bigcup_{k=1}^{\infty}\mathbb{B}_{k}(\bm{\alpha}) (disjoint) for each 𝜶𝔸\bm{\alpha}\in\mathbb{A}^{\mathbb{N}}. For β𝔹k(𝜶)\beta\in\mathbb{B}_{k}(\bm{\alpha}), by (1.1), we have

yn+1𝜶,β12(n+k)1αβandynη(𝜶,β)+1𝜶,β12n1αβ.y_{n+1}^{\bm{\alpha},\beta}-\frac{1}{2}\approx\frac{(n+k)^{-\frac{1}{\alpha}}}{\beta}\quad\text{and}\quad y_{n-\eta(\bm{\alpha},\beta)+1}^{\bm{\alpha},\beta}-\frac{1}{2}\approx\frac{n^{-\frac{1}{\alpha}}}{\beta}.

Hence, Theorem 3.2 ensures that the random piecewise convex map {Tα,β;ν𝔹:β𝔹}\{T_{\alpha,\beta};\nu_{\mathbb{B}}:\beta\in\mathbb{B}\}, where α𝔸\alpha\in\mathbb{A} is fixed and 𝜶=(α,α,)𝔸\bm{\alpha}=(\alpha,\alpha,\dots)\in\mathbb{A}^{\mathbb{N}}, has an invariant measure μ𝜶\mu_{\bm{\alpha}} such that

μ𝜶(Xn𝜶)\displaystyle\mu_{\bm{\alpha}}\left(X_{n}^{\bm{\alpha}}\right) k=1n1{β:η(β)<n}𝔹k(𝜶)n1αβ𝑑ν𝔹(β)+ν𝔹{β𝔹:η(β)n}\displaystyle\approx\sum_{k=1}^{n-1}\int_{\{\beta:\eta(\beta)<n\}\cap\mathbb{B}_{k}(\bm{\alpha})}\frac{n^{-\frac{1}{\alpha}}}{\beta}d\nu_{\mathbb{B}}(\beta)+\nu_{\mathbb{B}}\left\{\beta\in\mathbb{B}:\eta(\beta)\geq n\right\}
n1αk=1n1𝔹k(𝜶)1β𝑑ν𝔹(β)+k=nν𝔹(𝔹k(𝜶))\displaystyle\approx n^{-\frac{1}{\alpha}}\sum_{k=1}^{n-1}\int_{\mathbb{B}_{k}(\bm{\alpha})}\frac{1}{\beta}d\nu_{\mathbb{B}}(\beta)+\sum_{k=n}^{\infty}\nu_{\mathbb{B}}\left(\mathbb{B}_{k}(\bm{\alpha})\right)

for nn large enough. Thus we have

Proposition 4.3.

The random piecewise convex map derived from (4.3) admits a λ\lambda-equivalent, conservative and ergodic σ\sigma-finite invariant measure μ\mu such that for any α𝔸\alpha^{\prime}\in\mathbb{A} with ν𝔸{α𝔸:αα}>0\nu_{\mathbb{A}}\{\alpha\in\mathbb{A}:\alpha^{\prime}\geq\alpha\}>0

n1α0k=1n1𝔹k(𝜶0)1β𝑑ν𝔹(β)+k=nν𝔹(𝔹k(𝜶0))μ(Xn𝜶𝟎)and\displaystyle n^{-\frac{1}{\alpha_{0}}}\sum_{k=1}^{n-1}\int_{\mathbb{B}_{k}(\bm{\alpha}_{0})}\frac{1}{\beta}d\nu_{\mathbb{B}}(\beta)+\sum_{k=n}^{\infty}\nu_{\mathbb{B}}\left(\mathbb{B}_{k}(\bm{\alpha}_{0})\right)\lessapprox\mu\left(X_{n}^{\bm{\alpha_{0}}}\right)\quad\text{and}
μ(Xn𝜶)n1αk=1n1𝔹k(𝜶)1β𝑑ν𝔹(β)+k=nν𝔹(𝔹k(𝜶))\displaystyle\qquad\qquad\qquad\mu\left(X_{n}^{\bm{\alpha^{\prime}}}\right)\lessapprox n^{-\frac{1}{\alpha^{\prime}}}\sum_{k=1}^{n-1}\int_{\mathbb{B}_{k}(\bm{\alpha^{\prime}})}\frac{1}{\beta}d\nu_{\mathbb{B}}(\beta)+\sum_{k=n}^{\infty}\nu_{\mathbb{B}}\left(\mathbb{B}_{k}(\bm{\alpha^{\prime}})\right)

for nn large enough, where 𝛂0=(α0,α0,)\bm{\alpha}_{0}=(\alpha_{0},\alpha_{0},\dots) and 𝛂=(α,α,)\bm{\alpha^{\prime}}=(\alpha^{\prime},\alpha^{\prime},\dots).

Remark 4.2.

As an example, let ν𝔸\nu_{\mathbb{A}} be the normalized Lebesgue measure on 𝔸=[12,2]\mathbb{A}=[\frac{1}{2},2] (that is, α0=12\alpha_{0}=\frac{1}{2}, α1=2\alpha_{1}=2 and α\alpha^{\prime} can be taken to be an arbitrary number in (α0,α1](\alpha_{0},\alpha_{1}]) and dν𝔹(β)=(1)βdβd\nu_{\mathbb{B}}(\beta)=(1-\ell)\beta^{-\ell}d\beta on 𝔹=[0,1]\mathbb{B}=[0,1] for some (0,1)\ell\in(0,1). Then Proposition 4.3 tells us that μ([0,1])=\mu([0,1])=\infty if >12\ell>\frac{1}{2}. Indeed, for each 𝜶=(α,α,)\bm{\alpha}=(\alpha,\alpha,\dots) where α(12,2]\alpha\in(\frac{1}{2},2], since sup𝔹k+1(𝜶)β=inf𝔹k(𝜶)β\sup_{\mathbb{B}_{k+1}(\bm{\alpha})}\beta=\inf_{\mathbb{B}_{k}(\bm{\alpha})}\beta for k1k\geq 1 and sup𝔹1(𝜶)β=1\sup_{\mathbb{B}_{1}(\bm{\alpha})}\beta=1, we have

n1αk=1n1𝔹k(𝜶)1β𝑑ν𝔹(β)\displaystyle n^{-\frac{1}{\alpha}}\sum_{k=1}^{n-1}\int_{\mathbb{B}_{k}(\bm{\alpha})}\frac{1}{\beta}d\nu_{\mathbb{B}}(\beta) =n1αinf𝔹n1(𝜶)β11β1+𝑑β\displaystyle=n^{-\frac{1}{\alpha}}\int_{\inf_{\mathbb{B}_{n-1}(\bm{\alpha})}\beta}^{1}\frac{1-\ell}{\beta^{1+\ell}}d\beta
n1α(supβ𝔹n(𝜶)β1)\displaystyle\approx n^{-\frac{1}{\alpha}}\left(\sup_{\beta\in\mathbb{B}_{n}(\bm{\alpha})}\beta^{-\ell}-1\right)
n1α\displaystyle\approx n^{-\frac{1-\ell}{\alpha}}

and

k=nν𝔹(𝔹k(𝜶))\displaystyle\sum_{k=n}^{\infty}\nu_{\mathbb{B}}\left(\mathbb{B}_{k}(\bm{\alpha})\right) =k=n𝔹k(𝜶)1β𝑑β=k=n(supβ𝔹k(𝜶)β1infβ𝔹k(𝜶)β1)\displaystyle=\sum_{k=n}^{\infty}\int_{\mathbb{B}_{k}(\bm{\alpha})}\frac{1-\ell}{\beta^{\ell}}d\beta=\sum_{k=n}^{\infty}\left(\sup_{\beta\in\mathbb{B}_{k}(\bm{\alpha})}\beta^{1-\ell}-\inf_{\beta\in\mathbb{B}_{k}(\bm{\alpha})}\beta^{1-\ell}\right)
=supβ𝔹n(𝜶)β1\displaystyle=\sup_{\beta\in\mathbb{B}_{n}(\bm{\alpha})}\beta^{1-\ell}
n1α\displaystyle\approx n^{-\frac{1-\ell}{\alpha}}

for large nn. Here we used that any point β\beta in 𝔹k(𝜶)\mathbb{B}_{k}(\bm{\alpha}) can be approximated by

(k+1)1αβk1α(k+1)^{-\frac{1}{\alpha}}\lessapprox\beta\lessapprox k^{-\frac{1}{\alpha}}

asymptotically. Then, since we can choose α\alpha arbitrarily close to 12\frac{1}{2}, the claim follows.
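Both displayed estimates can be checked numerically. In the following Python sketch (an illustration only, with hypothetical values of α and ℓ), the boundary sup_{𝔹_k(𝜶)}β is replaced by its asymptotic value k^{-1/α} from the last display, and both quantities are rescaled by n^{(1-ℓ)/α}; the two printed columns should then stabilize, in agreement with the computation above.

# Numerical illustration of Remark 4.2 (not from the paper): treat the boundary
# sup_{B_k(alpha)} beta ~ k^{-1/alpha} as exact and check that both displayed
# quantities decay like n^{-(1-ell)/alpha}.
alpha, ell = 0.75, 0.7                 # hypothetical parameters with ell > 1/2
def sup_Bn(n):
    return n ** (-1.0 / alpha)         # asymptotic boundary of B_n(alpha)
def first_quantity(n):
    # n^{-1/alpha} * integral over (sup_Bn(n), 1] of (1 - ell) beta^{-(1+ell)} d beta
    return n ** (-1.0 / alpha) * (1.0 - ell) / ell * (sup_Bn(n) ** (-ell) - 1.0)
def second_quantity(n):
    # sum_{k >= n} nu_B(B_k(alpha)) = sup_{B_n(alpha)} beta^{1 - ell}
    return sup_Bn(n) ** (1.0 - ell)
for n in (10, 100, 1000, 10000):
    scale = n ** ((1.0 - ell) / alpha)
    print(n, first_quantity(n) * scale, second_quantity(n) * scale)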

4.4. Random weakly expanding maps with a critical point

Let 𝔸(0,+)\mathbb{A}\subset(0,+\infty) and 𝔹(1,+)\mathbb{B}\subset(1,+\infty) be compact sets and ν𝔸\nu_{\mathbb{A}} and ν𝔹\nu_{\mathbb{B}} be probability measures on 𝔸\mathbb{A} and 𝔹\mathbb{B}, respectively. We let α0min𝔸α\alpha_{0}\coloneqq\min_{\mathbb{A}}\alpha. For α𝔸\alpha\in\mathbb{A} and β𝔹\beta\in\mathbb{B}, define

Tα,βx={x(1+2αxα)x[0,12],2β(x12)βx(12,1].\displaystyle T_{\alpha,\beta}x=\begin{cases}x\left(1+2^{\alpha}x^{\alpha}\right)&x\in\left[0,\frac{1}{2}\right],\\ 2^{\beta}\left(x-\frac{1}{2}\right)^{\beta}&x\in\left(\frac{1}{2},1\right].\end{cases} (4.4)
Figure 2. The graph of Tα,βT_{\alpha,\beta} from (4.4)

Note that Tα,βT_{\alpha,\beta} has an indifferent fixed point at 0 for each α𝔸\alpha\in\mathbb{A} and β𝔹\beta\in\mathbb{B}. Tα,βT_{\alpha,\beta} also has derivative 0 at 12\frac{1}{2}, which is a preimage of the indifferent fixed point, for any α>0\alpha>0 and β>1\beta>1 (see also Figure 2). According to the asymptotic approximation (1.2), we have

yn+1𝜶,β12n1αβ.y_{n+1}^{\bm{\alpha},\beta}-\frac{1}{2}\approx n^{-\frac{1}{\alpha\beta}}.

Then applying Theorem 3.3 to this model, since η(𝜶,β)=0\eta(\bm{\alpha},\beta)=0 for each 𝜶𝔸\bm{\alpha}\in\mathbb{A}^{\mathbb{N}} and β𝔹\beta\in\mathbb{B}, we have the following.

Proposition 4.4.

The random piecewise convex map derived from (4.4) admits a λ\lambda-equivalent, conservative and ergodic σ\sigma-finite invariant measure μ\mu such that for any α𝔸\alpha^{\prime}\in\mathbb{A} with ν𝔸{α𝔸:αα}>0\nu_{\mathbb{A}}\{\alpha\in\mathbb{A}:\alpha^{\prime}\leq\alpha\}>0

𝔹n1α0β𝑑ν𝔹(β)μ(Xn𝜶𝟎)andμ(Xn𝜶)𝔹n1αβ𝑑ν𝔹(β)\int_{\mathbb{B}}n^{-\frac{1}{\alpha_{0}\beta}}d\nu_{\mathbb{B}}(\beta)\lessapprox\mu\left(X_{n}^{\bm{\alpha_{0}}}\right)\quad\text{and}\quad\mu\left(X_{n}^{\bm{\alpha^{\prime}}}\right)\lessapprox\int_{\mathbb{B}}n^{-\frac{1}{\alpha^{\prime}\beta}}d\nu_{\mathbb{B}}(\beta)

for nn large enough.

Remark 4.3.

(I) From Proposition 4.4, if ν𝔹{β𝔹:α0β1}>0\nu_{\mathbb{B}}\left\{\beta\in\mathbb{B}:\alpha_{0}\beta\geq 1\right\}>0 then μ([0,1])=\mu([0,1])=\infty, and if α<(max𝔹β)1(<1)\alpha^{\prime}<(\max_{\mathbb{B}}\beta)^{-1}\,(<1) for some α𝔸\alpha^{\prime}\in\mathbb{A} then μ([0,1])<\mu([0,1])<\infty. We remark that the invariant measure of (4.4) becomes infinite more easily than that of (4.2).

(II) [9, Theorem 1.1] showed that an upper bound for the invariant density

dμdλ(x)x(1+α1β)\frac{d\mu}{d\lambda}(x)\lessapprox x^{-(1+\alpha-\frac{1}{\beta})}

holds for the deterministic map (4.4) with a fixed parameter such that 1<β<1α1<\beta<\frac{1}{\alpha} (and hence only finite invariant measures are dealt with in [9]). This also implies that μ(Xn𝜶)n1αβ\mu(X_{n}^{\bm{\alpha}})\lessapprox n^{-\frac{1}{\alpha\beta}}. Thus Proposition 4.4 is a random generalization of [9] (note that our result admits the parameter range β1α\beta\geq\frac{1}{\alpha}) and, in addition, it provides a lower bound for μ(Xn𝜶)\mu(X_{n}^{\bm{\alpha}}) for nn large enough.
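The dichotomy in (I) can be illustrated by simulating the random map (4.4) directly: when μ is infinite, a typical orbit spends an asymptotically vanishing fraction of its time in Y=(1/2,1], whereas in the finite case this fraction stabilizes at a positive value. The Python sketch below is only a plausibility check; the two parameter sets are hypothetical choices on either side of the threshold in (I), and α and β are drawn uniformly from the indicated finite sets.

import random
# Illustrative simulation of the random map (4.4): print the fraction of time a
# random orbit spends in Y = (1/2, 1]. With alpha * beta < 1 throughout (mu is
# finite) the fraction should stabilize; with alpha_0 * beta >= 1 on a set of
# positive nu_B-measure (mu is infinite) it should drift towards zero.
def T(x, a, b):
    return x * (1.0 + (2.0 * x) ** a) if x <= 0.5 else (2.0 * (x - 0.5)) ** b
def fraction_in_Y(alphas, betas, steps=10 ** 6, seed=1):
    rng = random.Random(seed)
    x, hits = 0.7, 0
    for _ in range(steps):
        hits += x > 0.5
        x = T(x, rng.choice(alphas), rng.choice(betas))
        x = min(x, 1.0 - 1e-12)   # keep rounding from parking the orbit at the repelling fixed point 1
    return hits / steps
print(fraction_in_Y([0.3, 0.4], [1.5, 2.0]))   # alpha * beta < 1 everywhere: mu finite
print(fraction_in_Y([0.8, 1.2], [1.5, 2.0]))   # alpha_0 * beta >= 1 with positive probability: mu infinite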

4.5. Random weakly expanding maps with a flat point

Let 𝔸(0,+)\mathbb{A}\subset(0,+\infty) and 𝔹[1,+)\mathbb{B}\subset[1,+\infty) be compact sets (if β<1\beta<1, then the convexity of SβS_{\beta} may be violated) and let ν𝔸\nu_{\mathbb{A}} and ν𝔹\nu_{\mathbb{B}} be probability measures on 𝔸\mathbb{A} and 𝔹\mathbb{B}, respectively. For α𝔸\alpha\in\mathbb{A} and β𝔹\beta\in\mathbb{B}, define

Tα,βx={x(1+2αxα)x[0,12],exp(2β(x12)β)x(12,1].\displaystyle T_{\alpha,\beta}x=\begin{cases}x\left(1+2^{\alpha}x^{\alpha}\right)&x\in\left[0,\frac{1}{2}\right],\\ \exp\left(2^{\beta}-\left(x-\frac{1}{2}\right)^{-\beta}\right)&x\in\left(\frac{1}{2},1\right].\end{cases} (4.5)

Then we can see that 12\frac{1}{2}, a preimage of 0, is a flat point in the sense that Tα,β(n)(12)=0T_{\alpha,\beta}^{(n)}(\frac{1}{2})=0 for every n1n\geq 1, any α>0\alpha>0 and β1\beta\geq 1. Using the same notation as before, we have

yn+1𝜶,β12(logn)1βy_{n+1}^{\bm{\alpha},\beta}-\frac{1}{2}\approx\left(\log n\right)^{-\frac{1}{\beta}}

for large n1n\geq 1. We can again apply Theorem 3.3 to this model.

Proposition 4.5.

The random piecewise convex map derived from (4.5) admits a λ\lambda-equivalent, conservative and ergodic σ\sigma-finite invariant measure μ\mu such that for any 𝛂=(α,α,)𝔸\bm{\alpha}=(\alpha,\alpha,\dots)\in\mathbb{A}^{\mathbb{N}}

μ(Xn𝜶)𝔹(logn)1β𝑑ν𝔹(β)\mu\left(X_{n}^{\bm{\alpha}}\right)\approx\int_{\mathbb{B}}\left(\log n\right)^{-\frac{1}{\beta}}d\nu_{\mathbb{B}}(\beta)

for nn large enough. Consequently, we always have μ([0,1])=\mu([0,1])=\infty for any 𝔸\mathbb{A}, 𝔹\mathbb{B}, ν𝔸\nu_{\mathbb{A}} and ν𝔹\nu_{\mathbb{B}}.
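The logarithmic asymptotics can be made concrete: inverting the right branch of (4.5), that is, solving exp(2^β − (y−1/2)^{-β}) = t for y, gives y − 1/2 = (2^β − log t)^{-1/β}, and substituting t = x_n ≈ n^{-1/α} (treated as exact for this illustration) yields a quantity comparable to (log n)^{-1/β}. A small Python check with hypothetical parameter values:

import math
# Small numerical check (illustration only): invert the right branch of (4.5) at
# t = n^{-1/alpha} and compare with (log n)^{-1/beta}. The printed ratio stays
# bounded and approaches alpha^{1/beta} slowly (the correction decays like 1/log n).
alpha, beta = 0.8, 1.5                      # hypothetical parameter values
def inv_right_branch_minus_half(t):
    # solve exp(2^beta - (y - 1/2)^{-beta}) = t for y - 1/2
    return (2.0 ** beta - math.log(t)) ** (-1.0 / beta)
for n in (10 ** 2, 10 ** 4, 10 ** 6, 10 ** 8):
    t = n ** (-1.0 / alpha)
    print(n, inv_right_branch_minus_half(t) / math.log(n) ** (-1.0 / beta))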

Remark 4.4.

We remark that even the modification α=0\alpha=0, which makes the left branch uniformly expanding, leaves no room for μ\mu to be finite. That is, if 𝔸={0}\mathbb{A}=\{0\} (so we drop the symbol α\alpha henceforth in this remark) and ν𝔸\nu_{\mathbb{A}} is the point mass on 𝔸\mathbb{A}, then Tβx=2xT_{\beta}x=2x for x[0,12]x\in[0,\frac{1}{2}] and yn+1β12n1βy_{n+1}^{\beta}-\frac{1}{2}\approx n^{-\frac{1}{\beta}}. This means that μ(Xn)𝔹n1β𝑑ν𝔹(β)\mu(X_{n})\approx\int_{\mathbb{B}}n^{-\frac{1}{\beta}}d\nu_{\mathbb{B}}(\beta) and we still always have μ([0,1])=\mu([0,1])=\infty because min𝔹β1\min_{\mathbb{B}}\beta\geq 1.

4.6. Random weakly expanding maps with a wide entrance

Our next example, which is similar to the examples (4.1) and (4.4), is defined as follows. Let 𝔸=[0,α1]\mathbb{A}=[0,\alpha_{1}] for some 0<α1<0<\alpha_{1}<\infty, let 𝔹=[1,+)\mathbb{B}=[1,+\infty), and let ν𝔸\nu_{\mathbb{A}} and ν𝔹\nu_{\mathbb{B}} be probability measures on 𝔸\mathbb{A} and 𝔹\mathbb{B}, respectively. For α𝔸\alpha\in\mathbb{A} and β𝔹\beta\in\mathbb{B}, define

Tα,βx={x(1+2αxα)x[0,12],(x12)βx(12,1].\displaystyle T_{\alpha,\beta}x=\begin{cases}x\left(1+2^{\alpha}x^{\alpha}\right)&x\in\left[0,\frac{1}{2}\right],\\ \left(x-\frac{1}{2}\right)^{\beta}&x\in\left(\frac{1}{2},1\right].\end{cases} (4.6)

From the definition, Tα,β(1)=2βT_{\alpha,\beta}(1)=2^{-\beta}, and hence the image of the right branch (12,1](\frac{1}{2},1] shrinks towards 0 as β\beta tends to infinity. Again, let 𝔹k(𝜶)={β𝔹:η(𝜶,β)=k}\mathbb{B}_{k}(\bm{\alpha})=\{\beta\in\mathbb{B}:\eta(\bm{\alpha},\beta)=k\} and 𝔹=k=1𝔹k(𝜶)\mathbb{B}=\bigcup_{k=1}^{\infty}\mathbb{B}_{k}(\bm{\alpha}) (disjoint) for each 𝜶=(α,α,)𝔸\bm{\alpha}=(\alpha,\alpha,\dots)\in\mathbb{A}^{\mathbb{N}}.

We consider the two cases α=0\alpha=0 and α>0\alpha>0. We first consider the case α=0\alpha=0, which gives a lower bound for the invariant measure of the random piecewise convex map given by (4.6). In this case, it is straightforward to see that for each β𝔹k(𝟎)\beta\in\mathbb{B}_{k}(\bm{0})

xn𝟎=2nandyn+1𝟎,β12=2n+kβx_{n}^{\bm{0}}=2^{-n}\quad\text{and}\quad y_{n+1}^{\bm{0},\beta}-\tfrac{1}{2}=2^{-\frac{n+k}{\beta}}

with notation 𝟎=(0,0,)𝔸\bm{0}=(0,0,\dots)\in\mathbb{A}^{\mathbb{N}}. We also have 𝔹k(𝟎)=[k,k+1)\mathbb{B}_{k}(\bm{0})=[k,k+1) for k1k\geq 1. Thus the invariant measure μ𝟎\mu_{\bm{0}} for a random piecewise convex map {T0,β;ν𝔹:β𝔹}\{T_{0,\beta};\nu_{\mathbb{B}}:\beta\in\mathbb{B}\} satisfies that

μ𝟎(Xn𝟎)\displaystyle\mu_{\bm{0}}\left(X_{n}^{\bm{0}}\right) k=1n1𝔹k(𝟎)(ynk+1𝟎,β12)𝑑ν𝔹(β)+k=nν𝔹(𝔹k(𝟎))\displaystyle\approx\sum_{k=1}^{n-1}\int_{\mathbb{B}_{k}(\bm{0})}\left(y_{n-k+1}^{\bm{0},\beta}-\tfrac{1}{2}\right)d\nu_{\mathbb{B}}(\beta)+\sum_{k=n}^{\infty}\nu_{\mathbb{B}}\left(\mathbb{B}_{k}(\bm{0})\right)
=k=1n1[k,k+1)2nβ𝑑ν𝔹(β)+ν𝔹([n,))\displaystyle=\sum_{k=1}^{n-1}\int_{[k,k+1)}2^{-\frac{n}{\beta}}d\nu_{\mathbb{B}}(\beta)+\nu_{\mathbb{B}}\left([n,\infty)\right)
=[1,n)2nβ𝑑ν𝔹(β)+ν𝔹([n,))\displaystyle=\int_{[1,n)}2^{-\frac{n}{\beta}}d\nu_{\mathbb{B}}(\beta)+\nu_{\mathbb{B}}\left([n,\infty)\right)

for large nn by Theorem 3.2.

We next consider the case α>0\alpha>0, which is needed for an upper bound on the invariant measure. Then for each 𝜶=(α,α,)𝔸\bm{\alpha}=(\alpha,\alpha,\dots)\in\mathbb{A}^{\mathbb{N}} and β𝔹k(𝜶)\beta\in\mathbb{B}_{k}(\bm{\alpha}) we have

yn+1𝜶,β12(n+k)1αβy_{n+1}^{\bm{\alpha},\beta}-\frac{1}{2}\approx(n+k)^{-\frac{1}{\alpha\beta}}

as nn\to\infty. Then the invariant measure μ𝜶\mu_{\bm{\alpha}} for the random piecewise convex map {Tα,β;ν𝔹:β𝔹}\{T_{\alpha,\beta};\nu_{\mathbb{B}}:\beta\in\mathbb{B}\} satisfies that

μ𝜶(Xn𝜶)\displaystyle\mu_{\bm{\alpha}}\left(X_{n}^{\bm{\alpha}}\right) k=1n1𝔹k(𝜶)n1αβ𝑑ν𝔹(β)+k=nν𝔹(𝔹k(𝜶))\displaystyle\approx\sum_{k=1}^{n-1}\int_{\mathbb{B}_{k}(\bm{\alpha})}n^{-\frac{1}{\alpha\beta}}d\nu_{\mathbb{B}}(\beta)+\sum_{k=n}^{\infty}\nu_{\mathbb{B}}\left(\mathbb{B}_{k}(\bm{\alpha})\right)
[1,sup𝔹n1(𝜶)β)n1αβ𝑑ν𝔹(β)+k=nν𝔹(𝔹k(𝜶))\displaystyle\approx\int_{\left[1,\sup_{\mathbb{B}_{n-1}(\bm{\alpha})}\beta\right)}n^{-\frac{1}{\alpha\beta}}d\nu_{\mathbb{B}}(\beta)+\sum_{k=n}^{\infty}\nu_{\mathbb{B}}\left(\mathbb{B}_{k}(\bm{\alpha})\right)

From these observations, we have the following.

Proposition 4.6.

The random piecewise convex map derived from (4.6) admits a λ\lambda-equivalent, conservative and ergodic σ\sigma-finite invariant measure μ\mu such that for any α𝔸\alpha^{\prime}\in\mathbb{A} with ν𝔸{α𝔸:αα}>0\nu_{\mathbb{A}}\{\alpha\in\mathbb{A}:\alpha^{\prime}\leq\alpha\}>0

[1,n)2nβ𝑑ν𝔹(β)+ν𝔹([n,))μ(Xn𝟎)and\displaystyle\int_{[1,n)}2^{-\frac{n}{\beta}}d\nu_{\mathbb{B}}(\beta)+\nu_{\mathbb{B}}\left([n,\infty)\right)\lessapprox\mu\left(X_{n}^{\bm{0}}\right)\quad\text{and}
μ(Xn𝜶)[1,inf𝔹n(𝜶)β)n1αβ𝑑ν𝔹(β)+ν𝔹([inf𝔹n(𝜶)β,))\displaystyle\qquad\qquad\qquad\mu\left(X_{n}^{\bm{\alpha^{\prime}}}\right)\lessapprox\int_{\left[1,\inf_{\mathbb{B}_{n}(\bm{\alpha^{\prime}})}\beta\right)}n^{-\frac{1}{\alpha^{\prime}\beta}}d\nu_{\mathbb{B}}(\beta)+\nu_{\mathbb{B}}\left(\bigg{[}\inf_{\mathbb{B}_{n}(\bm{\alpha^{\prime}})}\beta,\infty\bigg{)}\right)

for nn large enough, where 𝟎=(0,0,)𝔸\bm{0}=(0,0,\dots)\in\mathbb{A}^{\mathbb{N}} and 𝛂=(α,α,)𝔸\bm{\alpha^{\prime}}=(\alpha^{\prime},\alpha^{\prime},\dots)\in\mathbb{A}^{\mathbb{N}}.

Remark 4.5.

As an example of this proposition, if dν𝔹(β)=(1)βdβd\nu_{\mathbb{B}}(\beta)=(\ell-1)\beta^{-\ell}d\beta on 𝔹=[1,+)\mathbb{B}=[1,+\infty) for some >1\ell>1, then the invariant measure μ\mu is infinite whenever 2\ell\leq 2, independently of the choice of ν𝔸\nu_{\mathbb{A}}. The calculation is similar to that in Remark 4.2: it suffices to show that the lower bound in Proposition 4.6 is bounded below by a constant multiple of n1n^{-1}.

[1,n)2nβ𝑑ν𝔹(β)+ν𝔹([n,))\displaystyle\int_{[1,n)}2^{-\frac{n}{\beta}}d\nu_{\mathbb{B}}(\beta)+\nu_{\mathbb{B}}\left([n,\infty)\right) 2n1n(1)β𝑑β+n(1)β𝑑β\displaystyle\geq 2^{-n}\int_{1}^{n}(\ell-1)\beta^{-\ell}d\beta+\int_{n}^{\infty}(\ell-1)\beta^{-\ell}d\beta
=2n(1n(1))+n(1)\displaystyle=2^{-n}\left(1-n^{-(\ell-1)}\right)+n^{-(\ell-1)}
n(1)n1\displaystyle\geq n^{-(\ell-1)}\geq n^{-1}

where the last inequality uses 2\ell\leq 2, and our conclusion follows.
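The same bound is also easy to evaluate numerically. The Python sketch below (an illustration only, with a hypothetical value of ℓ in (1,2]) approximates the integral on the left-hand side of the display by a midpoint rule, adds the tail ν_𝔹([n,∞)) = n^{-(ℓ-1)}, and prints n times the result, which indeed stays bounded away from zero.

# Crude numerical check (illustration only) of the lower bound in Remark 4.5 for
# d nu_B(beta) = (ell - 1) beta^{-ell} d beta on [1, infinity) with ell <= 2.
ell = 1.8                         # hypothetical exponent in (1, 2]
def lower_bound(n, grid=100000):
    h = (n - 1.0) / grid          # midpoint rule on [1, n)
    integral = 0.0
    for j in range(grid):
        beta = 1.0 + (j + 0.5) * h
        integral += 2.0 ** (-n / beta) * (ell - 1.0) * beta ** (-ell)
    integral *= h
    tail = n ** (-(ell - 1.0))    # nu_B([n, infinity))
    return integral + tail
for n in (10, 100, 1000):
    print(n, n * lower_bound(n))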

4.7. Counterexample with infinite derivative

We finally illustrate an example which does not satisfy (A) in Theorem 3.1. The example below still admits an equivalent σ\sigma-finite invariant measure but the density function of the invariant measure is no longer bounded away from zero.

Let 𝔹\mathbb{B} be a parameter space and ν𝔹\nu_{\mathbb{B}} be a probability measure on 𝔹\mathbb{B}. For β𝔹\beta\in\mathbb{B}, define

Tβx={112xx[0,12],Sβxx(12,1]\displaystyle T_{\beta}x=\begin{cases}1-\sqrt{1-2x}&x\in\left[0,\frac{1}{2}\right],\\ S_{\beta}x&x\in\left(\frac{1}{2},1\right]\end{cases} (4.7)

where SβS_{\beta}'s satisfy (2)–(2). Note that (A) does not hold, while (B) holds. We then assume that there is some κ(0,1)\kappa\in(0,1) such that for ν𝔹\nu_{\mathbb{B}}-almost every β𝔹\beta\in\mathbb{B}, Sβ(1)1κS_{\beta}(1)\leq 1-\kappa holds.

Since (4.7) satisfies (B), there is an equivalent σ\sigma-finite invariant measure μ\mu for this random piecewise convex map. Then we have for any 0<ε<κ0<\varepsilon<\kappa

μ([1ε,1])=𝔹μ(Tβ1[1ε,1])𝑑ν𝔹(β)=μ([1ε22,12]).\mu\left([1-\varepsilon,1]\right)=\int_{\mathbb{B}}\mu\left(T_{\beta}^{-1}[1-\varepsilon,1]\right)d\nu_{\mathbb{B}}(\beta)=\mu\left(\left[\tfrac{1-\varepsilon^{2}}{2},\tfrac{1}{2}\right]\right).

Since dμdλC1\frac{d\mu}{d\lambda}\leq C_{1} on [1κ22,1][\frac{1-\kappa^{2}}{2},1] for some C1>0C_{1}>0,

μ([1ε22,12])C1ε22.\mu\left(\left[\tfrac{1-\varepsilon^{2}}{2},\tfrac{1}{2}\right]\right)\leq\frac{C_{1}\varepsilon^{2}}{2}.

If dμdλ\frac{d\mu}{d\lambda} were bounded away from zero on YY, then we would also have

μ([1ε,1])C2ε\mu\left([1-\varepsilon,1]\right)\geq C_{2}\varepsilon

for some C2>0C_{2}>0. However this implies ε2C2C1\varepsilon\geq\frac{2C_{2}}{C_{1}}, which is contradiction since ε>0\varepsilon>0 can be taken arbitrary small. Therefore, we conclude that dμdλ(x)0\frac{d\mu}{d\lambda}(x)\to 0 as x1x\to 1.

Acknowledgements

This work was supported by the Research Institute for Mathematical Sciences, an International Joint Usage/Research Center located in Kyoto University. Hisayoshi Toyokawa was supported by JSPS KAKENHI Grant Number 21K20330.

References

  • [2] J. Aaronson, An introduction to infinite ergodic theory, Mathematical Surveys and Monographs, 50. American Mathematical Society, Providence, RI, (1997), xii+284 pp.
  • [3] R. Aimino, M. Nicol and S. Vaienti, Annealed and quenched limit theorems for random expanding dynamical systems, Probab. Theory Related Fields, 162 (2015), 233–274.
  • [4] L. Arnold, Random dynamical systems, Springer (1995).
  • [5] W. Bahsoun and C. Bose, Mixing rates and limit theorems for random intermittent maps, Nonlinearity 29 (2016), 1417–1433.
  • [6] W. Bahsoun, C. Bose and Y. Duan, Decay of correlation for random intermittent maps, Nonlinearity 27 (2014), 1543–1554.
  • [7] W. Bahsoun, C. Bose and M. Ruziboev, Quenched decay of correlations for slowly mixing systems, Trans. Amer. Math. Soc., 372 (2019), 6547–6587.
  • [8] W. Bahsoun and P. Góra, Weakly convex and concave random maps with position dependent probabilities, Stochastic Anal. Appl., 21 (2003), 983–994.
  • [9] H. Cui, Invariant densities for intermittent maps with critical points, J. Difference Equ. Appl., 27 (2021), 404–421.
  • [10] D. Dragičević, G. Froyland, C. González-Tokman and S. Vaienti, A spectral approach for quenched limit theorems for random expanding dynamical systems, Comm. in Math. Phy., 360 (2018), 1121–1187.
  • [11] D. Dragičević, G. Froyland, C. González-Tokman and S. Vaienti, Almost sure invariance principle for random piecewise expanding maps, Nonlinearity, 31 (2018), 2252–2280.
  • [12] S. R. Foguel, The ergodic theory of Markov processes, Van Nostrand Mathematical Studies, No. 21. Van Nostrand Reinhold Co., New York-Toronto, Ont.-London 1969 v+102 pp.
  • [13] S. R. Foguel, Selected topics in the study of Markov operators, Carolina Lecture Series, 9. University of North Carolina, Department of Mathematics, Chapel Hill, N.C., 1980.
  • [14] S. Gouëzel, Sharp polynomial estimates for the decay of correlations, Israel J. Math., 139 (2004), 29–65.
  • [15] T. Inoue, Asymptotic stability of densities for piecewise convex maps, Ann. Polon. Math., 57 (1992), 83–90.
  • [16] T. Inoue, Weakly attracting repellors for piecewise convex maps, Japan J. Indust. Appl. Math., 9 (1992), 413–430.
  • [17] T. Inoue, Invariant measures for position dependent random maps with continuous random parameters, Studia Math., 208 (2012), 11–29.
  • [18] T. Inoue, First return maps of random maps and invariant measures, Nonlinearity, 33 (2020), 249–275.
  • [19] T. Inoue, Comparison theorems for invariant measures of random dynamical systems, arXiv preprint arXiv:2303.09784.
  • [20] M. S. Islam, Piecewise convex deterministic dynamical systems and weakly convex random dynamical systems and their invariant measures, Int. J. Appl. Comput. Math., 7 (2021), 25 pp.
  • [21] Y. Kifer, Ergodic theory of random transformations, Progress in Probability and Statistics, 10. Birkhäuser Boston, Inc., Boston, MA, 1986. x+210 pp.
  • [22] U. Krengel, Ergodic theorems, De Gruyter Studies in Mathematics, 6. Walter de Gruyter & Co., Berlin, 1985. viii+357 pp.
  • [23] A. Lasota and J. A. Yorke, Exact dynamical systems and the Frobenius-Perron operator, Trans. Amer. Math. Soc., 273 (1982), 375–384.
  • [24] C. Liverani, B. Saussol and S. Vaienti, A probabilistic approach to intermittency, Ergodic Theory Dynam. Systems, 19 (1999), 671–685.
  • [25] F. Nakamura, Y. Nakano, H. Toyokawa and K. Yano, Arcsine law for random dynamics with a core, Nonlinearity, 36 (2023), 1491–1509.
  • [26] O. Sarig, Subexponential decay of correlations, Invent. Math., 150 (2002), 629–653.
  • [27] M. Thaler, Estimates of the invariant densities of endomorphisms with indifferent fixed points, Israel J. Math., 37 (1980), 303–314.
  • [28] M. Thaler and R. Zweimüller, Distributional limit theorems in infinite ergodic theory, Probab. Theory Related Fields 135 (2006), 15–52.
  • [29] H. Toyokawa, σ\sigma-finite invariant densities for eventually conservative Markov operators, Discrete and Continuous Dynamical Systems - A, 40 (2020), 2641–2669.
  • [30] H. Toyokawa, On the existence of a σ\sigma-finite acim for a random iteration of intermittent Markov maps with uniformly contractive part, Stochastics and Dynamics, 21 (2021), 14 pages.
  • [31] K. Yosida, Functional analysis, Reprint of the sixth (1980) edition. Classics in Mathematics. Springer-Verlag, Berlin, (1995).
  • [32] L.-S. Young, Recurrence times and rates of mixing, Israel J. Math., 110 (1999), 153–188.