
A Smoothed Analysis of the Space Complexity
of Computing a Chaotic Sequence

Naoaki Okada (Graduate School of Information Science and Electrical Engineering, Kyushu University) and Shuji Kijima (Faculty of Data Science, Shiga University)
Abstract

This work is motivated by the question of whether it is possible to compute a chaotic sequence efficiently: e.g., is it possible to obtain the n-th bit of a bit sequence generated by a chaotic map, such as the β-expansion, the tent map or the logistic map, in o(n) time/space? This paper gives an affirmative answer to the question about the space complexity of the tent map. We show that the decision problem of whether a given bit sequence is a valid tent code is solved in O(log² n) space in the sense of smoothed complexity.

1 Brief Introduction

A tent map f_μ : [0,1] → [0,1] (or simply f) is given by

f(x) = { μx : x ≤ 1/2;  μ(1−x) : x ≥ 1/2 }   (1)

where this paper is concerned with the case 1 < μ < 2. As Figure 1 shows, it is a simple piecewise-linear map that looks like a tent. Let x_n = f(x_{n−1}) = f^n(x) recursively for n = 1, 2, …, where x_0 = x for convenience. Clearly, x_0, x_1, x_2, … is a deterministic sequence. Nevertheless, this deterministic sequence shows a complex behavior, as if "random," when μ > 1; it is said to be chaotic [23]. For instance, f^n(x) becomes quite different from f^n(x′) for x ≠ x′ as n increases, even if |x − x′| is very small; this is one of the most significant characteristics of a chaotic sequence, known as the sensitivity to initial conditions: a chaotic sequence is said to be "unpredictable" despite being generated by a deterministic process [22, 31, 37, 6].
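To make the sensitivity concrete, here is a small sketch of ours (not from the paper; the choices μ = 8/5, x = 1/5 and the gap 10⁻⁹ are arbitrary) that iterates the map with exact rational arithmetic and reports the first step at which two nearby orbits fall on opposite sides of 1/2. Since the gap is stretched by a factor μ at every step until then (cf. Lemma 2.5 below), this must happen within roughly log_μ 10⁹ ≈ 45 steps:

```python
from fractions import Fraction

def tent(x, mu):
    """One exact step of the tent map: f(x) = mu * min(x, 1 - x)."""
    return mu * min(x, 1 - x)

def first_disagreement(x, y, mu, steps):
    """First index i at which the orbits of x and y lie on opposite sides of 1/2."""
    half = Fraction(1, 2)
    for i in range(steps):
        if (x < half) != (y < half):
            return i
        x, y = tent(x, mu), tent(y, mu)
    return None

mu = Fraction(8, 5)                 # any rational 1 < mu < 2 works here
x0 = Fraction(1, 5)
y0 = x0 + Fraction(1, 10**9)        # tiny initial perturbation
i = first_disagreement(x0, y0, mu, 100)
print(i)                            # some i <= 45: the two orbits have separated
```

Exact arithmetic matters here: with floating point, the rounding itself perturbs the orbit, which is precisely the issue taken up in Section 2.2.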

From the viewpoint of theoretical computer science, computing chaotic sequences seems to contain (at least) two computational issues: numerical issues, and combinatorial issues including computational complexity. This paper is concerned with the computational complexity of a simple problem: given μ, x and n, decide whether f^n(x) < 1/2. Its time complexity might be one of the most interesting questions; e.g., is it possible to "predict" whether f^n(x) < 1/2 in time polynomial in log n? Unfortunately, we cannot answer that question in this paper (we think the problem might be NP-hard, using the arguments on the complexity of algebra and number theory in [9], but we could not establish the fact). Instead, this paper is concerned with the space complexity of the problem.

Figure 1: A tent map f(x) and its cobweb. (a) Tent map f(x) for μ = 1.6. (b) Cobweb of x_0, x_1, …, x_15 for x_0 = 0.2; is x_1000 < 1/2?

1.1 Background

Chaos.

Chaotic sequences show many interesting figures such as cobwebs, strange attractors, bifurcations, etc. [22, 21, 24, 23, 18]. Chaos theory has been intensively developed since the 1960s in several contexts, such as electrical engineering, information theory, statistical physics, neuroscience and computer science, with many applications, such as weather forecasting, climate change, diastrophism and disaster resilience. For instance, cellular automata, including the game of life, are a classical topic in computer science closely related to the "edge of chaos." For another instance, the "sensitivity to initial conditions" is often regarded as unpredictability, and chaotic sequences are used in pseudo-random number generators, cryptography, and heuristics for NP-hard problems, including chaotic genetic algorithms.

From the viewpoint of theoretical computer science, the numerical issues of computing chaotic sequences have been intensively investigated in statistical physics, information theory and probability. In contrast, the computational complexity of computing a chaotic sequence seems not well developed. A simple reason may be that it does not look like a decision problem.

Tent map: 1-D, piecewise-linear and chaotic.

Interestingly, very simple maps show chaotic behavior. Among the simplest are piecewise-linear maps, including the tent map and the β-expansion (a.k.a. Bernoulli shift), which are 1-D maps, and the baker's map, which is a 2-D map [22, 30, 26, 27, 31, 37, 6, 13, 29].

The tent map, as well as the β-expansion, is known to be topologically conjugate to the logistic map, a quadratic map celebrated as a chaotic map. The chaotic behavior of the tent map, in terms of power spectra, band structure and critical behavior, is analyzed in e.g., [22, 31, 37, 6]. The tent map is also used for pseudo-random generation and encryption, e.g., [1, 2, 20], as well as for meta-heuristics for NP-hard problems [35, 10].

Smoothed analysis.

Linear programming is an optimization problem on a (piecewise) linear system, for a linear objective function. Dantzig in 1947 gave an "efficient" algorithm, known as the simplex method, for linear programming. Khachiyan [17] gave the ellipsoid method in 1979, and proved that linear programming is in P (cf. [19]). Karmarkar [15] in 1984 gave another polynomial-time algorithm, the interior point method.

Smoothed analysis was introduced by Spielman and Teng [33] to prove that the simplex algorithm for linear programming runs in "polynomial time," going beyond average-case analysis. There has been some recent progress on the smoothed analysis of algorithms [5, 4, 12, 11].

1.2 Contribution

This work.

This paper is concerned with a problem related to deciding whether f^n(x) < 1/2 for x ∈ [0,1), for the n-th iterated tent map f^n. More precisely, we will define the tent language L_n ⊆ {0,1}^n, consisting of the tent codes of x ∈ [0,1), in Section 2, and we are concerned with the correct recognition of L_n. The main target of the paper is the space complexity of the following simple problem: given a bit sequence b ∈ {0,1}^n and x ∈ [0,1), decide whether b is a tent code of x. One may think that this is just a matter of computing f^i(x) (i = 1, …, n), and that there is nothing more than the precision issue, even in the sense of computational complexity. However, we will show in Section 2.2 that a standard calculation with rounding-off easily produces an impossible tent code.

By a standard argument on numerical error, cf. [19], O(n) space is enough for this. At the same time, it seems hopeless to solve the target problem exactly in o(n) space, due to the "sensitivity to initial conditions" of a chaotic sequence. This paper is therefore concerned with the decision problem of whether an input b ∈ {0,1}^n is a tent code of an ε-perturbed x, and proves that it is correctly recognized in O(log² n) space (Theorem 2.8).

Related works.

The analysis technique of the paper basically follows [25], which showed that the recognition of L_n is in O(log² n) space on average. The technique is also very similar to, or essentially the same as, the Markov extension developed in the context of symbolic dynamics. In 1979, Hofbauer [14] gave a representation of the kneading invariants for unimodal maps, which is known as the Markov extension and/or the Hofbauer tower, and then discussed topological entropy. Hofbauer and Keller extensively developed these arguments in the 1980s; see e.g., [7, 3]. We do not think the algorithms of the paper are trivial, but they are composed of a combination of the above nontrivial arguments and some classical techniques for designing space-efficient algorithms.

As we stated above, the computational complexity of computing a chaotic sequence seems not well developed. Perl showed that some NP-complete systems, e.g., knapsack, show chaotic behavior [28]. On the other hand, it seems unknown whether every chaotic sequence is hard to compute in the sense of NP-hardness; in particular, we are not sure whether deciding f^n(x) < 1/2 is NP-hard for a tent map f. Recently, chaotic dynamics have been used for solving NP-hard problems, e.g., SAT [8].

1.3 Organization

In Section 2, we define the tent code, describe the issue with a standard rounded-off calculation, and state the precise results of the paper. Section 3 imports some basic technology from [25]. Section 4 gives a simple algorithm for a valid calculation, as a preliminary step. Section 5 gives a smoothed analysis for the decision problem.

2 Issues and Results

2.1 Tent code

We define a tent-encoding function γ^n_μ : [0,1) → {0,1}^n (or simply γ^n) as follows. For convenience, let x_i = f^i(x) for i = 1, 2, … for a given x ∈ [0,1), where f^i denotes the i-times iterated tent map, formally given by f^i(x) = f(f^{i−1}(x)) recursively. Then, the tent code γ^n(x) = b_1 ⋯ b_n of x ∈ [0,1) is a bit sequence, where

b_1 = { 0 : x < 1/2;  1 : x ≥ 1/2 },   (2)

and the remaining bits are recursively given, for i = 1, 2, …, n−1, by

b_{i+1} = { b_i : x_i < 1/2;  b̄_i : x_i > 1/2;  1 : x_i = 1/2 },   (3)

where b̄ denotes the bit inversion of b, i.e., 0̄ = 1 and 1̄ = 0. We remark that the definition (3) is rephrased as

b_{i+1} = { 0 : [b_i = 0] ∧ [x_i < 1/2];  1 : [b_i = 0] ∧ [x_i ≥ 1/2];  1 : [b_i = 1] ∧ [x_i ≤ 1/2];  0 : [b_i = 1] ∧ [x_i > 1/2] }.   (4)
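As a quick sanity check of (2)–(4), the following sketch of ours (any rational μ ∈ (1,2) and x ∈ [0,1) work; μ = 8/5 and x = 1/5 are arbitrary) implements the encoder literally with exact arithmetic; the partial sum (μ−1) Σ_{i≤n} b_i μ^{−i} then recovers x up to the tail μ^{−n}, in line with Proposition 2.1 below:

```python
from fractions import Fraction

def tent_code(x, mu, n):
    """Tent code b_1 ... b_n of x per (2)-(4): b_1 = [x >= 1/2]; afterwards the
    previous bit is kept if x_i < 1/2, flipped if x_i > 1/2, and set to 1 on a tie."""
    half = Fraction(1, 2)
    bits = [1 if x >= half else 0]
    for _ in range(n - 1):
        x = mu * min(x, 1 - x)                       # x_i = f^i(x), exactly
        bits.append(bits[-1] if x < half else (1 - bits[-1] if x > half else 1))
    return bits

mu, n = Fraction(8, 5), 40
x = Fraction(1, 5)
bits = tent_code(x, mu, n)
approx = (mu - 1) * sum(b * mu**-(i + 1) for i, b in enumerate(bits))
print(bits[:5], abs(approx - x) <= mu**-n)           # [0, 0, 1, 0, 0] True
```

The tail bound is immediate: the truncated terms contribute at most (μ−1) Σ_{i>n} μ^{−i} = μ^{−n}.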
Proposition 2.1.

Suppose γ^∞_μ(x) = b_1 b_2 ⋯ for x ∈ [0,1). Then, (μ − 1) Σ_{i=1}^∞ b_i μ^{−i} = x.

See [25] for a proof. The proofs are not difficult but lengthy. Thanks to this slightly artificial definition (3), we obtain the following two further facts.

Proposition 2.2.

For any x, x′ ∈ [0,1),

x ≤ x′ ⇒ γ^n_μ(x) ⪯ γ^n_μ(x′)

holds, where ⪯ denotes the lexicographic order; that is, unless γ^n(x) = γ^n(x′), we have b_{i*} = 0 and b′_{i*} = 1 at i* = min{ j ∈ {1, 2, …} | b_j ≠ b′_j }, for γ^n(x) = b_1 b_2 ⋯ b_n and γ^n(x′) = b′_1 b′_2 ⋯ b′_n.

Proposition 2.3.

The n-th iterated tent code is right continuous, i.e., γ^n_μ(x) = γ^n_μ(x + 0).

These two facts make the arguments simple. The following technical lemmas are useful to prove Propositions 2.2 and 2.3, as well as the arguments in Sections 4 and 5.

Lemma 2.4.

Suppose x, x′ ∈ [0,1) satisfy x < x′. If γ^n(x) = γ^n(x′) then

x_n < x′_n if b_n = b′_n = 0,  and  x_n > x′_n if b_n = b′_n = 1   (5)

holds.

Lemma 2.5.

If γ^n(x) = γ^n(x′) for x, x′ ∈ [0,1), then |f^n(x) − f^n(x′)| = μ^n |x − x′|.

Let L_{n,μ} (or simply L_n) denote the set of all n-bit tent codes, i.e.,

L_n = { γ^n_μ(x) ∈ {0,1}^n | x ∈ [0,1) },   (6)

and we call L_n the tent language (of μ ∈ (1,2)). Note that L_n ⊊ {0,1}^n for μ ∈ (1,2). We say b_n ∈ {0,1}^n is a valid tent code if b_n ∈ L_n.
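For intuition about the gap between L_n and {0,1}^n, here is a rough experiment of ours (not an exact enumeration of L_n; the grid size 4096 and n = 8 are arbitrary): sampling tent codes over a fine grid of initial values yields far fewer than 2^n distinct words, in line with a growth rate of order μ^n.

```python
from fractions import Fraction

def tent_code(x, mu, n):
    """Tent code of x per (2)-(4), with exact rational arithmetic."""
    half = Fraction(1, 2)
    bits = [1 if x >= half else 0]
    for _ in range(n - 1):
        x = mu * min(x, 1 - x)
        bits.append(bits[-1] if x < half else (1 - bits[-1] if x > half else 1))
    return tuple(bits)

mu, n = Fraction(8, 5), 8
words = {tent_code(Fraction(k, 4096), mu, n) for k in range(4096)}
print(len(words), "of", 2**n)    # far fewer distinct 8-bit words than 2^8 = 256
```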

2.2 What is the issue?

A natural problem for the tent code could be calculation: given x ∈ [0,1) and n ∈ Z_{>0}, find γ^n(x). By a standard argument (see e.g., [19]), it requires Θ(n log μ) working space to compute f^n(x) in general. Thus, it is natural in practice to employ rounding-off, as in Algorithm 1.

Algorithm 1 Rounding-off could output an invalid code
Require: x ∈ [0,1]
Ensure: a bit sequence b_1 ⋯ b_n /* b_1 ⋯ b_n ∉ L_n in bad cases */
1: set int κ to a large constant
2: rational z ← ⟨x⟩_κ /* round off to κ bits (or digits) */
3: bit b ← 0
4: for i = 1 to n do
5:   if b = 0 then
6:     if z < 1/2 then b ← 0, else b ← 1 /* recall (4) */
7:   else
8:     if z > 1/2 then b ← 0, else b ← 1 /* recall (4) */
9:   end if
10:  return b /* as b_i */
11:  z ← ⟨f(z)⟩_κ
12: end for

Due to the sensitivity to initial conditions of a chaotic sequence, we cannot expect Algorithm 1 to output b_1 ⋯ b_n = γ^n(x) exactly, but we hope that it would output some approximation. It could be a natural question whether the output satisfies b_1 ⋯ b_n ∈ L_n. The following proposition means that Algorithm 1 could output an impossible tent code.

Proposition 2.6.

Algorithm 1 could output b_1 ⋯ b_n ∉ L_n.

Proof.

Let μ = 1.62, which is slightly greater than the golden ratio (1 + √5)/2 ≃ 1.61803…, where the golden ratio is a solution of μ² − μ − 1 = 0. By our calculation, γ^15_μ(1/2) = 100 011 011 011 011, which is, of course, a word in L_15. If we set ⟨z⟩_8 = ⌊2^8 z⌋ / 2^8, meaning that we round down to the nearest multiple of 2^{−8}, then Algorithm 1 outputs 100 011 011 011 00, which is not a word of L_14. Similarly, for the same μ, if we set ⟨z⟩ = ⌊1000 z⌋ / 1000, meaning that we round down to the nearest multiple of 10^{−3}, then Algorithm 1 outputs 100 011 011 011 010, which is not a word of L_15. ∎
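The first computation in the proof can be replayed exactly (a sketch of ours, implementing Algorithm 1 with ⟨z⟩ = ⌊2⁸z⌋/2⁸, μ = 81/50 and x = 1/2): the rounded run agrees with the exact code for 13 bits and then emits a wrong 14th bit.

```python
from fractions import Fraction

MU, HALF = Fraction(81, 50), Fraction(1, 2)

def f(x):
    """One exact tent-map step."""
    return MU * min(x, 1 - x)

def exact_code(x, n):
    """gamma^n(x) per (2)-(4), with exact arithmetic."""
    bits = [1 if x >= HALF else 0]
    for _ in range(n - 1):
        x = f(x)
        bits.append(bits[-1] if x < HALF else (1 - bits[-1] if x > HALF else 1))
    return "".join(map(str, bits))

def rounded_code(x, n, kappa=8):
    """Algorithm 1 with <z>_kappa = floor(2^kappa z) / 2^kappa."""
    rnd = lambda t: Fraction(int(2**kappa * t), 2**kappa)
    z, b, out = rnd(x), 0, []
    for _ in range(n):
        b = (0 if z < HALF else 1) if b == 0 else (0 if z > HALF else 1)
        out.append(b)
        z = rnd(f(z))
    return "".join(map(str, out))

exact = exact_code(HALF, 15)
rough = rounded_code(HALF, 14)
print(exact)   # 100011011011011, as stated in the proof
print(rough)   # 10001101101100 : the 14th bit is already wrong
```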

Proposition 2.6 might not be surprising. Can we correct Algorithm 1 so as to output b_1 ⋯ b_n ∈ L_n? Yes, it is possible if we set κ = Θ(n), that is, sufficiently precise; clearly, however, this requires Θ(n) working space. It is then valid to ask whether it is possible to generate/recognize b_1 ⋯ b_n ∈ L_n in o(n) space.

2.3 Problems and Results

The following problems could be natural in the sense of computational complexity of tent codes.

Problem 1 (Decision).

Given a real x ∈ [0,1) and a bit sequence b_n ∈ {0,1}^n, decide if b_n = γ^n(x).

Problem 2 (Calculation).

Given a real x ∈ [0,1) and a positive integer n, find γ^n(x) ∈ {0,1}^n.

Recalling Proposition 2.6, it seems difficult to solve Problems 1 and 2 exactly in o(n) space. We therefore consider computing a valid tent code around x. For 0 < ε ≪ 1, we define

L_n(x, ε) = { γ^n(x′) | x − ε ≤ x′ ≤ x + ε }

for x ∈ [0,1). It is equivalently rephrased as

L_n(x, ε) = { b ∈ L_n | γ^n(x − ε) ⪯ b ⪯ γ^n(x + ε) }   (7)

by Propositions 2.1 and 2.2. For the calculation Problem 2, we establish the following simple theorem.

Theorem 2.7 (Approximate calculation).

Let μ ∈ (1,2) be rational, given by an irreducible fraction μ = c/d, and let 0 < ε < 1/4. Given a real x ∈ [0,1) (by a read-only tape of infinite length), Algorithm 2 described in Section 4 outputs b_n ∈ L_n(x, ε). The space complexity of Algorithm 2 is O(lg² ε^{−1} lg d / lg² μ + lg n).

Then, the following theorem for Problem 1 is the main result of the paper.

Theorem 2.8 (Decision for ϵ\epsilon-perturbed input).

Let μ ∈ (1,2) be rational, given by an irreducible fraction μ = c/d, and let 0 < ε < 1/4. Given a bit sequence b_n ∈ {0,1}^n and a real x ∈ [0,1) (by a read-only tape of infinite length), Algorithm 3 described in Section 5 accepts it if b_n ∈ L_n(x, ε) and rejects it if b_n ∉ L_n(x, 2ε). If an (ε-perturbed) instance b_n is given by b_n = γ^n(X) for X ∈ [x − ε, x + ε] uniformly at random, then the space complexity of Algorithm 3 is O(lg² n / lg³ d + lg ε^{−1} / lg d) in expectation.

As stated in the theorems, this paper assumes μ rational mainly for the reason of Turing computability, but this is not essential (we can establish similar, though slightly weaker, arguments for any real μ ∈ (1,2); see also [25]). Instead, we allow an input instance x ∈ [0,1) to be a real, given by a read-only tape of infinite length (we do not use this fact directly in this paper, but it might be worth mentioning for some conceivable variants, in the context of smoothed analysis, that draw X ∈ [x − ε, x + ε] uniformly at random). We remark that the space complexity of Theorem 2.7 is optimal in terms of n.

Proof strategy of the theorems.

For the proofs, we introduce the "automaton" for L_n given by [25] in Section 3. Once we have the automaton, Theorem 2.7 is not difficult, and we prove it in Section 4. Algorithm 2 is relatively simple and its space complexity is trivial; thus the correctness is the issue, but it is also not very difficult. We then give Algorithm 3 and prove Theorem 2.8 in Section 5. Its correctness is essentially the same as that of Algorithm 2; the analysis of the space complexity is the major issue.

3 Underlying Technology

This section briefly introduces some fundamental technology for the analyses in Sections 4 and 5, including the automaton and the Markov chain for L_n, according to [25].

The key idea of a space-efficient computation of a tent code is a representation of an equivalence class with respect to γ^n. For b_n ∈ L_n, let

T(b_n) := { f^n(x) | γ^n(x) = b_n };   (8)

we call T(b_n) the segment-type of b_n. In fact, T(b_n) is a continuous interval, where one end is open and the other is closed. By some straightforward argument with (4), we get the following recursive formula.

Lemma 3.1 ([25]).

Let x ∈ [0,1), and let γ^n(x) = b_1 ⋯ b_n.

(1) Suppose T^i(x) = [v, u) (v < u).

  • Case 1-1: v < 1/2 < u.
    – Case 1-1-1: if f^i(x) < 1/2, then T^{i+1}(x) = [f(v), f(1/2)), and b_{i+1} = 0.
    – Case 1-1-2: if f^i(x) ≥ 1/2, then T^{i+1}(x) = (f(u), f(1/2)], and b_{i+1} = 1.
  • Case 1-2: u ≤ 1/2. Then T^{i+1}(x) = [f(v), f(u)), and b_{i+1} = 0.
  • Case 1-3: v ≥ 1/2. Then T^{i+1}(x) = (f(u), f(v)], and b_{i+1} = 1.

(2) Similarly, suppose T^i(x) = (v, u] (v < u).

  • Case 2-1: v < 1/2 < u.
    – Case 2-1-1: if f^i(x) ≤ 1/2, then T^{i+1}(x) = (f(v), f(1/2)], and b_{i+1} = 1.
    – Case 2-1-2: if f^i(x) > 1/2, then T^{i+1}(x) = [f(u), f(1/2)), and b_{i+1} = 0.
  • Case 2-2: u ≤ 1/2. Then T^{i+1}(x) = (f(v), f(u)], and b_{i+1} = 1.
  • Case 2-3: v ≥ 1/2. Then T^{i+1}(x) = [f(u), f(v)), and b_{i+1} = 0.
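Lemma 3.1 immediately yields a membership test for L_n: track the segment-type interval and reject as soon as a bit violates a forced case. Below is a sketch of ours (the representation of an interval as (v, u, left), with left = True meaning [v,u), is our own choice; μ = 81/50 as in Proposition 2.6, whose two words serve as test inputs).

```python
from fractions import Fraction

HALF = Fraction(1, 2)

def is_valid(bits, mu):
    """Membership test for L_n: maintain the segment-type T(b_1..b_i) through the
    cases of Lemma 3.1; a word is rejected exactly when a forced bit is violated.
    The interval is (v, u) plus a flag left: True means [v,u), False means (v,u]."""
    f = lambda t: mu * min(t, 1 - t)
    v, u, left = Fraction(0), mu / 2, bits[0] == 0   # T("0")=[0,mu/2), T("1")=(0,mu/2]
    for b in bits[1:]:
        if v < HALF < u:                  # Cases 1-1 / 2-1: both bits are possible
            if (b == 0) == left:          # lower half of the interval survives
                v, u = f(v), f(HALF)
            else:                         # upper half survives; orientation flips
                v, u, left = f(u), f(HALF), not left
        elif u <= HALF:                   # Cases 1-2 / 2-2: the bit is forced
            if (b == 0) != left:
                return False
            v, u = f(v), f(u)
        else:                             # v >= 1/2; Cases 1-3 / 2-3: forced, flips
            if (b == 1) != left:
                return False
            v, u, left = f(u), f(v), not left
    return True

mu = Fraction(81, 50)
good = [int(c) for c in "100011011011011"]   # gamma^15(1/2), from Proposition 2.6
bad  = [int(c) for c in "10001101101100"]    # the rounded output of Algorithm 1
print(is_valid(good, mu), is_valid(bad, mu))
```

The one-sided cases are exactly where an invalid word is caught: here the bad word is rejected at its 14th bit, where the current interval lies entirely above 1/2 and forces the opposite bit.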

Let

𝒯_n = { T(b) ⊆ [0,1) | b ∈ L_n }   (9)

denote the set of segment-types (of L_n). It is easy to observe that |L_n| can grow exponentially in n, while the following theorem implies that |𝒯_n| ≤ 2n.

Theorem 3.2 ([25]).

Let μ ∈ (1,2). Let c_i = γ^i(1/2), and let

I_i = T(c_i)  and  Ī_i = T(c̄_i)   (10)

for i = 1, 2, …. Then,

𝒯_n = ∪_{i=1}^{n*} { I_i, Ī_i }

for n ≥ 1, where n* = min({ i ∈ {1, 2, …, n−1} | I_{i+1} ∈ 𝒯_i } ∪ {n}).

The following lemma, derived from Lemma 3.1, gives an explicit recursive formula for I_i and Ī_i.

Lemma 3.3 ([25]).

I_i and Ī_i given by (10) are recursively calculated for i = 1, 2, … as follows:

I_1 = (0, μ/2]  and  Ī_1 = [0, μ/2).

(Note that c_1 = γ¹(1/2) = 1 by (2), so I_1 = T(1) = (0, μ/2]; cf. the convention for c[k] in Section 4.) For i = 2, 3, …,

if I_{i−1} = [v, u):  I_i = [f(v), f(1/2)) if v < 1/2 < u;  I_i = [f(v), f(u)) if u ≤ 1/2;  I_i = (f(u), f(v)] if v ≥ 1/2;
if I_{i−1} = (v, u]:  I_i = [f(u), f(1/2)) if v < 1/2 < u;  I_i = (f(v), f(u)] if u ≤ 1/2;  I_i = [f(u), f(v)) if v ≥ 1/2   (11)

holds. Then,

Ī_i = (v, u] if I_i = [v, u),  and  Ī_i = [v, u) if I_i = (v, u]

holds.

For convenience, we define the level of J ∈ 𝒯_n by

L(J) = k   (12)

if J = I_k or Ī_k. Notice that Theorem 3.2 implies that the level of T(b_k) for b_k ∈ L_k may be strictly less than k. In fact, this happens, which provides a space-efficient "automaton".
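This level drop can be watched directly. The sketch below (our experiment; μ = 8/5, x = 1/5 and n = 30 are arbitrary choices) builds the table I_k = T(γ^k(1/2)) and Ī_k by stepping along the kneading word, then reads the code of x and matches every prefix segment-type against the table; each state matches some level k ≤ i (Theorem 3.2), and for this x the level falls back to 1 already at the second bit.

```python
from fractions import Fraction

HALF = Fraction(1, 2)

def make_stepper(mu):
    """One Lemma 3.1 transition on states (v, u, left); left=True means [v,u)."""
    f = lambda t: mu * min(t, 1 - t)
    def step(state, b):
        v, u, left = state
        if v < HALF < u:                              # Cases 1-1 / 2-1
            if (b == 0) == left:
                return (f(v), f(HALF), left)
            return (f(u), f(HALF), not left)
        if u <= HALF:                                 # Cases 1-2 / 2-2 (bit forced)
            return (f(v), f(u), left) if (b == 0) == left else None
        return (f(u), f(v), not left) if (b == 1) == left else None  # Cases 1-3 / 2-3
    return step

def tent_code(x, mu, n):
    """gamma^n(x) per (2)-(4)."""
    bits = [1 if x >= HALF else 0]
    for _ in range(n - 1):
        x = mu * min(x, 1 - x)
        bits.append(bits[-1] if x < HALF else (1 - bits[-1] if x > HALF else 1))
    return bits

mu, n = Fraction(8, 5), 30
step = make_stepper(mu)

# Table of I_k (and its mirror bar-I_k), obtained by following the kneading word.
kneading = tent_code(HALF, mu, n)
level = {}
state = (Fraction(0), mu / 2, False)                  # I_1 = T(1) = (0, mu/2]
for k in range(1, n + 1):
    v, u, left = state
    level.setdefault((v, u, left), k)                 # I_k
    level.setdefault((v, u, not left), k)             # bar-I_k
    if k < n:
        state = step(state, kneading[k])

# Read the code of x and record the level of every prefix segment-type.
bits = tent_code(Fraction(1, 5), mu, n)
state = (Fraction(0), mu / 2, bits[0] == 0)
lvls = [level[state]]
for b in bits[1:]:
    state = step(state, b)
    lvls.append(level[state])
print(lvls)                                           # starts 1, 1, ...: a drop at i = 2
```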

Figure 2: Transition diagram over 𝒯_n for μ = 1.6.

State transit machine (“automaton”).

By Theorem 3.2, we can design a space-efficient state transition machine according to Lemma 3.1 to recognize L_n. (Precisely, we also need a "counter" for the length n of the string; note, however, that our main goal is not to design an automaton for L_n: our main targets, Theorems 2.7 and 2.8, assume a standard Turing machine, on which we can obviously count the length n of a sequence in O(log n) space.) We define the set of states by Q_n = {q_0} ∪ {∅} ∪ 𝒯_n, where q_0 is the initial state and ∅ denotes the unique reject state. Let δ : Q_{n−1} × {0,1} → Q_n denote the state transition function, defined as follows. Let δ(q_0, 1) = I_1 and δ(q_0, 0) = Ī_1. According to Lemma 3.1, we appropriately define

δ(J, b) = J′   (13)

for J ∈ 𝒯_{n−1} and b ∈ {0,1}, as far as J and b are consistent. If the pair J and b is inconsistent, we define δ(J, b) = ∅; precisely, the cases are

J = [v, u) and v ≥ 1/2 and b = 0 (cf. Case 1-3),
J = [v, u) and u ≤ 1/2 and b = 1 (cf. Case 1-2),
J = (v, u] and u ≤ 1/2 and b = 0 (cf. Case 2-2),
J = (v, u] and v ≥ 1/2 and b = 1 (cf. Case 2-3),

where v = inf J and u = sup J.

Lemma 3.4 ([25]).

Let μ ∈ (1,2) be a rational given by an irreducible fraction μ = c/d. For any k ∈ Z_{>0}, the state transition machine on Q_k is represented by O(k² lg d) bits.

We will use the following two technical lemmas about the transition function in Sections 4 and 5.

Lemma 3.5 ([25]).

Suppose for μ ∈ (1,2) that f^i_μ(1/2) ≠ 1/2 holds for every i = 1, …, n−1. Then,

δ(I_n, b) ∈ { I_{n+1} } ∪ { Ī_{k+1} | 1 ≤ k ≤ n/2 } ∪ {∅}

holds for b = 0, 1.

Lemma 3.6 ([25]).

Suppose for μ ∈ (1,2) that f^i_μ(1/2) ≠ 1/2 holds for every i = 1, …, 2n−1. Then, there exist k ∈ {n+1, …, 2n} and b ∈ {0,1} such that

δ(I_k, b) ∈ { Ī_{k′+1} | 1 ≤ k′ ≤ k/2 }

holds.

Roughly speaking, Lemma 3.5 implies that in a single transition step the level either increases by one or decreases to (almost) half. Furthermore, Lemma 3.6 implies that there is at least one way for the level to decrease during steps n, …, 2n.

Markov model.

Furthermore, the state transitions preserve the uniform measure, over [0,1)[0,1) in the beginning, since the tent map is piecewise linear.

Lemma 3.7 ([25]).

Let X be a random variable drawn from [0,1) uniformly at random, and let b_n ∈ L_n. Then,

Pr[ B_{n+1} = b | γ^n(X) = b_n ] = |T(b_n b)| / ( |T(b_n 0)| + |T(b_n 1)| )

holds for b ∈ {0,1}, where we let |T(b_n b)| = 0 if b_n b ∉ L_{n+1}.

Let D_{n,μ} (or simply D_n) denote the probability distribution over L_n that γ^n(X) follows when X is uniformly distributed over [0,1); i.e., D_n represents the probability of each b_n ∈ L_n appearing when the initial condition x is given uniformly at random.

Theorem 3.8 ([25]).

Let μ ∈ (1,2) be a rational given by an irreducible fraction μ = c/d. Then, it is possible to generate B_n ∈ L_n according to D_n in O(lg² n lg³ d / lg⁴ μ) space in expectation (as well as with high probability).

Thus, we remark that by Theorem 3.8, the tent language L_n is recognized in O(lg² n lg³ d / lg⁴ μ) space on average over the initial condition x ∈ [0,1).
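Lemma 3.7 suggests a direct way to sample D_n (a sketch of ours, not the space-efficient procedure behind Theorem 3.8: we store the whole segment-type interval, so the rationals grow with n, but it makes the transition probabilities explicit): at each step, the two children of the current segment-type are weighted by their lengths.

```python
import random
from fractions import Fraction

HALF = Fraction(1, 2)

def sample_code(mu, n, rng):
    """Draw B_n ~ D_n, i.e., distributed as gamma^n(X) for X uniform on [0,1).
    We track T(b_1..b_i) as (v, u, left) and pick each next bit with probability
    proportional to the length of the corresponding child interval (Lemma 3.7)."""
    f = lambda t: mu * min(t, 1 - t)
    b = 1 if rng.random() < 0.5 else 0          # |[0,1/2)| = |[1/2,1)| = 1/2
    v, u, left, bits = Fraction(0), mu / 2, (b == 0), [b]
    for _ in range(n - 1):
        if v < HALF < u:
            lo = f(HALF) - f(v)                 # length of the lower child
            up = f(HALF) - f(u)                 # length of the upper child
            if rng.random() < lo / (lo + up):
                b = bits[-1]                    # x_i < 1/2 keeps the bit
                v, u = f(v), f(HALF)
            else:
                b = 1 - bits[-1]                # x_i > 1/2 flips the bit
                v, u, left = f(u), f(HALF), not left
        elif u <= HALF:
            b = bits[-1]                        # the bit is forced (Cases 1-2/2-2)
            v, u = f(v), f(u)
        else:
            b = 1 - bits[-1]                    # forced with a flip (Cases 1-3/2-3)
            v, u, left = f(u), f(v), not left
        bits.append(b)
    return bits

code = sample_code(Fraction(8, 5), 20, random.Random(0))
print(code)     # a 20-bit tent code, valid by construction
```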

4 Calculation in “Constant” Space

Theorem 2.7 is easy once the argument in Section 3 is accepted. Algorithm 2 shows the approximate calculation of γ^n(x), such that the output is a valid tent code (recall the issue in Section 2.2). Roughly speaking, Algorithm 2 calculates with rounding-off to κ = O(log ε^{−1}) bits for the first κ iterations (lines 5–10), and then traces the automaton within level 2κ after the κ-th iteration (lines 11–20). The algorithm traces a finite automaton, so the desired space complexity is almost trivial. The main issue is the correctness; it is also trivial that the output sequence b_n ∈ {0,1}^n is a valid tent code, so our goal is to prove γ^n(x − ε) ⪯ b_n ⪯ γ^n(x + ε). The trick is based on the fact that the tent map is an expanding map, which will be used again in the smoothed analysis in the next section.

We now explain the details of Algorithm 2. Let ⟨x⟩_k denote a binary representation obtained by rounding a real x to the nearest 1/2^{k+1}, where the following argument only requires |⟨x⟩_k − x| ≤ 1/2^k, meaning that the choice between rounding up and rounding down is not essential. We naturally assume that ⟨x⟩_k for x ∈ [0,1) is represented by k bits, meaning that the space complexity of ⟨x⟩_k is O(k).

In the algorithm, rationals v[k] and u[k] denote inf I_k and sup I_k, respectively, for k = 1, 2, …. For descriptive purposes, v[0] and u[0] correspond to q_0, and v[−1] and u[−1] correspond to the reject state ∅ in Q_n. The single bit c[k] denotes c_k for c_n = c_1 ⋯ c_n = γ^n(1/2) (recall Theorem 3.2). Thus, I_k = [v[k], u[k]) and Ī_k = (v[k], u[k]] if c[k] = 0; otherwise, I_k = (v[k], u[k]] and Ī_k = [v[k], u[k]) (see Section 3). The integer δ(l, b) = l′ represents the transition δ(I_l, b) = J′, where J′ = I_{l′} or Ī_{l′}, given by (13) in Section 3. Notice that if δ(I_l, b) = J′ then δ(Ī_l, b̄) = J̄′ holds [25]. The pair l and b represents Z_i = I_l if b = c[l]; otherwise, i.e., if b̄ = c[l], Z_i = Ī_l, at the i-th iteration (for i = 1, …, n). See Section A for the details of the subprocesses, Algorithms 4 and 5.

Algorithm 2 Valid calculation with "constant" space (for μ, ε)
Require: a real x ∈ [0,1]
Ensure: a bit sequence b_1 ⋯ b_n ∈ L_n(x, ε)
1: int κ ← ⌈−3 lg ε / lg μ⌉
2: compute rational v[k], rational u[k], bit c[k], int δ[k,0] and int δ[k,1] for k = −1 to 2κ by Algorithm 4
3: int l ← 0, bit b ← 1
4: rational z ← ⟨x⟩_κ
5: for i = 1 to κ do
6:   compute bit b′ based on b and z by (4)
7:   return b′ /* as b_i */
8:   update l based on b and b′, update b (by Algorithm 5)
9:   z ← ⟨f(z)⟩_κ
10: end for
11: for i = κ + 1 to n do
12:   b′ ← argmin{ δ(l, b″) | b″ ∈ {0,1}, δ(l, b″) > 0 }
13:   if b = c[l] then
14:     b ← b′
15:   else
16:     b ← b̄′
17:   end if
18:   return b /* as b_i */
19:   l ← δ[l, b′]
20: end for
Theorem 4.1 (Theorem 2.7).

Given a real x ∈ [0,1), Algorithm 2 outputs b_n ∈ L_n(x, ε). The space complexity of Algorithm 2 is O(lg² ε^{−1} lg d / lg² μ + lg n).

Proof.

To begin with, we remark that Algorithm 2 constructs the transition diagram only up to level 2κ. Nevertheless, Algorithm 2 correctly conducts the for-loop in lines 11–20, meaning that all transitions at line 19 remain valid during the loop. This is due to Lemma 3.6, which claims that there exists at least one l ∈ {κ+1, …, 2κ} such that δ(l, b) ≤ κ (and also δ(l, b) > 0), meaning that it is a reverse edge. It is then easy, from Lemma 3.4, to see that the space complexity of Algorithm 2 is O(κ² lg d) = O(lg² ε^{−1} lg d / lg² μ), except for the O(lg n) space of the counter i (and n) at line 11.

Then, we prove b1bnn(x,ϵ)b_{1}\cdots b_{n}\in{\cal L}_{n}(x,\epsilon). Since the algorithm follows the transition diagram of n{\cal L}_{n} (recall Section 3), it is easy to see that b1bnnb_{1}\cdots b_{n}\in{\cal L}_{n}, and hence we only need to prove

γn(xϵ)b1bnγn(x+ϵ)\displaystyle\gamma^{n}(x-\epsilon)\preceq b_{1}\cdots b_{n}\preceq\gamma^{n}(x+\epsilon) (14)

holds. We only prove $\gamma^{n}(x-\epsilon)\preceq b_{1}\cdots b_{n}$ here; the proof of $b_{1}\cdots b_{n}\preceq\gamma^{n}(x+\epsilon)$ is essentially the same. The proof is similar to that of Proposition 2.2, where the major issue is whether the rounding $\langle z\rangle_{\kappa}$ preserves the ``ordering,'' as the exact calculation does by Lemma 2.4.

For convenience, let ziz_{i} denote the value of zz in the ii-th iteration of the algorithm, i.e., zi+1=f(zi)κz_{i+1}=\langle f(z_{i})\rangle_{\kappa}. Let x=xϵ/2x^{-}=x-\epsilon/2 and let x=xϵx^{--}=x-\epsilon. The proof consists of two parts:

γκ(x)\displaystyle\gamma^{\kappa}(x^{--}) γκ(x)and\displaystyle\prec\gamma^{\kappa}(x^{-})\hskip 20.00003pt\mbox{and} (15)
γκ(x)\displaystyle\gamma^{\kappa}(x^{-}) b1bκ.\displaystyle\preceq b_{1}\cdots b_{\kappa}. (16)

For the claim (15), notice that $\gamma^{\kappa}(x^{--})\preceq\gamma^{\kappa}(x^{-})$ by Lemma 2.4. If $\gamma^{\kappa}(x^{--})=\gamma^{\kappa}(x^{-})$, Lemma 2.5 implies that $|f^{\kappa}(x^{--})-f^{\kappa}(x^{-})|=\frac{\epsilon}{2}\mu^{\kappa}\geq\frac{\epsilon}{2}\mu^{-3\lg\epsilon/\lg\mu}=\frac{1}{2\epsilon^{2}}>1$, which contradicts $0\leq f^{\kappa}(x^{\prime})\leq 1$ for any $x^{\prime}\in[0,1)$. This proves (15).

For (16), similar to the proof of Proposition 2.2 based on Lemma 2.4, we claim that if $\gamma^{i}(x^{-})=b_{1}\cdots b_{i}$ then

{xi<ziif bi=0xi>ziif bi=1\displaystyle\begin{cases}x^{-}_{i}<z_{i}&\mbox{if $b_{i}=0$, }\\ x^{-}_{i}>z_{i}&\mbox{if $b_{i}=1$}\end{cases} (17)

hold. The basic argument is essentially the same as the proof of Lemma 2.4, which is based on the argument of segment type (see [25]), and here it is enough to check

if $|x^{-}_{i}-z_{i}|\geq\frac{\epsilon}{2}$ then $|x^{-}_{i+1}-z_{i+1}|\geq\frac{\epsilon}{2}$, as far as $\gamma^{i+1}(x^{-})=\gamma^{i+1}(z)$ (18)

meaning that the rounding-off does not disrupt the order. Notice that

|xi+1zi+1|\displaystyle|x^{-}_{i+1}-z_{i+1}| =|f(xi)f(zi)κ||f(xi)f(zi)||f(zi)κf(zi)|\displaystyle=|f(x^{-}_{i})-\langle f(z_{i})\rangle_{\kappa}|\geq|f(x^{-}_{i})-f(z_{i})|-|\langle f(z_{i})\rangle_{\kappa}-f(z_{i})| (19)

holds, where the last inequality follows from the triangle inequality $|f(x^{-}_{i})-\langle f(z_{i})\rangle_{\kappa}|+|\langle f(z_{i})\rangle_{\kappa}-f(z_{i})|\geq|f(x^{-}_{i})-f(z_{i})|$. Note that $|f(x^{-}_{i})-f(z_{i})|=\mu|x^{-}_{i}-z_{i}|\geq\mu\frac{\epsilon}{2}$ holds under the hypothesis $\gamma^{i+1}(x^{-})=\gamma^{i+1}(z)$, since $x^{-}_{i}$ and $z_{i}$ then lie on the same linear branch of $f$. We also remark that $|\langle f(z_{i})\rangle_{\kappa}-f(z_{i})|\leq\frac{1}{2^{\kappa}}$ by the definition of $\langle\cdot\rangle_{\kappa}$. Furthermore, we claim $\frac{1}{2^{\kappa}}\leq(\mu-1)\frac{\epsilon}{2}$ by $\kappa\geq-3\lg\epsilon/\lg\mu$: note that $1/\lg\mu\geq 0.7-\lg(\mu-1)$ holds for $1<\mu<2$, and then $\kappa\geq-3\lg\epsilon\,(0.7-\lg(\mu-1))=-2.1\lg\epsilon+3\lg\epsilon\lg(\mu-1)\geq-\lg\epsilon^{2}-\lg(\mu-1)\geq-\lg\frac{\epsilon}{2}-\lg(\mu-1)$ holds, where we use $\epsilon<1/4$; this implies the desired claim $\frac{1}{2^{\kappa}}\leq(\mu-1)\frac{\epsilon}{2}$. Then,

(19)μϵ212κμϵ2(μ1)ϵ2=ϵ2,\displaystyle\eqref{eq:20231029a}\geq\mu\tfrac{\epsilon}{2}-\tfrac{1}{2^{\kappa}}\geq\mu\tfrac{\epsilon}{2}-(\mu-1)\tfrac{\epsilon}{2}=\tfrac{\epsilon}{2}, (20)

and we obtain (18), and hence (16) by Proposition 2.2. By (15) and (16), $\gamma^{\kappa}(x-\epsilon)\preceq b_{1}\cdots b_{\kappa}$ follows immediately. Now, we obtain (14) by Proposition 2.2. ∎
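The choice $\kappa=\lceil-3\lg\epsilon/\lg\mu\rceil$ at line 1 is exactly what makes the rounding error $2^{-\kappa}$ absorbable by the expansion factor $\mu$ in (20). As a numerical illustration (not part of the proof), the key estimate $2^{-\kappa}\leq(\mu-1)\frac{\epsilon}{2}$ can be spot-checked over a grid of parameters:

```python
import math

def kappa(eps, mu):
    # kappa as set in line 1 of Algorithm 2
    return math.ceil(-3 * math.log2(eps) / math.log2(mu))

# verify 2**-kappa <= (mu - 1) * eps / 2, the key estimate behind (20),
# for several mu in (1, 2) and eps < 1/4
for mu in (1.1, 1.5, 1.9):
    for eps in (1 / 8, 1 / 64, 1 / 1024):
        k = kappa(eps, mu)
        assert 2.0 ** (-k) <= (mu - 1) * eps / 2
```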

5 Smoothed Analysis for Decision

Now, we are concerned with the decision problem, Problem 1. Algorithm 3 efficiently solves the problem with an $\epsilon$-perturbed input, proving Theorem 2.8. Roughly speaking, Algorithm 3 checks whether $\mathbf{b}_{n}\in{\cal L}_{n}$ at line 24, and checks whether $\gamma^{n}(x-\epsilon)\preceq\mathbf{b}_{n}\preceq\gamma^{n}(x+\epsilon)$ at lines 6–22. Lines 25–27 defer the update of those parameters in order to save space.
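Before the space-efficient algorithm, it may help to see what is being decided. The following Python sketch is a space-inefficient reference decision: it tracks, in exact rational arithmetic, the interval $f^{i}(\{y\in[x-\epsilon,x+\epsilon]\mid\gamma^{i}(y)\mbox{ has prefix }b_{1}\cdots b_{i}\})$ and rejects once it becomes empty. It uses the plain branch convention (bit 0 on the left half) rather than the paper's rule (4), and it ignores the rounding $\langle\cdot\rangle_{\kappa}$ and the transition diagram; its numerators and denominators grow with $i$, which is precisely the cost Algorithm 3 avoids.

```python
from fractions import Fraction

def tent(x, mu):
    # the tent map f_mu on [0, 1]
    return mu * x if x <= Fraction(1, 2) else mu * (1 - x)

def decide_exact(bits, x, eps, mu):
    # Track the interval f^i of the set of points in [x-eps, x+eps] whose
    # tent code starts with b_1 ... b_i; reject when it becomes empty.
    half = Fraction(1, 2)
    lo = max(x - eps, Fraction(0))
    hi = min(x + eps, Fraction(1))
    for b in bits:
        # keep only the points whose next branch bit is b
        if b == 0:
            hi = min(hi, half)
        else:
            lo = max(lo, half)
        if lo > hi:
            return False  # no point of [x-eps, x+eps] has this code prefix
        # f is monotone on each half, so the image is again an interval
        a, c = tent(lo, mu), tent(hi, mu)
        lo, hi = min(a, c), max(a, c)
    return True
```

For instance, with $\mu=3/2$, $x=1/3$ and $\epsilon=1/100$, the code of $x$ itself is accepted, while flipping its first bit is rejected.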

Algorithm 3 Decision (for μ,ϵ\mu,\epsilon)
0:  a bit sequence b1bn{0,1}nb_{1}\cdots b_{n}\in\{0,1\}^{n} and a real x[0,1]x\in[0,1]
0:  Accept if b1bnn(x,ϵ)b_{1}\cdots b_{n}\in{\cal L}_{n}(x,\epsilon) and Reject if b1bnn(x,2ϵ)b_{1}\cdots b_{n}\not\in{\cal L}_{n}(x,2\epsilon)
1:  int κ3lgϵ/lgμ\kappa\leftarrow\lceil-3\lg\epsilon/\lg\mu\rceil
2:  compute rational v[k]v[k], rational u[k]u[k], bit c[k]c[k], int δ[k,0]\delta[k,0] and int δ[k,1]\delta[k,1] for k=1k=-1 to κ\kappa by Algorithm 4
3:  int kκk\leftarrow\kappa, int l0l\leftarrow 0, int 𝑐𝑎𝑠𝑒1{\it case}\leftarrow 1
4:  rational zx32ϵκz^{-}\leftarrow\langle x-\frac{3}{2}\epsilon\rangle_{\kappa}, rational z+x+32ϵκz^{+}\leftarrow\langle x+\frac{3}{2}\epsilon\rangle_{\kappa}
5:  for i=1i=1 to nn do
6:     if 𝑐𝑎𝑠𝑒=1{\it case}=1 then
7:        compute bit bb^{-}, b+b^{+} respectively based on zz^{-}, z+z^{+} with bi1b_{i-1} by (4)
8:        if bi<bb_{i}<b^{-} or b+<bib^{+}<b_{i} then return Reject and halt
9:        else if b<bib^{-}<b_{i} and bi=b+b_{i}=b^{+} then case2{\rm case}\leftarrow 2
10:        else if b=bib^{-}=b_{i} and bi<b+b_{i}<b^{+} then case3{\rm case}\leftarrow 3
11:        else $z^{-}\leftarrow\langle f(z^{-})\rangle_{\kappa}$, $z^{+}\leftarrow\langle f(z^{+})\rangle_{\kappa}$ /* i.e., $b^{-}=b^{+}=b_{i}$ */
12:     else if 𝑐𝑎𝑠𝑒=2{\it case}=2 then
13:        compute bit b+b^{+} based on z+z^{+} with bi1b_{i-1} by (4)
14:        if b+<bib^{+}<b_{i} then return Reject and halt
15:        else if bi<b+b_{i}<b^{+} then case0{\rm case}\leftarrow 0
16:        else $z^{+}\leftarrow\langle f(z^{+})\rangle_{\kappa}$ /* i.e., $b^{+}=b_{i}$ */
17:     else if 𝑐𝑎𝑠𝑒=3{\it case}=3 then
18:        compute bit bb^{-} based on zz^{-} with bi1b_{i-1} by (4)
19:        if bi<bb_{i}<b^{-} then return Reject and halt
20:        else if b<bib^{-}<b_{i} then case0{\rm case}\leftarrow 0
21:        else $z^{-}\leftarrow\langle f(z^{-})\rangle_{\kappa}$ /* i.e., $b^{-}=b_{i}$ */
22:     end if
23:     update ll based on bi1b_{i-1} and bib_{i} (by Algorithm 5)
24:     if l=1l=-1 then return Reject and halt
25:     if l=kl=k then
26:        compute v[k+1]v[k+1], u[k+1]u[k+1], c[k+1]c[k+1], δ[k,0]\delta[k,0] and δ[k,1]\delta[k,1] by Algorithm 4
27:     end if
28:  end for
29:  return  Accept
Theorem 5.1 (Theorem 2.8).

Given a bit sequence $\mathbf{b}_{n}\in\{0,1\}^{n}$ and a real $x\in[0,1)$, Algorithm 3 accepts it if $\mathbf{b}_{n}\in{\cal L}_{n}(x,\epsilon)$ and rejects it if $\mathbf{b}_{n}\not\in{\cal L}_{n}(x,2\epsilon)$. If an ($\epsilon$-perturbed) instance $\mathbf{b}_{n}$ is given by $\mathbf{b}_{n}=\gamma^{n}(X)$ for $X\in[x-\epsilon,x+\epsilon]$ uniformly at random, then the space complexity of Algorithm 3 is $\mathrm{O}(\lg^{2}n\lg^{3}d/\lg^{4}\mu+\lg^{2}\epsilon^{-1}\lg d/\lg^{2}\mu)$ in expectation.

Proof.

The correctness proof is essentially the same as that of Theorem 2.7. In the algorithm, line 24 checks whether $b_{1}\cdots b_{n}\in{\cal L}_{n}$. We check whether $\gamma^{n}(x-\epsilon)\prec b_{1}\cdots b_{n}\prec\gamma^{n}(x+\epsilon)$ during the first at most $\kappa$ iterations, as follows. Let $b^{-}_{i}$ and $b^{+}_{i}$ respectively denote $b^{-}$ and $b^{+}$ computed at line 7, 13 or 18 of the $i$th iteration. Then, $b^{-}_{1}\cdots b^{-}_{\kappa}\prec\gamma^{\kappa}(x)\prec b^{+}_{1}\cdots b^{+}_{\kappa}$ holds, in essentially the same way as (15) in the proof of Theorem 2.7. Similarly, $\gamma^{\kappa}(x-\epsilon)\prec b^{-}_{1}\cdots b^{-}_{\kappa}$ holds, and $b^{+}_{1}\cdots b^{+}_{\kappa}\prec\gamma^{\kappa}(x+\epsilon)$ as well. Thus, $\mathbf{b}_{n}\prec\gamma^{\kappa}(x-\epsilon)$ is safely rejected at line 8 or 19, and $\gamma^{\kappa}(x+\epsilon)\prec\mathbf{b}_{n}$ as well at line 8 or 14. Thus we obtain the desired decision.

Then we are concerned with the space complexity. The analysis technique is essentially the same as that of [25] for the random generation of ${\cal L}_{n}$. Let $X$ be a random variable drawn from the interval $[x-\epsilon,x+\epsilon]$ uniformly at random, and let $B_{1}\cdots B_{n}=\gamma^{n}(X)$. Let

$K=\max\{k\in\mathbb{Z}_{>0}\mid\exists i\in\{1,\ldots,n\},\ L(T(\gamma^{i}(X)))=k\}$ (21)

be a random variable where L(J)=kL(J)=k if J=IkJ=I_{k} or I¯k\overline{I}_{k} (recall (12) as well as Theorem 3.2). Lemma 3.4 implies that its space complexity is O(K2lgd)\mathrm{O}(K^{2}\lg{d}). Lemma 5.2, appearing below, implies

E[O(K2lgd)]\displaystyle\mathrm{E}[\mathrm{O}(K^{2}\lg{d})] =O(E[K2]lgd)\displaystyle=\mathrm{O}(\mathrm{E}[K^{2}]\lg{d})
=O(lg2nlg3d/lg4μ)\displaystyle=\mathrm{O}(\lg^{2}{n}\lg^{3}{d}/\lg^{4}\mu)

and we obtain the claim. ∎

Lemma 5.2.

Let μ(1,2)\mu\in(1,2) be rational given by an irreducible fraction μ=c/d\mu=c/d. Suppose for μ(1,2)\mu\in(1,2) that fμi(12)12f_{\mu}^{i}(\frac{1}{2})\neq\frac{1}{2} holds for any i=1,,n1i=1,\dots,n-1. Then, E[K2]=O(logμ2nlogμ2d)=O(lg2nlg2d/lg4μ)\mathrm{E}[K^{2}]=\mathrm{O}(\log_{\mu}^{2}{n}\log_{\mu}^{2}{d})=\mathrm{O}(\lg^{2}n\lg^{2}d/\lg^{4}\mu).

We remark that the assumption of rational $\mu$ is not essential in Lemma 5.2; the assumption is just for an argument about Turing computability. We can establish a similar (but a bit weaker) version of Lemma 5.4 for any real $\mu\in(1,2)$ (cf. Proposition 5.1 of [25]). Lemma 5.2 is similar to Lemma 4.3 of [25] for random generation, where the major difference is that [25] assumes $X_{0}$ is uniform on $[0,1)$ while Lemma 5.2 here assumes $X_{0}$ is uniform on $[x-\epsilon,x+\epsilon]$. We only need to take some care when the initial condition is around the boundaries of the interval, $x-\epsilon$ and $x+\epsilon$.

Suppose for the proof of Lemma 5.2 that a random variable $X$ is drawn from the interval $[x-\epsilon,x+\epsilon]$ uniformly at random. Let $X_{0}=X$ and let $X_{i}=f(X_{i-1})$ for $i=1,2,\ldots,n$. For convenience, let $T^{i}(x)$ denote $T(\gamma^{i}(x))$. Let $y\in[x-\epsilon,x+\epsilon]$. We say $X$ covers around $y$ at the $i$-th iteration ($i\in\{1,2,\ldots,n\}$) if

{fi(y)γi(y)=γi(y),xϵyx+ϵ}=Ti(y)\displaystyle\{f^{i}(y^{\prime})\mid\gamma^{i}(y^{\prime})=\gamma^{i}(y),\,x-\epsilon\leq y^{\prime}\leq x+\epsilon\}=T^{i}(y) (22)

holds (recall $T^{i}(y)=\{f^{i}(y^{\prime})\mid\gamma^{i}(y^{\prime})=\gamma^{i}(y)\}$ by the definition (8)). Similarly, we say $X$ fully covers $S$ ($S\subseteq[x-\epsilon,x+\epsilon]$) at $i\in\{1,2,\ldots,n\}$ if $X$ covers around every $y\in S$ at $i$.

Lemma 5.3.

XX fully covers (xϵ+1n2,x+ϵ1n2)(x-\epsilon+\frac{1}{n^{2}},x+\epsilon-\frac{1}{n^{2}}) at or after 2logμn\lceil 2\log_{\mu}n\rceil iterations.

Proof.

Let $k=\lceil 2\log_{\mu}n\rceil$. By Lemma 2.5, if $y,y^{\prime}\in[0,1)$ satisfy $\gamma^{k}(y)=\gamma^{k}(y^{\prime})$ then $|f^{k}(y)-f^{k}(y^{\prime})|=\mu^{k}|y-y^{\prime}|\geq\mu^{2\log_{\mu}n}|y-y^{\prime}|=n^{2}|y-y^{\prime}|$. On the other hand, $|f^{k}(y)-f^{k}(y^{\prime})|\leq 1$, and hence $\gamma^{k}(y)=\gamma^{k}(y^{\prime})$ implies $|y-y^{\prime}|\leq n^{-2}$; the claim is now easy from Propositions 2.1 and 2.2. ∎
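The exact expansion used in this proof, $|f^{k}(y)-f^{k}(y^{\prime})|=\mu^{k}|y-y^{\prime}|$ as long as the two orbits share a code prefix, can be checked directly in rational arithmetic. A small sketch ($\mu=3/2$ and the starting points are arbitrary choices):

```python
from fractions import Fraction

def tent(x, mu):
    # the tent map f_mu on [0, 1]
    return mu * x if x <= Fraction(1, 2) else mu * (1 - x)

mu = Fraction(3, 2)
y = Fraction(1, 5)
yp = y + Fraction(1, 10 ** 6)
gap0 = yp - y
k = 0
# while both orbits are on the same linear branch, the gap expands by exactly mu
while (y <= Fraction(1, 2)) == (yp <= Fraction(1, 2)):
    y, yp = tent(y, mu), tent(yp, mu)
    k += 1
    assert abs(yp - y) == mu ** k * gap0  # exact equality, no rounding involved
# once the gap exceeds 1/2 the two points cannot share a branch, so k is finite
```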

Then, the proof of Lemma 5.2 consists of two parts: one is that the conditional expectation of $K^{2}$ is $\mathrm{O}(\lg^{2}n\lg^{2}d/\lg^{4}\mu)$ on condition that $X\in(x-\epsilon+\frac{1}{n^{2}},x+\epsilon-\frac{1}{n^{2}})$ (Lemma 5.4), and the other is that the probability of $X\not\in(x-\epsilon+\frac{1}{n^{2}},x+\epsilon-\frac{1}{n^{2}})$ is small enough to derive Lemma 5.2 from Lemma 5.4. The latter claim is almost trivial (see the proof of Lemma 5.2 below).

The following lemma is the heart of the analysis, which is a version of Lemma 4.3 of [25] for random generation (see also Appendix B, for a proof).

Lemma 5.4.

Let $\mu\in(1,2)$ be rational, given by an irreducible fraction $\mu=c/d$. Suppose that $f_{\mu}^{i}(\frac{1}{2})\neq\frac{1}{2}$ holds for any $i=1,\dots,n-1$. On condition that $X$ fully covers $(x-\epsilon+\frac{1}{n^{2}},x+\epsilon-\frac{1}{n^{2}})$ at $\lceil 2\log_{\mu}n\rceil$, the conditional expectation of $K^{2}$ is $\mathrm{O}(\log_{\mu}^{2}{n}\log_{\mu}^{2}{d})=\mathrm{O}(\lg^{2}n\lg^{2}d/\lg^{4}\mu)$.

Lemma 5.4 is supported by the following Lemma 5.5, which is almost trivial from the fact that the iterative tent map fif^{i} is piecewise linear (see Appendix C for a proof).

Lemma 5.5.

Let $X\in[x-\epsilon,x+\epsilon]$ uniformly at random, and let $B_{1}\cdots B_{n}=\gamma^{n}(X)$. Suppose $X$ covers around $y\in[x-\epsilon,x+\epsilon]$ at $i$, and let $\gamma^{n}(y)=b_{1}\cdots b_{n}$. Then, $\Pr[B_{i+1}=b_{i+1}\mid\gamma^{i}(X)=\gamma^{i}(y)]=\frac{|T(b_{1}\cdots b_{i}b_{i+1})|}{\mu|T(b_{1}\cdots b_{i})|}$ holds.

Lemma 5.2 is easy from Lemma 5.4, as follows.

Proof of Lemma 5.2..

Note that the set $[x-\epsilon,x+\epsilon]\setminus(x-\epsilon+\frac{1}{n^{2}},x+\epsilon-\frac{1}{n^{2}})$ has measure $\frac{2}{n^{2}}$, so the probability of the event $X\not\in(x-\epsilon+\frac{1}{n^{2}},x+\epsilon-\frac{1}{n^{2}})$ is $\frac{1}{\epsilon n^{2}}=\mathrm{O}(n^{-2})$ for the fixed parameter $\epsilon$. Using the trivial upper bound $K\leq n$, the claim is easy from Lemma 5.4. ∎

6 Concluding Remarks

Motivated by the possibility of a valid computation of physical systems, this paper investigated the space complexity of computing a tent code. We showed that a valid approximate calculation is possible in $\mathrm{O}(\log n)$ space, which is optimal in terms of $n$, and gave an algorithm for the valid decision working in $\mathrm{O}(\log^{2}n)$ space, in the sense of smoothed complexity where the initial condition $x^{\prime}$ is $\epsilon$-perturbed from $x$. A future work is an extension to the baker's map, a piecewise-linear but two-dimensional chaotic map; for this purpose, we need an appropriately extended notion of the segment type. Another future work is an extension to the logistic map, a one-dimensional but quadratic chaotic map. The time complexity of the tent code is another interesting topic: given a rational $x=p/q$ for a fixed $\mu\in\mathbb{Q}$, is it possible to decide $b_{n}\in\{0,1\}$ in time polynomial in the input size $\log p+\log q+\log n$? The problem might be NP-hard, but we could not find such a result.

References

  • [1] T. Addabbo, M. Alioto, A. Fort, S. Rocchi and V. Vignoli, The digital tent map: Performance analysis and optimized design as a low-complexity source of pseudorandom bits. IEEE Transactions on Instrumentation and Measurement, 55:5 (2006), 1451–1458.
  • [2] M. Alawida, J. S. Teh, D. P. Oyinloye, W. H. Alshoura, M. Ahmad and R. S. Alkhawaldeh, A new hash function based on chaotic maps and deterministic finite state automata, IEEE Access, 8 (2020), 113163–113174.
  • [3] H. Bruin, Combinatorics of the kneading map, International Journal of Bifurcation and Chaos, 05:05 (1995), 1339–1349.
  • [4] S. Boodaghians, J. Brakensiek, S. B. Hopkins and A. Rubinstein, Smoothed complexity of 2-player Nash equilibria, in Proc. FOCS 2020, 271–282.
  • [5] X. Chen, C. Guo, E. V. Vlatakis-Gkaragkounis, M. Yannakakis and X. Zhang, Smoothed complexity of local max-cut and binary max-CSP, in Proc. STOC 2020, 1052–1065.
  • [6] M. Crampin and B. Heal, On the chaotic behaviour of the tent map, Teaching Mathematics and its Applications: An International Journal of the IMA, 13:2 (1994), 83–89.
  • [7] W. de Melo and S. van Strien, One-Dimensional Dynamics, Springer-Verlag, 1991.
  • [8] M. Ercsey-Ravasz and Z. Toroczkai, Optimization hardness as transient chaos in an analog approach to constraint satisfaction, Nature Physics, 7 (2011), 966–970.
  • [9] M. R. Garey and D. S.  Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman and Company, 1979.
  • [10] G. Gharooni-fard, A. Khademzade, F. Moein-darbari, Evaluating the performance of one-dimensional chaotic maps in the network-on-chip mapping problem, IEICE Electronics Express, 6:12 (2009), 811–817.
  • [11] V. Guruswami, P. K. Kothari and P.  Manohar, Algorithms and certificates for Boolean CSP refutation: smoothed is no harder than random, in Proc. STOC 2022, 678–689.
  • [12] N. Haghtalab, T. Roughgarden and A. Shetty, Smoothed analysis with adaptive adversaries, in Proc. FOCS 2022, 942–953.
  • [13] E. Hopf, Ergodentheorie, Springer Verlag, Berlin, 1937.
  • [14] F. Hofbauer, On intrinsic ergodicity of piecewise monotonic transformations with positive entropy, Israel Journal of Mathematics, 34:3 (1979), 213–237.
  • [15] N. Karmarkar, A new polynomial time algorithm for linear programming, Combinatorica, 4:4 (1984), 373–395.
  • [16] A. Kanso, H. Yahyaoui and M. Almulla, Keyed hash function based on chaotic map, Information Sciences, 186 (2012), 249–264.
  • [17] L. G. Khachiyan, A polynomial algorithm in linear programming, Soviet Mathematics Doklady, 20 (1979), 191–194.
  • [18] T. Kohda, Signal processing using chaotic dynamics, IEICE ESS Fundamentals Review, 2:4 (2008), 16–36, in Japanese.
  • [19] B. Korte and J. Vygen, Combinatorial Optimization: Theory and Algorithms, Springer-Verlag, 2018.
  • [20] C. Li, G. Luo, K. Qin, C. Li, An image encryption scheme based on chaotic tent map, Nonlinear Dyn., 87 (2017), 127–133.
  • [21] T.Y. Li and J.A. Yorke, Period three implies chaos, Amer. Math. Monthly, 82 (1975), 985–995.
  • [22] E.N. Lorenz, Deterministic nonperiodic flow, Journal of Atmospheric Sciences, 20:2 (1963), 130–141.
  • [23] E. Lorenz, The Essence of Chaos, University of Washington Press, 1993.
  • [24] R. May, Simple mathematical models with very complicated dynamics. Nature, 261 (1976), 459–467.
  • [25] N. Okada and S. Kijima, The space complexity of generating tent codes, arXiv:2310.14185 (2023).
  • [26] W. Parry, On the β\beta-expansions of real numbers, Acta Math. Acad. Sci. Hung., 11 (1960), 401–416.
  • [27] W. Parry, Representations for real numbers, Acta Math.Acad. Sci.Hung., 15 (1964), 95–105.
  • [28] J. Perl, On chaotic behaviour of some NP-complete problems, LNCS, 314 (WG 1987), 149–161.
  • [29] G. Radons, G. C. Hartmann, H. H. Diebner, O. E. Rossler, Staircase baker’s map generates flaring-type time series, Discrete Dynamics in Nature and Society, 5 (2000), 107–120.
  • [30] A. Rényi, Representations for real numbers and their ergodic properties, Acta Mathematica Hungarica, 8:3-4 (1957), 477–493.
  • [31] H. Shigematsu, H. Mori, T. Yoshida and H. Okamoto, Analytic study of power spectra of the tent maps near band-splitting transitions, J. Stat. Phys., 30 (1983), 649–679.
  • [32] M. Sipser, Introduction to the Theory of Computation, 3rd ed., Cengage Learning, 2012.
  • [33] D. A. Spielman and S.-H. Teng, Smoothed analysis: why the simplex algorithm usually takes polynomial time, Journal of the ACM, 51:3 (2004), 385–463.
  • [34] D. A. Spielman and S.-H. Teng, Smoothed analysis: An attempt to explain the behavior of algorithms in practice, Communications of the ACM, 52:10 (2009), 76–84.
  • [35] J. Xiao, J. Xu, Z. Chen, K. Zhang and L. Pan, A hybrid quantum chaotic swarm evolutionary algorithm for DNA encoding, Computers & Mathematics with Applications, 57:11–12 (2009), 1949–1958.
  • [36] H. Yang, K.-W. Wong, X. Liao, Y. Wang and D. Yang, One-way hash function construction based on chaotic map network, Chaos, Solitons & Fractals, 41:5 (2009), 2566–2574.
  • [37] T. Yoshida, H. Mori and H. Shigematsu, Analytic study of chaos of the tent map: Band structures, power spectra, and critical behaviors. J. Stat. Phys., 31 (1983), 279–308.

Appendix A Subprocesses

This section shows two subprocesses, Algorithms 4 and 5, which are called in Algorithms 2 and 3. Algorithm 4 follows Lemmas 3.1 and 3.3, and Algorithm 5 follows (13).

Algorithm 4 Compute v,u,c,δv,u,c,\delta
0:  kk
0:  v[k]v[k], u[k]u[k], c[k]c[k], δ[k1,0]\delta[k-1,0], δ[k1,1]\delta[k-1,1]
1:  if k=1k=-1 then
2:     v[1]0v[-1]\leftarrow 0, u[1]0u[-1]\leftarrow 0 /* reject state */
3:  end if
4:  if k=0k=0 then
5:     v[0]0v[0]\leftarrow 0, u[0]1u[0]\leftarrow 1, bit c[0]0c[0]\leftarrow 0 /* =q0=q_{0} */
6:  end if
7:  if k=1k=1 then
8:     v[1]0v[1]\leftarrow 0, u[1]f(12)u[1]\leftarrow f(\frac{1}{2}), c[1]1c[1]\leftarrow 1, δ[0,0]1\delta[0,0]\leftarrow 1, δ[0,1]1\delta[0,1]\leftarrow 1 /* =I1=I_{1} */
9:  end if
10:  if k2k\geq 2 then
11:     if v[k1]<12<u[k1]v[k-1]<\frac{1}{2}<u[k-1] then
12:        if c[k1]=0c[k-1]=0 then
13:           δ[k1,0]k\delta[k-1,0]\leftarrow k, v[k]f(v[k1])v[k]\leftarrow f(v[k-1]), u[k]f(12)u[k]\leftarrow f(\tfrac{1}{2}), c[k]0c[k]\leftarrow 0
14:           δ[k1,1]k\delta[k-1,1]\leftarrow k^{\prime} such that v[k]=f(u[k1])v[k^{\prime}]=f(u[k-1]) and u[k]=f(12)u[k^{\prime}]=f(\tfrac{1}{2})
15:        else /* i.e., c[k1]=1c[k-1]=1 */
16:           δ[k1,0]k\delta[k-1,0]\leftarrow k, v[k]f(u[k1])v[k]\leftarrow f(u[k-1]), u[k]f(12)u[k]\leftarrow f(\tfrac{1}{2}), c[k]0c[k]\leftarrow 0
17:           δ[k1,1]k\delta[k-1,1]\leftarrow k^{\prime} such that v[k]=f(v[k1])v[k^{\prime}]=f(v[k-1]) and u[k]=f(12)u[k^{\prime}]=f(\tfrac{1}{2})
18:        end if
19:     else if u[k1]12u[k-1]\leq\frac{1}{2} then
20:        δ[k1,c[k1]]k\delta[k-1,c[k-1]]\leftarrow k, v[k]f(v[k1])v[k]\leftarrow f(v[k-1]), u[k]f(u[k1])u[k]\leftarrow f(u[k-1]), c[k]c[k1]c[k]\leftarrow c[k-1]
21:        δ[k1,c[k1]¯]1\delta[k-1,\overline{c[k-1]}]\leftarrow-1
22:     else /* i.e., v[k1]12v[k-1]\geq\frac{1}{2} */
23:        δ[k1,c[k1]¯]k\delta[k-1,\overline{c[k-1]}]\leftarrow k, v[k]f(u[k1])v[k]\leftarrow f(u[k-1]), u[k]f(v[k1])u[k]\leftarrow f(v[k-1]), c[k]c[k1]¯c[k]\leftarrow\overline{c[k-1]}
24:        δ[k1,c[k1]]1\delta[k-1,c[k-1]]\leftarrow-1
25:     end if
26:  end if
Algorithm 5 Update ll and bb
0:  an integer ll, bits bb, bb^{\prime}
0:  an integer ll, a bit bb
1:  if b=c[l]b=c[l] then /* Zi=IlZ_{i}=I_{l} */
2:     lδ[l,b]l\leftarrow\delta[l,b^{\prime}], bbb\leftarrow b^{\prime}
3:  else /* Zi=I¯lZ_{i}=\overline{I}_{l} */
4:     lδ[l,b¯]l\leftarrow\delta[l,\overline{b^{\prime}}], bb¯b\leftarrow\overline{b^{\prime}}
5:  end if
6:  return  ll and bb

Appendix B Proof of Lemma  5.4

This section proves Lemma 5.4.

Lemma B.1 (Lemma 5.4).

Let $\mu\in(1,2)$ be rational, given by an irreducible fraction $\mu=c/d$. Suppose that $f_{\mu}^{i}(\frac{1}{2})\neq\frac{1}{2}$ holds for any $i=1,\dots,n-1$. On condition that $X$ fully covers $(x-\epsilon+\frac{1}{n^{2}},x+\epsilon-\frac{1}{n^{2}})$ at $\lceil 2\log_{\mu}n\rceil$, the conditional expectation of $K^{2}$ is $\mathrm{O}(\log_{\mu}^{2}{n}\log_{\mu}^{2}{d})=\mathrm{O}(\lg^{2}n\lg^{2}d/\lg^{4}\mu)$.

The proof strategy of Lemma 5.4 is as follows. Lemma 3.5 implies that a chain must follow the path Il,Il+1,,I2lI_{l},I_{l+1},\ldots,I_{2l} (or I¯l,I¯l+1,,I¯2l\overline{I}_{l},\overline{I}_{l+1},\ldots,\overline{I}_{2l}) to reach level 2l2l and the probability is |I2l|μl|Il|\frac{|I_{2l}|}{\mu^{l}|I_{l}|} (Lemma B.3). We then prove that there exists l=O(lognlogd)l=\mathrm{O}(\log n\log d) such that |I2l|μl|Il|n3\frac{|I_{2l}|}{\mu^{l}|I_{l}|}\leq n^{-3} (Lemma B.4), which provides Pr[K2l]n2\Pr[K\geq 2l]\leq n^{-2} (Lemma B.6). Lemma 5.4 is easy from Lemma B.6.

Let $Z_{t}=T(\gamma^{t}(X))$ for $t=0,1,2,\ldots$, i.e., $Z_{t}$ denotes the state at the $t$-th iteration, and $L(Z_{t})$ denotes its level. We observe the following fact from Lemma 3.5.

Observation B.2.

If ZtZ_{t} visits I2jI_{2j} (resp. I¯2j\overline{I}_{2j}) for the first time then Zti=I2jiZ_{t-i}=I_{2j-i} (resp. Zti=I¯2jiZ_{t-i}=\overline{I}_{2j-i}) for i=1,2,,ji=1,2,\ldots,j.

Proof.

By Lemma 3.5, all in-edges to IkI_{k} (resp. I¯k\overline{I}_{k}) for any k=j+1,,2jk=j+1,\dots,2j come from Ik1I_{k-1} (resp. I¯k1\overline{I}_{k-1}), or a node of level 2j2j or greater. Since ZtZ_{t} has not visited any level greater than 2j2j by the hypothesis and the above argument again, we obtain the claim. ∎

By Observation B.2, if a Markov chain Z1,Z2,Z_{1},Z_{2},\ldots visits level 2l2l for the first time at time tt then L(Ztl)L(Z_{t-l}) must be ll. The next lemma gives an upper bound of the probability from level ll to 2l2l.

Lemma B.3.

Suppose that $X$ covers around an appropriate $y$ corresponding to $Z_{t-l}$ at $t-l$. Then,

Pr[L(Zt)=2lL(Ztl)=l]=|I2l|μl|Il|.\Pr[L(Z_{t})=2l\mid L(Z_{t-l})=l]=\frac{|I_{2l}|}{\mu^{l}|I_{l}|}.
Proof.

By Observation B.2, the path from IlI_{l} to I2lI_{2l} is unique and

Pr[Zt=I2lZtl=Il]\displaystyle\Pr[Z_{t}=I_{2l}\mid Z_{t-l}=I_{l}] =i=l2l1p(Ii,Ii+1)\displaystyle=\prod_{i=l}^{2l-1}p(I_{i},I_{i+1})
=i=l2l1|Ii+1|μ|Ii|\displaystyle=\prod_{i=l}^{2l-1}\frac{|I_{i+1}|}{\mu|I_{i}|} (by Lemma 5.5)\displaystyle(\mbox{by Lemma \ref{lem:trans-prob}})
=|I2l|μl|Il|\displaystyle=\frac{|I_{2l}|}{\mu^{l}|I_{l}|} (23)

holds. We remark that |Ii|=|I¯i||I_{i}|=|\overline{I}_{i}| holds for any ii, meaning that p(Ii,Ii+1)=p(I¯i,I¯i+1)p(I_{i},I_{i+1})=p(\overline{I}_{i},\overline{I}_{i+1}), and hence Pr[Zt=I¯2lZtl=I¯l]=|I2l|μl|Il|\Pr[Z_{t}=\overline{I}_{2l}\mid Z_{t-l}=\overline{I}_{l}]=\frac{|I_{2l}|}{\mu^{l}|I_{l}|}. ∎
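The product in (23) telescopes regardless of the actual segment lengths. A minimal check with stand-in values for $|I_{i}|$ (the true lengths come from the recursion (11), which we do not reproduce here):

```python
from fractions import Fraction

mu = Fraction(3, 2)
l = 4
# stand-in positive lengths for |I_l|, ..., |I_{2l}|; any values telescope
length = {i: Fraction(1, 3 * i + 1) for i in range(l, 2 * l + 1)}

prod = Fraction(1)
for i in range(l, 2 * l):
    prod *= length[i + 1] / (mu * length[i])  # the factor p(I_i, I_{i+1})

# the telescoped form |I_{2l}| / (mu^l * |I_l|), as in (23)
assert prod == length[2 * l] / (mu ** l * length[l])
```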

The following lemma is the first mission of the proof of Lemma 5.4.

Lemma B.4.

Let μ(1,2)\mu\in(1,2) be rational given by an irreducible fraction μ=c/d\mu=c/d. Suppose for μ(1,2)\mu\in(1,2) that fμi(12)12f_{\mu}^{i}(\frac{1}{2})\neq\frac{1}{2} holds for any i=1,,n1i=1,\dots,n-1. Then, there exists ll such that l8logμdlogμnl\leq 8\lceil{\log_{\mu}d}\rceil\lceil{\log_{\mu}n}\rceil and

|I2l|μl|Il|n3\frac{|I_{2l}|}{\mu^{l}|I_{l}|}\leq n^{-3} (24)

holds.

To prove Lemma B.4, we remark the following fact.

Lemma B.5.

Let μ(1,2)\mu\in(1,2) be rational given by an irreducible fraction μ=c/d\mu=c/d. Then, |Ik|12dk|I_{k}|\geq\frac{1}{2d^{k}} for any k2k\geq 2.

Proof.

By the recursive formula (11), we see that IkI_{k} is either [fi(12),fj(12))\left[f^{i}(\frac{1}{2}),f^{j}(\frac{1}{2})\right) or (fi(12),fj(12)]\left(f^{i}(\frac{1}{2}),f^{j}(\frac{1}{2})\right] where iki\leq k and jkj\leq k. We can denote fi(12)f^{i}(\frac{1}{2}) as ci2di\frac{c_{i}}{2d^{i}} (ci>0c_{i}\in\mathbb{Z}_{>0}) for any ii. Therefore,

|Ik|=|fi(12)fj(12)|=|ci2dicj2dj|=|cidkicjdkj2dk||I_{k}|=\left|f^{i}(\tfrac{1}{2})-f^{j}(\tfrac{1}{2})\right|=\left|\frac{c_{i}}{2d^{i}}-\frac{c_{j}}{2d^{j}}\right|=\left|\frac{c_{i}d^{k-i}-c_{j}d^{k-j}}{2d^{k}}\right| (25)

holds. Clearly, cidkicjdkjc_{i}d^{k-i}-c_{j}d^{k-j} is an integer, and it is not 0 since |Ik|0|I_{k}|\neq 0. Thus, we obtain |Ik|12dk|I_{k}|\geq\frac{1}{2d^{k}}. ∎
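The claim that $f^{i}(\frac{1}{2})$ has the form $\frac{c_{i}}{2d^{i}}$ can be checked with exact rational arithmetic; the modulus $\mu=7/4$ below is an arbitrary example:

```python
from fractions import Fraction

def tent(x, mu):
    # the tent map f_mu on [0, 1]
    return mu * x if x <= Fraction(1, 2) else mu * (1 - x)

c, d = 7, 4                # mu = c/d, an irreducible fraction in (1, 2)
mu = Fraction(c, d)
x = Fraction(1, 2)
for i in range(1, 12):
    x = tent(x, mu)
    # f^i(1/2) = c_i / (2 d^i): the reduced denominator divides 2 * d^i
    assert (2 * d ** i) % x.denominator == 0
```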

Then, we prove Lemma B.4.

Proof of Lemma B.4.

For convenience, let $l_{i}=2^{i}\lceil\log_{\mu}n\rceil$ for $i=1,2,\ldots$. Assume for a contradiction that (24) holds for none of $l_{1},l_{2},\ldots,l_{k}$, where $k=\max\{4,\lceil\log_{2}\log_{\mu}d\rceil+2\}$ for convenience. In other words,

|Ili+1|>n3μli|Ili||I_{l_{i+1}}|>n^{-3}\mu^{l_{i}}|I_{l_{i}}| (26)

holds for every $i=1,2,\ldots,k$. Thus, we inductively obtain that

|Ilk+1|\displaystyle|I_{l_{k+1}}| >n3μlk|Ilk|\displaystyle>n^{-3}\mu^{l_{k}}|I_{l_{k}}|
>n6μlkμlk1|Ilk1|\displaystyle>n^{-6}\mu^{l_{k}}\mu^{l_{k-1}}|I_{l_{k-1}}|
>\displaystyle>\dots
>n3kμlkμlk1μl1|Il1|\displaystyle>n^{-3k}\mu^{l_{k}}\mu^{l_{k-1}}\dots\mu^{l_{1}}|I_{l_{1}}| (27)

holds. By the definition of lil_{i},

μli=μ2ilogμnμ2ilogμn=n2i\mu^{l_{i}}=\mu^{2^{i}\lceil{\log_{\mu}n}\rceil}\geq\mu^{2^{i}\log_{\mu}n}=n^{2^{i}} (28)

holds. Lemma B.5 implies that

|Il1|12dl1=12d2logμn12d4logμn=12n4logμd\displaystyle|I_{l_{1}}|\geq\frac{1}{2d^{l_{1}}}=\frac{1}{2}d^{-2\lceil{\log_{\mu}n}\rceil}\geq\frac{1}{2}d^{-4\log_{\mu}n}=\frac{1}{2}n^{-4\log_{\mu}d} (29)

holds. Then, (27), (28) and (29) imply

|Ilk+1|\displaystyle|I_{l_{k+1}}| >n3kn2k+2k1++2112n4logμd\displaystyle>n^{-3k}\cdotp n^{2^{k}+2^{k-1}+\dots+2^{1}}\cdotp\frac{1}{2}n^{-4\log_{\mu}d} (30)

holds. By taking the logn\log_{n} of the both sides of (30), we see that

logn|Ilk+1|\displaystyle\log_{n}{|I_{l_{k+1}}|} >3k+2k+12logn24logμd\displaystyle>-3k+2^{k+1}-2-\log_{n}2-4\log_{\mu}d
=(2k4logμd)+(2k3k2logn2)\displaystyle=(2^{k}-4\log_{\mu}d)+(2^{k}-3k-2-\log_{n}2) (31)

holds. Since klog2logμd+2k\geq\lceil{\log_{2}\log_{\mu}d}\rceil+2 by definition, it is not difficult to see that

2k4logμd\displaystyle 2^{k}-4\log_{\mu}d 22+log2(logμd)4logμd\displaystyle\geq 2^{2+\log_{2}(\log_{\mu}d)}-4\log_{\mu}d
=4logμd4logμd\displaystyle=4\log_{\mu}d-4\log_{\mu}d
=0\displaystyle=0 (32)

holds. Since k4k\geq 4 by definition, it is also not difficult to observe that

2k3k2logn2 24342logn2= 2logn2> 0\displaystyle 2^{k}-3k-2-\log_{n}2\ \geq\ 2^{4}-3\cdot 4-2-\log_{n}2\ =\ 2-\log_{n}2\ >\ 0 (33)

holds. Equations (31), (32) and (33) imply that logn|Ilk+1|>0\log_{n}{|I_{l_{k+1}}|}>0, meaning that |Ilk+1|>1|I_{l_{k+1}}|>1. At the same time, notice that any segment-type JJ satisfies J[0,1]J\subseteq[0,1], meaning that |Ilk+1|1|I_{l_{k+1}}|\leq 1. Contradiction. Thus, we obtain (24) for at least one of l1,l2,,lkl_{1},l_{2},\ldots,l_{k}.

Finally, we check the size of lkl_{k}:

lk\displaystyle l_{k} =2klogμn2max{4,log2logμd+2}logμn2max{4,log2logμd+3}logμn\displaystyle=2^{k}\lceil{\log_{\mu}n}\rceil\leq 2^{\max\{4,\lceil{\log_{2}\log_{\mu}d}\rceil+2\}}\lceil{\log_{\mu}n}\rceil\leq 2^{\max\{4,\log_{2}\log_{\mu}d+3\}}\lceil{\log_{\mu}n}\rceil
=max{16,8logμd}logμn=8max{2,logμd}logμn8logμdlogμn\displaystyle=\max\{16,8\log_{\mu}d\}\lceil{\log_{\mu}n}\rceil=8\max\{2,\log_{\mu}d\}\lceil{\log_{\mu}n}\rceil\leq 8\lceil{\log_{\mu}d}\rceil\lceil{\log_{\mu}n}\rceil

where the last inequality follows from $\log_{\mu}d>1$, since $\mu<2$ and $d\geq 2$. We obtain a desired $l$. ∎
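The arithmetic behind the cut-off $k=\max\{4,\lceil\log_{2}\log_{\mu}d\rceil+2\}$ and the final bound $l_{k}\leq 8\lceil\log_{\mu}d\rceil\lceil\log_{\mu}n\rceil$ can be spot-checked numerically (the parameter triples below are arbitrary choices):

```python
import math

def check_cutoff(mu, d, n):
    # k and l_k as in the proof of Lemma B.4
    log_mu = lambda v: math.log(v) / math.log(mu)
    k = max(4, math.ceil(math.log2(log_mu(d))) + 2)
    l_k = 2 ** k * math.ceil(log_mu(n))
    assert 2 ** k >= 4 * log_mu(d)                                 # gives (32)
    assert l_k <= 8 * math.ceil(log_mu(d)) * math.ceil(log_mu(n))  # the bound on l_k
    return l_k

for c, d in ((3, 2), (7, 4), (199, 100)):
    for n in (10, 10 ** 4):
        check_cutoff(c / d, d, n)
```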

By Lemmas B.3 and B.4, we obtain the following fact.

Lemma B.6.

Let l=8logμdlogμnl_{*}=8\lceil{\log_{\mu}d}\rceil\lceil{\log_{\mu}n}\rceil for convenience. Then

Pr[K2l]n2\displaystyle\Pr[K\geq 2l_{*}]\leq n^{-2}

holds.

Proof.

For μ=c/d\mu=c/d and nn, Lemma B.4 implies that there exists ll such that lll\leq l_{*} and

|I2l|μl|Il|n3\displaystyle\frac{|I_{2l}|}{\mu^{l}|I_{l}|}\leq n^{-3} (34)

holds. Let AtA_{t} (t=1,,nt=1,\ldots,n) denote the event that ZtZ_{t} reaches the level 2l2l for the first time. It is easy to see that

Pr[K2l]=Pr[t=0nAt]\displaystyle\Pr[K\geq 2l]=\Pr\left[\bigvee_{t=0}^{n}A_{t}\right] (35)

holds by the definition of $A_{t}$. (Precisely, $\bigvee_{t=0}^{n}A_{t}=\bigvee_{t=2l}^{n}A_{t}$ holds, but we do not use this fact here.) We also remark that the event $A_{t}$ implies not only $L(Z_{t})=2l$ but also $L(Z_{t-l})=l$ by Observation B.2. It means that

Pr[At]\displaystyle\Pr[A_{t}] Pr[[L(Zt)=2l][L(Ztl)=l]]\displaystyle\leq\Pr[[L(Z_{t})=2l]\wedge[L(Z_{t-l})=l]]
=Pr[L(Zt)=2lL(Ztl)=l]Pr[L(Ztl)=l]\displaystyle=\Pr[L(Z_{t})=2l\mid L(Z_{t-l})=l]\Pr[L(Z_{t-l})=l]
Pr[L(Zt)=2lL(Ztl)=l]\displaystyle\leq\Pr[L(Z_{t})=2l\mid L(Z_{t-l})=l] (36)

holds. Then,

\displaystyle\Pr[K\geq 2l] =\Pr\left[\bigvee_{t=0}^{n}A_{t}\right] (by (35))
\leq\sum_{t=0}^{n}\Pr\left[A_{t}\right] (union bound)
\leq n\Pr[L(Z_{t})=2l\mid L(Z_{t-l})=l] (by (36))
\leq n\frac{|I_{2l}|}{\mu^{l}|I_{l}|} (by Lemma B.3)
\leq n^{-2} (by (34))

holds. We remark that \Pr[K\geq 2l_{*}]\leq\Pr[K\geq 2l] is trivial since l\leq l_{*}. ∎

We are ready to prove Lemma 5.4.

Proof of Lemma 5.4.

Let l_{*}=8\lceil{\log_{\mu}d}\rceil\lceil{\log_{\mu}n}\rceil for convenience. Notice that X fully covers (x-\epsilon+\frac{1}{n^{2}},x+\epsilon-\frac{1}{n^{2}}) at or after l_{*}\geq\lceil 2\log_{\mu}n\rceil by Lemma 5.3. Then

\displaystyle\mathrm{E}[K^{2}] =\sum_{k=1}^{n}k^{2}\Pr[K=k]
=\sum_{k=1}^{2l_{*}-1}k^{2}\Pr[K=k]+\sum_{k=2l_{*}}^{n}k^{2}\Pr[K=k]
\leq(2l_{*}-1)^{2}\Pr[K\leq 2l_{*}-1]+n^{2}\Pr[K\geq 2l_{*}]
\leq(2l_{*}-1)^{2}+n^{2}\Pr[K\geq 2l_{*}]
\leq(2l_{*}-1)^{2}+1 (by Lemma B.6)
=(16\lceil{\log_{\mu}d}\rceil\lceil{\log_{\mu}n}\rceil-1)^{2}+1

holds. Now the claim is easy. ∎
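To make the order of this bound concrete, the following sketch (ours, with hypothetical sample values c=3, d=2, n=10^{6}) evaluates l_{*}=8\lceil{\log_{\mu}d}\rceil\lceil{\log_{\mu}n}\rceil and the resulting second-moment bound:

```python
import math

# Illustration (not the paper's code): evaluate l_star and the bound
# E[K^2] <= (2*l_star - 1)^2 + 1 from the proof above.
def second_moment_bound(c, d, n):
    mu = c / d  # 1 < mu < 2 assumed
    l_star = 8 * math.ceil(math.log(d, mu)) * math.ceil(math.log(n, mu))
    return (2 * l_star - 1) ** 2 + 1

# For mu = 3/2, the bound grows like (log n)^2 rather than n^2;
# it stays around 10^6 even for n = 10**6 (where n^2 = 10**12).
print(second_moment_bound(3, 2, 10**6))
```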

Appendix C Proof of Lemma 5.5

Lemma C.1 (Lemma 5.5).

Let X be drawn from [x-\epsilon,x+\epsilon] uniformly at random. Let B_{1}\cdots B_{n}=\gamma^{n}(X). Suppose X covers around y\in[x-\epsilon,x+\epsilon] at i, and let \gamma^{n}(y)=b_{1}\cdots b_{n}. Then,

\displaystyle\Pr[B_{i+1}=b_{i+1}\mid\gamma^{i}(X)=\gamma^{i}(y)]=\frac{|T(b_{1}\cdots b_{i}b_{i+1})|}{\mu|T(b_{1}\cdots b_{i})|}

holds.

Proof.

Since X covers around y at i, it is not difficult to see that

\displaystyle\Pr[\gamma^{i}(X)=\mathbf{b}_{i}]=\frac{|S_{i}|}{2\epsilon}\quad\mbox{and}
\displaystyle\Pr[\gamma^{i+1}(X)=\mathbf{b}_{i+1},\gamma^{i}(X)=\mathbf{b}_{i}]=\Pr[\gamma^{i+1}(X)=\mathbf{b}_{i+1}]=\frac{|S_{i+1}|}{2\epsilon}

hold. Thus,

\displaystyle\Pr[\gamma^{i+1}(X)=\mathbf{b}_{i+1}\mid\gamma^{i}(X)=\mathbf{b}_{i}]=\frac{|S_{i+1}|}{|S_{i}|}=\frac{\mu^{i+1}|S_{i+1}|}{\mu^{i+1}|S_{i}|} (37)

holds. Since the iterated tent map f^{i} is piecewise linear, it is not difficult to see that f^{i} preserves the uniform measure (cf. Lemma B.6 in [25]), and hence \mu^{i}|S_{i}|=|T(\gamma^{i}(y))| as well as \mu^{i+1}|S_{i+1}|=|T(\gamma^{i+1}(y))| hold. Then,

\displaystyle(37)=\frac{|T(\gamma^{i+1}(y))|}{\mu|T(\gamma^{i}(y))|}

and we obtain the claim. ∎
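The objects in this proof can be made concrete with a small numerical sketch. This is our own construction, not the paper's code: we assume the tent code sets b_{j}=0 iff f^{j-1}(x)\leq 1/2, and that T(b_{1}\cdots b_{i}) is the cylinder interval of points whose code starts with b_{1}\cdots b_{i}, computed by pulling [0,1] back through the two linear branches of f.

```python
MU = 1.5  # slope parameter of the tent map, 1 < mu < 2

def f(x):
    """The tent map f_mu from Eq. (1)."""
    return MU * x if x <= 0.5 else MU * (1.0 - x)

def tent_code(x, n):
    """First n bits b_1...b_n of the tent code of x (assumed convention)."""
    bits = []
    for _ in range(n):
        bits.append(0 if x <= 0.5 else 1)
        x = f(x)
    return bits

def cylinder(bits):
    """Cylinder interval T(b_1...b_i), built backwards from T(empty) = [0, 1]."""
    lo, hi = 0.0, 1.0
    for b in reversed(bits):
        if b == 0:   # left branch: f(x) = mu*x on [0, 1/2]
            lo, hi = max(0.0, lo / MU), min(0.5, hi / MU)
        else:        # right branch: f(x) = mu*(1-x) on [1/2, 1]
            lo, hi = max(0.5, 1.0 - hi / MU), min(1.0, 1.0 - lo / MU)
    return lo, hi

x = 0.3
lo, hi = cylinder(tent_code(x, 8))
assert lo <= x <= hi  # x lies in the cylinder of its own code
```

Each branch of f has slope \pm\mu, so every pullback step scales interval lengths by 1/\mu up to clipping at the branch boundary; this is the piecewise linearity used in the proof.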