A Smoothed Analysis of the Space Complexity
of Computing a Chaotic Sequence
Abstract
This work is motivated by a question whether it is possible to calculate a chaotic sequence efficiently, e.g., is it possible to get the -th bit of a bit sequence generated by a chaotic map, such as -expansion, tent map and logistic map in time/space? This paper gives an affirmative answer to the question about the space complexity of a tent map. We show that the decision problem of whether a given bit sequence is a valid tent code is solved in space in a sense of the smoothed complexity.
1 Brief Introduction
A tent map (or simply ) is given by
(1) |
where this paper is concerned with the case of . As Figure 1 shows, it is a simple piecewise-linear map looking like a tent. Let recursively for , where for convenience. Clearly, is a deterministic sequence. Nevertheless, the deterministic sequence shows a complex behavior, as if “random,” when . It is said chaotic [23]. For instance, becomes quite different from for as increasing, even if is very small, and it is one of the most significant characters of a chaotic sequence known as the sensitivity to initial conditions — a chaotic sequence is said “unpredictable” despite a deterministic process [22, 31, 37, 6].
From the viewpoint of theoretical computer science, computing chaotic sequences seems to contain (at least) two computational issues: numerical issues and combinatorial issues including computational complexity. This paper is concerned with the computational complexity of a simple problem: Given , and , decide whether . Its time complexity might be one of the most interesting questions; e.g., is it possible to “predict” whether in time polynomial in ? Unfortunately, we in this paper cannot answer the question333 We think that the problem might be NP-hard using the arguments on the complexity of algebra and number theory in [9], but we could not find the fact. . Instead, this paper is concerned with the space complexity of the problem.
![]() |
![]() Is ? |
1.1 Background
Chaos.
Chaotic sequences show many interesting figures such as cobweb, strange attractor, bifurcation, etc. [22, 21, 24, 23, 18]. The chaos theory has been intensively developed in several context such as electrical engineering, information theory, statistical physics, neuroscience and computer science, with many applications, such as weather forecasting, climate change, diastrophism and disaster resilience since the 1960s. For instance, a cellular automaton, including the life game, is a classical topic in computer science, and it is closely related to the “edge of chaos.” For another instance, the “sensitivity to initial conditions” are often regarded as unpredictability, and chaotic sequences are used in pseudo random number generator, cryptography, or heuristics for NP-hard problems including chaotic genetic algorithms.
From the viewpoint of theoretical computer science, the numerical issues of computing chaotic sequences have been intensively investigated in statistical physics, information theory and probability. In contrast, the computational complexity of computing a chaotic sequence seems not well developed. It may be a simple reason that it looks unlike a decision problem.
Tent map: 1-D, piecewise-linear and chaotic.
Interestingly, very simple maps show chaotic behavior. One of the most simplest maps are piece-wise linear maps, including the tent map and the -expansion (a.k.a. Bernoulli shift) which are 1-D maps and the baker’s map which is a 2-D map [22, 30, 26, 27, 31, 37, 6, 13, 29].
The tent map, as well as the -expansion, is known to be topologically conjugate to the logistic map which is a quadratic map cerebrated as a chaotic map. Chaotic behavior of the tent map, in terms of power spectra, band structure, critical behavior, are analyzed in e.g., [22, 31, 37, 6]. The tent map is also used for pseudo random generator or encryption e.g., [1, 2, 20]. It is also used for meta-heuristics for NP-hard problems [35, 10].
Smoothed analysis.
Linear programming is an optimization problem on a (piecewise) linear system, for a linear objective function. Dantzig in 1947 gave an “efficient” algorithm, known as the simplex method, for linear programming. Khachiyan [17] gave the ellipsoid method, and proved that the linear programming is in P (cf. [19]) in 1979. Karmarkar [15] in 1984 gave another polynomial time algorithm, interior point method.
1.2 Contribution
This work.
This paper is concerned with a problem related to deciding whether for for the -th iterated tent map . More precisely, we will define the tent language consisting of tent codes of in Section 2, and we are concerned with the correct recognition of . The main target of the paper is the space complexity of the following simple problem; given a bit sequence and , decide whether is a tent code of . One may think that it is a problem just to compute (), and there is nothing more than the precision issue, even in the sense of computational complexity. However, we will show in Section 2.2 that a standard calculation attended by rounding-off easily allows impossible tent code.
By a standard argument on the numerical error, cf. [19], space is enough to get it. At the same time, it seems hopeless to solve the target problem exactly in space, due to the “sensitivity to initial conditions” of a chaotic sequence. Then, this paper is concerned with a decision problem of whether an input is a tent code of -perturbed , and proves that it is correctly recognized in space (Theorem 2.8).
Related works.
The analysis technique of the paper basically follows [25], which showed that the recognition of is in space in average. The technique is also very similar to or essentially the same as Markov extension developed in the context of symbolic dynamics. In 1979, Hofbauer [14] gave a representation of the kneading invariants for unimodal maps, which is known as the Markov extension and/or Hofbauer tower, and then discussed topological entropy. Hofbauer and Keller extensively developed the arguments in 1980s, see e.g., [7, 3]. We do not think the algorithms of the paper are trivial, but they are composed of the combination of the above nontrivial argument, and some classical techniques of designing space efficient algorithms.
As we stated above, the computational complexity of computing a chaotic sequence seems not well developed. Perl showed some NP-complete systems, e.g., knapsack, shows chaotic behavior [28]. On the other hand, it seems not known whether every chaotic sequence is hard to compute in the sense of NP-hard; particularly we are not sure if the problem is NP-hard for a tent map . Recently, chaotic dynamics are used for solving NP-hard problems e.g., SAT [8].
1.3 Organization
In Section 2, we will define the tent code, describe the issue of a standard calculation rounding-off, and show the precise results of the paper. Section 3 imports some basic technologies from [25]. Section 4 gives a simple algorithm for a valid calculation, as a preliminary step. Section 5 gives a smoothed analysis for the decision problem.
2 Issues and Results
2.1 Tent code
We define a tent-encoding function (or simply ) as follows. For convenience, let for as given , where denote the -times iterated tent map formally given by recursively. Then, the tent code for is a bit-sequence, where
(2) |
and () is recursively given by
(3) |
where denotes bit inversion of , i.e., and . We remark that the definition (3) is rephrased by
(4) |
Proposition 2.1.
Suppose for . Then, .
See [25] for a proof. The proofs are not difficult but lengthy. Thanks to this a little bit artificial definition (3), we obtain the following two more facts.
Proposition 2.2.
For any ,
hold where denotes the lexicographic order, that is and at for and unless .
Proposition 2.3.
The -th iterated tent code is right continuous, i.e., .
These two facts make the arguments simple. The following technical lemmas are useful to prove Propositions 2.2 and 2.3, as well as the arguments in Sections 4 and 5.
Lemma 2.4.
Suppose satisfy . If then
(5) |
holds.
Lemma 2.5.
If for then .
Let (or simply ) denote the set of all -bits tent codes, i.e.,
(6) |
and we call tent language (by ). Note that for . We say is a valid tent code if .
2.2 What is the issue?
A natural problem for the tent code could be calculation: given and , find . By a standard argument (see e.g., [19]), it requires working space to compute , in general. Thus, it is natural in practice to employ rounding-off, like Algorithm 1.
Due to the sensitivity to initial condition of a chaotic sequence, we cannot expect that Algorithm 1 to output exactly, but we hope that it would output some approximation. It could be a natural question whether the output . The following proposition means that Algorithm 1 could output an impossible tent code.
Proposition 2.6.
Algorithm 1 could output .
Proof.
Let , which is slightly greater than the golden ratio , where the golden ratio is a solution of . By our calculation, , and it is, of course, a word in . If we set , meaning that round down the nearest to , then Algorithm 1 outputs , which is not a word of . Similarly for the same , if we set , meaning that round down the nearest to , then Algorithm 1 outputs , which is not a word of . ∎
2.3 Problems and Results
The following problems could be natural in the sense of computational complexity of tent codes.
Problem 1 (Decision).
Given a real and a bit sequence , decide if .
Problem 2 (Calculation).
Given a real and a positive integer , find .
Recalling Proposition 2.6, it seems difficult to solve Problems 1 and 2 exactly, in space. Then, we consider to compute a valid tent code around . Let , we define
for . It is equivalently rephrased by
(7) |
by Propositions 2.1 and 2.2. For the calculation Problem 2, we establish the following simple theorem.
Theorem 2.7 (Approximate calculation).
Then, the following theorem for Problem 1 is the main result of the paper.
Theorem 2.8 (Decision for -perturbed input).
Let be rational given by an irreducible fraction , and let . Given a bit sequence and a real555By a read only tape of infinite length. , Algorithm 3 described in Section 5 accepts it if and rejects it if . If an (-perturbed) instance is given by for uniformly at random then the space complexity of Algorithm 3 is in expectation.
As stated in theorems, this paper assumes rational mainly for the reason of Turing comparability, but it is not essential666 We can establish some arguments for any real similar (but a bit weaker) to the theorems (see also [25]). . Instead, we allow an input instance being a real777 We do not use this fact directly in this paper, but it might be worth to mention it for some conceivable variants in the context of smoothed analysis to draw uniformly at random. , given by a read only tape of infinite length. We remark that the space complexity of Theorem 2.7 is optimal in terms of .
Proof strategy of the theorems.
For proofs, we will introduce the “automaton” for given by [25] in Section 3. Once we get the automaton, Theorem 2.7 is not difficult, and we prove it in Section 4. Algorithm 2 is relatively simple and the space complexity is trivial. Thus, the correctness is the issue, but it is also not very difficult. Then, we give Algorithm 3 and prove Theorem 2.8 in Section 5. The correctness is essentially the same as Algorithm 2. The analysis of the space complexity is the major issue.
3 Underlying Technology
This section briefly introduces some fundamental technology for the analyses in Sections 4 and 5 including the automaton and the Markov chain for , according to [25].
The key idea of a space efficient computation of a tent code is a representation of an equivalent class with respect to . Let
(8) |
for , we call the segment-type of . In fact, is a continuous interval, where one end is open and the other is close. By some straightforward argument with (4), we get the following recursive formula.
Lemma 3.1 ([25]).
Let , and let .
(1) Suppose ().
-
Case 1-1:
.
-
Case 1-1-1.
If then , and .
-
Case 1-1-2.
If then , and .
-
Case 1-1-1.
-
Case 1-2:
. Then , and .
-
Case 1-3:
. Then , and .
(2) Similarly, suppose ().
-
Case 2-1:
.
-
Case 2-1-1.
If then , and .
-
Case 2-1-2.
If then , and .
-
Case 2-1-1.
-
Case 2-2:
. Then , and .
-
Case 2-3:
. Then , and .
Let
(9) |
denote the set of segment-types (of ). It is easy to observe that can grow exponential to , while the following theorem implies that .
Theorem 3.2 ([25]).
Let . Let , and let
(10) |
for . Then,
for , where .
The following lemma, derived from Lemma 3.1, gives an explicit recursive formula of and .
Lemma 3.3 ([25]).
For convenience, we define the level of by
(12) |
if or . Notice that Theorem 3.2 implies that the level of for may be strictly less than . In fact, it happens, which provides a space efficient “automaton”.
State transit machine (“automaton”).
By Theorem 3.2, we can design a space efficient state transit machine888 Precisely, we need a “counter” for the length of the string, while notice that our main goal is not to design an automaton for . Our main target Theorems 2.7 and 2.8 assume a standard Turing machine, where obviously we can count the length of a sequence in space. according to Lemma 3.1, to recognize . We define the set of states by , where is the initial state, and denotes the unique reject state. Let denote the state transition function, defined as follows. Let and . According to Lemma 3.1, we appropriately define
(13) |
for and , as far as and are consistent. If the pair and are inconsistent, we define ; precisely
are the cases, where and .
Lemma 3.4 ([25]).
Let be a rational given by an irreducible fraction . For any , the state transit machine on is represented by bits.
Lemma 3.5 ([25]).
Suppose for that holds for any . Then,
hold for .
Lemma 3.6 ([25]).
Suppose for that holds for any . Then, there exists and such that
hold.
Markov model.
Furthermore, the state transitions preserve the uniform measure, over in the beginning, since the tent map is piecewise linear.
Lemma 3.7 ([25]).
Let be a random variable drawn from uniformly at random. Let . Then,
holds for , where let if .
Let (or simply ) denote a probability distribution over which follows for is uniformly distributed over , i.e., represents the probability of appearing as given the initial condition uniformly at random.
Theorem 3.8 ([25]).
Let be a rational given by an irreducible fraction . Then, it is possible to generate according to in space in expectation (as well as, with high probability).
Thus, we remark that the tent language is recognized in space on average all over the initial condition , by Theorem 3.8.
4 Calculation in “Constant” Space
Theorem 2.7 is easy, once the argument in Section 3 is accepted. Algorithm 2 shows the approximate calculation of so that the output is a valid tent code (recall the issue in Section 2.2). Roughly speaking, Algorithm 2 calculates by rounding-off to bits for the first iterations (lines 5–10), and then traces the automaton within the level after the -th iteration (lines 11–20). The algorithm traces finite automaton, and the desired space complexity is almost trivial. A main issue is the correctness; it is also trivial that the output sequence is a valid tent code, then our goal is to prove . The trick is based on the fact that the tent map is an extension map, which will be used again in the smoothed analysis in the next session.
Then we explain the detail of Algorithm 2. Let denote a binary expression by the rounding off a real to the nearest , where the following argument only requires , meaning that rounding up and down is not essential. Naturally assume that for is represented by bits, meaning that the space complexity of is .
In the algorithm, rationals and respectively denote and for . For descriptive purposes, and corresponds to , and and corresponds to the reject state in . The single bit denotes for (recall Theorem 3.2). Thus, and if , otherwise and see Section 3. The integer represents the transition , where or , given by (13) in Section 3. Notice that if then holds [25]. The pair and represent if , otherwise, i.e., , , at the -th iteration (for ). See Section A for the detail of the subprocesses, Algorithms 4 and 5.
Theorem 4.1 (Theorem 2.7).
Proof.
To begin with, we remark that Algorithm 2 constructs the transition diagram only up to the level . Nevertheless, Algorithm 2 correctly conducts the for-loop in lines 11–20, meaning that all transitions at line 19 are always valid during the loop. This is due to Lemma 3.6, which claims that there exists at least one such that (and also ), meaning that it is a reverse edge. Then, it is easy from Lemma 3.4 that the space complexity of Algorithm 2 is , except for the space of counter (and ) at line 11.
Then, we prove . Since the algorithm follows the transition diagram of (recall Section 3), it is easy to see that , and hence we only need to prove
(14) |
holds. We here only prove while is essentially the same. The proof is similar to that of Proposition 2.2, while the major issue is whether rounding preserves the “ordering,” so does the exact calculation by Lemma 2.4.
For convenience, let denote the value of in the -th iteration of the algorithm, i.e., . Let and let . The proof consists of two parts:
(15) | ||||
(16) |
For the claim (15), notice that by Lemma 2.4. If , Lemma 2.5 implies that , which contradicts to for any . Now we get (15).
For (16), similar to the proof of Proposition 2.2 based on Lemma 2.4, we claim if then
(17) |
hold. The basic argument is essentially the same as the proof of Lemma 2.4, which is based on the argument of segment type (see [25]), and here it is enough to check
if then , as far as | (18) |
meaning that the rounding-off does not disrupt the order. Notice that
(19) |
holds, where the last inequality follows the triangle inequality . Note that holds under the hypothesis . We also remark by definition of . Furthermore, we claim by : note that holds for , and then holds where we use , which implies the desired claim . Then,
(20) |
and we got (18), and hence (16) by Proposition 2.2. By (15) and (16), is easy. Now, we obtain (14) by Proposition 2.2. ∎
5 Smoothed Analysis for Decision
Now, we are concerned with the decision problem, Problem 1. Algorithm 3 efficiently solves the problem with -perturbed input, for Theorem 2.8. Roughly speaking, Algorithm 3 checks whether at line 24, and checks whether for lines 6–22. Lines 25–27 show a deferred update of those parameters, to save the space complexity.
Theorem 5.1 (Theorem 2.8).
Proof.
The correctness proof is essentially the same as that of Theorem 2.7. In the algorithm, line 29 checks whether . We see whether for the first at most iterations, as follows. Let and respectively represent and computed at line 7, 13 or 18 of the th iteration. Then, hold, by the essentially same way as (15) in the proof of Theorem 2.7. Similarly, holds, as well. Thus, is safely rejected at line 8 or 19, as well at line 8 or 13. Thus we obtain the desired decision.
Then we are concerned with the space complexity. The analysis technique is very similar to or essentially the same as [25] for a random generation of . Let be a random variable drawn from the interval uniformly at random. Let Let
(21) |
be a random variable where if or (recall (12) as well as Theorem 3.2). Lemma 3.4 implies that its space complexity is . Lemma 5.2, appearing below, implies
and we obtain the claim. ∎
Lemma 5.2.
Let be rational given by an irreducible fraction . Suppose for that holds for any . Then, .
We remark that the assumption of rational is not essential in Lemma 5.2; the assumption is just for an argument about Turing comparability. We can establish a similar (but a bit weaker) version of Lemma 5.4 for any real (cf. Proposition 5.1 of [25]). Lemma 5.2 is similar to Lemma 4.3 of [25] for random generation, where the major difference is that [25] assumes is uniform on while Lemma 5.2 here assumes is uniform on . We only need to take care of some trouble when the initial condition is around the boundaries of the interval, and .
Suppose for the proof of Lemma 5.2 that a random variable is drawn from the interval uniformly at random. Let and let for . For convenience, let denote . Let . We say covers around at -th iteration () if
(22) |
holds (recall by definition 8). Similarly, we say fully covers () at if covers around every at .
Lemma 5.3.
fully covers at or after iterations.
Proof.
Then, the proof of Lemma 5.2 consists of two parts: one is that the conditional expectation of is on condition that (Lemma 5.4), and the other is that the probability of is small enough to allow Lemma 5.2 from Lemma 5.4. The latter claim is almost trivial (see the proof of Lemma 5.2 below)
The following lemma is the heart of the analysis, which is a version of Lemma 4.3 of [25] for random generation (see also Appendix B, for a proof).
Lemma 5.4.
Let be rational given by an irreducible fraction . Suppose for that holds for any . On condition that fully covers at , the conditional expectation of is .
Lemma 5.4 is supported by the following Lemma 5.5, which is almost trivial from the fact that the iterative tent map is piecewise linear (see Appendix C for a proof).
Lemma 5.5.
Let uniformly at random. Let . Suppose fully covers at , and let . Then, holds.
6 Concluding Remarks
Motivated by the possibility of a valid computation of physical systems, this paper investigated the space complexity of computing a tent code. We showed that a valid approximate calculation is in space, that is optimum in terms of , and gave an algorithm for the valid decision working in space, in a sense of the smoothed complexity where the initial condition is -perturbed from . A future work is an extension to the baker’s map, which is a chaotic map of piecewise but 2-dimensional. For the purpose, we need an appropriately extended notion of the segment-type. Another future work is an extension to the logistic map, which is a chaotic map of 1-dimensional but quadratic. The time complexity of the tent code is another interesting topic to decide as given a rational for a fixed . Is it possible to compute in time polynomial in the input size ? It might be NP-hard, but we could not find a result.
References
- [1] T. Addabbo, M. Alioto, A. Fort, S. Rocchi and V. Vignoli, The digital tent map: Performance analysis and optimized design as a low-complexity source of pseudorandom bits. IEEE Transactions on Instrumentation and Measurement, 55:5 (2006), 1451–1458.
- [2] M. Alawida, J. S. Teh, D. P. Oyinloye, W. H. Alshoura, M. Ahmad and R. S. Alkhawaldeh, A New Hash Function Based on Chaotic Maps and Deterministic Finite State Automata, in IEEE Access, vol. 8, pp. 113163-113174, 2020,
- [3] H. Bruin, Combinatorics of the kneading map, International Journal of Bifurcation and Chaos, 05:05 (1995), 1339–1349.
- [4] S. Boodaghians, J. Brakensiek, S. B. Hopkins and A. Rubinstein, Smoothed complexity of 2-player Nash equilibria, in Proc. FOCS 2020, 271–282.
- [5] X. Chen, C. Guo, E. V. Vlatakis-Gkaragkounis, M. Yannakakis and X. Zhang, Smoothed complexity of local max-cut and binary max-CSP, in Proc. STOC 2020, 1052–1065.
- [6] M. Crampin and B. Heal, On the chaotic behaviour of the tent map Teaching Mathematics and its Applications: An International Journal of the IMA, 13:2 (1994), 83–89.
- [7] W. de Melo and S. van Strien, One-Dimensional Dynamics, Springer-Verlag, 1991.
- [8] M. Ercsey-Ravasz and Z. Toroczkai, Optimization hardness as transient chaos in an analog approach to constraint satisfaction, Nature Physics, 7 (2011), 966–970.
- [9] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman and Company, 1979.
- [10] G. Gharooni-fard, A. Khademzade, F. Moein-darbari, Evaluating the performance of one-dimensional chaotic maps in the network-on-chip mapping problem, IEICE Electronics Express, 6:12 (2009), 811–817. Algorithms and certificates for Boolean CSP refutation: smoothed is no harder than random
- [11] V. Guruswami, P. K. Kothari and P. Manohar, Algorithms and certificates for Boolean CSP refutation: smoothed is no harder than random, in Proc. STOC 2022, 678–689.
- [12] N. Haghtalab, T. Roughgarden and A. Shetty, Smoothed analysis with adaptive adversaries, in Proc. FOCS 2022, 942–953.
- [13] E. Hopf, Ergodentheorie, Springer Verlag, Berlin, 1937.
- [14] F. Hofbauer, On intrinsic ergodicity of piecewise monotonic transformations with positive entropy, Israel Journal of Mathematics, 34:3 (1979), 213–237.
- [15] N. Karmarkar, A new polynomial time algorithm for linear programming, Combinatorica, 4:4 (1984), 373–395.
- [16] A. Kanso, H. Yahyaoui and M. Almulla, Keyed hash function based on chaotic map, Information Sciences, 186 (2012), 249–264.
- [17] L. G. Khachiyan, A polynomial algorithm in linear programming, Soviet Mathematics Doklady, 20 (1979), 191–194.
- [18] T. Kohda, Signal processing using chaotic dynamics, IEICE ESS Fundamentals Review, 2:4 (2008), 16–36, in Japanese.
- [19] B. Korte and J. Vygen, Combinatorial Optimization: Theory and Algorithms, Springer-Verlag, 2018.
- [20] C. Li, G. Luo, K, Qin, C. Li, An image encryption scheme based on chaotic tent map, Nonlinear. Dyn., 87 (2017), 127–133.
- [21] T.Y. Li and J.A. Yorke, Period three implies chaos, Amer. Math. Monthly, 82 (1975), 985–995.
- [22] E.N. Lorenz, Deterministic nonperiodic flow, Journal of Atmospheric Sciences, 20:2 (1963), 130–141.
- [23] E. Lorenz, The Essence of Chaos, University of Washington Press, 1993.
- [24] R. May, Simple mathematical models with very complicated dynamics. Nature, 261 (1976), 459–467.
- [25] N. Okada and S. Kijima, The space complexity of generating tent codes, arXiv:2310.14185 (2023).
- [26] W. Parry, On the -expansions of real numbers, Acta Math. Acad. Sci. Hung., 11 (1960), 401–416.
- [27] W. Parry, Representations for real numbers, Acta Math.Acad. Sci.Hung., 15 (1964), 95–105.
- [28] J. Perl, On chaotic behaviour of some np-complete problems, LNCS, 314 (WG1987), 149–161.
- [29] G. Radons, G. C. Hartmann, H. H. Diebner, O. E. Rossler, Staircase baker’s map generates flaring-type time series, Discrete Dynamics in Nature and Society, 5 (2000), 107–120.
- [30] A. Rényi, Representations for real numbers and their ergodic properties, Acta Mathematica Hungarica, 8:3-4 (1957), 477–493.
- [31] H. Shigematsu, H. Mori, T. Yoshida and H. Okamoto, Analytic study of power spectra of the tent maps near band-splitting transitions, J. Stat. Phys., 30 (1983), 649–679.
- [32] M. Sipser, Introduction to the Theory of Computation, 3rd ed., Cengage Learning, 2012.
- [33] D. A. Spielman and S.-H. Teng, Smoothed analysis: why the simplex algorithm usually takes polynomial time, Journal of the ACM, 51:3 (2004), 385–463.
- [34] D. A. Spielman and S.-H. Teng, Smoothed analysis: An attempt to explain the behavior of algorithms in practice, Communications of the ACM, 52:10 (2009), 76–84.
- [35] J. Xiao, J. Xu, Z. Chen, K. Zhang and L. Pan, A hybrid quantum chaotic swarm evolutionary algorithm for DNA encoding, Computers & Mathematics with Applications, 57:11–12 (2009), 1949–1958.
- [36] H. Yang, K.-W. Wong, X. Liao, Y. Wang and D. Yang, One-way hash function construction based on chaotic map network, Chaos, Solitons & Fractals, 41:5 (2009), 2566–2574.
- [37] T. Yoshida, H. Mori and H. Shigematsu, Analytic study of chaos of the tent map: Band structures, power spectra, and critical behaviors. J. Stat. Phys., 31 (1983), 279–308.
Appendix A Subprocesses
This section shows two subprocesses Algorithms 4 and 5, which are called in Algorithms 2 and 3. Algorithm 4 follows Lemmas 3.1 and 3.3, and Algorithm 5 follows (13).
Appendix B Proof of Lemma 5.4
This Section proves Lemma 5.4.
Lemma B.1 (Lemma 5.4).
Let be rational given by an irreducible fraction . Suppose for that holds for any . On condition that fully covers at , the conditional expectation of is .
The proof strategy of Lemma 5.4 is as follows. Lemma 3.5 implies that a chain must follow the path (or ) to reach level and the probability is (Lemma B.3). We then prove that there exists such that (Lemma B.4), which provides (Lemma B.6). Lemma 5.4 is easy from Lemma B.6.
Let for , i.e., denote the level of the state at -th iteration. We observe the following fact from Lemma 3.5.
Observation B.2.
If visits (resp. ) for the first time then (resp. ) for .
Proof.
By Lemma 3.5, all in-edges to (resp. ) for any come from (resp. ), or a node of level or greater. Since has not visited any level greater than by the hypothesis and the above argument again, we obtain the claim. ∎
By Observation B.2, if a Markov chain visits level for the first time at time then must be . The next lemma gives an upper bound of the probability from level to .
Lemma B.3.
Suppose that covers around appropriate corresponding to at . Then,
Proof.
By Observation B.2, the path from to is unique and
(23) |
holds. We remark that holds for any , meaning that , and hence . ∎
The following lemma is the first mission of the proof of Lemma 5.4.
Lemma B.4.
Let be rational given by an irreducible fraction . Suppose for that holds for any . Then, there exists such that and
(24) |
holds.
To prove Lemma B.4, we remark the following fact.
Lemma B.5.
Let be rational given by an irreducible fraction . Then, for any .
Proof.
By the recursive formula (11), we see that is either or where and . We can denote as () for any . Therefore,
(25) |
holds. Clearly, is an integer, and it is not since . Thus, we obtain . ∎
Then, we prove Lemma B.4.
Proof of Lemma B.4.
For convenience, let for . Assume for a contradiction that (24) never hold for any , where for convenience. In other words,
(26) |
holds every . Thus, we inductively obtain that
(27) |
holds. By the definition of ,
(28) |
holds. Lemma B.5 implies that
(29) |
holds. Then, (27), (28) and (29) imply
(30) |
holds. By taking the of the both sides of (30), we see that
(31) |
holds. Since by definition, it is not difficult to see that
(32) |
holds. Since by definition, it is also not difficult to observe that
(33) |
holds. Equations (31), (32) and (33) imply that , meaning that . At the same time, notice that any segment-type satisfies , meaning that . Contradiction. Thus, we obtain (24) for at least one of .
Finally, we check the size of :
where the last equality follows since and . We obtain a desired . ∎
Lemma B.6.
Let for convenience. Then
holds.
Proof.
For and , Lemma B.4 implies that there exists such that and
(34) |
holds. Let () denote the event that reaches the level for the first time. It is easy to see that
(35) |
holds999Precisely, holds, but we do not use the fact here. by the definition of . We also remark that the event implies not only but also by Observation B.2. It means that
(36) |
holds. Then,
holds. We remark that is trivial since . ∎
We are ready to prove Lemma 5.4.
Appendix C Proof of Lemma 5.5
Lemma C.1 (Lemma 5.5).
Let uniformly at random. Let . Suppose covers around at , and let . Then,
holds.
Proof.
Since covers around at , it is not difficult to see that
hold. Thus,
(37) |
holds. Since the iterative tent map is piecewise linear, it is not difficult to see that preserves the uniform measure (cf. Lemma B.6 in [25]), and hence as well as hold. Then,
and we obtain the claim. ∎