
The Dimension Spectrum Conjecture for Planar Lines

D. M. Stull
Department of Computer Science
Northwestern University
[email protected]
Abstract

Let $L_{a,b}$ be a line in the Euclidean plane with slope $a$ and intercept $b$. The dimension spectrum $\operatorname{sp}(L_{a,b})$ is the set of all effective dimensions of individual points on $L_{a,b}$. Jack Lutz, in the early 2000s, posed the dimension spectrum conjecture. This conjecture states that, for every line $L_{a,b}$, the spectrum of $L_{a,b}$ contains a unit interval.

In this paper we prove that the dimension spectrum conjecture is true. Specifically, let $(a,b)$ be a slope-intercept pair, and let $d=\min\{\dim(a,b),1\}$. For every $s\in[0,1]$, we construct a point $x$ such that $\dim(x,ax+b)=d+s$. Thus, we show that $\operatorname{sp}(L_{a,b})$ contains the interval $[d,1+d]$.

1 Introduction

The effective dimension, $\dim(x)$, of a point $x\in\mathbb{R}^{n}$ gives a fine-grained measure of the algorithmic randomness of $x$. Effective dimension was first defined by J. Lutz [5], and was originally used to quantify the sizes of complexity classes. Unsurprisingly, because of its strong connection to (classical) Hausdorff dimension, effective dimension has proven to be geometrically meaningful [3, 15, 1, 9]. Indeed, an exciting line of research has shown that one can prove classical results in geometric measure theory using effective dimension [7, 10, 11, 13]. Importantly, these are not effectivizations of known results, but new results whose proofs rely on effective methods. Thus, it is of considerable interest to investigate the effective dimensions of points on geometric objects such as lines.

Let $L_{a,b}$ be a line in the Euclidean plane with slope $a$ and intercept $b$. Given the point-wise nature of effective dimension, one can study the dimension spectrum of $L_{a,b}$. That is, the set

$\operatorname{sp}(L_{a,b})=\{\dim(x,ax+b) \mid x\in\mathbb{R}\}$

of all effective dimensions of points on $L_{a,b}$. In the early 2000s, Jack Lutz posed the dimension spectrum conjecture for lines. That is, he conjectured that the dimension spectrum of every line in the plane contains a unit interval.

The first progress on this conjecture was made by Turetsky.

Theorem 1 (Turetsky [18]).

The set of points $x\in\mathbb{R}^{n}$ with $\dim(x)=1$ is connected.

This immediately implies that $1\in\operatorname{sp}(L_{a,b})$ for every line $L_{a,b}$. The next progress on the dimension spectrum conjecture was by Lutz and Stull [11]. They showed that the effective dimension of points on a line is intimately connected to problems in fractal geometry. Among other things, they proved that $1+d\in\operatorname{sp}(L_{a,b})$ for every line $L_{a,b}$, where $d=\min\{\dim(a,b),1\}$. Shortly thereafter, Lutz and Stull [12] proved the dimension spectrum conjecture for the special case where the effective dimension and strong dimension of $(a,b)$ agree.

In this paper, we prove that the dimension spectrum conjecture is true. For every $s\in(0,1)$, we construct a point $x$ such that $\dim(x,ax+b)=d+s$, where $d=\min\{\dim(a,b),1\}$. This, combined with the results of Lutz and Stull, implies that

$[d,1+d]\subseteq\operatorname{sp}(L_{a,b})$,

for every planar line $L_{a,b}$. The proof of the conjecture builds on the techniques of [11]. The primary difficulty of the conjecture is the case when the dimension of $x$ is less than the dimension of the line $(a,b)$. We expand on the nature of this $\dim(x)<\dim(a,b)$ obstacle in Section 3.1. Our main technical contribution is showing how to overcome this difficulty by encoding the information of $a$ into our point $x$. Further complications arise in the "high-dimensional" case, i.e., when $\dim(a,b)>1$. In this case, we combine the encoding idea with a non-constructive argument.

Apart from its intrinsic interest, recent work has shown that the effective dimensions of points have deep connections to problems in classical analysis [10, 11, 13, 17, 8]. Lutz and Lutz [7] proved the point-to-set principle, which characterizes the Hausdorff dimension of a set by the effective dimension of its individual points. Using the point-to-set principle, Lutz and Stull [11] showed that lower bounds on the effective dimensions of points on a line are intimately related to well-known problems of classical geometric measure theory such as the Kakeya and Furstenberg conjectures.

The structure of the paper is as follows. In Section 2, we recall the basic definitions and results of Kolmogorov complexity and effective dimension that we need. In Section 3, we recall the strategy of Lutz and Stull [11] for giving strong lower bounds on the effective dimension of points on a line. In Sections 3 and 3.1 we give intuition about this strategy, and discuss why it is not enough to settle the dimension spectrum conjecture.

In Section 4, we prove the dimension spectrum conjecture for lines with effective dimension at most one. We also give a brief overview of this proof, and of how it overcomes the obstacle discussed in Section 3. In Section 5, we prove the dimension spectrum conjecture for lines with effective dimension greater than one. We also give intuition for this proof, and for how it overcomes the difficulties that arise when the line is high-dimensional.

Finally, in the conclusion, we discuss open questions and avenues for future research.

2 Preliminaries

The conditional Kolmogorov complexity of a binary string $\sigma\in\{0,1\}^{*}$ given a binary string $\tau\in\{0,1\}^{*}$ is

$K(\sigma \mid \tau)=\min_{\pi\in\{0,1\}^{*}}\left\{\ell(\pi):U(\pi,\tau)=\sigma\right\}$,

where $U$ is a fixed universal prefix-free Turing machine and $\ell(\pi)$ is the length of $\pi$. The Kolmogorov complexity of $\sigma$ is $K(\sigma)=K(\sigma\mid\lambda)$, where $\lambda$ is the empty string. Thus, the Kolmogorov complexity of a string $\sigma$ is the length of the shortest program which, when run on a universal Turing machine, eventually halts and outputs $\sigma$. We stress that the choice of universal machine affects the Kolmogorov complexity by at most an additive constant (which, especially for our purposes, can be safely ignored). See [4, 16, 2] for a more comprehensive overview of Kolmogorov complexity.

We can extend these definitions to Euclidean spaces by introducing "precision" parameters [9, 7]. Let $x\in\mathbb{R}^{m}$, and $r,s\in\mathbb{N}$. The Kolmogorov complexity of $x$ at precision $r$ is

$K_{r}(x)=\min\left\{K(p) : p\in B_{2^{-r}}(x)\cap\mathbb{Q}^{m}\right\}$.

The conditional Kolmogorov complexity of $x$ at precision $r$ given $q\in\mathbb{Q}^{m}$ is

$\hat{K}_{r}(x\mid q)=\min\left\{K(p\mid q) : p\in B_{2^{-r}}(x)\cap\mathbb{Q}^{m}\right\}$.

The conditional Kolmogorov complexity of $x$ at precision $r$ given $y\in\mathbb{R}^{n}$ at precision $s$ is

$K_{r,s}(x\mid y)=\max\left\{\hat{K}_{r}(x\mid q) : q\in B_{2^{-s}}(y)\cap\mathbb{Q}^{n}\right\}$.

We abbreviate $K_{r,r}(x\mid y)$ by $K_{r}(x\mid y)$.

The effective Hausdorff dimension and effective packing dimension of a point $x\in\mathbb{R}^{n}$ are

$\dim(x)=\liminf_{r\to\infty}\frac{K_{r}(x)}{r}\quad\text{and}\quad\operatorname{Dim}(x)=\limsup_{r\to\infty}\frac{K_{r}(x)}{r}$.

(Although effective Hausdorff dimension was originally defined by J. Lutz [6] using martingales, it was later shown by Mayordomo [14] that the definition used here is equivalent. For more details on the history of connections between Hausdorff dimension and Kolmogorov complexity, see [2, 15].) Intuitively, these dimensions measure the density of algorithmic information in the point $x$.
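To see these asymptotic definitions in action, the following sketch numerically approximates $\dim(x)$ and $\operatorname{Dim}(x)$ for a toy complexity profile. The function `K` below is a hypothetical stand-in for $K_r(x)$ (true Kolmogorov complexity is uncomputable): it models a point whose binary expansion alternates random and padded dyadic blocks.

```python
def K(r):
    # Hypothetical complexity profile: bit positions in dyadic blocks
    # [2^k, 2^(k+1)) are "random" for even k and "padded" for odd k, so
    # K_r(x) is roughly the number of random positions among the first r.
    total, k, lo = 0, 0, 1
    while lo <= r:
        hi = min(2 * lo - 1, r)
        if k % 2 == 0:
            total += hi - lo + 1
        k, lo = k + 1, 2 * lo
    return total

# dim(x) = liminf K_r(x)/r and Dim(x) = limsup K_r(x)/r; over a finite
# tail of precisions, min and max of the ratios approximate these limits.
ratios = [K(r) / r for r in range(64, 4096)]
print(min(ratios), max(ratios))  # the ratios oscillate near 1/3 and 2/3
```

For this profile the ratios oscillate between roughly $1/3$ and $2/3$, so the effective Hausdorff and packing dimensions of such a point would differ; padding constructions like the one in Section 4 control exactly this kind of oscillation.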

By letting the underlying fixed prefix-free Turing machine $U$ be a universal oracle machine, we may relativize the definitions in this section to an arbitrary oracle set $A\subseteq\mathbb{N}$. The definitions of $K^{A}_{r}(x)$, $\dim^{A}(x)$, $\operatorname{Dim}^{A}(x)$, etc. are then all identical to their unrelativized versions, except that $U$ is given oracle access to $A$. Note that taking oracles as subsets of the naturals is quite general. We can, and frequently do, encode a point $y$ into an oracle, and consider the complexity of a point relative to $y$. In these cases, we typically forgo explicitly referring to this encoding, and write, e.g., $K^{y}_{r}(x)$.

Among the most used results in algorithmic information theory is the symmetry of information. In Euclidean spaces, this was first proved, in a slightly weaker form, in [7], and in the form presented below in [11].

Lemma 2.

For every $m,n\in\mathbb{N}$, $x\in\mathbb{R}^{m}$, $y\in\mathbb{R}^{n}$, and $r,s\in\mathbb{N}$ with $r\geq s$,

  (i) $|K_{r}(x\mid y)+K_{r}(y)-K_{r}(x,y)|\leq O(\log r)+O(\log\log\|y\|)$.

  (ii) $|K_{r,s}(x\mid x)+K_{s}(x)-K_{r}(x)|\leq O(\log r)+O(\log\log\|x\|)$.

2.1 Initial segments versus $K$-optimizing rationals

For $x=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}$ and a precision $r\in\mathbb{N}$, let $x{\upharpoonright}r=(x_{1}{\upharpoonright}r,\ldots,x_{n}{\upharpoonright}r)$, where each

$x_{i}{\upharpoonright}r=2^{-r}\lfloor 2^{r}x_{i}\rfloor$

is the truncation of $x_{i}$ to $r$ bits to the right of the binary point. For $r\in(0,\infty)$, let $x{\upharpoonright}r=x{\upharpoonright}\lceil r\rceil$.
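As a quick sanity check on the truncation operator, here is a direct transcription. This is only a sketch: the paper's arguments work with exact binary expansions, while floating point merely approximates them.

```python
import math

def truncate(x, r):
    """x|r = 2^(-r) * floor(2^r * x): keep r bits right of the binary
    point.  Non-integer precisions round up, matching x|r := x|ceil(r)."""
    r = math.ceil(r)
    return math.floor(x * 2**r) / 2**r

# The defining property of truncation: 0 <= x - x|r < 2^(-r).
x = math.pi
assert 0 <= x - truncate(x, 10) < 2**-10
print(truncate(x, 4))  # 3.125
```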

We can relate the complexity $K_{r}(x)$ of $x$ at precision $r$ to the initial segment complexity $K(x{\upharpoonright}r)$ of the binary representation of $x$. Lutz and Stull [11] proved the following lemma, and its corollaries, relating these two quantities. Informally, it shows that, up to a logarithmic error, the two quantities are equivalent.

Lemma 3.

For every $m,n\in\mathbb{N}$, there is a constant $c$ such that for all $x\in\mathbb{R}^{m}$, $p\in\mathbb{Q}^{n}$, and $r\in\mathbb{N}$,

$|\hat{K}_{r}(x\mid p)-K(x{\upharpoonright}r \mid p)|\leq K(r)+c$.

This has the following two useful corollaries.

Corollary 4.

For every $m\in\mathbb{N}$, there is a constant $c$ such that for every $x\in\mathbb{R}^{m}$ and $r\in\mathbb{N}$,

$|K_{r}(x)-K(x{\upharpoonright}r)|\leq K(r)+c$.

Corollary 5.

For every $m,n\in\mathbb{N}$, there is a constant $c$ such that for all $x\in\mathbb{R}^{m}$, $y\in\mathbb{R}^{n}$, and $r,s\in\mathbb{N}$,

$|K_{r,s}(x\mid y)-K(x{\upharpoonright}r \mid y{\upharpoonright}s)|\leq K(r)+K(s)+c$.

3 Previous Work

The proof of our main theorem will use the tools and techniques introduced by Lutz and Stull [11]. In this section we will state the main lemmas needed for this paper. We will devote some time giving intuition about each lemma. In Subsection 3.1, we give an informal discussion on how to combine these lemmas to give bounds on the effective dimensions of points on a line. We will also discuss where these tools break down, motivating the techniques introduced in this paper.

The first lemma, informally, states the following. Suppose that $L_{a,b}$ passes through $(x,ax+b)$ and the complexity of $(a,b)$ is low (item (i)). Further assume (item (ii)) that, if $L_{u,v}$ is any other line passing through $(x,ax+b)$ such that $\|(a,b)-(u,v)\|<2^{-m}$, then either

  1. $(u,v)$ is of high complexity, or

  2. $(u,v)$ is very close to $(a,b)$.

Then it is possible to compute an approximation of $(a,b)$ given an approximation of $(x,ax+b)$ and the first $m$ bits of $(a,b)$. Indeed, we can simply enumerate over all low-complexity lines, since we know that the only candidate is essentially $(a,b)$.

Lemma 6 (Lutz and Stull [11]).

Suppose that $A\subseteq\mathbb{N}$, $a,b,x\in\mathbb{R}$, $m,r\in\mathbb{N}$, $\delta\in\mathbb{R}_{+}$, and $\varepsilon,\eta\in\mathbb{Q}_{+}$ satisfy $r\geq\log(2|a|+|x|+5)+1$ and the following conditions.

  (i) $K^{A}_{r}(a,b)\leq(\eta+\varepsilon)r$.

  (ii) For every $(u,v)\in B_{2^{-m}}(a,b)$ such that $ux+v=ax+b$,

    $K^{A}_{r}(u,v)\geq(\eta-\varepsilon)r+\delta\cdot(r-t)$,

    whenever $t=-\log\|(a,b)-(u,v)\|\in(0,r]$.

Then,

$K_{r}^{A}(a,b,x)\leq K_{r}(x,ax+b)+K_{m,r}(a,b \mid x,ax+b)+\frac{4\varepsilon}{\delta}r+K(\varepsilon,\eta)+O(\log r)$.

The second lemma which will be important in proving our main theorem is the following. It is essentially the approximation version of the simple geometric fact that two distinct non-parallel lines intersect in a single point. In other words, if $ax+b=ux+v$ and you are given an approximation of $(a,b)$ and an approximation of $(u,v)$, then you can compute an approximation of $x$. Moreover, the quality of the approximation of $x$ depends linearly on the distance between $(u,v)$ and $(a,b)$.

Lemma 7 ([11]).

Let $a,b,x\in\mathbb{R}$. For all $(u,v)\in B_{1}(a,b)$ such that $ux+v=ax+b$, and for all $r\geq t:=-\log\|(a,b)-(u,v)\|$,

$K_{r}(u,v)\geq K_{t}(a,b)+K_{r-t,r}(x\mid a,b)-O(\log r)$.

The primary function of this lemma is to give a lower bound on the complexity of any line passing through $(x,ax+b)$, i.e., ensuring condition (ii) of the previous lemma.

Finally, we also need the following oracle construction of Lutz and Stull. The purpose of this lemma is to show that we can lower the complexity of our line $(a,b)$, thus ensuring item (i) of Lemma 6. Crucially, we can lower this complexity using only the information contained in $(a,b)$.

Lemma 8 ([11]).

Let $r\in\mathbb{N}$, $z\in\mathbb{R}^{2}$, and $\eta\in\mathbb{Q}\cap[0,\dim(z)]$. Then there is an oracle $D=D(r,z,\eta)$ satisfying

  (i) For every $t\leq r$, $K^{D}_{t}(z)=\min\{\eta r,K_{t}(z)\}+O(\log r)$.

  (ii) For every $m,t\in\mathbb{N}$ and $y\in\mathbb{R}^{m}$, $K^{D}_{t,r}(y\mid z)=K_{t,r}(y\mid z)+O(\log r)$ and $K_{t}^{z,D}(y)=K_{t}^{z}(y)+O(\log r)$.

3.1 Combining the lemmas

We now briefly discuss the strategy of [11], which combines the above lemmas to give non-trivial bounds on the effective dimension of points on a line. Suppose $(a,b)$ is a line with $\dim(a,b)=d$, and $x$ is a point with $\dim^{a,b}(x)=s$. We will also make the crucial assumption that $d\leq s$. Roughly, Lutz and Stull showed that, for sufficiently large $r$,

$K_{r}(x,ax+b)\geq(s+d)r$.

The strategy is as follows. Note that, to simplify the exposition, all inequalities in this discussion will be approximate. Using Lemma 8, we find an oracle $D$ which reduces the complexity of $(a,b)$ to $\eta r$ for some $\eta\leq d$, i.e., $K^{D}_{r}(a,b)=\eta r$. Combining this with Lemma 7, we get a lower bound on the complexity of every line $(u,v)$ passing through $(x,ax+b)$. That is, we show for any such line,

$K^{D}_{r}(u,v)\geq\eta t+s(r-t)-O(\log r)$.

By our choice of $\eta$, we can simplify this inequality to get

$K^{D}_{r}(u,v)\geq sr-O(\log r)$.

In particular, relative to $D$, both conditions of Lemma 6 are satisfied and we have the sufficient lower bound.

In the previous sketch, it was crucial that the dimension of $(a,b)$ was less than $s$, in order for the lower bound from Lemma 7 to be useful. In the case where $\dim(a,b)$ is much larger than $\dim(x)$, this strategy breaks down, and further techniques are required.

We also note that this seems to be a very deep issue. As discussed in the Introduction, the point-to-set principle of J. Lutz and N. Lutz [7] allows us to translate problems from (classical) geometric measure theory into problems about effective dimension. The same issue discussed in this section occurs when attacking the notorious Kakeya and Furstenberg set conjectures using the point-to-set principle. While resolving this obstacle in full generality is still elusive, we are able to get around it in the context of the Dimension Spectrum Conjecture.

4 Low-Dimensional Lines

In this section, we prove the spectrum conjecture for lines with $\dim(a,b)\leq 1$.

Theorem 9.

Let $(a,b)\in\mathbb{R}^{2}$ be a slope-intercept pair with $\dim(a,b)\leq 1$. Then for every $s\in[0,1]$, there is a point $x\in\mathbb{R}$ such that

$\dim(x,ax+b)=s+\dim(a,b)$.

We begin by giving an intuitive overview of the proof.

4.1 Overview of the proof

As mentioned in Section 3.1, the main obstacle to the Dimension Spectrum Conjecture occurs when the dimension of $x$ is lower than the dimension of the line $(a,b)$. In general, this issue is still formidable. However, in the Dimension Spectrum Conjecture, we are given the freedom to specifically construct the point $x$, allowing us to overcome this obstacle.

The most natural way to construct a sequence $x$ with $\dim^{a,b}(x)=s$ is to start with a random sequence, and pad it with long strings of zeros. This simple construction, unfortunately, does not seem to work.

We are able to overcome the obstacle by padding the random sequence with the bits of $a$, instead of with zeros. Thus, given an approximation of $(x,ax+b)$, we trivially have a decent approximation of $a$ (formalized in Lemma 10). This allows us, using Lemma 6, to restrict our search for $(a,b)$ to a smaller set of candidate lines.

4.2 Proof for low-dimensional lines

Fix a slope-intercept pair $(a,b)$, and let $d=\dim(a,b)$. Let $s\in(0,d)$. Let $y\in\mathbb{R}$ be random relative to $(a,b)$. Thus, for every $r\in\mathbb{N}$,

$K^{a,b}_{r}(y)\geq r-O(\log r)$.

Define the sequence of natural numbers $\{h_{j}\}_{j\in\mathbb{N}}$ inductively as follows. Define $h_{0}=1$. For every $j>0$, let

$h_{j}=\min\left\{h\geq 2^{h_{j-1}}:K_{h}(a,b)\leq\left(d+\frac{1}{j}\right)h\right\}$.

Note that $h_{j}$ always exists, since $\dim(a,b)=d$ guarantees infinitely many precisions $h$ with $K_{h}(a,b)\leq(d+\frac{1}{j})h$. For every $r\in\mathbb{N}$, let

$x[r]=\begin{cases}a[r-\lfloor sh_{j}\rfloor]&\text{ if }r\in(\lfloor sh_{j}\rfloor,h_{j}]\text{ for some }j\in\mathbb{N}\\ y[r]&\text{ otherwise,}\end{cases}$

where $x[r]$ is the $r$th bit of $x$. Define $x\in\mathbb{R}$ to be the real number with this binary expansion.
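The interleaving above can be sketched concretely. The following toy version (the helper `construct_x_bits` is hypothetical, and the scales used in the example below are far smaller than the tower $h_j\geq 2^{h_{j-1}}$ required by the construction) returns the first $n$ bits of $x$, copying bits of $a$ inside each window $(\lfloor sh_j\rfloor,h_j]$ and bits of the random $y$ elsewhere:

```python
import math

def construct_x_bits(a_bits, y_bits, hs, s, n):
    """First n bits of x: x[r] = a[r - floor(s*h_j)] when r lies in the
    window (floor(s*h_j), h_j] for some j, and x[r] = y[r] otherwise."""
    x = []
    for r in range(1, n + 1):  # bit positions are 1-indexed as in the paper
        window = next((h for h in hs if math.floor(s * h) < r <= h), None)
        if window is not None:
            x.append(a_bits[r - math.floor(s * window) - 1])  # bits of a
        else:
            x.append(y_bits[r - 1])                           # bits of y
    return x
```

Inside each window the bits of $a$ restart from the first bit of $a$; this restart is what makes Lemma 10 work, since the top of every window reveals an initial segment of $a$.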

One of the most important aspects of our construction is that we encode (a subset of) the information of $a$ into our point $x$. This is formalized in the following lemma.

Lemma 10.

For every $j\in\mathbb{N}$, and every $r$ such that $sh_{j}<r\leq h_{j}$,

$K_{r-sh_{j},r}(a,b \mid x,ax+b)\leq O(\log h_{j})$.

Proof.

By definition, the last $r-sh_{j}$ bits of $x$ are equal to the first $r-sh_{j}$ bits of $a$. That is,

$x[sh_{j}]\,x[sh_{j}+1]\ldots x[r]=a[0]\,a[1]\ldots a[r-sh_{j}]=a{\upharpoonright}(r-sh_{j})$.

Therefore, since additional information cannot increase Kolmogorov complexity,

$K_{r-sh_{j},r}(a \mid x,ax+b)\leq K_{r-sh_{j},r}(a \mid x)\leq O(\log h_{j})$.

Note that, given $2^{-(r-sh_{j})}$-approximations of $a$, $x$, and $ax+b$, it is possible to compute an approximation of $b$. That is,

$K_{r-sh_{j}}(b \mid a,x,ax+b)\leq O(\log h_{j})$.

Therefore, by Lemma 2 and the two above inequalities,

$K_{r-sh_{j},r}(a,b \mid x,ax+b)=K_{r-sh_{j},r}(a \mid x,ax+b)+K_{r-sh_{j},r}(b \mid a,x,ax+b)+O(\log r)\leq O(\log h_{j})$. ∎

The other important property of our construction is that $(a,b)$ gives no information about $x$, beyond the information specifically encoded into $x$.

Lemma 11.

For every $j\in\mathbb{N}$, the following hold.

  1. $K^{a,b}_{t}(x)\geq t-O(\log h_{j})$, for all $t\leq sh_{j}$.

  2. $K^{a,b}_{r}(x)\geq sh_{j}+r-h_{j}-O(\log h_{j})$, for all $h_{j}\leq r\leq sh_{j+1}$.

Proof.

We first prove item (1). Let $t\leq sh_{j}$. Then, by our construction of $x$, and choice of $y$,

$K^{a,b}_{t}(x)\geq K^{a,b}_{t}(y)-h_{j-1}-O(\log t)\geq t-O(\log t)-\log h_{j}-O(\log t)\geq t-O(\log h_{j})$.

For item (2), let $h_{j}\leq r\leq sh_{j+1}$. Then, by item (1), Lemma 2 and our construction of $x$,

$K^{a,b}_{r}(x)=K^{a,b}_{h_{j}}(x)+K^{a,b}_{r,h_{j}}(x)-O(\log r)$  [Lemma 2]
$\geq sh_{j}+K^{a,b}_{r,h_{j}}(x)-O(\log r)$  [Item (1)]
$\geq sh_{j}+K^{a,b}_{r,h_{j}}(y)-O(\log r)$
$\geq sh_{j}+r-h_{j}-O(\log r)$,

and the proof is complete. ∎

We now prove bounds on the complexity of our constructed point. We break the proof into two parts.

In the first, we give lower bounds on $K_{r}(x,ax+b)$ at precisions $sh_{j}<r\leq h_{j}$. Intuitively, the proof proceeds as follows. Since $r>sh_{j}$, being given $(x,ax+b)$ to precision $r$ immediately gives a $2^{-(r-sh_{j})}$-approximation of $(a,b)$. Thus, we only have to search for candidate lines $(u,v)$ which satisfy $\|(a,b)-(u,v)\|<2^{-(r-sh_{j})}$. Then, because of the resulting lower bound on $t=-\log\|(a,b)-(u,v)\|$, the complexity $K_{r-t}(x)$ is maximal. In other words, we are essentially in the case that the complexity of $x$ is high. Thus, we are able to use the method described in Section 3.1. We now formalize this intuition.

Lemma 12.

For every $\gamma>0$ and all sufficiently large $j\in\mathbb{N}$,

$K_{r}(x,ax+b)\geq(s+d)r-\gamma r$,

for every $r\in(sh_{j},h_{j}]$.

Proof.

Let $\eta\in\mathbb{Q}$ be such that

$d-\gamma/4<\eta<d-\gamma^{2}$.

Let $\varepsilon\in\mathbb{Q}$ be such that

$\varepsilon<\gamma(d-\eta)/16$.

Note that

$\frac{4\varepsilon}{1-\eta}\leq\frac{\gamma}{4}$.

We also note that, since $\eta$ and $\varepsilon$ are constant,

$K(\eta,\varepsilon)=O(1)$.

Let $D=D(r,(a,b),\eta)$ be the oracle of Lemma 8 and let $\delta=1-\eta$.

Let $(u,v)$ be a line such that $t:=-\log\|(a,b)-(u,v)\|\geq r-sh_{j}$, and $ux+v=ax+b$. Note that $r-t\leq sh_{j}$. Then, by Lemma 7, Lemma 8 and Lemma 11(1),

$K^{D}_{r}(u,v)\geq K^{D}_{t}(a,b)+K^{D}_{r-t,r}(x \mid a,b)-O(\log r)$  [Lemma 7]
$\geq K^{D}_{t}(a,b)+K_{r-t,r}(x \mid a,b)-O(\log r)$  [Lemma 8]
$\geq K^{D}_{t}(a,b)+r-t-O(\log r)$.  [Lemma 11(1)]

There are two cases by Lemma 8. For the first, assume that $K^{D}_{t}(a,b)=\eta r$. Then

$K^{D}_{r}(u,v)\geq\eta r+r-t-O(\log r)$
$\geq(\eta-\varepsilon)r+r-t$  [$r$ is large]
$\geq(\eta-\varepsilon)r+(1-\eta)(r-t)$.

For the second, assume that $K^{D}_{t}(a,b)=K_{t}(a,b)$. Then

$K^{D}_{r}(u,v)\geq K_{t}(a,b)+r-t-O(\log r)$
$\geq dt-o(t)+r-t-O(\log r)$  [Definition of $\dim$]
$\geq\eta r+(1-\eta)r-t(1-d)-\varepsilon r$  [$r$ is large]
$\geq\eta r-\varepsilon r+(1-\eta)(r-t)$  [$d>\eta$]
$\geq(\eta-\varepsilon)r+(1-\eta)(r-t)$.   (1)

Therefore, in either case, we may apply Lemma 6:

$K_{r}(x,ax+b)\geq K_{r}^{D}(a,b,x)-K_{r-sh_{j},r}(a,b \mid x,ax+b)-\frac{4\varepsilon}{1-\eta}r-K(\eta,\varepsilon)-O(\log r)$  [Lemma 6]
$\geq K_{r}^{D}(a,b,x)-K_{r-sh_{j},r}(a,b \mid x,ax+b)-\frac{\gamma}{4}r-\frac{\gamma}{8}r$
$=K_{r}^{D}(a,b,x)-K_{r-sh_{j},r}(a,b \mid x,ax+b)-\frac{3\gamma}{8}r$.   (2)

By Lemma 11(1), our construction of the oracle $D$, and the symmetry of information,

$K_{r}^{D}(a,b,x)=K_{r}^{D}(a,b)+K_{r}^{D}(x \mid a,b)-O(\log r)$  [Lemma 2]
$=K_{r}^{D}(a,b)+K_{r}(x \mid a,b)-O(\log r)$  [Lemma 8(ii)]
$\geq\eta r+K_{r}(x \mid a,b)-O(\log r)$  [Lemma 8(i)]
$\geq\eta r+sh_{j}-O(\log r)$  [Lemma 11(1)]
$\geq\eta r+sh_{j}-\frac{\gamma}{4}r$.   (3)

Finally, by Lemma 10,

$K_{r-sh_{j},r}(a,b\mid x,ax+b)\leq\frac{\gamma}{8}r$.   (4)

Together, inequalities (2), (3) and (4) imply that

$K_{r}(x,ax+b)\geq K_{r}^{D}(a,b,x)-K_{r-sh_{j},r}(a,b \mid x,ax+b)-\frac{3\gamma}{8}r$
$\geq\eta r+sh_{j}-\frac{\gamma}{4}r-\frac{\gamma}{8}r-\frac{3\gamma}{8}r$
$\geq dr-\frac{\gamma}{4}r+sh_{j}-\frac{3\gamma}{4}r$
$\geq dr+sh_{j}-\gamma r$
$\geq(s+d)r-\gamma r$,

and the proof is complete. ∎

We now give lower bounds on the complexity of our point, $K_{r}(x,ax+b)$, when $h_{j}<r\leq sh_{j+1}$. Intuitively, the proof proceeds as follows. Using the previous lemma, we can, given a $2^{-h_{j}}$-approximation of $(x,ax+b)$, compute a $2^{-h_{j}}$-approximation of $(a,b)$. Thus, we only have to compute the last $r-h_{j}$ bits of $(a,b)$. Importantly, since $r>h_{j}$, the last $r-h_{j}$ bits of $x$ are maximal. Hence, we can simply lower the complexity of the last $r-h_{j}$ bits of $(a,b)$ to roughly $s(r-h_{j})$. Thus, we are again, essentially, in the case where $\dim(x)\geq\dim(a,b)$, and the techniques of Section 3.1 work. We now formalize this intuition.

Lemma 13.

For every $\gamma>0$ and all sufficiently large $j\in\mathbb{N}$,

$K_{r}(x,ax+b)\geq(s+d)r-\gamma r$,

for every $r\in(h_{j},sh_{j+1}]$.

Proof.

Recall that we are assuming that $s<d$. Let $\hat{s}\in\mathbb{Q}\cap(0,s)$ be a dyadic rational such that

$\gamma/8<s-\hat{s}<\gamma/4$.

Let $\hat{d}\in\mathbb{Q}\cap(0,d)$ be a dyadic rational such that

$\gamma/8<d-\hat{d}<\gamma/4$.

Define

$\alpha=\frac{s(r-h_{j})+dh_{j}}{r}$,

and $\eta\in\mathbb{Q}\cap(0,\alpha)$ by

$\eta=\frac{\hat{s}(r-h_{j})+\hat{d}h_{j}}{r}$.

Finally, let $\varepsilon=\gamma^{2}/64$. Note that

$\alpha-\eta=\frac{s(r-h_{j})+dh_{j}-\hat{s}(r-h_{j})-\hat{d}h_{j}}{r}=\frac{(s-\hat{s})(r-h_{j})+(d-\hat{d})h_{j}}{r}\leq\frac{\frac{\gamma}{4}(r-h_{j})+\frac{\gamma}{4}h_{j}}{r}=\frac{\gamma}{4}$.   (5)

Similarly,

$\alpha-\eta=\frac{(s-\hat{s})(r-h_{j})+(d-\hat{d})h_{j}}{r}>\frac{\frac{\gamma}{8}(r-h_{j})+\frac{\gamma}{8}h_{j}}{r}=\frac{\gamma}{8}$.   (6)

In particular,

$\frac{4\varepsilon}{\alpha-\eta}\leq\gamma/4$.   (7)

We also note that

$K(\varepsilon,\eta)\leq K(\gamma,\hat{s},\hat{d},r,h_{j})\leq O(\log r)$,   (8)

since $j$ was chosen to be sufficiently large and $\gamma$ is constant.

Finally, let $D=D(r,(a,b),\eta)$ be the oracle of Lemma 8. Note that we chose $D$ so that, roughly, $D$ lowers the complexity of the last $r-h_{j}$ bits of $(a,b)$ to $s(r-h_{j})$.

Let $(u,v)$ be a line such that $t:=-\log\|(a,b)-(u,v)\|\geq h_{j}$, and $ux+v=ax+b$. Then, by Lemmas 7, 8 and 11,

$K^{D}_{r}(u,v)\geq K^{D}_{t}(a,b)+K^{D}_{r-t,r}(x \mid a,b)-O(\log r)$  [Lemma 7]
$\geq K^{D}_{t}(a,b)+K_{r-t,r}(x \mid a,b)-O(\log r)$  [Lemma 8]
$\geq K^{D}_{t}(a,b)+s(r-t)-O(\log r)$.  [Lemma 11(1)]

There are two cases. In the first, $K^{D}_{t}(a,b)=\eta r$. Then,

$K^{D}_{r}(u,v)\geq\eta r+s(r-t)-O(\log r)$
$\geq(\eta-\varepsilon)r+s(r-t)$
$\geq(\eta-\varepsilon)r+(\alpha-\eta)(r-t)$.

In the other case, $K^{D}_{t}(a,b)=K_{t}(a,b)$. Then,

$K^{D}_{r}(u,v)\geq K_{t}(a,b)+s(r-t)-O(\log r)$
$\geq dt-o(t)+s(r-t)-O(\log r)$  [Definition of $\dim$]
$=dh_{j}+d(t-h_{j})+s(r-t)-o(r)$
$=dh_{j}+d(t-h_{j})+s(r-h_{j})-s(t-h_{j})-o(r)$
$=\alpha r+(d-s)(t-h_{j})-o(r)$
$=\eta r+(\alpha-\eta)r+(d-s)(t-h_{j})-o(r)$
$\geq\eta r+(\alpha-\eta)(r-t)-o(r)$
$\geq(\eta-\varepsilon)r+(\alpha-\eta)(r-t)$.

Therefore we may apply Lemma 6, which yields

$K^{D}_{r}(a,b,x)\leq K_{r}(x,ax+b)+K^{D}_{h_{j},r}(a,b \mid x,ax+b)+\frac{4\varepsilon}{\alpha-\eta}r+K(\varepsilon,\eta)+O(\log r)$  [Lemma 6]
$\leq K_{r}(x,ax+b)+K^{D}_{h_{j},r}(a,b \mid x,ax+b)+\frac{\gamma}{4}r+\frac{\gamma}{8}r$  [Choice of $\eta,\varepsilon$]
$=K_{r}(x,ax+b)+K^{D}_{h_{j},r}(a,b \mid x,ax+b)+\frac{3\gamma}{8}r$.   (9)

By Lemma 11, and our construction of the oracle $D$,

$K_{r}^{D}(a,b,x)=K_{r}^{D}(a,b)+K_{r}^{D}(x \mid a,b)-O(\log r)$  [Lemma 2]
$=\eta r+K_{r}(x \mid a,b)-O(\log r)$  [Lemma 8]
$\geq\eta r+sh_{j}+r-h_{j}-O(\log r)$  [Lemma 11(2)]
$\geq\alpha r-\frac{\gamma}{4}r+sh_{j}+r-h_{j}-O(\log r)$
$\geq s(r-h_{j})+dh_{j}-\frac{\gamma}{4}r+sh_{j}+r-h_{j}-O(\log r)$
$\geq(1+s)r-(1-d)h_{j}-\frac{\gamma}{4}r$.   (10)

By Lemma 12, Lemma 2, and the fact that additional information cannot increase Kolmogorov complexity,

$K_{h_{j},r}(a,b \mid x,ax+b)\leq K_{h_{j},h_{j}}(a,b \mid x,ax+b)$
$=K_{h_{j}}(a,b,x)-K_{h_{j}}(x,ax+b)$  [Lemma 2]
$=K_{h_{j}}(a,b)+K_{h_{j}}(x\mid a,b)-K_{h_{j}}(x,ax+b)$  [Lemma 2]
$\leq K_{h_{j}}(a,b)+sh_{j}-K_{h_{j}}(x,ax+b)$  [Lemma 11]
$\leq K_{h_{j}}(a,b)+sh_{j}-(s+d)h_{j}+\frac{\gamma}{16}h_{j}$  [Lemma 12]
$\leq dh_{j}+h_{j}/j-dh_{j}+\frac{\gamma}{16}r$  [Definition of $h_{j}$]
$\leq\frac{\gamma}{8}r$.   (11)

Combining inequalities (9), (10) and (11), we see that

Kr(x,ax+b)\displaystyle K_{r}(x,ax+b) KrD(a,b,x)γ8r3γ8r\displaystyle\geq K_{r}^{D}(a,b,x)-\frac{\gamma}{8}r-\frac{3\gamma}{8}r
(1+s)r(1d)hjγ4rγ2r\displaystyle\geq(1+s)r-(1-d)h_{j}-\frac{\gamma}{4}r-\frac{\gamma}{2}r
(1+s)r(1d)hjγr.\displaystyle\geq(1+s)r-(1-d)h_{j}-\gamma r.

Note that, since d1d\leq 1, and hjrh_{j}\leq r,

(1+s)rhj(1d)(s+d)r\displaystyle(1+s)r-h_{j}(1-d)-(s+d)r =r(1d)hj(1d)\displaystyle=r(1-d)-h_{j}(1-d)
=(rhj)(1d)\displaystyle=(r-h_{j})(1-d)
0.\displaystyle\geq 0.

Thus,

Kr(x,ax+b)\displaystyle K_{r}(x,ax+b) (1+s)rhj(1d)γr\displaystyle\geq(1+s)r-h_{j}(1-d)-\gamma r
(s+d)rγr,\displaystyle\geq(s+d)r-\gamma r,

and the proof is complete for the case s<dim(a,b)s<\dim(a,b).

We are now able to prove our main theorem.

Proof of Theorem 9.

Let (a,b)2(a,b)\in\mathbb{R}^{2} be a slope-intercept pair with

d=dim(a,b)1d=\dim(a,b)\leq 1.

Let s[0,1]s\in[0,1]. If s=0s=0, take x=ax=a. Then

Kr(a,a2+b)\displaystyle K_{r}(a,a^{2}+b) =Kr(a)+Kr(a2+b|a)+O(logr)\displaystyle=K_{r}(a)+K_{r}(a^{2}+b\,|\,a)+O(\log r)
=Kr(a)+Kr(b|a)+O(logr)\displaystyle=K_{r}(a)+K_{r}(b\,|\,a)+O(\log r)
=Kr(a,b)+O(logr),\displaystyle=K_{r}(a,b)+O(\log r),

and so the conclusion holds.

If s=1s=1, then by [11], for any point xx which is random relative to (a,b)(a,b),

dim(x,ax+b)=1+d\dim(x,ax+b)=1+d,

and the claim follows.

If sds\geq d, then Lutz and Stull [11] showed that for any xx such that

dima,b(x)=dim(x)=s\dim^{a,b}(x)=\dim(x)=s,

we have dim(x,ax+b)=s+d\dim(x,ax+b)=s+d.

Therefore, we may assume that s(0,1)s\in(0,1) and s<ds<d. Let xx be the point constructed in this section. Let γ>0\gamma>0. Let jj be large enough so that the conclusions of Lemmas 12 and 13 hold for these choices of (a,b)(a,b), xx, ss and γ\gamma. Then, by Lemmas 12 and 13,

dim(x,ax+b)\displaystyle\dim(x,ax+b) =lim infrKr(x,ax+b)r\displaystyle=\liminf_{r\rightarrow\infty}\frac{K_{r}(x,ax+b)}{r}
lim infr(s+d)rγrr\displaystyle\geq\liminf_{r\rightarrow\infty}\frac{(s+d)r-\gamma r}{r}
=s+dγ.\displaystyle=s+d-\gamma.

Since we chose γ\gamma arbitrarily, we see that

dim(x,ax+b)s+d\dim(x,ax+b)\geq s+d.

For the upper bound, let jj\in\mathbb{N} be sufficiently large. Then

Khj(x,ax+b)\displaystyle K_{h_{j}}(x,ax+b) Khj(x,a,b)\displaystyle\leq K_{h_{j}}(x,a,b)
=Khj(a,b)+Khj(x|a,b)\displaystyle=K_{h_{j}}(a,b)+K_{h_{j}}(x\,|\,a,b)
dhj+shj\displaystyle\leq dh_{j}+sh_{j}
=(d+s)hj.\displaystyle=(d+s)h_{j}.

Therefore,

dim(x,ax+b)s+d\dim(x,ax+b)\leq s+d,

and the proof is complete. ∎

5 High-Dimensional Lines

In this section we show that the Dimension Spectrum Conjecture holds for lines of high dimension, i.e., for lines La,bL_{a,b} such that dim(a,b)>1\dim(a,b)>1. That is, we will prove the following theorem.

Theorem 14.

Let (a,b)2(a,b)\in\mathbb{R}^{2} be a slope-intercept pair with dim(a,b)>1\dim(a,b)>1. Then for every s[0,1]s\in[0,1], there is a point xx\in\mathbb{R} such that

dim(x,ax+b)=1+s\dim(x,ax+b)=1+s.

5.1 Overview of proof

In this case, we again apply the essential insight of the proof for low-dimensional lines, namely, encoding (a subset of) the information of aa into xx. However, when dim(a,b)>1\dim(a,b)>1, constructing xx as before potentially causes a problem. Specifically, in this case, the previous construction might cause dim(x,ax+b)\dim(x,ax+b) to become too large.

To overcome this, we rely on a non-constructive argument. More specifically, we begin as in the construction of xx in the low-dimensional case. However, at stage jj of our construction, we do not add all hjshjh_{j}-sh_{j} bits of aa to xx. Instead, we consider the m+1m+1 strings 𝐱𝟎,,𝐱𝐦\mathbf{x_{0}},\ldots,\mathbf{x_{m}}, where m=hjshjm=h_{j}-sh_{j} and

𝐱𝐧[i]={0 if 0i<mn1a[i(mn)] if mni<m\displaystyle\mathbf{x_{n}}[i]=\begin{cases}0&\text{ if }0\leq i<m-n\\ \frac{1}{a}[i-(m-n)]&\text{ if }m-n\leq i<m\end{cases} (*)

and look at the extension of xx with the bits of 𝐱𝐧\mathbf{x_{n}}.
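As a concrete illustration (not part of the proof), the family of candidate strings can be generated as follows. Here `inv_a_bits` is a stand-in for the binary expansion of 1a\frac{1}{a}, supplied as an ordinary bit string:

```python
def candidate_strings(inv_a_bits, m):
    """Return the m+1 strings x_0, ..., x_m of length m from (*):
    x_n is (m - n) zeros followed by the first n bits of 1/a."""
    assert len(inv_a_bits) >= m
    return ["0" * (m - n) + inv_a_bits[:n] for n in range(m + 1)]
```

As noted below, 𝐱𝟎\mathbf{x_{0}} is the all-zeros string and 𝐱𝐦\mathbf{x_{m}} is the mm-bit prefix of 1a\frac{1}{a}; the intermediate strings interpolate between these two extremes one bit at a time.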

Using a discrete, approximate version of the mean value theorem, we are able to conclude that there is some extension x=x𝐱𝐧x^{\prime}=x\mathbf{x_{n}} such that

minshjrhj|Kr(x,ax+b)(1+s)r|\min\limits_{sh_{j}\leq r\leq h_{j}}|K_{r}(x^{\prime},ax^{\prime}+b)-(1+s)r|

is sufficiently small. We then carry on with the argument for low-dimensional lines until precision shj+1sh_{j+1}.

5.2 Proof for high-dimensional lines

In order to prove Theorem 14, we will, given any slope-intercept pair (a,b)(a,b) and s(0,1)s\in(0,1), construct a point x[0,1]x\in[0,1] such that dim(x,ax+b)=1+s\dim(x,ax+b)=1+s.

Our construction is best phrased as constructing an infinite binary sequence 𝐱\mathbf{x}, and then taking xx to be the unique real number whose binary expansion is 𝐱\mathbf{x}. We now recall terminology needed in the construction. We will use bold variables to denote binary strings and (infinite) binary sequences. If 𝐱\mathbf{x} is a (finite) binary string and 𝐲\mathbf{y} is a binary string or sequence, we write 𝐱𝐲\mathbf{x}\prec\mathbf{y} if 𝐱\mathbf{x} is a prefix of 𝐲\mathbf{y}.

Let (a,b)(a,b) be a slope-intercept pair and let d=dim(a,b)d=\dim(a,b). Define the sequence of natural numbers {hj}j\{h_{j}\}_{j\in\mathbb{N}} inductively as follows. Define h0=2h_{0}=2. For every j>0j>0, let

hj=min{h2hj1:Kh(a,b)(d+2j)h}.h_{j}=\min\left\{h\geq 2^{h_{j-1}}:K_{h}(a,b)\leq\left(d+2^{-j}\right)h\right\}.
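Since Kh(a,b)K_{h}(a,b) is not computable, the sequence {hj}\{h_{j}\} cannot actually be computed; still, the recursion is easy to state operationally. The sketch below takes a stand-in function `K` for hKh(a,b)h\mapsto K_{h}(a,b), which is an assumption for illustration only:

```python
def precision_sequence(K, d, j_max):
    """h_0 = 2; h_j = min{h >= 2**h_{j-1} : K(h) <= (d + 2**-j) * h}.
    K is a stand-in for the (uncomputable) map h -> K_h(a, b)."""
    hs = [2]
    for j in range(1, j_max + 1):
        h = 2 ** hs[-1]                    # candidates start at 2**h_{j-1}
        while K(h) > (d + 2.0 ** -j) * h:  # wait for a dip in complexity
            h += 1
        hs.append(h)
    return hs
```

The tower-of-exponents growth hj2hj1h_{j}\geq 2^{h_{j-1}} is what allows terms like hj1h_{j-1} to be absorbed into O(loghj)O(\log h_{j}) later in this section.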

We define our sequence 𝐱\mathbf{x} inductively. Let 𝐲\mathbf{y} be a binary sequence that is random relative to (a,b)(a,b). That is, there is some constant cc such that

Ka,b(𝐲r)rc,K^{a,b}(\mathbf{y}{\upharpoonright}r)\geq r-c, (12)

for every rr\in\mathbb{N}. We begin our inductive definition by setting 𝐱[02]=𝐲[02]\mathbf{x}[0\ldots 2]=\mathbf{y}[0\ldots 2]. Suppose we have defined 𝐱\mathbf{x} up to hj1h_{j-1}. We then set

𝐱[r]=𝐲[r]\mathbf{x}[r]=\mathbf{y}[r], for all hj1<rshjh_{j-1}<r\leq sh_{j}.

To specify the next hjshjh_{j}-sh_{j} bits of 𝐱\mathbf{x}, we use the following lemma, which we will prove in the next section.

Lemma 15.

For every sufficiently large jj, there is a binary string 𝐳\mathbf{z} of length hjshjh_{j}-sh_{j} such that

minshj<rhj|Kr(x,ax+b)(1+s)r|<rj,\min\limits_{sh_{j}<r\leq h_{j}}\left|K_{r}(x,ax+b)-(1+s)r\right|<\frac{r}{j},

where xx is any real such that 𝐱𝐳x\mathbf{x}\mathbf{z}\prec x. Moreover, 𝐳\mathbf{z} is of the form (*) of Section 5.1.

For now, we assume the truth of this lemma. If the current jj is not sufficiently large, take 𝐳\mathbf{z} to be the string of all zeros. Otherwise, if jj is sufficiently large, we let 𝐳\mathbf{z} be such a binary string. We then set

𝐱[r]=𝐳[rshj]\mathbf{x}[r]=\mathbf{z}[r-sh_{j}], for all shj<rhjsh_{j}<r\leq h_{j},

completing the inductive step. Finally, we let xa,b,sx_{a,b,s} be the real number with binary expansion 𝐱\mathbf{x}.
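One stage of this construction, viewed purely as string assembly, can be sketched as follows (an illustration only; the actual construction fixes the block 𝐳\mathbf{z} via Lemma 15):

```python
def extend_stage(x_prefix, y, z, s_hj, hj):
    """Extend x (currently defined up to h_{j-1} = len(x_prefix)) through h_j:
    copy y on positions (h_{j-1}, s*h_j], then append the chosen block z
    on positions (s*h_j, h_j]."""
    assert len(z) == hj - s_hj
    return x_prefix + y[len(x_prefix):s_hj] + z
```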

Proposition 16.

Let x=xa,b,sx=x_{a,b,s} be the real we just constructed. Then for every jj,

  1. Kshja,b(x)shjO(loghj)K^{a,b}_{sh_{j}}(x)\geq sh_{j}-O(\log h_{j}), and

  2. Kr(xa,b)shj+rhjK_{r}(x\mid a,b)\geq sh_{j}+r-h_{j}, for every hjr<shj+1h_{j}\leq r<sh_{j+1}.

Proof.

To see (1), by our construction, 𝐱[hj1shj]=𝐲[hj1shj]\mathbf{x}[h_{j-1}\ldots sh_{j}]=\mathbf{y}[h_{j-1}\ldots sh_{j}]. Thus, by Corollary 4,

Kshja,b(x)\displaystyle K^{a,b}_{sh_{j}}(x) =Ka,b(𝐱shj)O(loghj)\displaystyle=K^{a,b}(\mathbf{x}{\upharpoonright}sh_{j})-O(\log h_{j})
Ka,b(𝐲shj)hj1O(loghj)\displaystyle\geq K^{a,b}(\mathbf{y}{\upharpoonright}sh_{j})-h_{j-1}-O(\log h_{j})
shjhj1O(loghj)\displaystyle\geq sh_{j}-h_{j-1}-O(\log h_{j})
shjO(loghj).\displaystyle\geq sh_{j}-O(\log h_{j}).

For item (2), by our construction 𝐱[hjr]=𝐲[hjr]\mathbf{x}[h_{j}\ldots r]=\mathbf{y}[h_{j}\ldots r]. Therefore, by Corollary 4 and our construction of xx,

Kr(xa,b)\displaystyle K_{r}(x\mid a,b) Kra,b(x)\displaystyle\geq K^{a,b}_{r}(x)
=Ka,b(𝐱r)O(loghj)\displaystyle=K^{a,b}(\mathbf{x}{\upharpoonright}r)-O(\log h_{j})
Ka,b(𝐲[0shj]𝐳𝐲[hjr])hj1O(loghj)\displaystyle\geq K^{a,b}(\mathbf{y}[0\ldots sh_{j}]\mathbf{z}\mathbf{y}[h_{j}\ldots r])-h_{j-1}-O(\log h_{j})
Ka,b(𝐲[0shj]𝐲[hjr])\displaystyle\geq K^{a,b}(\mathbf{y}[0\ldots sh_{j}]\mathbf{y}[h_{j}\ldots r])
hj1O(loghj)\displaystyle\;\;\;\;\;\;\;\;\;-h_{j-1}-O(\log h_{j}) [𝐳\mathbf{z} computable from aa]
shj+rhjO(loghj).\displaystyle\geq sh_{j}+r-h_{j}-O(\log h_{j}). [Definition of 𝐲\mathbf{y}] ∎

We now show, again assuming Lemma 15, that dim(x,ax+b)=1+s\dim(x,ax+b)=1+s, where x=xa,b,sx=x_{a,b,s} is the point we have just constructed.

We begin by proving an upper bound on dim(x,ax+b)\dim(x,ax+b). Note that this essentially follows from our choice of 𝐳\mathbf{z}.

Proposition 17.

Let (a,b)(a,b) be a slope-intercept pair, s(0,1)s\in(0,1) and γ\gamma\in\mathbb{Q} be positive. Let x=xa,b,sx=x_{a,b,s} be the point we have just constructed. Then

dim(x,ax+b)(1+s)+γ\dim(x,ax+b)\leq(1+s)+\gamma.

Proof.

Let jj be sufficiently large. By our construction of xx,

minshj<rhj|Kr(x,ax+b)(1+s)r|<γr4\min\limits_{sh_{j}<r\leq h_{j}}\left|K_{r}(x,ax+b)-(1+s)r\right|<\frac{\gamma r}{4} (13)

Therefore,

dim(x,ax+b)\displaystyle\dim(x,ax+b) =lim infrKr(x,ax+b)r\displaystyle=\liminf\limits_{r}\frac{K_{r}(x,ax+b)}{r}
lim infjminshj<rhjKr(x,ax+b)r\displaystyle\leq\liminf\limits_{j}\min\limits_{sh_{j}<r\leq h_{j}}\frac{K_{r}(x,ax+b)}{r}
lim infjminshj<rhj(1+s)r+γr/4r\displaystyle\leq\liminf\limits_{j}\min\limits_{sh_{j}<r\leq h_{j}}\frac{(1+s)r+\gamma r/4}{r}
=lim infjminshj<rhj1+s+γ/4\displaystyle=\liminf\limits_{j}\min\limits_{sh_{j}<r\leq h_{j}}1+s+\gamma/4
=1+s+γ4.\displaystyle=1+s+\frac{\gamma}{4}. ∎

We break the proof of the lower bound on dim(x,ax+b)\dim(x,ax+b) into two parts. In the first, we give lower bounds on Kr(x,ax+b)K_{r}(x,ax+b) on the interval r(shj,hj]r\in(sh_{j},h_{j}]. Note that this essentially follows from inequality (13).

Proposition 18.

Let (a,b)(a,b) be a slope-intercept pair, s(0,1)s\in(0,1), γ\gamma\in\mathbb{Q} be positive and jj be sufficiently large. Let x=xa,b,sx=x_{a,b,s} be the point we have just constructed. Then

Kr(x,ax+b)(1+sγ)rK_{r}(x,ax+b)\geq(1+s-\gamma)r

for all shj<rhjsh_{j}<r\leq h_{j}.

Proof.

Let jj be sufficiently large and shj<rhjsh_{j}<r\leq h_{j}. Then, by (13),

Kr(x,ax+b)(1+s)rγrK_{r}(x,ax+b)\geq(1+s)r-\gamma r. ∎

We now give lower bounds on Kr(x,ax+b)K_{r}(x,ax+b) on the interval r(hj1,shj]r\in(h_{j-1},sh_{j}]. The proof of this lemma is very similar to that of the analogous lemma for low-dimensional lines (Lemma 13). Intuitively, the proof is as follows. Using the previous lemma, we can, given a 2hj12^{-h_{j-1}}-approximation of (x,ax+b)(x,ax+b), compute a 2hj12^{-h_{j-1}}-approximation of (a,b)(a,b) with a small number of extra bits. Having done so, we have to compute the last rhj1r-h_{j-1} bits of (a,b)(a,b). Importantly, since r>hj1r>h_{j-1}, the last rhj1r-h_{j-1} bits of xx are maximal. Thus, we can simply lower the complexity of the last rhj1r-h_{j-1} bits of (a,b)(a,b) so that the complexity of these bits is roughly s(rhj1)s(r-h_{j-1}). Thus, we are again, morally, in the case where dim(x)dim(a,b)\dim(x)\geq\dim(a,b) and the techniques of Section 3.1 apply. We now formalize this intuition.

Lemma 19.

Let (a,b)(a,b) be a slope-intercept pair, s(0,1)s\in(0,1), γ\gamma\in\mathbb{Q} be positive and jj be sufficiently large. Let x=xa,b,sx=x_{a,b,s} be the point we have just constructed. Then

Kr(x,ax+b)(1+sγ)rK_{r}(x,ax+b)\geq(1+s-\gamma)r

for all hj1<rshjh_{j-1}<r\leq sh_{j}.

Proof.

Intuitively, we will use the approximation of (x,ax+b)(x,ax+b) at precision hj1h_{j-1} to compute (a,b)(a,b) at precision hj1h_{j-1}. Then we will only search for candidate lines within 2hj12^{-h_{j-1}} of (a,b)(a,b). Formally, the argument proceeds as follows.

We first show that we can compute (a,b)(a,b) to within 2hj12^{-h_{j-1}} from an approximation of (x,ax+b)(x,ax+b), with few additional bits of information. By Lemma 2 and inequality (13),

Khj1,r(a,b,x|x,ax+b)\displaystyle K_{h_{j-1},r}(a,b,x\,|\,x,ax+b) Khj1,hj1(a,b,x|x,ax+b)+O(loghj)\displaystyle\leq K_{h_{j-1},h_{j-1}}(a,b,x\,|\,x,ax+b)+O(\log h_{j})
=Khj1(a,b,x)Khj1(x,ax+b)\displaystyle=K_{h_{j-1}}(a,b,x)-K_{h_{j-1}}(x,ax+b) [Lemma 2]
Khj1(a,b,x)(1+s)hj1+γ4hj1\displaystyle\leq K_{h_{j-1}}(a,b,x)-(1+s)h_{j-1}+\frac{\gamma}{4}h_{j-1} [(13)]
=Khj1(a,b)+Khj1(xa,b)\displaystyle=K_{h_{j-1}}(a,b)+K_{h_{j-1}}(x\mid a,b)
(1+s)hj1+γ4hj1\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-(1+s)h_{j-1}+\frac{\gamma}{4}h_{j-1} [Lemma 2]
dhj1+hj2j+Khj1(xa,b)\displaystyle\leq dh_{j-1}+h_{j}2^{-j}+K_{h_{j-1}}(x\mid a,b)
(1+s)hj1+γhj14\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-(1+s)h_{j-1}+\frac{\gamma h_{j-1}}{4} [Definition hjh_{j}]
dhj1+hj2j+shj1\displaystyle\leq dh_{j-1}+h_{j}2^{-j}+sh_{j-1}
(1+s)hj1+γhj14\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-(1+s)h_{j-1}+\frac{\gamma h_{j-1}}{4} [Proposition 16]
dhj+shj1\displaystyle\leq dh_{j}+sh_{j-1}
(1+s)hj1+γhj12\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;-(1+s)h_{j-1}+\frac{\gamma h_{j-1}}{2} [jj large]
dhjhj+γhj12.\displaystyle\leq dh_{j}-h_{j}+\frac{\gamma h_{j-1}}{2}. (14)

Thus, we can, given a 2r2^{-r} approximation of (x,ax+b)(x,ax+b), compute a 2hj12^{-h_{j-1}}-approximation of (a,b)(a,b) with

(d1)hj+γhj12(d-1)h_{j}+\frac{\gamma h_{j-1}}{2}

additional bits of information. Knowing (a,b)(a,b) to precision hj1h_{j-1} allows us to search for candidate lines within 2hj12^{-h_{j-1}} of (a,b)(a,b), i.e., using Lemma 6 with m=hj1m=h_{j-1}.

Let s^(0,s)\hat{s}\in\mathbb{Q}\cap(0,s) be a dyadic rational such that

γ/8<ss^<γ/4\gamma/8<s-\hat{s}<\gamma/4.

Let d^(0,dim(a,b))\hat{d}\in\mathbb{Q}\cap(0,\dim(a,b)) be a dyadic rational such that

γ/8<dim(a,b)d^<γ/4\gamma/8<\dim(a,b)-\hat{d}<\gamma/4.

Define

α=s(rhj1)+dhj1r\alpha=\frac{s(r-h_{j-1})+dh_{j-1}}{r}.

Define

η=s^(rhj1)+d^hj1r\eta=\frac{\hat{s}(r-h_{j-1})+\hat{d}h_{j-1}}{r}.

Finally, let ε=γ2/128\varepsilon=\gamma^{2}/128. Note that

αη\displaystyle\alpha-\eta =s(rhj1)+dhj1s^(rhj1)d^hj1r\displaystyle=\frac{s(r-h_{j-1})+dh_{j-1}-\hat{s}(r-h_{j-1})-\hat{d}h_{j-1}}{r}
=(ss^)(rhj1)+(dd^)hj1r\displaystyle=\frac{(s-\hat{s})(r-h_{j-1})+(d-\hat{d})h_{j-1}}{r}
γ4(rhj1)+γ4hj1r\displaystyle\leq\frac{\frac{\gamma}{4}(r-h_{j-1})+\frac{\gamma}{4}h_{j-1}}{r}
=γ4\displaystyle=\frac{\gamma}{4} (15)

Similarly,

αη\displaystyle\alpha-\eta =s(rhj1)+dhj1s^(rhj1)d^hj1r\displaystyle=\frac{s(r-h_{j-1})+dh_{j-1}-\hat{s}(r-h_{j-1})-\hat{d}h_{j-1}}{r}
=(ss^)(rhj1)+(dd^)hj1r\displaystyle=\frac{(s-\hat{s})(r-h_{j-1})+(d-\hat{d})h_{j-1}}{r}
>γ8(rhj1)+γ8hj1r\displaystyle>\frac{\frac{\gamma}{8}(r-h_{j-1})+\frac{\gamma}{8}h_{j-1}}{r}
=γ8\displaystyle=\frac{\gamma}{8} (16)

In particular,

4εαηγ/4.\frac{4\varepsilon}{\alpha-\eta}\leq\gamma/4. (17)
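As a numeric sanity check of inequalities (15) and (16) (with illustrative values of ss, dd and γ\gamma, not taken from the paper), note that αη=((ss^)(rhj1)+(dd^)hj1)/r\alpha-\eta=\big((s-\hat{s})(r-h_{j-1})+(\hat{d}\text{-gap})h_{j-1}\big)/r, so any dyadic choices with both gaps strictly between γ/8\gamma/8 and γ/4\gamma/4 keep the difference in that window:

```python
from fractions import Fraction as F

# Illustrative parameters; s_hat and d_hat are dyadic, with
# gamma/8 < s - s_hat = d - d_hat = 3/64 < gamma/4.
s, d, gamma = F(1, 2), F(3, 2), F(1, 4)
s_hat, d_hat = s - F(3, 64), d - F(3, 64)

def gap(r, h):
    """alpha - eta for precision r and cut point h = h_{j-1}."""
    alpha = (s * (r - h) + d * h) / F(r)
    eta = (s_hat * (r - h) + d_hat * h) / F(r)
    return alpha - eta

# the gap stays strictly between gamma/8 and gamma/4 for any r > h
for r, h in [(100, 10), (1000, 3), (64, 32)]:
    assert gamma / 8 < gap(r, h) <= gamma / 4
```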

We also note that

K(ε,η)K(γ,s^,d^,r,hj1)O(logr),K(\varepsilon,\eta)\leq K(\gamma,\hat{s},\hat{d},r,h_{j-1})\leq O(\log r), (18)

since jj was chosen to be sufficiently large and γ\gamma is constant.

Let D=D(r,(a,b),η)D=D(r,(a,b),\eta) be the oracle of Lemma 8. We now show that the conditions of Lemma 6 are satisfied for these choices of a,b,η,εa,b,\eta,\varepsilon and rr, with δ=αη\delta=\alpha-\eta, m=hj1m=h_{j-1} and A=DA=D.

Let (u,v)(u,v) be a line such that t:=log(a,b)(u,v)hj1t:=-\log\|(a,b)-(u,v)\|\geq h_{j-1}, and ux+v=ax+bux+v=ax+b. Then, by Lemmas 7, 8, and Proposition 16,

KrD(u,v)\displaystyle K^{D}_{r}(u,v) KtD(a,b)+Krt,rD(x|a,b)O(logr)\displaystyle\geq K^{D}_{t}(a,b)+K^{D}_{r-t,r}(x\,|\,a,b)-O(\log r) [Lemma 7]
KtD(a,b)+Krt,r(x|a,b)O(logr)\displaystyle\geq K^{D}_{t}(a,b)+K_{r-t,r}(x\,|\,a,b)-O(\log r) [Lemma 8]
KtD(a,b)+s(rt)O(logr).\displaystyle\geq K^{D}_{t}(a,b)+s(r-t)-O(\log r). [Proposition 16]

By Lemma 8, there are two cases. In the first, KtD(a,b)=ηrK^{D}_{t}(a,b)=\eta r, and so

KrD(u,v)\displaystyle K^{D}_{r}(u,v) KtD(a,b)+s(rt)O(logr)\displaystyle\geq K^{D}_{t}(a,b)+s(r-t)-O(\log r)
=ηr+s(rt)O(logr)\displaystyle=\eta r+s(r-t)-O(\log r)
(ηε)r+s(rt)\displaystyle\geq(\eta-\varepsilon)r+s(r-t) [jj is large]
(ηε)r+δ(rt)\displaystyle\geq(\eta-\varepsilon)r+\delta(r-t) [γ\gamma is small]

In the second case, KtD(a,b)=Kt(a,b)K^{D}_{t}(a,b)=K_{t}(a,b), and so

KrD(u,v)\displaystyle K^{D}_{r}(u,v) KtD(a,b)+s(rt)O(logr)\displaystyle\geq K^{D}_{t}(a,b)+s(r-t)-O(\log r)
dto(t)+s(rt)O(logr)\displaystyle\geq dt-o(t)+s(r-t)-O(\log r) [Definition of dim\dim]
=dhj1+d(thj1)+s(rt)o(r)\displaystyle=dh_{j-1}+d(t-h_{j-1})+s(r-t)-o(r)
=dhj1+d(thj1)+s(rhj1)s(thj1)o(r)\displaystyle=dh_{j-1}+d(t-h_{j-1})+s(r-h_{j-1})-s(t-h_{j-1})-o(r)
=αr+d(thj1)s(thj1)o(r)\displaystyle=\alpha r+d(t-h_{j-1})-s(t-h_{j-1})-o(r) [Definition of α\alpha]
=ηr+(αη)r+(ds)(thj1)o(r)\displaystyle=\eta r+(\alpha-\eta)r+(d-s)(t-h_{j-1})-o(r)
ηr+(αη)ro(r)\displaystyle\geq\eta r+(\alpha-\eta)r-o(r) [d>1d>1, t>hj1t>h_{j-1}]
ηr+(αη)(rt)o(r)\displaystyle\geq\eta r+(\alpha-\eta)(r-t)-o(r) [α>η\alpha>\eta]
(ηε)r+δ(rt)\displaystyle\geq(\eta-\varepsilon)r+\delta(r-t) [jj is large]

Therefore, in either case, we may apply Lemma 6 relative to DD, which yields

KrD(a,b,x)\displaystyle K^{D}_{r}(a,b,x) Kr(x,ax+b)+Khj1,r(a,b,x|x,ax+b)\displaystyle\leq K_{r}(x,ax+b)+K_{h_{j-1},r}(a,b,x\,|\,x,ax+b)
+4εαηr+K(ε,η)+O(logr)\displaystyle\;\;\;\;\;\;\;+\frac{4\varepsilon}{\alpha-\eta}r+K(\varepsilon,\eta)+O(\log r) [Lemma 6]
Kr(x,ax+b)+dhjhj+γhj12\displaystyle\leq K_{r}(x,ax+b)+dh_{j}-h_{j}+\frac{\gamma h_{j-1}}{2}
+4εαηr+K(ε,η)+O(logr)\displaystyle\;\;\;\;\;\;\;+\frac{4\varepsilon}{\alpha-\eta}r+K(\varepsilon,\eta)+O(\log r) [(14)]
Kr(x,ax+b)+dhjhj+γhj12\displaystyle\leq K_{r}(x,ax+b)+dh_{j}-h_{j}+\frac{\gamma h_{j-1}}{2}
+4εαηr+O(logr)\displaystyle\;\;\;\;\;\;\;+\frac{4\varepsilon}{\alpha-\eta}r+O(\log r) [(18)]
Kr(x,ax+b)+dhjhj+γhj12\displaystyle\leq K_{r}(x,ax+b)+dh_{j}-h_{j}+\frac{\gamma h_{j-1}}{2}
+γr4+O(logr)\displaystyle\;\;\;\;\;\;\;+\frac{\gamma r}{4}+O(\log r) [(17)]
Kr(x,ax+b)+dhjhj+3γr4+O(logr)\displaystyle\leq K_{r}(x,ax+b)+dh_{j}-h_{j}+\frac{3\gamma r}{4}+O(\log r) (20)

By Proposition 16 and our construction of the oracle DD,

KrD(a,b,x)\displaystyle K_{r}^{D}(a,b,x) =KrD(a,b)+KrD(x|a,b)O(logr)\displaystyle=K_{r}^{D}(a,b)+K_{r}^{D}(x\,|\,a,b)-O(\log r) [Lemma 2]
=ηr+Kr(x|a,b)O(logr)\displaystyle=\eta r+K_{r}(x\,|\,a,b)-O(\log r) [Lemma 8]
ηr+shj+rhjO(logr)\displaystyle\geq\eta r+sh_{j}+r-h_{j}-O(\log r) [Proposition 16(2)]
αrγ4r+shj+rhjO(logr)\displaystyle\geq\alpha r-\frac{\gamma}{4}r+sh_{j}+r-h_{j}-O(\log r)
s(rhj)+dhjγ4r+shj+rhjO(logr)\displaystyle\geq s(r-h_{j})+dh_{j}-\frac{\gamma}{4}r+sh_{j}+r-h_{j}-O(\log r)
(1+s)r(1d)hjγ4r.\displaystyle\geq(1+s)r-(1-d)h_{j}-\frac{\gamma}{4}r. (21)

Rearranging (20) and combining this with (21), we see that

Kr(x,ax+b)\displaystyle K_{r}(x,ax+b) KrD(a,b,x)dhj+hj3γr4O(logr)\displaystyle\geq K^{D}_{r}(a,b,x)-dh_{j}+h_{j}-\frac{3\gamma r}{4}-O(\log r) [(20)]
(1+s)r(1d)hjγ4r\displaystyle\geq(1+s)r-(1-d)h_{j}-\frac{\gamma}{4}r
dhj+hj3γr4O(logr)\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-dh_{j}+h_{j}-\frac{3\gamma r}{4}-O(\log r) [(21)]
=(1+s)rγrO(logr).\displaystyle=(1+s)r-\gamma r-O(\log r). ∎

We are now able to prove that the Dimension Spectrum Conjecture holds for high dimensional lines.

Proof of Theorem 14.

Let (a,b)2(a,b)\in\mathbb{R}^{2} be a slope-intercept pair with

d=dim(a,b)>1d=\dim(a,b)>1.

Let s[0,1]s\in[0,1]. In the case where s=0s=0, Turetsky showed (Theorem 1) that 1sp(La,b)1\in\operatorname{sp}(L_{a,b}), i.e., there is a point xx such that dim(x,ax+b)=1\dim(x,ax+b)=1. In the case where s=1s=1, Lutz and Stull [11] showed that any point xx which is random relative to (a,b)(a,b) satisfies

dim(x,ax+b)=2\dim(x,ax+b)=2.

Therefore, we may assume that s(0,1)s\in(0,1). Let x=xa,b,sx=x_{a,b,s} be the point constructed in this section. By Propositions 17 and 18 and Lemma 19, for every γ\gamma,

|dim(x,ax+b)(1+s)|<γ|\dim(x,ax+b)-(1+s)|<\gamma.

Thus, by the definition of effective dimension,

dim(x,ax+b)=1+s\dim(x,ax+b)=1+s,

and the proof is complete. ∎

5.3 Proof of Lemma 15

To complete the proof of the main theorem of this section, we now prove Lemma 15. Recall that this states that, for every jj, after setting 𝐱[hj1shj]=𝐲[hj1shj]\mathbf{x}[h_{j-1}\ldots sh_{j}]=\mathbf{y}[h_{j-1}\ldots sh_{j}], the following holds.

Lemma 15. For every sufficiently large jj, there is a binary string 𝐳\mathbf{z} of length hjshjh_{j}-sh_{j} such that

minshj<rhj|Kr(x,ax+b)(1+s)r|<rj,\min\limits_{sh_{j}<r\leq h_{j}}\left|K_{r}(x,ax+b)-(1+s)r\right|<\frac{r}{j},

where xx is any real such that 𝐱𝐳x\mathbf{x}\mathbf{z}\prec x. Moreover, 𝐳\mathbf{z} is of the form (*) of Section 5.1.

Let m=hjshjm=h_{j}-sh_{j}. For each 0nm0\leq n\leq m, define the binary string 𝐱𝐧\mathbf{x_{n}} of length mm by

𝐱𝐧[i]={0 if 0i<mn1a[i(mn)] if mni<m\displaystyle\mathbf{x_{n}}[i]=\begin{cases}0&\text{ if }0\leq i<m-n\\ \frac{1}{a}[i-(m-n)]&\text{ if }m-n\leq i<m\end{cases}

Thus, for example, 𝐱𝟎\mathbf{x_{0}} is the binary string of mm zeros, while 𝐱𝐦\mathbf{x_{m}} is the binary string containing the mm-bit prefix of 1a\frac{1}{a}.

Let xx be the real number such that 𝐱𝐱𝟎x\mathbf{x}\mathbf{x_{0}}\prec x, and whose binary expansion contains only zeros after shjsh_{j}. For each 1nm1\leq n\leq m, let xnx_{n} be the real number defined by

xn=x+2hj+n/ax_{n}=x+2^{-h_{j}+n}/a.

Therefore, for every nn,

(xn,axn+b)=(xn,ax+b+2hj+n)(x_{n},ax_{n}+b)=(x_{n},ax+b+2^{-h_{j}+n}).

Since the binary expansion of xx satisfies x[r]=0x[r]=0 for all rshjr\geq sh_{j}, we have, for every nn,

𝐱𝐱𝐧xn\mathbf{x}\mathbf{x_{n}}\prec x_{n} (22)

In other words, the binary expansion of xnx_{n} up to index hjh_{j} is just the concatenation of 𝐱\mathbf{x} and 𝐱𝐧\mathbf{x_{n}}.

We now collect a few facts about our points xnx_{n}.

Lemma 20.

For every n,rn,r such that 0nm0\leq n\leq m and shjrhjsh_{j}\leq r\leq h_{j} the following hold.

  1. Kn,hj(a|xn)O(loghj)K_{n,h_{j}}(a\,|\,x_{n})\leq O(\log h_{j}).

  2. For every nn and n>nn^{\prime}>n,

     |Kr(xn,axn+b)Kr(xn,axn+b)|<nn+log(r)|K_{r}(x_{n^{\prime}},ax_{n^{\prime}}+b)-K_{r}(x_{n},ax_{n}+b)|<n^{\prime}-n+\log(r).

  3. Krshj,r(a,bxm,axm+b)O(logr)K_{r-sh_{j},r}(a,b\mid x_{m},ax_{m}+b)\leq O(\log r).

Note that the constants implied by the big-O notation depend only on aa.

Proof.

From the definition of 𝐱𝐧\mathbf{x_{n}},

𝐱𝐧[(mn)m]=1a[0n]\mathbf{x_{n}}[(m-n)\ldots m]=\frac{1}{a}[0\ldots n].

Since the map a1aa\mapsto\frac{1}{a} is bi-Lipschitz on an interval around aa, there is some constant cc, depending only on aa, such that we can compute a 2n+c2^{-n+c}-approximation of aa given the first nn bits of 1/a1/a. Thus, by Corollary 5,

Kn,hj(axn)\displaystyle K_{n,h_{j}}(a\mid x_{n}) =K(anxnhj)+O(loghj)\displaystyle=K(a{\upharpoonright}n\mid x_{n}{\upharpoonright}h_{j})+O(\log h_{j}) [Corollary 5]
=K(an𝐱𝐱𝐧)+O(loghj)\displaystyle=K(a{\upharpoonright}n\mid\mathbf{x}\mathbf{x_{n}})+O(\log h_{j}) [(22)]
K(an𝐱𝐧)+O(loghj)\displaystyle\leq K(a{\upharpoonright}n\mid\mathbf{x_{n}})+O(\log h_{j})
O(loghj).\displaystyle\leq O(\log h_{j}).

For item (2), let n>nn^{\prime}>n. We first assume that r<hjnr<h_{j}-n^{\prime}. Then

|xnxn|\displaystyle|x_{n^{\prime}}-x_{n}| =x+2hj+n/a(x+2hj+n/a)\displaystyle=x+2^{-h_{j}+n^{\prime}}/a-(x+2^{-h_{j}+n}/a)
=2hj(2n2n)/a\displaystyle=2^{-h_{j}}(2^{n^{\prime}}-2^{n})/a
=2hj+n(12nn)/a\displaystyle=2^{-h_{j}+n^{\prime}}(1-2^{n-n^{\prime}})/a
2hj+n/a\displaystyle\leq 2^{-h_{j}+n^{\prime}}/a
O(2hj+n).\displaystyle\leq O(2^{-h_{j}+n^{\prime}}).

Thus,

(xn,axn+b)(xn,axn+b)\displaystyle\|(x_{n^{\prime}},ax_{n^{\prime}}+b)-(x_{n},ax_{n}+b)\| =(xn,ax+b+2hj+n)\displaystyle=\|(x_{n^{\prime}},ax+b+2^{-h_{j}+n^{\prime}})
(xn,ax+b+2hj+n)\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;-(x_{n},ax+b+2^{-h_{j}+n})\|
O(2hj+n).\displaystyle\leq O(2^{-h_{j}+n^{\prime}}).

Therefore, under our assumption that r<hjnr<h_{j}-n^{\prime},

Kr(xn,axn+b)=Kr(xn,axn+b)±O(1)K_{r}(x_{n^{\prime}},ax_{n^{\prime}}+b)=K_{r}(x_{n},ax_{n}+b)\pm O(1),

and the claim follows. Now assume that rhjnr\geq h_{j}-n^{\prime}. By Corollary 4,

Kr(xn,axn+b)\displaystyle K_{r}(x_{n^{\prime}},ax_{n^{\prime}}+b) =Kr(xn,ax+b+2hj+n)\displaystyle=K_{r}(x_{n^{\prime}},ax+b+2^{-h_{j}+n^{\prime}})
=Kr(𝐱𝐱𝐧,ax+b+2hj+n).\displaystyle=K_{r}(\mathbf{x}\mathbf{x_{n^{\prime}}},ax+b+2^{-h_{j}+n^{\prime}}).

Similarly,

Kr(xn,axn+b)\displaystyle K_{r}(x_{n},ax_{n}+b) =Kr(xn,ax+b+2hj+n)\displaystyle=K_{r}(x_{n},ax+b+2^{-h_{j}+n})
=Kr(𝐱𝐱𝐧,ax+b+2hj+n).\displaystyle=K_{r}(\mathbf{x}\mathbf{x_{n}},ax+b+2^{-h_{j}+n}).

By our definition of 𝐱𝐧\mathbf{x_{n}} and 𝐱𝐧\mathbf{x_{n^{\prime}}}, 𝐱𝐧\mathbf{x_{n^{\prime}}} contains all the bits of 1a\frac{1}{a} that 𝐱𝐧\mathbf{x_{n}} has. Thus,

K(𝐱𝐧r𝐱𝐧r)=O(logn)K(\mathbf{x_{n}}{\upharpoonright}r\mid\mathbf{x_{n^{\prime}}}{\upharpoonright}r)=O(\log n^{\prime}).

Given the first rr bits of 𝐱𝐧\mathbf{x_{n}}, we can compute the first rr bits of 𝐱𝐧\mathbf{x_{n^{\prime}}} by simply giving the nnn^{\prime}-n bits of 1a\frac{1}{a} that 𝐱𝐧\mathbf{x_{n^{\prime}}} contains but 𝐱𝐧\mathbf{x_{n}} does not. Thus,

K(𝐱𝐧r𝐱𝐧r)nn+O(logn)K(\mathbf{x_{n^{\prime}}}{\upharpoonright}r\mid\mathbf{x_{n}}{\upharpoonright}r)\leq n^{\prime}-n+O(\log n^{\prime}).

Similarly, since axn+bax_{n}+b and axn+bax_{n^{\prime}}+b are translates of each other, to compute one given the other it suffices to specify the offset, which is determined by hjh_{j}, nn and nn^{\prime} and hence takes O(loghj+logn)O(\log h_{j}+\log n^{\prime}) bits. Therefore,

Kr(ax+b+2hj+n|ax+b+2hj+n)\displaystyle K_{r}(ax+b+2^{-h_{j}+n^{\prime}}\,|\,ax+b+2^{-h_{j}+n}) O(loghj+logn)\displaystyle\leq O(\log h_{j}+\log n^{\prime})
Kr(ax+b+2hj+n|ax+b+2hj+n)\displaystyle K_{r}(ax+b+2^{-h_{j}+n}\,|\,ax+b+2^{-h_{j}+n^{\prime}}) O(loghj+logn)\displaystyle\leq O(\log h_{j}+\log n^{\prime})

Taken together, these four inequalities show that

Kr(xn,axn+bxn,axn+b)\displaystyle K_{r}(x_{n},ax_{n}+b\mid x_{n^{\prime}},ax_{n^{\prime}}+b) =Kr(xnxn,axn+b)\displaystyle=K_{r}(x_{n}\mid x_{n^{\prime}},ax_{n^{\prime}}+b)
+Kr(axn+bxn,axn+b)\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;+K_{r}(ax_{n}+b\mid x_{n^{\prime}},ax_{n^{\prime}}+b)
O(logn)+O(loghj+logn).\displaystyle\leq O(\log n^{\prime})+O(\log h_{j}+\log n^{\prime}).

Similarly,

Kr(xn,axn+bxn,axn+b)\displaystyle K_{r}(x_{n^{\prime}},ax_{n^{\prime}}+b\mid x_{n},ax_{n}+b) =Kr(xnxn,axn+b)\displaystyle=K_{r}(x_{n^{\prime}}\mid x_{n},ax_{n}+b)
+Kr(axn+bxn,axn+b)\displaystyle\;\;\;\;\;\;\;\;\;\;\;+K_{r}(ax_{n^{\prime}}+b\mid x_{n},ax_{n}+b)
nn+O(logn)+O(loghj+logn),\displaystyle\leq n^{\prime}-n+O(\log n^{\prime})+O(\log h_{j}+\log n^{\prime}),

and the proof of the second item is complete.

For item (3), note that the first rr bits of xmx_{m} satisfy

xmr=𝐱𝐱𝐦[0(rshj)]x_{m}{\upharpoonright}r=\mathbf{x}\mathbf{x_{m}}[0\ldots(r-sh_{j})].

By definition, 𝐱𝐦[0(rshj)]=1a[0(rshj)]\mathbf{x_{m}}[0\ldots(r-sh_{j})]=\frac{1}{a}[0\ldots(r-sh_{j})]. Thus, given a 2r2^{-r}-approximation of xmx_{m}, we can compute a 2r+shj+c2^{-r+sh_{j}+c}-approximation of aa. Finally, given 2r2^{-r}-approximations of xmx_{m} and axm+bax_{m}+b together with this approximation of aa, we can compute a 2r+shj+c2^{-r+sh_{j}+c}-approximation of bb. ∎

Proof of Lemma 15.

The proof of this lemma is essentially a discrete, approximate version of the mean value theorem. In particular, we will show that x0x_{0} satisfies

Kr(x0,ax0+b)(1+s)rK_{r}(x_{0},ax_{0}+b)\leq(1+s)r,

for every r[shj,hj]r\in[sh_{j},h_{j}], and xmx_{m} satisfies

Kr(xm,axm+b)(1+s)rK_{r}(x_{m},ax_{m}+b)\geq(1+s)r,

for every r[shj,hj]r\in[sh_{j},h_{j}]. By Lemma 20, the map nKr(xn,axn+b)n\mapsto K_{r}(x_{n},ax_{n}+b) is “continuous”, and therefore there must be a point xnx_{n} which satisfies

minshj<rhj|Kr(x,ax+b)(1+s)r|<rj\min\limits_{sh_{j}<r\leq h_{j}}\left|K_{r}(x,ax+b)-(1+s)r\right|<\frac{r}{j}.

We now formalize this intuition. For each nn, define

Mn=min{Kr(xn,axn+b)r|shjrhj}M_{n}=\min\{\frac{K_{r}(x_{n},ax_{n}+b)}{r}\,|\,sh_{j}\leq r\leq h_{j}\}.

To prove the lemma, it suffices to show that there is an nn such that

1+s1jMn1+s+1j.1+s-\frac{1}{j}\leq M_{n}\leq 1+s+\frac{1}{j}.

To begin, note that, by our construction of hjh_{j}, j<loghjj<\log h_{j}. We also note that Khj(x0)shjK_{h_{j}}(x_{0})\leq sh_{j}, since the bits of x0x_{0} after shjsh_{j} are all zero. Therefore,

Khj(x0,ax0+b)\displaystyle K_{h_{j}}(x_{0},ax_{0}+b) =Khj(x0)+Khj(ax0+bx0)+O(loghj)\displaystyle=K_{h_{j}}(x_{0})+K_{h_{j}}(ax_{0}+b\mid x_{0})+O(\log h_{j}) [Lemma 2]
shj+Khj(ax0+bx0)+O(loghj)\displaystyle\leq sh_{j}+K_{h_{j}}(ax_{0}+b\mid x_{0})+O(\log h_{j})
shj+Khj(ax0+b)+O(loghj)\displaystyle\leq sh_{j}+K_{h_{j}}(ax_{0}+b)+O(\log h_{j})
shj+hj+O(loghj)\displaystyle\leq sh_{j}+h_{j}+O(\log h_{j})

where the constant of the O(loghj)O(\log h_{j}) term is independent of x0x_{0}. Thus, M0<1+s+1jM_{0}<1+s+\frac{1}{j}, for all jj such that hjj>O(loghj)\frac{h_{j}}{j}>O(\log h_{j}).

We now show that

Kr(xm,axm+b)(1+s)rrjK_{r}(x_{m},ax_{m}+b)\geq(1+s)r-\frac{r}{j},

for all sufficiently large jj, and for every shjrhjsh_{j}\leq r\leq h_{j}.

Let shjrhjsh_{j}\leq r\leq h_{j}, η=112j\eta=1-\frac{1}{2j} and let D=D(r,(a,b),η)D=D(r,(a,b),\eta). Let ϵ=132j2\epsilon=\frac{1}{32j^{2}} and δ=1η\delta=1-\eta. We now show that the conditions of Lemma 6 are satisfied.

Let (u,v)(u,v) be a line such that t:=log(a,b)(u,v)rshjt:=-\log\|(a,b)-(u,v)\|\geq r-sh_{j}, and uxm+v=axm+bux_{m}+v=ax_{m}+b. Note that rtshjr-t\leq sh_{j}. Then, by Lemma 7, Lemma 8 and Lemma 11(1),

KrD(u,v)\displaystyle K^{D}_{r}(u,v) KtD(a,b)+Krt,rD(xm|a,b)O(logr)\displaystyle\geq K^{D}_{t}(a,b)+K^{D}_{r-t,r}(x_{m}\,|\,a,b)-O(\log r) [Lemma 7]
KtD(a,b)+Krt,r(xm|a,b)O(logr)\displaystyle\geq K^{D}_{t}(a,b)+K_{r-t,r}(x_{m}\,|\,a,b)-O(\log r) [Lemma 8]
=min{ηr,Kt(a,b)}+Krt,r(xm|a,b)O(logr)\displaystyle=\min\{\eta r,K_{t}(a,b)\}+K_{r-t,r}(x_{m}\,|\,a,b)-O(\log r) [Lemma 8]
min{ηr,Kt(a,b)}+rtO(logr).\displaystyle\geq\min\{\eta r,K_{t}(a,b)\}+r-t-O(\log r). [construction of xx]

There are two cases guaranteed by Lemma 8. If ηrKt(a,b)\eta r\leq K_{t}(a,b), then

KrD(u,v)\displaystyle K^{D}_{r}(u,v) ηr+rtO(logr)\displaystyle\geq\eta r+r-t-O(\log r)
(ηϵ)r+rt\displaystyle\geq(\eta-\epsilon)r+r-t [jj is large]
(ηϵ)r+δ(rt),\displaystyle\geq(\eta-\epsilon)r+\delta(r-t), [δ<1\delta<1]

since εrO(logr)\varepsilon r\geq O(\log r). Hence, we may apply Lemma 6 in this case. Otherwise, KtD(a,b)=Kt(a,b)K^{D}_{t}(a,b)=K_{t}(a,b). Then

KrD(u,v)\displaystyle K^{D}_{r}(u,v) Kt(a,b)+rtO(logr)\displaystyle\geq K_{t}(a,b)+r-t-O(\log r)
(d2j)t+rtO(logr)\displaystyle\geq(d-2^{-j})t+r-t-O(\log r) [Definition of hjh_{j}]
=ηr+(1η)rt(1+2jd)O(logr)\displaystyle=\eta r+(1-\eta)r-t(1+2^{-j}-d)-O(\log r)
ηr+(1η)rt(1η)O(logr)\displaystyle\geq\eta r+(1-\eta)r-t(1-\eta)-O(\log r)
=ηr+δ(rt)O(logr)\displaystyle=\eta r+\delta(r-t)-O(\log r)
(ηε)r+δ(rt).\displaystyle\geq(\eta-\varepsilon)r+\delta(r-t).

Therefore we may apply Lemma 6.

Kr(xm,axm+b)\displaystyle K_{r}(x_{m},ax_{m}+b) KrD(a,b,xm)Krshj,r(a,b|xm,axm+b)\displaystyle\geq K_{r}^{D}(a,b,x_{m})-K_{r-sh_{j},r}(a,b\,|\,x_{m},ax_{m}+b) [Lemma 6]
4ε1ηrK(ϵ,η)O(logr)\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-\frac{4\varepsilon}{1-\eta}r-K(\epsilon,\eta)-O(\log r)
KrD(a,b,xm)O(logr)\displaystyle\geq K_{r}^{D}(a,b,x_{m})-O(\log r) [Lemma 20(3)]
4ε1ηrK(ϵ,η)O(logr)\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-\frac{4\varepsilon}{1-\eta}r-K(\epsilon,\eta)-O(\log r)
=KrD(a,b,xm)O(logr)1/8j21/2jrK(ϵ,η)O(logr)\displaystyle=K_{r}^{D}(a,b,x_{m})-O(\log r)-\frac{1/8j^{2}}{1/2j}r-K(\epsilon,\eta)-O(\log r)
=KrD(a,b,xm)O(logr)r4jK(ϵ,η)O(logr)\displaystyle=K_{r}^{D}(a,b,x_{m})-O(\log r)-\frac{r}{4j}-K(\epsilon,\eta)-O(\log r)
=KrD(a,b,xm)O(logr)r4jK(j)O(logr)\displaystyle=K_{r}^{D}(a,b,x_{m})-O(\log r)-\frac{r}{4j}-K(j)-O(\log r)
=KrD(a,b,xm)r4jO(logr)\displaystyle=K_{r}^{D}(a,b,x_{m})-\frac{r}{4j}-O(\log r)
KrD(a,b,xm)r2j.\displaystyle\geq K_{r}^{D}(a,b,x_{m})-\frac{r}{2j}. [jj is large]

By our construction of oracle DD, and the symmetry of information,

KrD(a,b,xm)\displaystyle K_{r}^{D}(a,b,x_{m}) =KrD(a,b)+KrD(xm|a,b)O(logr)\displaystyle=K_{r}^{D}(a,b)+K_{r}^{D}(x_{m}\,|\,a,b)-O(\log r) [Lemma 2]
=KrD(a,b)+Kr(xm|a,b)O(logr)\displaystyle=K_{r}^{D}(a,b)+K_{r}(x_{m}\,|\,a,b)-O(\log r) [Lemma 8(ii)]
ηr+Kr(xm|a,b)O(logr)\displaystyle\geq\eta r+K_{r}(x_{m}\,|\,a,b)-O(\log r) [Lemma 8(i)]
ηr+shjO(logr).\displaystyle\geq\eta r+sh_{j}-O(\log r). (23)

Therefore, we have

K_r(x_m, ax_m+b) \geq \eta r + sh_j - O(\log r) - \frac{r}{2j}
\geq (1+s)r - \frac{r}{2j} - \frac{r}{2j}
> (1+s)r - \frac{r}{j}.

Thus we have shown that

K_r(x_m, ax_m+b) > (1+s)r - \frac{r}{j},

for every $sh_j \leq r \leq h_j$.

If either

  • $M_m \leq (1+s) + \frac{1}{j}$, or

  • $M_0 \geq (1+s) - \frac{1}{j}$,

then our claim follows. We will therefore assume that

$M_0 < 1+s-\frac{1}{j}$, and $1+s+\frac{1}{j} < M_m$.

Define

L = \{n \mid M_n < 1+s-\frac{1}{j}\}
G = \{n \mid M_n > 1+s+\frac{1}{j}\}.

By our assumption, $L$ and $G$ are non-empty. Suppose, for a contradiction, that $L$ and $G$ partition $\{0,\ldots,m\}$. Since $0 \in L$ and $m \in G$, there is an $n$ such that $n \in L$ and $n+1 \in G$. However, by Lemma 20,

|K_r(x_{n+1}, ax_{n+1}+b) - K_r(x_n, ax_n+b)| \leq 1 + O(\log r),

for every $r$. Let $r$ be a precision testifying to $n \in L$. Then

(1+s-\frac{1}{j})r > K_r(x_n, ax_n+b)
> K_r(x_{n+1}, ax_{n+1}+b) - 1 - O(\log r).

That is,

\frac{K_r(x_{n+1}, ax_{n+1}+b)}{r} < 1+s-\frac{1}{j}+\frac{1}{r}+\frac{O(\log r)}{r} < 1+s+\frac{1}{j}

for all sufficiently large $r$, which contradicts $n+1 \in G$. Hence $L$ and $G$ cannot partition $\{0,\ldots,m\}$, and so there is an $n$ with $1+s-\frac{1}{j} \leq M_n \leq 1+s+\frac{1}{j}$, and the proof is complete. ∎
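The combinatorial core of this last step is a discrete intermediate-value argument: a sequence whose consecutive terms differ by at most the width of a gap cannot jump over that gap. A minimal numerical sketch (the sequence, gap, and step bound here are hypothetical, standing in for $M_0,\ldots,M_m$ and the bound from Lemma 20):

```python
def crossing_index(values, lo, hi):
    """Return the first index n with lo <= values[n] <= hi,
    or None if the sequence never lands inside [lo, hi]."""
    for n, v in enumerate(values):
        if lo <= v <= hi:
            return n
    return None

# A sequence starting below the gap [1.4, 1.6] and ending above it,
# with consecutive steps no larger than the gap width 0.2: it must
# land inside the gap at some index.
M = [1.0, 1.15, 1.3, 1.45, 1.62, 1.8]
n = crossing_index(M, 1.4, 1.6)
assert n is not None and 1.4 <= M[n] <= 1.6
```

In the proof, the gap is $[1+s-\frac{1}{j},\, 1+s+\frac{1}{j}]$ and the step bound comes from Lemma 20's continuity of $K_r(x_n, ax_n+b)$ in $n$.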

6 Conclusion and Future Directions

The behavior of the effective dimension of points on a line is interesting not only from the viewpoint of algorithmic randomness, but also because of its deep connections to geometric measure theory. There are many avenues for future research in this area.

The results of this paper show that, for any line La,bL_{a,b}, the dimension spectrum sp(La,b)\operatorname{sp}(L_{a,b}) contains a unit interval. However, this is not, in general, a tight bound. It would be very interesting to have a more thorough understanding of the “low end” of the dimension spectrum. Stull [17] showed that the set of points xx such that

\dim(x, ax+b) \leq \alpha + \frac{\dim(a,b)}{2}

has Hausdorff dimension at most α\alpha. Further investigation of the low end of the spectrum is needed.

It seems plausible that, for certain lines, the dimension spectrum contains an interval of length greater than one. For example, are there lines $L$ in the plane such that $\operatorname{sp}(L)$ contains an interval of length strictly greater than $1$?

Another interesting direction is to study the dimension spectrum of particular classes of lines. One natural class consists of the lines $L_{a,b}$ whose slope and intercept both lie in the Cantor set. Restricting the lines to the Cantor set or, more generally, to self-similar fractals might give enough structure to prove tight bounds that are not possible in the general case.

Additionally, the focus has been on the effective (Hausdorff) dimension of points. Very little is known about the effective strong dimension of points on a line. The known techniques do not seem to apply to this question. New ideas are needed to understand the strong dimension spectrum of planar lines.

Finally, it would be interesting to broaden this direction by considering the dimension spectra of other geometric objects. For example, can anything be said about the dimension spectrum of a polynomial?

References

  • [1] Randall Dougherty, Jack Lutz, R. Daniel Mauldin, and Jason Teutsch. Translating the Cantor set by a random real. Transactions of the American Mathematical Society, 366(6):3027–3041, 2014.
  • [2] Rod Downey and Denis Hirschfeldt. Algorithmic Randomness and Complexity. Springer-Verlag, 2010.
  • [3] Xiaoyang Gu, Jack H. Lutz, Elvira Mayordomo, and Philippe Moser. Dimension spectra of random subfractals of self-similar fractals. Annals of Pure and Applied Logic, 165(11):1707–1726, 2014.
  • [4] Ming Li and Paul M.B. Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Springer, third edition, 2008.
  • [5] Jack H. Lutz. Dimension in complexity classes. SIAM J. Comput., 32(5):1236–1259, 2003.
  • [6] Jack H. Lutz. The dimensions of individual strings and sequences. Inf. Comput., 187(1):49–79, 2003.
  • [7] Jack H. Lutz and Neil Lutz. Algorithmic information, plane Kakeya sets, and conditional dimension. ACM Trans. Comput. Theory, 10(2):Art. 7, 22, 2018.
  • [8] Jack H. Lutz and Neil Lutz. Who asked us? How the theory of computing answers questions about analysis. In Complexity and Approximation, pages 48–56. Springer, 2020.
  • [9] Jack H. Lutz and Elvira Mayordomo. Dimensions of points in self-similar fractals. SIAM J. Comput., 38(3):1080–1112, 2008.
  • [10] Neil Lutz. Fractal intersections and products via algorithmic dimension. In 42nd International Symposium on Mathematical Foundations of Computer Science (MFCS 2017), 2017.
  • [11] Neil Lutz and D. M. Stull. Bounding the dimension of points on a line. In Theory and applications of models of computation, volume 10185 of Lecture Notes in Comput. Sci., pages 425–439. Springer, Cham, 2017.
  • [12] Neil Lutz and D. M. Stull. Dimension spectra of lines. In Unveiling dynamics and complexity, volume 10307 of Lecture Notes in Comput. Sci., pages 304–314. Springer, Cham, 2017.
  • [13] Neil Lutz and D. M. Stull. Projection theorems using effective dimension. In 43rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2018), 2018.
  • [14] Elvira Mayordomo. A Kolmogorov complexity characterization of constructive Hausdorff dimension. Inf. Process. Lett., 84(1):1–3, 2002.
  • [15] Elvira Mayordomo. Effective fractal dimension in algorithmic information theory. In S. Barry Cooper, Benedikt Löwe, and Andrea Sorbi, editors, New Computational Paradigms: Changing Conceptions of What is Computable, pages 259–285. Springer New York, 2008.
  • [16] Andre Nies. Computability and Randomness. Oxford University Press, Inc., New York, NY, USA, 2009.
  • [17] D. M. Stull. Results on the dimension spectra of planar lines. In 43rd International Symposium on Mathematical Foundations of Computer Science, volume 117 of LIPIcs. Leibniz Int. Proc. Inform., pages Art. No. 79, 15. Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, 2018.
  • [18] Daniel Turetsky. Connectedness properties of dimension level sets. Theor. Comput. Sci., 412(29):3598–3603, 2011.