Concentration of maximum degree in random planar graphs

Mihyun Kang, Michael Missethan
Institute of Discrete Mathematics, Graz University of Technology, Steyrergasse 30, 8010 Graz, Austria
{kang,missethan}@math.tugraz.at
Abstract.

Let P(n,m) be a graph chosen uniformly at random from the class of all planar graphs on vertex set [n]:=\{1,\ldots,n\} with m=m(n) edges. We show that in the sparse regime, when m/n\leq 1, with high probability the maximum degree of P(n,m) takes at most two different values. In contrast, this is no longer true in the dense regime, when m/n>1, where the maximum degree of P(n,m) is not concentrated on any subset of [n] with bounded size.

Key words and phrases:
Random graphs, random planar graphs, maximum degree, balls into bins, Prüfer sequence
Supported by Austrian Science Fund (FWF): I3747 and W1230

1. Introduction and results

1.1. Motivation

The Erdős-Rényi random graph G(n,m), introduced by Erdős and Rényi [16, 17], is a graph chosen uniformly at random from the class \mathcal{G}(n,m) of all vertex-labelled graphs on vertex set [n]:=\{1,\ldots,n\} with m=m(n) edges, denoted by G(n,m)\in_{R}\mathcal{G}(n,m). Since its introduction G(n,m), together with the closely related binomial random graph G(n,p), has been intensively studied (see e.g. [20, 30, 5]). A remarkable feature of this model is the ‘concentration’ of many graph parameters. That is, with high probability (meaning with probability tending to 1 as n tends to infinity, whp for short) certain graph parameters of G(n,m) lie in ‘small’ intervals, which depend only on n and m.

The graph parameter we will focus on in this paper is the maximum degree of a graph H, denoted by \Delta(H). Erdős and Rényi [16] were the first to consider \Delta(G(n,m)) and since then, many results on \Delta(G(n,m)) and, more generally, the degree sequence of G(n,m) have been obtained (see e.g. [2, 48, 18, 3, 40, 28, 4]). A particularly interesting result by Bollobás [4] is that m\sim n\log n is a ‘threshold’ for the concentration of \Delta(G(n,m)). More formally, whp \Delta(G(n,m)) takes one of two values when m=o(n\log n), while \Delta(G(n,m)) is not concentrated on any subset of [n] with bounded size when m=\omega(n\log n) and m=o(n^{2}).

Theorem 1.1 ([4]).

Let m=m(n)=o(n\log n) and G=G(n,m)\in_{R}\mathcal{G}(n,m). Then there exists a D=D(n)\in\mathbb{N} such that whp \Delta(G)\in\{D,D+1\}.

Theorem 1.2 ([4]).

Let m=m(n)=\omega(n\log n), m=o(n^{2}), and G=G(n,m)\in_{R}\mathcal{G}(n,m). If I=I(n)\subseteq[n] is such that whp \Delta(G)\in I, then |I|=\omega(1).

We note that Bollobás [4] actually considered the binomial random graph G(n,p). But by using standard tools relating G(n,m) and G(n,p) (see e.g. [20, Section 1.1]) one can translate his results as stated in Theorems 1.1 and 1.2.

In recent decades various models of random graphs have been introduced by imposing additional constraints on G(n,m), e.g. degree restrictions or topological constraints. In particular, random planar graphs and related structures, like random graphs on surfaces and random planar maps, have attracted considerable attention [8, 11, 34, 37, 9, 26, 32, 39, 38, 13, 10, 15, 14, 12, 21, 44, 46, 19, 45, 23, 22, 25, 24]. McDiarmid and Reed [38] considered the so-called n-vertex model for random planar graphs, that is, a graph P(n) chosen uniformly at random from the class of all vertex-labelled planar graphs on vertex set [n]. They proved that whp \Delta(P(n))=\Theta(\log n). Later Drmota, Giménez, Noy, Panagiotou, and Steger [13] used tools from analytic combinatorics and Boltzmann sampling techniques to show that whp \Delta(P(n)) is concentrated in an interval of length O(\log\log n).

A more natural generalisation of G(n,m) seems to be the random planar graph P(n,m), which is a graph chosen uniformly at random from the class \mathcal{P}(n,m) of all vertex-labelled planar graphs on vertex set [n] with m=m(n) edges, denoted by P(n,m)\in_{R}\mathcal{P}(n,m). The random planar graph P(n,m) has been studied separately in the ‘sparse’ regime where m\leq n+o(n) (see [32, 34]) and in the ‘dense’ regime where m=\lfloor\mu n\rfloor for a constant \mu\in(1,3) (see e.g. [26]). In this paper we show, in the flavour of Theorems 1.1 and 1.2, that in the sparse regime whp \Delta(P(n,m)) takes one of two values (see Theorems 1.4 and 1.6), while in the dense regime \Delta(P(n,m)) is not concentrated on any subset of [n] with bounded size (see Theorem 1.10).

1.2. Main results

In order to state our main results, we need the following definition, where we denote by \log the natural logarithm.

Definition 1.3.

Let \nu:\mathbb{N}^{2}\to\mathbb{R}^{+} be a function such that \nu(n,k) is the unique positive zero of

f(x)=f_{n,k}(x):=x\log k+x-(x+1/2)\log x-(x-1)\log n.

In the case n=k, we write \nu(n):=\nu(n,n).

In Section 9.1 we will prove that the function \nu is well defined, i.e. that f has a unique positive zero. In Lemma 2.11 we will provide some important properties of \nu and in Section 4 we motivate the definition of \nu in the context of the balls-into-bins model.
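For intuition (this is not part of the formal development), \nu(n,k) is easy to evaluate numerically: f_{n,k}(x)\to+\infty as x\to 0^{+} and f_{n,k}(x)\to-\infty as x\to\infty, so a bisection on a sign-changing bracket recovers the unique positive zero. A Python sketch, with the helper name nu chosen purely for illustration:

    import math

    def f(x, n, k):
        # f_{n,k}(x) = x log k + x - (x + 1/2) log x - (x - 1) log n
        return x * math.log(k) + x - (x + 0.5) * math.log(x) - (x - 1) * math.log(n)

    def nu(n, k, tol=1e-9):
        # Bisection: f is positive near 0 and eventually negative, so grow the
        # upper end of the bracket until the sign changes, then bisect.
        lo, hi = 1e-12, 2.0
        while f(hi, n, k) > 0:
            hi *= 2
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(mid, n, k) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Example: nu(n, n) grows like log n / log log n (Lemma 2.11(c)),
    # although the convergence is rather slow.
    print(nu(10**6, 10**6))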

We distinguish different cases according to which ‘region’ the edge density falls into. The first regime which we consider is when m\leq n/2+O(n^{2/3}).

Theorem 1.4.

Let P=P(n,m)\in_{R}\mathcal{P}(n,m), m=m(n)\leq n/2+O(n^{2/3}), and \varepsilon>0. Then we have whp \lfloor\nu(n,2m)-\varepsilon\rfloor \leq \Delta(P) \leq \lfloor\nu(n,2m)+\varepsilon\rfloor. In particular, whp \Delta(P)\in\{D,D+1\}, where D=D(n):=\lfloor\nu(n,2m)-1/3\rfloor.

Next, we consider the case where m\geq n/2+\omega(n^{2/3}) is such that m\leq n+n^{1-\delta} for some \delta>0. Kang and Łuczak [32] and Kang, Moßhammer, and Sprüssel [34] showed that, in contrast to the case when m\leq n/2+O(n^{2/3}), in this regime whp the largest component of P=P(n,m) contains significantly more vertices than the second largest component. Therefore, we provide a concentration result on the maximum degree not only for P, but also for the largest component L(P) of P and the ‘rest’ R(P):=P\setminus L(P). We will see that \Delta(L(P)) and \Delta(R(P)) are strongly concentrated around \nu(\ell) and \nu(r) for suitable \ell=\ell(n) and r=r(n), respectively.

Definition 1.5.

Let m=m(n)\geq n/2+\omega(n^{2/3}) be such that m\leq n+n^{1-\delta} for some \delta>0. Then we define \ell=\ell(n) and r=r(n) according to the range of m as follows.

  (a) m=n/2+s for s=s(n)>0 such that s=o(n) and s^{3}n^{-2}\to\infty: \ell=s and r=n;

  (b) m=dn/2 where d=d(n) tends to a constant in (1,2): \ell=n and r=n;

  (c) m=n+t for t=t(n)<0 such that t=o(n) and t^{5}n^{-3}\to-\infty: \ell=n and r=|t|;

  (d) m=n+t for t=t(n) such that t^{5}n^{-3} tends to a constant in \mathbb{R}: \ell=n and r=n^{3/5};

  (e) m=n+t for t=t(n)>0 such that t=o(n) and t^{5}n^{-3}\to\infty: \ell=n and r=n^{3/2}t^{-3/2}.

We note that \ell and r are chosen such that whp the numbers of vertices in L(P) and R(P) are \Theta(\ell) and \Theta(r), respectively, according to results in [32, 34]. Throughout the paper, we will assume that if m=m(n)\geq n/2+\omega(n^{2/3}) is such that m\leq n+n^{1-\delta} for some \delta>0, then m=m(n) lies in one of the five regimes considered in Definition 1.5; this is due to the critical phenomena observed in random planar graphs. Our next result states that in all these cases \Delta(P), \Delta(L(P)), and \Delta(R(P)) are strongly concentrated.

Theorem 1.6.

Let P=P(n,m)\in_{R}\mathcal{P}(n,m), L=L(P) be the largest component of P, and R=P\setminus L. Assume m=m(n)\geq n/2+\omega(n^{2/3}) is such that m\leq n+n^{1-\delta} for some \delta>0. Let \ell=\ell(n) and r=r(n) be as in Definition 1.5 and \varepsilon>0. Then whp

  1. (a)

    \lfloor\nu(\ell)-\varepsilon\rfloor+1 \leq \Delta(L) \leq \lfloor\nu(\ell)+\varepsilon\rfloor+1;

  2. (b)

    \lfloor\nu(r)-\varepsilon\rfloor \leq \Delta(R) \leq \lfloor\nu(r)+\varepsilon\rfloor.

In particular, whp \Delta(P)\in\{D,D+1\}, where D=D(n):=\max\{\lfloor\nu(\ell)+2/3\rfloor,\lfloor\nu(r)-1/3\rfloor\}.

For example, Theorem 1.6 says that if m=n/2+s for s=s(n)>0 such that s=o(n) and s^{3}n^{-2}\to\infty, known as the weakly supercritical regime, then whp \lfloor\nu(s)-\varepsilon\rfloor+1 \leq \Delta(L) \leq \lfloor\nu(s)+\varepsilon\rfloor+1. In contrast, if m=dn/2 where d=d(n) tends to a constant in (1,2), which is the so-called intermediate regime, then whp \lfloor\nu(n)-\varepsilon\rfloor+1 \leq \Delta(L) \leq \lfloor\nu(n)+\varepsilon\rfloor+1.

Combining Theorems 1.4 and 1.6 we can determine the asymptotic order of \Delta(P) in the sparse regime.

Corollary 1.7.

Let P=P(n,m)\in_{R}\mathcal{P}(n,m) and assume m=m(n) is such that \liminf_{n\to\infty}m/n>0 and m\leq n+n^{1-\delta} for some \delta>0. Then whp

\Delta(P)=(1+o(1))\frac{\log n}{\log\log n}.

It is well known that when m\leq n/2+O(n^{2/3}), the probability that G(n,m) is planar is bounded away from 0 (see e.g. Theorem 5.2 and [29, 35, 43]) and therefore P(n,m) ‘behaves’ asymptotically like G(n,m). However, this is no longer the case when m\geq n/2+\omega(n^{2/3}), since then whp G(n,m) is not planar (see [35, 43]). Theorem 1.6 reveals the following, perhaps surprising, difference between P=P(n,m) and G=G(n,m) in the case that m=m(n)=dn/2 where d=d(n) tends to a constant in (1,2). Roughly speaking, the maximum degrees \Delta(P), \Delta(L(P)), and \Delta(R(P)) are independent of d=d(n). Furthermore, \Delta(L(P)) and \Delta(R(P)) typically differ by at most two.

Corollary 1.8.

Let P=P(n,m)\in_{R}\mathcal{P}(n,m), L=L(P) be the largest component of P, and R=P\setminus L. There exists a D=D(n) such that for all m=m(n)=dn/2 where d=d(n) tends to a constant in (1,2), we have whp

\Delta(L)\in\{D,D+1\};
\Delta(R)\in\{D-1,D\};
\Delta(P)\in\{D,D+1\}.

In particular, whp \Delta(R)\leq\Delta(L)\leq\Delta(R)+2.

In contrast to Corollary 1.8, G=G(n,m) exhibits a perhaps more intuitive behaviour: if the average degree d=2m/n is growing, then \Delta(G) and \Delta(L(G)) are increasing, while \Delta(R(G)) is decreasing. As a consequence, \Delta(L(G)) is typically much larger than \Delta(R(G)).

Proposition 1.9.

For i=1,2, let m_{i}=m_{i}(n)=d_{i}n/2 where d_{i}=d_{i}(n) tends to a constant c_{i}>1, and let G_{i}=G(n,m_{i})\in_{R}\mathcal{G}(n,m_{i}). If c_{1}<c_{2} and G_{1} and G_{2} are chosen independently of each other, then whp \Delta(G_{2})-\Delta(G_{1}), \Delta(L(G_{2}))-\Delta(L(G_{1})), \Delta(R(G_{1}))-\Delta(R(G_{2})), and \Delta(L(G_{1}))-\Delta(R(G_{1})) are strictly positive and of order \Theta(\log n/(\log\log n)^{2}).

Proposition 1.9 follows from a generalised version of Theorem 1.1 (see Theorem 5.1(b)) and classical results on the Erdős-Rényi random graph G(n,m). (For the sake of completeness, we provide a sketch of the proof of Proposition 1.9 in Appendix A.)

Finally, we consider the dense case when m=m(n)=\lfloor\mu n\rfloor for \mu\in(1,3) and show that in this regime \Delta(P) is not concentrated on any subset of [n] with bounded size.

Theorem 1.10.

Let P=P(n,m)\in_{R}\mathcal{P}(n,m) and assume m=m(n)=\lfloor\mu n\rfloor for \mu\in(1,3). If I=I(n)\subseteq[n] is such that whp \Delta(P)\in I, then |I|=\omega(1).

We note that in a planar graph on n\geq 3 vertices there are at most 3n-6 edges, while a general (not necessarily planar) graph can have up to \binom{n}{2} edges. In view of this fact, it seems natural that the ‘transition’ from the two-point concentration to the non-concentration of the maximum degree occurs much earlier in P(n,m) than in G(n,m), namely at m\sim n in P(n,m) (cf. Theorems 1.4, 1.6, and 1.10) instead of m\sim n\log n in G(n,m) (cf. Theorems 1.1 and 1.2). It is worth noting that the ‘threshold’ where the number of vertices outside the largest component drops from linear to sublinear is m\sim n for the random planar graph P(n,m), while it is m\sim n\log n in the case of G(n,m).

1.3. Outline of the paper

The rest of the paper is structured as follows. After giving the necessary definitions, notation, and concepts in Section 2, we provide our proof strategy in Section 3. Section 4 is devoted to the balls-into-bins model, which we use in Sections 5 and 6 to show concentration of the maximum degree in the Erdős-Rényi random graph, in a random graph without complex components, and in a random forest with specified roots, respectively. In Sections 7 and 8 we provide the proofs of our main results. Subsequently, in Section 9, we consider the function \nu(n,k) introduced in Definition 1.3 in more detail. Finally, in Section 10, we discuss a possible generalisation of our results and various questions that remain unanswered.

2. Preliminaries

2.1. Notations for graphs

We consider only undirected graphs or multigraphs and we always assume that the graphs are vertex-labelled.

Definition 2.1.

Given a (simple or multi) graph H we denote by

  • V(H) the vertex set of H and

  • v(H) the order of H, i.e. the number of vertices in H;

  • E(H) the edge set of H and

  • e(H) the size of H, i.e. the number of edges in H;

  • L(H) the largest component of H (if there is more than one largest component, we pick the one containing the vertex with the smallest label);

  • R(H):=H\setminus L(H) the graph obtained from H by deleting the largest component;

  • d_{H}(v) the degree of a vertex v\in V(H). If V(H)=[n], then we call (d_{H}(1),\ldots,d_{H}(n)) the degree sequence of H.

Definition 2.2.

Given a class \mathcal{A} of graphs (e.g. the class of planar graphs), we denote by \mathcal{A}(n) the subclass of \mathcal{A} containing the graphs on vertex set [n] and by \mathcal{A}(n,m) the subclass of \mathcal{A} containing the graphs on vertex set [n] with m edges, respectively. We write A(n)\in_{R}\mathcal{A}(n) for a graph chosen uniformly at random from \mathcal{A}(n) and A(n,m)\in_{R}\mathcal{A}(n,m) for a graph chosen uniformly at random from \mathcal{A}(n,m), respectively.

2.2. Random variables and asymptotic notation

Definition 2.3.

Let S be a finite set and let Y and Z be random variables with values in S. Then we say that Y is distributed like Z, denoted by Y\sim Z, if for all x\in S we have \mathbb{P}[Y=x]=\mathbb{P}[Z=x].

Throughout this paper, we use the standard Landau notation and all asymptotics are taken with respect to n, i.e. as n\to\infty. In order to express that two random variables have asymptotically a ‘similar’ distribution, we use the notion of contiguity.

Definition 2.4.

For each n\in\mathbb{N}, let S=S(n) be a finite set and let Y=Y(n) and Z=Z(n) be random variables with values in S. We say that Z is contiguous with respect to Y, denoted by Z\triangleleft Y, if for all sequences I=I(n)\subseteq S(n)

\left(\lim_{n\to\infty}\mathbb{P}[Y\in I]=1\right)\implies\left(\lim_{n\to\infty}\mathbb{P}[Z\in I]=1\right).

2.3. Complex part and core

We say that a component of a graph H is complex if it has at least two cycles. The union of all complex components is called the complex part Q(H). We call the graph H complex if all its components are complex. The union of all non-complex components is the non-complex part U(H):=H\setminus Q(H). The core C(H), also known as the 2-core, is the maximal subgraph of Q(H) of minimum degree at least two. We observe that the core C(H) can be obtained from H by first removing vertices of degree one recursively and then deleting isolated cycles. We emphasise that the core C(H) is not necessarily connected. We denote by Q_{L}(H) the component of Q(H) containing the largest component of the core, L(C(H)). The rest of the complex part is denoted by Q_{S}(H):=Q(H)\setminus Q_{L}(H). We call Q_{L}(H) and Q_{S}(H) the large complex part and the small complex part, respectively. We note that the number of vertices in Q_{L}(H) is not necessarily larger than in Q_{S}(H), but it will be in most cases we consider. Using this decomposition we can split H into the three disjoint parts Q_{L}(H), Q_{S}(H), and U(H). Moreover, we have the relations C(Q_{L}(H))=L(C(H)) and C(Q_{S}(H))=R(C(H)).
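The peeling description of the core translates directly into a procedure. The following Python sketch (illustrative only; the function name and graph representation are ours) computes C(H) for a graph given as an adjacency dictionary by repeatedly deleting vertices of degree at most one and then removing isolated cycles.

    def core(adj):
        # 2-core: peel vertices of degree <= 1, then remove isolated cycles
        # (components in which every remaining vertex has degree exactly two).
        adj = {v: set(nbrs) for v, nbrs in adj.items()}
        stack = [v for v, nbrs in adj.items() if len(nbrs) <= 1]
        while stack:
            v = stack.pop()
            if v not in adj:
                continue
            for u in adj.pop(v):
                adj[u].discard(v)
                if len(adj[u]) <= 1:
                    stack.append(u)
        # Delete isolated cycles.
        seen = set()
        for s in list(adj):
            if s in seen or s not in adj:
                continue
            comp, queue = set(), [s]
            while queue:
                v = queue.pop()
                if v not in comp:
                    comp.add(v)
                    queue.extend(adj[v])
            seen |= comp
            if all(len(adj[v]) == 2 for v in comp):
                for v in comp:
                    del adj[v]
        return adj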

Later we will construct the large complex part, the small complex part, and the non-complex part of a random planar graph independently of each other. To that end, we will use the following two graph classes.

Definition 2.5.

Let C be a core, i.e. a graph with minimum degree at least two containing no isolated cycles, and q\in\mathbb{N}. Then we denote by \mathcal{Q}(C,q) the class consisting of complex graphs having core C and vertex set [q]. We let Q(C,q)\in_{R}\mathcal{Q}(C,q) be a graph chosen uniformly at random from this class.

Definition 2.6.

We denote by \mathcal{U} the class consisting of all graphs without complex components. For n,m\in\mathbb{N} we let \mathcal{U}(n,m)\subseteq\mathcal{U} be the subclass of all graphs on vertex set [n] with m edges and we write U(n,m)\in_{R}\mathcal{U}(n,m) for a graph chosen uniformly at random from \mathcal{U}(n,m).

Remark 2.7.

Let C be a core, q,n,m\in\mathbb{N}, and let H\in\mathcal{Q}(C,q) be a fixed graph. Then there are precisely |\mathcal{U}(u,w)| many graphs H'\in\mathcal{P}(n,m) whose complex part is H, where u:=n-q and w:=m-e(C)+v(C)-q. As this number is independent of H\in\mathcal{Q}(C,q), there is a nice relation between the complex part Q(P) of the random planar graph P=P(n,m) and Q(C,q)\in_{R}\mathcal{Q}(C,q), the latter being as in Definition 2.5: conditioned on the event that the core C(P) is C and v(Q(P))=q, the complex part Q(P) is distributed like Q(C,q). Similarly, for fixed \tilde{n},\tilde{m},n,m\in\mathbb{N} let U(P) be the non-complex part of P and U(\tilde{n},\tilde{m})\in_{R}\mathcal{U}(\tilde{n},\tilde{m}) be as in Definition 2.6. Then, conditioned on the event that v(U(P))=\tilde{n} and e(U(P))=\tilde{m}, the non-complex part U(P) is distributed like U(\tilde{n},\tilde{m}).

2.4. Conditional random graphs

Given a class \mathcal{A} of graphs it is sometimes quite difficult to analyse the random graph A=A(n)\in_{R}\mathcal{A}(n) directly. In such cases we will use the idea of conditional random graphs. Loosely speaking, we split \mathcal{A} into disjoint subclasses and consider for each subclass \tilde{\mathcal{A}} the random graph \tilde{A}=\tilde{A}(n)\in_{R}\tilde{\mathcal{A}}(n), in other words, the random graph A conditioned on the event that A\in\tilde{\mathcal{A}}. If we can show that some graph property holds whp in all these ‘conditional’ random graphs, then whp this property also holds in A. The following definition and lemma make this idea more precise.

Definition 2.8.

Given a class \mathcal{A} of graphs, a set S, and a function \Phi:\mathcal{A}\to S, we call a sequence \mathbf{a}=(a_{n})_{n\in\mathbb{N}} feasible for (\mathcal{A},\Phi) if for each n\in\mathbb{N} there exists a graph H\in\mathcal{A}(n) such that \Phi(H)=a_{n}. Moreover, for each n\in\mathbb{N} we denote by (A\mid\mathbf{a})(n) a graph chosen uniformly at random from the set \{H\in\mathcal{A}(n):\Phi(H)=a_{n}\}. We will often omit the dependence on n and write just A\mid\mathbf{a} (i.e. ‘A conditioned on \mathbf{a}’) instead of (A\mid\mathbf{a})(n).

Lemma 2.9 ([33, Lemma 3.2]).

Let \mathcal{A} be a class of graphs, S a set, \Phi:\mathcal{A}\to S a function, \mathcal{R} a graph property, and A=A(n)\in_{R}\mathcal{A}(n). If for every sequence \mathbf{a}=(a_{n})_{n\in\mathbb{N}} that is feasible for (\mathcal{A},\Phi) we have whp A\mid\mathbf{a}\in\mathcal{R}, then we have whp A\in\mathcal{R}.

2.5. Internal structure of a random planar graph

In the proofs of our main results we will use some results from [32, 34] on the internal structure of a random planar graph P(n,m), e.g. the asymptotic order of the core, which are reformulated here to simplify the asymptotic notation.

Theorem 2.10 ([32, 34]).

Let P=P(n,m)\in_{R}\mathcal{P}(n,m), C=C(P) be the core, Q_{L}=Q_{L}(P) the large complex part, Q_{S}=Q_{S}(P) the small complex part, U=U(P) the non-complex part, and L=L(P) the largest component of P. In addition, let h=h(n)=\omega(1) be a function tending to \infty arbitrarily slowly and let \ell=\ell(n) and r=r(n) be as in Definition 1.5. We assume that m=m(n)\geq n/2+\omega(n^{2/3}) is such that m\leq n+n^{1-\delta} for some \delta>0 and let \beta:=\min\{\delta/2,1/5\}. Then whp the following hold.

  1. (a)

    \Delta(C)=\Theta(1);

  2. (b)

    v(L(C))=O(\ell^{1-\beta});

  3. (c)

    Q_{L}=L;

  4. (d)

    v(Q_{L})=\Theta(\ell);

  5. (e)

    v(Q_{S})=O(h r^{2/3});

  6. (f)

    v(U)=\Theta(r);

  7. (g)

    e(U)=v(U)/2+O(h v(U)^{2/3}).

2.6. Properties of \nu(n,k)

We will use the following basic properties of \nu(n,k) introduced in Definition 1.3.

Lemma 2.11.

Let the function \nu(n,k) be defined as in Definition 1.3 and \nu(n)=\nu(n,n). Then we have

  1. (a)

    \nu(n,k)>1 for all n,k\in\mathbb{N};

  2. (b)

    if k=k(n)=O(n^{1/3}), then \nu(n,k)\leq 5/3+o(1);

  3. (c)

    if k=k(n)=\Theta(n), then \nu(n,k)=(1+o(1))\log n/\log\log n;

  4. (d)

    \nu(n,k) is strictly increasing in the argument k;

  5. (e)

    if k=k(n)=O(n), then \nu(n,k)=\omega(k/n);

  6. (f)

    if k=k(n)=\Theta(n) and d=d(n)=o(n(\log\log n)^{2}/\log n), then \nu(n,k+d)-\nu(n,k)=o(1);

  7. (g)

    \nu(n) is strictly increasing;

  8. (h)

    if c=c(n)=\Theta(1) and k=k(n)=\Theta(n), then \nu(cn,ck)=\nu(n,k)+o(1);

  9. (i)

    if c=c(n)=\Theta(1), then \nu(n,cn)-\nu(n)=(\log c+o(1))\log n/(\log\log n)^{2}.

We provide a proof of Lemma 2.11 in Section 9.2.

3. Proof strategy

In order to prove Theorem 1.4 on the two-point concentration of \Delta(P(n,m)) when m\leq n/2+O(n^{2/3}), we will use the known fact that with positive probability the Erdős-Rényi random graph G(n,m) is planar in this regime (see Theorem 5.2). Thus, it suffices to determine \Delta(G(n,m)) instead of \Delta(P(n,m)), which we will do by proving that \Delta(G(n,m)) ‘behaves’ like the maximum load of an appropriate balls-into-bins model (see Section 3.3 for details).

The proof of Theorem 1.6 will be based on the following result on the typical structure of P=P(n,m), which can be derived by using statements from [32, 34]: informally speaking, the largest component L=L(P) consists of a family F of rooted tree components, which are connected via ‘few’ edges between the roots of the tree components; these roots are exactly the vertices of L(C(P)), i.e. the largest component of the core. The number of tree components in F is much smaller than v(F), because the order of the core is typically much smaller than the order of the largest component (see Theorem 2.10(b) and (d)). In addition, the remaining part R=R(P)=P\setminus L ‘behaves’ like an Erdős-Rényi random graph G(\tilde{n},\tilde{n}/2) for a suitable \tilde{n}=\tilde{n}(n). We refer to Figure 1 for an illustration of this structure.

Then we will derive the two-point concentration of \Delta(R) by studying G(\tilde{n},\tilde{n}/2). Using the property that the number of tree components in F, and therefore also the number of roots, is small compared to v(F), we will show that the degrees of the roots are typically much smaller than \Delta(F) (see Theorem 6.3(b)). Together with the fact that the number of ‘additional’ edges connecting the roots is ‘small’, this will yield \Delta(L)=\Delta(F). Then the two-point concentration of \Delta(L) will follow by analysing \Delta(F) via the balls-into-bins model and Prüfer sequences (see Section 6). In the following sections we describe these ideas in more detail. In Section 3.1 we use a graph decomposition and conditional random graphs to make the aforementioned structural result on P more formal. Subsequently, we determine the maximum degrees of F and G(n,m) in Sections 3.2 and 3.3, respectively.

Figure 1. Typical structure of P=P(n,m) when m is as in Theorem 1.6: the largest component L=L(P) consists of a family of rooted tree components, which are connected via ‘few’ edges (drawn with thin lines) between the roots (square boxes). The remaining part R=P\setminus L ‘behaves’ like an Erdős-Rényi random graph G(\tilde{n},\tilde{n}/2) for a suitable \tilde{n}=\tilde{n}(n).

3.1. Decomposition and conditional random graphs

Instead of considering L and R directly, we will actually split the random planar graph P into the large complex part Q_{L}=Q_{L}(P), the small complex part Q_{S}=Q_{S}(P), and the non-complex part U=U(P) (see Section 2.3 for a formal definition of Q_{L}, Q_{S}, and U). We then use the fact that by Theorem 2.10(c) we have whp

L=Q_{L}, (1)

which also implies that whp R=Q_{S}\cup U. In order to analyse Q_{L}, Q_{S}, and U, we will use the concept of conditional random graphs (see Section 2.4): for given \lambda,\sigma\in\mathbb{N} and a core C, we denote by \tilde{P} the random planar graph P conditioned on the event that C(P)=C, v(Q_{L}(P))=\lambda, and v(Q_{S}(P))=\sigma. By Remark 2.7 we have

Q_{L}(\tilde{P}) \sim Q(L(C),\lambda), (2)
Q_{S}(\tilde{P}) \sim Q(R(C),\sigma), (3)
U(\tilde{P}) \sim U(u,w), (4)

where the random graphs on the right-hand side are as defined in Definitions 2.5 and 2.6, L(C) is the largest component of C, R(C)=C\setminus L(C), u:=n-\lambda-\sigma, and w:=m-e(C)+v(C)-\lambda-\sigma.

Roughly speaking, there is the following elementary but useful relation between the ‘conditional’ random graph \tilde{P} and the original random planar graph P (see Lemma 2.9): if for all ‘typical’ choices of C, \lambda, and \sigma a graph property holds whp in \tilde{P}, then it holds whp in P. In order to determine what ‘typical’ choices of C, \lambda, and \sigma are, we use known results on the internal structure of P (see Theorem 2.10). For example, if we know that whp the core C(P) satisfies a certain structural property, e.g. the maximum degree is bounded or the number of vertices lies in a certain interval, then typical choices of C are those cores having this property.

Due to this relation between P and \tilde{P} and (2)–(4) it suffices to consider the random graphs Q(C,q) and U(n,m) for fixed values of C, q, n, and m. We will see that whenever we consider U(n,m), we have m=n/2+O(n^{2/3}). It is well known that in this regime the Erdős-Rényi random graph G(n,m) has no complex components with positive probability (see Theorem 5.2). Hence, we can consider \Delta(G(n,m)) instead of \Delta(U(n,m)), which we will do in Section 3.3. Furthermore, in Section 3.2 we will study Q(C,q) using the balls-into-bins model.

We emphasise that the decomposition P=Q_{L}\,\dot{\cup}\,Q_{S}\,\dot{\cup}\,U describes the structure of P as stated at the beginning of Section 3 and illustrated in Figure 1: by (1) the large complex part Q_{L} corresponds to the largest component L=L(P). Using (2) this implies that the asymptotic behaviour of L can be deduced from that of Q(C,q) for a suitable core C and q\in\mathbb{N}. The random graph Q(C,q) can be constructed by replacing each vertex of C randomly by a rooted tree component such that a complex graph with q vertices is obtained. Furthermore, in our applications \Delta(C) will be bounded and v(C) will be ‘small’ compared to q (see Theorem 2.10(a) and (b)). This implies that Q(C,q), and therefore also L, consists of a family of rooted tree components (containing the edges not lying in C), which are connected via ‘few’ additional edges (the edges lying in C). For the structure of the remaining part R=P\setminus L we observe that R corresponds to Q_{S}\cup U (see (1)). Combining the facts that v(Q_{S}) will be ‘small’ compared to v(U) and e(U)\approx v(U)/2 (see Theorem 2.10(e) and (g)) with (4), we obtain that R is closely related to U(\tilde{n},\tilde{n}/2), and therefore also to G(\tilde{n},\tilde{n}/2), for a suitable \tilde{n}\in\mathbb{N}.

3.2. Random complex part and forests with specified roots

Let C be a core (on vertex set [v(C)]) and q\in\mathbb{N}. In Definition 2.5 we denoted by Q(C,q) a graph chosen uniformly at random from the family \mathcal{Q}(C,q) of all complex graphs with core C and vertex set [q]. Moreover, we let \mathcal{F}(n,t) be the class of forests on vertex set [n] consisting of t tree components such that each vertex from [t] lies in a different tree component. The elements of \mathcal{F}(n,t) are called forests with specified roots and the vertices in [t] roots. For simplicity, we will often just write forest instead of forest with specified roots. We can construct Q=Q(C,q) by choosing a random forest F=F(q,v(C))\in_{R}\mathcal{F}(q,v(C)) and replacing each vertex v of C by the tree component with root v. For the degrees of vertices in Q we obtain d_{Q}(v)=d_{C}(v)+d_{F}(v) for v\in C and d_{Q}(v)=d_{F}(v) otherwise. In our applications we will have that \Delta(C) is bounded and v(C)=O(q^{1-\beta}) for some \beta>0, i.e. v(C) is ‘small’ compared to q (see Theorem 2.10(a), (b), and (d)). This will imply that whp \Delta(Q)=\Delta(F) (see Theorem 6.4).

In order to determine \Delta(F), we will introduce a bijection between \mathcal{F}(n,t) and \mathcal{S}(n,t):=[n]^{n-t-1}\times[t] similar to Prüfer sequences for trees (see Section 6.1). Given a forest F\in\mathcal{F}(n,t) we recursively delete the leaf, i.e. a vertex of degree one, with the largest label and thereby build a sequence by noting the unique neighbours of the deleted leaves. We will show in Theorem 6.1 that this is indeed a bijection and that the degree of a vertex v is determined by the number of occurrences of v in the sequence (see (13)). It is straightforward to construct a random element of \mathcal{S}(n,t) by a balls-into-bins model such that the load of a bin equals the number of occurrences in the sequence of the corresponding element. Thus, we will derive the concentration of the maximum degree \Delta(F) from a concentration result on the maximum load. We refer to Figure 3 for an illustration of the construction of Q(C,q) via the random forest F and the balls-into-bins model.
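As a small illustration (not from the paper; the function name is ours), once the core C and a forest F\in\mathcal{F}(q,v(C)) are fixed, Q(C,q) is simply their edge-disjoint union on [q], and the degree identity above can be checked directly:

    def overlay_core_and_forest(core_edges, forest_edges, q):
        # Q is obtained by overlaying a forest from F(q, v(C)) on the core C:
        # vertex v of C is the root of the v-th tree component, so
        # d_Q(v) = d_C(v) + d_F(v) on core vertices and d_Q(v) = d_F(v) otherwise.
        degree = {v: 0 for v in range(1, q + 1)}
        edges = list(core_edges) + list(forest_edges)
        for u, v in edges:
            degree[u] += 1
            degree[v] += 1
        return edges, degree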

3.3. Erdős-Rényi random graph and the balls-into-bins model

Given n bins \mathcal{B}_{1},\ldots,\mathcal{B}_{n} and 2m balls B_{1},\ldots,B_{2m} we denote by A_{i} the index of the bin to which the i-th ball B_{i} is assigned, for each i\in[2m]. We will consider the random multigraph M with V(M)=[n] and E(M)=\{\{A_{2i-1},A_{2i}\}\mid i\in[m]\} (see also Figure 2). We will see that conditioned on M being simple, M is distributed like G(n,m). Furthermore, we will show that as long as m=O(n), M is simple with positive probability. Hence, the concentration of \Delta(G(n,m)) will follow from the concentration of the maximum load of a bin (see Theorem 4.1).

Figure 2. Construction of a multigraph M via the balls-into-bins model. In (a) we have n=5 bins, 2m=8 balls, and A_{1}=5, A_{2}=3, A_{3}=5, \ldots, A_{8}=3. In (b) this results in a multigraph M with V(M)=[5] and E(M)=\{\{5,3\},\{5,1\},\{2,5\},\{2,3\}\}. The maximum load of a bin (=3) corresponds to the maximum degree \Delta(M)=3 of M.
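This construction is straightforward to simulate. The Python sketch below (illustrative only; the helper names are ours) throws 2m balls into n bins, reads off the multigraph M, and rejects until M is simple, which, conditioned on simplicity, yields a uniform element of \mathcal{G}(n,m).

    import random
    from collections import Counter

    def random_multigraph(n, m, rng=random):
        # Location vector of 2m balls in n bins; consecutive balls form the edges of M.
        A = [rng.randint(1, n) for _ in range(2 * m)]
        edges = [(A[2 * i], A[2 * i + 1]) for i in range(m)]
        loads = Counter(A)          # load of bin j = degree of vertex j in M
        return edges, loads

    def is_simple(edges):
        no_loops = all(u != v for u, v in edges)
        no_multi = len({frozenset(e) for e in edges}) == len(edges)
        return no_loops and no_multi

    def sample_gnm(n, m, rng=random):
        # Conditioned on being simple, M is distributed like G(n, m).
        while True:
            edges, loads = random_multigraph(n, m, rng)
            if is_simple(edges):
                return edges, max(loads.values())   # edge set and maximum degree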

3.4. Double counting

To prove Theorem 1.10, we will combine results on the asymptotic number of planar graphs from [26] (see Theorem 8.1) with a double counting argument (see Lemma 8.2) and deduce that for all fixed k,l\in\mathbb{N} we have

\liminf_{n\to\infty}\mathbb{P}[P\text{ has }k\text{ isolated vertices and }l\text{ isolated edges}]>0, (5)

where we call a vertex isolated if it has degree zero and say that an edge is isolated if both of its endpoints have degree one. Then we introduce an operation that uses an isolated vertex and two isolated edges to increase the maximum degree of a graph by one (see Figure 4). Starting with a graph that has ‘many’ isolated vertices and isolated edges, we can repeatedly apply this operation to create many graphs with various maximum degrees (see Lemma 8.4). Together with (5) this will imply that \Delta(P) also takes ‘many’ different values.

4. Balls into bins

Balls-into-bins models have been extensively studied in the literature (see e.g. [31, 41]). Throughout the paper, we will use the following model. Given n bins \mathcal{B}_{1},\ldots,\mathcal{B}_{n} we sequentially assign k balls B_{1},\ldots,B_{k} to those n bins by choosing a bin for each ball, independently and uniformly at random. Let \mathbf{A}=(A_{1},\ldots,A_{k}) be the location vector, i.e. A_{i} is the index of the bin to which the i-th ball B_{i} is assigned. For each j\in[n] we call the number of balls in the j-th bin \mathcal{B}_{j} the load \lambda_{j}=\lambda_{j}(\mathbf{A}). We write \mathbf{\lambda}=\mathbf{\lambda}(\mathbf{A})=(\lambda_{1},\ldots,\lambda_{n}) for the vector of all loads and denote by \lambda^{\ast}=\lambda^{\ast}(\mathbf{A})=\max_{j\in[n]}\lambda_{j} the maximum load of a bin. For t\in[n] we let \lambda^{\ast}_{t}=\lambda^{\ast}_{t}(\mathbf{A})=\max_{j\in[t]}\lambda_{j} be the maximum load among the first t bins \mathcal{B}_{1},\ldots,\mathcal{B}_{t}. We write \operatorname{BB}(n,k) for a random vector distributed like the location vector \mathbf{A} of a balls-into-bins experiment with n bins and k balls, denoted by

\mathbf{A}\sim\operatorname{BB}(n,k),

and \operatorname{M}(n,k) for a random variable distributed like the maximum load \lambda^{\ast}, which we denote by

\lambda^{\ast}=\lambda^{\ast}(\mathbf{A})\sim\operatorname{M}(n,k).

Gonnet [27] proved in the case n=k that whp \operatorname{M}(n,n)=(1+o(1))\log n/\log\log n. Later Raab and Steger [47] considered \operatorname{M}(n,k) for different ranges of k. Amongst other results, they showed that whp \operatorname{M}(n,k)=(1+o(1))\log n/\log\log n is still true as long as k=\Theta(n). In the following we refine their result, showing that if k=O(n), then whp \operatorname{M}(n,k) is actually concentrated at two values.
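A quick simulation (illustrative only; it could reuse the nu helper sketched after Definition 1.3) shows this two-point concentration empirically: for k=O(n), essentially all samples of \operatorname{M}(n,k) land on two consecutive integers around \nu(n,k).

    import random
    from collections import Counter

    def max_load(n, k, rng=random):
        # One sample of M(n, k): throw k balls into n bins uniformly at random.
        loads = Counter(rng.randint(1, n) for _ in range(k))
        return max(loads.values())

    n = k = 10**5
    samples = Counter(max_load(n, k) for _ in range(200))
    print(sorted(samples.items()))   # typically supported on two consecutive values near nu(n, k)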

Before proving this rigorously, we motivate the result by providing the following heuristic. For l=l(n)\in\mathbb{N} we let X^{(l)} be the number of bins with load l. We have

\mathbb{E}[X^{(l)}]=n\binom{k}{l}(1/n)^{l}(1-1/n)^{k-l}=:\mu(l). (6)

We expect that the load l of a bin is much smaller than k and therefore we have

\mu(l)=\Theta(1)k^{l}e^{l}l^{-l-1/2}n^{-l+1}.

Intuitively, the maximum load \lambda^{\ast}\sim\operatorname{M}(n,k) should be close to the largest l for which \mu(l)=\Theta(1) is satisfied; in other words, \log(\mu(\lambda^{\ast})) should be close to 0. This motivates the definition of \nu(n,k) in Definition 1.3 as the unique positive zero of the function

f(l)=f_{n,k}(l):=l\log k+l-(l+1/2)\log l-(l-1)\log n,

which is asymptotically equal to \log(\mu(l)) up to an additive constant. We will use the first and second moment method (see e.g. [1, 20]) to make this heuristic rigorous and show that the maximum load \lambda^{\ast}\sim\operatorname{M}(n,k) is strongly concentrated around \nu(n,k).
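Indeed, taking logarithms in the displayed estimate for \mu(l) and absorbing the \Theta(1) factor into an additive O(1) term gives

\log\mu(l)=l\log k+l-(l+1/2)\log l-(l-1)\log n+O(1)=f_{n,k}(l)+O(1),

so \mu(l) drops below constant order once l exceeds the zero \nu(n,k) of f_{n,k}.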

Theorem 4.1.

If k=k(n)=O(n) and \varepsilon>0, then whp

\lfloor\nu(n,k)-\varepsilon\rfloor \leq \operatorname{M}(n,k) \leq \lfloor\nu(n,k)+\varepsilon\rfloor.
Proof.

Let \mathbf{A}\sim\operatorname{BB}(n,k) be the location vector, \lambda_{j}=\lambda_{j}(\mathbf{A}) the load of bin \mathcal{B}_{j} for each j\in[n], and \lambda^{\ast}=\lambda^{\ast}(\mathbf{A})\sim\operatorname{M}(n,k) the maximum load. First we consider the case k\leq n^{1/3}. Then we have

\mathbb{P}[\lambda^{\ast}=1]=\prod_{i=1}^{k-1}\left(1-\frac{i}{n}\right)\geq\left(1-\frac{k}{n}\right)^{k}=1-o(1). (7)

Due to Lemma 2.11(a) and (b) we have 1<\nu(n,k)\leq 7/4 for n large enough. Together with (7) this shows the statement in the case k\leq n^{1/3}. Hence, it remains to consider the case k>n^{1/3}. For l\in[k] and j\in[n] we let X_{j}^{(l)}=1 if \lambda_{j}=l, i.e. the number \lambda_{j} of balls (among k balls) in the j-th bin \mathcal{B}_{j} is equal to l, and X_{j}^{(l)}=0 otherwise. In addition, we let X^{(l)}=\sum_{j=1}^{n}X_{j}^{(l)} be the number of bins with load l. Then we have \mathbb{P}[X_{j}^{(l)}=1]=\binom{k}{l}(1/n)^{l}(1-1/n)^{k-l} and obtain (6). If l=O(k^{1/2}), then \binom{k}{l}=\Theta(1)k^{l}e^{l}/l^{l+1/2}, where we used Stirling's formula for l!. Hence, we get

\mu(l)=\Theta(1)\frac{k^{l}e^{l}}{l^{l+1/2}n^{l-1}}, (8)

because (1-1/n)^{k-l}=\Theta(1). For an upper bound on the maximum load \lambda^{\ast} we will use the first moment method. Let l^{\ast}=l^{\ast}(n):=\lfloor\nu(n,k)+\varepsilon\rfloor+1 and \tau=\tau(n):=l^{\ast}-\nu(n,k)\geq\varepsilon. Due to Lemma 2.11(c) and (d) and the assumption k>n^{1/3} we have l^{\ast}=O(k^{1/2}). Thus, equation (8) holds for l=l^{\ast} and by the definition of \nu=\nu(n,k) we obtain

\mu(l^{\ast})=\Theta(1)\frac{k^{\nu+\tau}e^{\nu+\tau}}{(\nu+\tau)^{\nu+\tau+1/2}n^{\nu+\tau-1}}=\Theta(1)\left(\frac{ke}{n(\nu+\tau)}\right)^{\tau}\left(\frac{\nu}{\nu+\tau}\right)^{\nu+1/2}. (9)

Together with Lemma 2.11(e) this yields \mu(l^{\ast})=o(1). Due to Lemma 2.11(e) we have \mu(l+1)/\mu(l)=(k-l)/((l+1)(n-1))=o(1) for all l\geq l^{\ast}. Hence,

\mathbb{P}[\lambda^{\ast}\geq l^{\ast}]\leq\sum_{l\geq l^{\ast}}\mu(l)=(1+o(1))\mu(l^{\ast})=o(1).

For a lower bound, we will show that \mathbb{P}[X^{(l_{\ast})}>0]=1-o(1), where l_{\ast}=l_{\ast}(n):=\lfloor\nu(n,k)-\varepsilon\rfloor, using the second moment method. In the following we consider the random variables X_{j}^{(l)} and X^{(l)} only for l=l_{\ast} and therefore we write X_{j}=X_{j}^{(l_{\ast})} and X=X^{(l_{\ast})} for simplicity. In order to apply the second moment method, we will show \mathbb{E}[X]=\omega(1) and \mathbb{E}[X_{i}X_{j}]=(1+o(1))\mathbb{E}[X_{i}]\mathbb{E}[X_{j}] for all i\neq j. We let \rho=\rho(n):=\nu-l_{\ast}\geq\varepsilon and by (8), Lemma 2.11(e), and the definition of \nu we obtain

\mu(l_{\ast})=\Theta(1)\frac{k^{\nu-\rho}e^{\nu-\rho}}{(\nu-\rho)^{\nu-\rho+1/2}n^{\nu-\rho-1}}=\Theta(1)\left(\frac{n(\nu-\rho)}{ke}\right)^{\rho}\left(\frac{\nu}{\nu-\rho}\right)^{\nu+1/2}=\omega(1).

Next, we note that conditioned on the event X_{i}=1, i.e. \lambda_{i}=l_{\ast}, the loads \lambda_{j} for j\neq i are distributed like the loads of a balls-into-bins experiment with n-1 bins and k-l_{\ast} balls, and thus

\mathbb{P}[X_{j}=1\mid X_{i}=1]=\binom{k-l_{\ast}}{l_{\ast}}(1/(n-1))^{l_{\ast}}(1-1/(n-1))^{k-2l_{\ast}}.

Hence, we obtain

\frac{\mathbb{E}[X_{i}X_{j}]}{\mathbb{E}[X_{i}]\mathbb{E}[X_{j}]}=\frac{\mathbb{P}[X_{j}=1\mid X_{i}=1]}{\mathbb{P}[X_{j}=1]}=\frac{\binom{k-l_{\ast}}{l_{\ast}}(1/(n-1))^{l_{\ast}}(1-1/(n-1))^{k-2l_{\ast}}}{\binom{k}{l_{\ast}}(1/n)^{l_{\ast}}(1-1/n)^{k-l_{\ast}}}=1+o(1),

where we used the assumption k>n^{1/3} and the fact that l_{\ast}=O(\log n/\log\log n) due to Lemma 2.11(c) and (d). Thus, by the second moment method we obtain \mathbb{P}[X>0]=1-o(1), which finishes the proof. ∎

Next, we show that if we consider only a ‘small’ subset of the bins, then the maximum load over these bins is significantly smaller than the maximum load over all bins. We will use this fact later when we relate random forests to the balls-into-bins model (see Section 6), where this ‘small’ subset will correspond to the set of all roots.

Lemma 4.2.

Let k=k(n) and t=t(n)\in\mathbb{N} be such that k=\Theta(n) and t=O(n^{1-\beta}) for some \beta>0. Let \mathbf{A}\sim\operatorname{BB}(n,k), \lambda^{\ast}=\lambda^{\ast}(\mathbf{A})\sim\operatorname{M}(n,k) be the maximum load, and \lambda^{\ast}_{t}=\lambda^{\ast}_{t}(\mathbf{A}) be the maximum load among the first t bins. Then whp \lambda^{\ast}-\lambda^{\ast}_{t}=\omega(1).

Proof.

We observe that \lambda^{\ast}-\lambda^{\ast}_{t} is non-increasing in t. Thus, it suffices to show \lambda^{\ast}-\lambda^{\ast}_{t}=\omega(1) for t=\lfloor n^{1-\beta}\rfloor with \beta\in(0,1). We denote by S_{t} the total number of balls in the first t bins. We have \mathbb{E}[S_{t}]=tk/n and \mathbb{V}[S_{t}]\leq\mathbb{E}[S_{t}]. Hence, by Chebyshev's inequality, we have whp

S_{t}\leq\frac{tk}{n}+\left(\frac{tk}{n}\right)^{2/3}=:\bar{l}=\bar{l}(n). (10)

Conditioned on the event S_{t}=l for l\in\mathbb{N}, \lambda^{\ast}_{t} is distributed like \operatorname{M}(t,l). Thus,

\mathbb{P}\left[\lambda^{\ast}_{t}\leq\lfloor\nu(t,\bar{l})\rfloor+1\right]\geq\sum_{l=1}^{\bar{l}}\mathbb{P}[S_{t}=l]\,\mathbb{P}\left[\lambda^{\ast}_{t}\leq\lfloor\nu(t,\bar{l})\rfloor+1\mid S_{t}=l\right]\geq\mathbb{P}[S_{t}\leq\bar{l}]\,\mathbb{P}\left[\operatorname{M}(t,\bar{l})\leq\lfloor\nu(t,\bar{l})\rfloor+1\right]=1-o(1),

where the last equality follows from Theorem 4.1 and (10). Due to Lemma 2.11(c) and the choice t=\lfloor n^{1-\beta}\rfloor we get \nu(t,\bar{l})=(1+o(1))\log t/\log\log t=(1-\beta+o(1))\log n/\log\log n, which yields whp \lambda^{\ast}_{t}\leq(1-\beta+o(1))\log n/\log\log n. By Theorem 4.1 and Lemma 2.11(c) we have whp \lambda^{\ast}=(1+o(1))\log n/\log\log n. Hence, we obtain whp \lambda^{\ast}-\lambda^{\ast}_{t}\geq(\beta+o(1))\log n/\log\log n=\omega(1), as desired. ∎

5. Erdős-Rényi random graph and graphs without complex components

We start this section by providing a relation between the degree sequence of the Erdős-Rényi random graph G(n,m) and the loads of a balls-into-bins model. In particular, this yields a refined version of Theorem 1.1.

Theorem 5.1.

Let m=m(n)=O(n) and \mathbf{d}=\mathbf{d}(n)=(d_{G}(1),\ldots,d_{G}(n)) be the degree sequence of G=G(n,m)\in_{R}\mathcal{G}(n,m). Moreover, let \mathbf{A}=\mathbf{A}(n)\sim\operatorname{BB}(n,2m), \mathbf{\lambda}=\mathbf{\lambda}(n)=\mathbf{\lambda}(\mathbf{A}) be the vector of loads of \mathbf{A}, and \varepsilon>0. Then

  1. (a)

    the degree sequence \mathbf{d} is contiguous with respect to \mathbf{\lambda}, i.e. \mathbf{d}\triangleleft\mathbf{\lambda};

  2. (b)

    whp \lfloor\nu(n,2m)-\varepsilon\rfloor \leq \Delta(G) \leq \lfloor\nu(n,2m)+\varepsilon\rfloor.

Proof.

We consider the random multigraph M given by V(M)=[n] and E(M)=\{\{A_{2i-1},A_{2i}\}\mid i\in[m]\}, where \mathbf{A}=(A_{1},\ldots,A_{2m}) is the location vector (see Figure 2 for an illustration). We observe that for v\in[n] the load \lambda_{v} equals the degree d_{M}(v). For each graph H\in\mathcal{G}(n,m) we have \mathbb{P}[M=H]=2^{m}m!/n^{2m}. Hence, conditioned on the event that M is simple, M is distributed like G. Moreover, for n large enough we have

\mathbb{P}[M\text{ is simple}]=\mathbb{P}[M\text{ has no loops}]\cdot\mathbb{P}[M\text{ has no multiple edges}\mid M\text{ has no loops}]=\left(1-\frac{1}{n}\right)^{m}\cdot\prod_{i=0}^{m-1}\left(1-\frac{i}{\binom{n}{2}}\right)\geq\exp\left(-\frac{2m}{n}-\frac{4m^{2}}{n^{2}}\right)>\rho,

for a suitably chosen \rho>0, since m=O(n). This shows \liminf_{n\to\infty}\mathbb{P}[M\text{ is simple}]>0. Thus, each property that holds whp in M also holds whp in G. In particular, the degree sequence \mathbf{d} of G is contiguous with respect to the degree sequence \mathbf{\lambda} of M, i.e. \mathbf{d}\triangleleft\mathbf{\lambda}. Together with Theorem 4.1 this yields whp \lfloor\nu(n,2m)-\varepsilon\rfloor \leq \Delta(G) \leq \lfloor\nu(n,2m)+\varepsilon\rfloor, as desired. ∎

We recall that we denote by U(n,m) a graph chosen uniformly at random from the class \mathcal{U}(n,m) consisting of graphs having no complex components, vertex set [n], and m edges. Later U(n,m) will take the role of the non-complex part of the random planar graph; in this case the relation m=n/2+O(n^{2/3}) is satisfied (see Theorem 2.10). Britikov [6] provided a useful relation between U(n,m) and G(n,m) in this range.

Theorem 5.2 ([6]).

Let m=m(n)n/2+O(n2/3)m=m(n)\leq n/2+O\left(n^{2/3}\right) and G=G(n,m)R𝒢(n,m)G=G(n,m)\in_{R}\mathcal{G}(n,m). Then

lim infn[G has no complex components ]>0.\displaystyle\liminf_{n\to\infty}\leavevmode\nobreak\ \mathbb{P}\left[G\text{ has no complex components }\right]>0.

In particular, lim infn[G is planar ]>0\liminf_{n\to\infty}\leavevmode\nobreak\ \mathbb{P}\left[G\text{ is planar }\right]>0.

Combining Theorems 5.1(b) and 5.2 we can deduce that whp Δ(U(n,m))\Delta\left(U(n,m)\right) is concentrated at two values.

Lemma 5.3.

Let m=m(n)=n/2+O(n2/3)m=m(n)=n/2+O\left(n^{2/3}\right), U=U(n,m)R𝒰(n,m)U=U(n,m)\in_{R}\mathcal{U}(n,m) be a random graph without complex components, and ε>0\varepsilon>0. Then whp ν(n)εΔ(U)ν(n)+ε\left\lfloor\nu\left(n\right)-\varepsilon\right\rfloor\hskip 1.70709pt\leq\hskip 1.70709pt\Delta\left(U\right)\hskip 1.70709pt\leq\hskip 1.70709pt\left\lfloor\nu\left(n\right)+\varepsilon\right\rfloor.

Proof.

Combining Theorems 5.1(b) and 5.2 yields whp

ν(n,2m)ε/2Δ(U)ν(n,2m)+ε/2.\displaystyle\left\lfloor\nu\left(n,2m\right)-\varepsilon/2\right\rfloor\hskip 1.70709pt\leq\hskip 1.70709pt\Delta\left(U\right)\hskip 1.70709pt\leq\hskip 1.70709pt\left\lfloor\nu\left(n,2m\right)+\varepsilon/2\right\rfloor. (11)

Using Lemma 2.11(f) we obtain ν(n,2m)=ν(n)+o(1)\nu\left(n,2m\right)=\nu\left(n\right)+o\left(1\right). Together with (11) this shows the statement. ∎

6. Random complex part and forests with specified roots

The goal of this section is to prove that whp the maximum degree of a random complex part is concentrated at two values (see Theorem 6.4(b)). As a random complex part can be constructed using a random forest, we start by analysing the class (n,t)\mathcal{F}(n,t) of forests on vertex set [n][n] having tt tree components (some of which might just be isolated vertices) such that the vertices 1,,t1,\ldots,t all lie in different tree components.

In Section 6.1 we generalise the concept of Prüfer sequences to forests. Then we determine the maximum degree in a random forest in Section 6.2. Finally, we derive the concentration result on the maximum degree in a random complex part in Section 6.3.

6.1. Prüfer sequences for forests with specified roots

Similar to Prüfer sequences for trees (see e.g. [50, 36]), there is a bijection between (n,t)\mathcal{F}(n,t) and 𝒮(n,t):=[n]nt1×[t]\mathcal{S}\left(n,t\right):=[n]^{n-t-1}\times[t] (see e.g. [49, Section 6.6]): Given a forest F(n,t)F\in\mathcal{F}(n,t) we construct a sequence (F0,,Fnt)\left(F_{0},\ldots,F_{n-t}\right) of forests and two sequences (x1,,xnt)\left(x_{1},\ldots,x_{n-t}\right) and (y1,,ynt)\left(y_{1},\ldots,y_{n-t}\right) of vertices as follows. We start with F0:=FF_{0}:=F. Given Fi1F_{i-1} for an i[nt]i\in[n-t], we let yiy_{i} be the leaf with largest label in Fi1F_{i-1} and xix_{i} be the unique neighbour of yiy_{i}. Furthermore, we obtain FiF_{i} by deleting the edge xiyix_{i}y_{i} in Fi1F_{i-1}. We note that this construction is always possible, since Fi1F_{i-1} has nti+1n-t-i+1 edges and therefore at least one leaf. We call

ψ(F):=(x1,,xnt)\displaystyle\psi(F):=\left(x_{1},\ldots,x_{n-t}\right) (12)

the Prüfer sequence of FF. We denote by #(v,𝐰):=|{i[nt]wi=v}|\#\left(v,\mathbf{w}\right):=\left|\left\{i\in[n-t]\mid w_{i}=v\right\}\right| the number of occurrences of an element v[n]v\in[n] in 𝐰=(w1,,wnt)[n]nt\mathbf{w}=\left(w_{1},\ldots,w_{n-t}\right)\in[n]^{n-t}.
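
The construction of ψ(F)\psi(F) is completely algorithmic. The following minimal Python sketch (illustrative names, quadratic running time for clarity) computes the Prüfer sequence of a forest given by its edge list, assuming the roots are the vertices 1,…,t.

    def pruefer_sequence(n, t, edges):
        """Compute the Prüfer sequence psi(F) of a forest F in F(n,t), given by
        its edge list on vertex set {1,...,n} with roots 1,...,t: repeatedly
        remove the leaf with the largest label and record its unique neighbour.
        (Illustrative sketch; quadratic running time for clarity.)"""
        adj = {v: set() for v in range(1, n + 1)}
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        seq = []
        for _ in range(n - t):
            y = max(v for v in range(1, n + 1) if len(adj[v]) == 1)  # leaf with largest label
            x = next(iter(adj[y]))                                   # its unique neighbour
            seq.append(x)
            adj[x].remove(y)
            adj[y].remove(x)
        return seq

    # Example: the forest in F(5,2) with roots 1, 2 and edges 1-3, 3-5, 2-4 has
    # Prüfer sequence (3, 2, 1); the sequence has length n-t and ends with a root.
    print(pruefer_sequence(5, 2, [(1, 3), (3, 5), (2, 4)]))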

Theorem 6.1.

Let n,tn,t\in\mathbb{N} and (n,t)\mathcal{F}(n,t) be the class of forests on vertex set [n][n] consisting of tt tree components such that the vertices 1,,t1,\ldots,t all lie in different tree components. In addition, let 𝒮(n,t)=[n]nt1×[t]\mathcal{S}\left(n,t\right)=[n]^{n-t-1}\times[t] and ψ(F)\psi(F) be the Prüfer sequence of F(n,t)F\in\mathcal{F}(n,t) as defined in (12). Then ψ:(n,t)𝒮(n,t)\psi:\mathcal{F}(n,t)\to\mathcal{S}\left(n,t\right) is a bijection. For F(n,t)F\in\mathcal{F}(n,t) and v[n]v\in[n] we have

dF(v)={#(v,ψ(F)) if v[t]#(v,ψ(F))+1 if v[n][t].\displaystyle d_{F}\left(v\right)\hskip 1.70709pt=\hskip 1.70709pt\begin{cases}\#\left(v,\psi(F)\right)&\text{\leavevmode\nobreak\ \leavevmode\nobreak\ if\leavevmode\nobreak\ \leavevmode\nobreak\ }v\in[t]\\ \#\left(v,\psi(F)\right)+1&\text{\leavevmode\nobreak\ \leavevmode\nobreak\ if\leavevmode\nobreak\ \leavevmode\nobreak\ }v\in[n]\setminus[t].\end{cases} (13)

Theorem 6.1 can be shown using ideas similar to those in the classical case of trees. For the sake of completeness, we provide a proof of Theorem 6.1 in Appendix B.

6.2. Degree sequence and maximum degree of a random forest

We consider a random forest F=F(n,t)R(n,t)F=F(n,t)\in_{R}\mathcal{F}(n,t) and determine the degree sequence of FF and the maximum degree Δ(F)\Delta\left(F\right).

Theorem 6.2.

Let n,tn,t\in\mathbb{N} and 𝐝=(dF(1),,dF(n))\mathbf{d}=\left(d_{F}\left(1\right),\ldots,d_{F}\left(n\right)\right) be the degree sequence of F=F(n,t)R(n,t)F=F\left(n,t\right)\in_{R}\mathcal{F}\left(n,t\right). Let 𝐀BB(n,nt1)\mathbf{A}\sim\operatorname{BB}\left(n,n-t-1\right) and λj=λj(𝐀)\lambda_{j}=\lambda_{j}(\mathbf{A}) be the load in bin j\mathcal{B}_{j} for each j[n]j\in[n]. In addition, let ZR[t]Z\in_{R}[t] (independent of FF) and for j[t]j\in[t] we define Yj=1Y_{j}=1 if Z=jZ=j and Yj=0Y_{j}=0 otherwise. Then

(λ1+Y1,,λt+Yt,λt+1+1,,λn+1)𝐝.\displaystyle\big{(}\lambda_{1}+Y_{1},\ldots,\lambda_{t}+Y_{t},\lambda_{t+1}+1,\ldots,\lambda_{n}+1\big{)}\sim\mathbf{d}.
Proof.

Instead of directly choosing FF from (n,t)\mathcal{F}(n,t), we can equivalently create FF by Prüfer sequences from Section 6.1: First we perform a balls-into-bins experiment with nn bins and nt1n-t-1 balls and let 𝐀=(A1,,Ant1)BB(n,nt1)\mathbf{A}=\left(A_{1},\ldots,A_{n-t-1}\right)\sim\operatorname{BB}\left(n,n-t-1\right) be the location vector. Then we independently choose AntR[t]A_{n-t}\in_{R}[t] and set F=ψ1(A1,,Ant)F=\psi^{-1}\left(A_{1},\ldots,A_{n-t}\right) and the statement follows by (13). ∎
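
The sampling procedure used in this proof is equally easy to simulate. The following minimal Python sketch (illustrative only, not the paper's code) draws a degree sequence according to Theorem 6.2 and checks that it sums to twice the number of edges of a forest from this class.

    import random
    from collections import Counter

    def forest_degree_sample(n, t, rng):
        """Sample the degree sequence of a uniform forest from F(n,t) as in the
        proof of Theorem 6.2: throw n-t-1 balls into n bins, add 1 to every
        non-root bin, and add 1 to a uniformly chosen root (the last entry of
        the Prüfer sequence). Illustrative sketch only."""
        loads = Counter(rng.randint(1, n) for _ in range(n - t - 1))
        deg = [loads[v] + (0 if v <= t else 1) for v in range(1, n + 1)]
        deg[rng.randint(1, t) - 1] += 1      # extra occurrence for one root
        return deg

    rng = random.Random(1)
    deg = forest_degree_sample(10, 3, rng)
    assert sum(deg) == 2 * (10 - 3)          # a forest in F(10,3) has n - t = 7 edges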

Using this connection to the balls-into-bins model we obtain an upper bound on Δ(F(n,t))\Delta\left(F(n,t)\right) (see Theorem 6.3(a)). If we assume that tt is not too ‘large’, we can even show that whp Δ(F(n,t))\Delta\left(F(n,t)\right) is concentrated at two values and that the maximum degree of a root vertex, i.e. a vertex in [t][t], is much smaller than Δ(F(n,t))\Delta\left(F(n,t)\right) (see Theorem 6.3(b)). We will need these facts later when we use random forests to build a random complex part (see Section 6.3).

Theorem 6.3.

Let t=t(n)t=t(n), F=F(n,t)R(n,t)F=F\left(n,t\right)\in_{R}\mathcal{F}\left(n,t\right), and ε>0\varepsilon>0. Then

  1. (a)

    whp Δ(F)ν(n)+2\Delta\left(F\right)\hskip 1.70709pt\leq\hskip 1.70709pt\left\lfloor\nu\left(n\right)\right\rfloor+2;

  2. (b)

    if t=O(n1β)t=O\left(n^{1-\beta}\right) for some β>0\beta>0, then whp ν(n)ε+1Δ(F)ν(n)+ε+1\left\lfloor\nu\left(n\right)-\varepsilon\right\rfloor+1\hskip 1.70709pt\leq\hskip 1.70709pt\Delta\left(F\right)\hskip 1.70709pt\leq\hskip 1.70709pt\left\lfloor\nu\left(n\right)+\varepsilon\right\rfloor+1 and Δ(F)max{dF(r)r[t]}=ω(1)\Delta\left(F\right)-\max\left\{d_{F}(r)\mid r\in[t]\right\}=\omega\left(1\right).

Proof.

Let 𝐀BB(n,nt1)\mathbf{A}\sim\operatorname{BB}\left(n,n-t-1\right), λ=λ(𝐀)M(n,nt1)\lambda^{\ast}=\lambda^{\ast}(\mathbf{A})\sim\operatorname{M}\left(n,n-t-1\right) be the maximum load of 𝐀\mathbf{A}, and λt=λt(𝐀)\lambda^{\ast}_{t}=\lambda^{\ast}_{t}(\mathbf{A}) be the maximum load among the first tt bins of 𝐀\mathbf{A}. Due to Theorem 4.1 we have whp λν(n,nt1)+1ν(n)+1\lambda^{\ast}\leq\left\lfloor\nu\left(n,n-t-1\right)\right\rfloor+1\leq\left\lfloor\nu\left(n\right)\right\rfloor+1, where the last inequality uses Lemma 2.11(d). Combining this with Theorem 6.2 we have

[Δ(F)>ν(n)+2][λ>ν(n)+1]=o(1).\displaystyle\mathbb{P}\big{[}\Delta\left(F\right)>\left\lfloor\nu\left(n\right)\right\rfloor+2\big{]}\hskip 1.70709pt\leq\hskip 1.70709pt\mathbb{P}\left[\lambda^{\ast}>\left\lfloor\nu\left(n\right)\right\rfloor+1\right]=o\left(1\right).

This shows statement (a).

By Lemma 4.2 we have whp λλt=ω(1)\lambda^{\ast}-\lambda^{\ast}_{t}=\omega\left(1\right). This together with Theorem 6.2 implies whp Δ(F)max{dF(r)r[t]}=ω(1)\Delta\left(F\right)-\max\left\{d_{F}(r)\mid r\in[t]\right\}=\omega\left(1\right) and Δ(F)λ+1\Delta\left(F\right)\triangleleft\lambda^{\ast}+1. Thus, we obtain by Theorem 4.1 that whp

ν(n,nt1)ε/2+1Δ(F)ν(n,nt1)+ε/2+1.\displaystyle\left\lfloor\nu\left(n,n-t-1\right)-\varepsilon/2\right\rfloor+1\hskip 1.70709pt\leq\hskip 1.70709pt\Delta\left(F\right)\hskip 1.70709pt\leq\hskip 1.70709pt\left\lfloor\nu\left(n,n-t-1\right)+\varepsilon/2\right\rfloor+1.

By Lemma 2.11(f) we have ν(n,nt1)=ν(n)+o(1)\nu\left(n,n-t-1\right)=\nu\left(n\right)+o\left(1\right), which proves statement (b). ∎

We note that the special case of random trees, i.e. when t=1t=1, was studied in [42, 7]. In particular, Carr, Goh, and Schmutz [7] used the saddle-point method to show that whp the maximum degree in random trees is concentrated at two values.

6.3. Random complex part

We consider the class 𝒬(C,q)\mathcal{Q}\left(C,q\right) consisting of complex graphs with core CC and vertex set [q][q], where CC is a given core and qq\in\mathbb{N} (cf. Definition 2.5). As illustrated in Figure 3, we can construct Q(C,q)R𝒬(C,q)Q\left(C,q\right)\in_{R}\mathcal{Q}\left(C,q\right) via the balls-into-bins model. Assuming that Δ(C)\Delta\left(C\right) is bounded and v(C)v\left({C}\right) is ‘small’ compared to qq, we will use Theorem 6.3 to show that the maximum degree of Q(C,q)Q\left(C,q\right) is strongly concentrated.

Figure 3. Construction of the random complex part Q(C,q)Q(C,q), illustrated for a triangle core CC and q=9q=9: (a) given the core CC and qq\in\mathbb{N}, (b) a balls-into-bins experiment with q=9q=9 bins, qv(C)1=5q-v\left({C}\right)-1=5 balls, and one additional ball that can be allocated only to one of the first v(C)=3v\left({C}\right)=3 bins is translated (deterministically) to (c) a Prüfer sequence, (d) a random forest F(9,3)F(9,3), and finally to (e) the random complex part Q(C,q)Q(C,q).
Theorem 6.4.

For each nn\in\mathbb{N}, let C=C(n)C=C(n) be a core and q=q(n)q=q(n)\in\mathbb{N}. In addition, let Q=Q(C,q)R𝒬(C,q)Q=Q\left(C,q\right)\in_{R}\mathcal{Q}\left(C,q\right) be a random complex part with core CC and vertex set [q][q] as in Definition 2.5 and ε>0\varepsilon>0. If Δ(C)=Θ(1)\Delta\left(C\right)=\Theta\left(1\right), then the following hold.

  1. (a)

    Whp Δ(Q)ν(q)+O(1)\Delta\left(Q\right)\hskip 1.70709pt\leq\hskip 1.70709pt\nu\left(q\right)+O\left(1\right).

  2. (b)

    If in addition v(C)=O(q1β)v\left({C}\right)=O\left(q^{1-\beta}\right) for some β>0\beta>0, then whp ν(q)ε+1Δ(Q)ν(q)+ε+1\left\lfloor\nu\left(q\right)-\varepsilon\right\rfloor+1\hskip 1.70709pt\leq\hskip 1.70709pt\Delta\left(Q\right)\hskip 1.70709pt\leq\hskip 1.70709pt\left\lfloor\nu\left(q\right)+\varepsilon\right\rfloor+1.

Proof.

We observe that QQ can be obtained by choosing a random forest F=F(q,v(C))R(q,v(C))F=F(q,v\left({C}\right))\in_{R}\mathcal{F}(q,v\left({C}\right)) and then replacing each vertex rr in CC by the tree component of FF with root rr. For a vertex vV(Q)v\in V\left(Q\right) we have

dQ(v)={dC(v)+dF(v)ifvV(C)dF(v) otherwise.\displaystyle d_{Q}\left(v\right)=\begin{cases}d_{C}\left(v\right)+d_{F}\left(v\right)&\text{if}\leavevmode\nobreak\ \leavevmode\nobreak\ v\in V\left(C\right)\\ d_{F}\left(v\right)&\text{ otherwise}.\end{cases} (14)

Hence, we have Δ(Q)Δ(C)+Δ(F)\Delta\left(Q\right)\hskip 1.70709pt\leq\hskip 1.70709pt\Delta\left(C\right)+\Delta\left(F\right). By Theorem 6.3(a) we get whp Δ(F)ν(q)+2\Delta\left(F\right)\hskip 1.70709pt\leq\hskip 1.70709pt\nu\left(q\right)+2. Together with the fact Δ(C)=Θ(1)\Delta\left(C\right)=\Theta\left(1\right) this yields statement (a). For (b) we apply Theorem 6.3(b) to FF. Together with (14) and Δ(C)=Θ(1)\Delta\left(C\right)=\Theta\left(1\right) this implies whp Δ(Q)=Δ(F)\Delta\left(Q\right)=\Delta\left(F\right). Thus, statement (b) follows by applying again Theorem 6.3(b). ∎
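
The assembling step in this proof can also be made concrete. The following minimal Python sketch (illustrative names, a toy core and forest) identifies the core vertices with the roots of the forest, takes the union of the two edge sets and illustrates the degree relation (14).

    from collections import Counter

    def complex_part(core_edges, forest_edges):
        """Assemble a complex part Q from a core C on the vertices 1, ..., v(C)
        and a forest in F(q, v(C)) whose roots are exactly the core vertices:
        identifying each core vertex with the root of its tree component, Q is
        the union of the two edge sets, so the degrees satisfy (14).
        (Illustrative sketch; inputs are assumed to be compatible.)"""
        edges = list(core_edges) + list(forest_edges)
        deg = Counter()
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
        return edges, deg

    # Toy example: triangle core on {1,2,3}, forest on [6] with roots 1, 2, 3 and
    # edges 1-4, 4-5, 3-6; then Delta(Q) = d_C(1) + d_F(1) = 2 + 1 = 3.
    edges, deg = complex_part([(1, 2), (2, 3), (1, 3)], [(1, 4), (4, 5), (3, 6)])
    assert max(deg.values()) == 3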

7. Proofs of Theorems 1.4 and 1.6 and Corollaries 1.7 and 1.8

Throughout this section, let P=P(n,m)R𝒫(n,m)P=P(n,m)\in_{R}\mathcal{P}(n,m) be the random planar graph.

7.1. Proof of Theorem 1.4

In Theorem 5.2 we have seen that lim infn[G(n,m) is planar]>0\liminf_{n\to\infty}\mathbb{P}\left[G(n,m)\text{ is planar}\right]>0. Thus, each graph property that holds whp in G(n,m)G(n,m) is also true whp in PP and the first statement follows by Theorem 5.1(b). By taking ε=1/3\varepsilon=1/3 we get the ‘in particular’ statement. ∎

7.2. Proof of Theorem 1.6

We split PP into the large complex part QL=QL(P)Q_{L}=Q_{L}\left(P\right), the small complex part QS=QS(P)Q_{S}=Q_{S}\left(P\right), and the non-complex part U=U(P)U=U\left(P\right) as described in Section 2.3. We claim that whp the following hold.

  1. (i)

    ν()ε+1Δ(QL)ν()+ε+1\left\lfloor\nu\left(\ell\right)-\varepsilon\right\rfloor+1\hskip 1.70709pt\leq\hskip 1.70709pt\Delta\left(Q_{L}\right)\hskip 1.70709pt\leq\hskip 1.70709pt\left\lfloor\nu\left(\ell\right)+\varepsilon\right\rfloor+1;

  2. (ii)

    Δ(QS)ν(r2/3)+O(1)\Delta\left(Q_{S}\right)\hskip 1.70709pt\leq\hskip 1.70709pt\nu\left(r^{2/3}\right)+O\left(1\right);

  3. (iii)

    ν(r)εΔ(U)ν(r)+ε\left\lfloor\nu\left(r\right)-\varepsilon\right\rfloor\hskip 1.70709pt\leq\hskip 1.70709pt\Delta\left(U\right)\hskip 1.70709pt\leq\hskip 1.70709pt\left\lfloor\nu\left(r\right)+\varepsilon\right\rfloor.

Assuming these three claims are true we can finish the proof as follows. By Theorem 2.10(c) we have whp L=QLL=Q_{L} and therefore also whp R=QSUR=Q_{S}\cup U. Thus, statement (a) of Theorem 1.6 follows by (i). By Lemma 2.11(c) we have ν(r2/3)=(2/3+o(1))logr/loglogr\nu\left(r^{2/3}\right)=\left(2/3+o\left(1\right)\right)\log r/\log\log r and ν(r)=(1+o(1))logr/loglogr\nu\left(r\right)=\left(1+o\left(1\right)\right)\log r/\log\log r. Combining that with (ii) and (iii) yields whp Δ(QSU)=Δ(U)\Delta\left(Q_{S}\cup U\right)=\Delta\left(U\right) and therefore also whp Δ(R)=Δ(U)\Delta\left(R\right)=\Delta\left(U\right). Hence, statement (b) of Theorem 1.6 follows by (iii). Finally, we obtain the ‘in particular’ statement by taking ε=1/3\varepsilon=1/3.

To prove the claims (i)(iii), we will follow the strategy described in Section 3: We will construct a conditional random graph A𝐚A\mid\mathbf{a} which is distributed like the random graph P~\tilde{P} introduced in Section 3.1. Then we will determine the maximum degrees of the large complex part, small complex part and non-complex part of A𝐚A\mid\mathbf{a} (or equivalently of P~\tilde{P}). Finally, we will apply Lemma 2.9 to translate these results to the random planar graph PP.

Let β:=min{δ/2,1/5}\beta:=\min\left\{\delta/2,1/5\right\} and 𝒜(n)\mathcal{A}(n) be the subclass of 𝒫(n,m)\mathcal{P}(n,m) consisting of those graphs HH satisfying

Δ(C(H))\displaystyle\Delta\big{(}C\left(H\right)\big{)} =Θ(1),\displaystyle\hskip 1.70709pt=\hskip 1.70709pt\Theta\left(1\right), (15)
v(L(C(H)))\displaystyle v\big{(}{L\left(C\left(H\right)\right)}\big{)} =O(1β),\displaystyle\hskip 1.70709pt=\hskip 1.70709ptO\left(\ell^{1-\beta}\right), (16)
v(QL(H))\displaystyle v\big{(}{Q_{L}\left(H\right)}\big{)} =Θ(),\displaystyle\hskip 1.70709pt=\hskip 1.70709pt\Theta\left(\ell\right), (17)
v(QS(H))\displaystyle v\big{(}{Q_{S}\left(H\right)}\big{)} =O(r2/3),\displaystyle\hskip 1.70709pt=\hskip 1.70709ptO\left(r^{2/3}\right), (18)
v(U(H))\displaystyle v\big{(}{U\left(H\right)}\big{)} =Θ(r),\displaystyle\hskip 1.70709pt=\hskip 1.70709pt\Theta\left(r\right), (19)
e(U(H))\displaystyle e\big{(}{U\left(H\right)}\big{)} =v(U(H))/2+O(v(U(H))2/3).\displaystyle\hskip 1.70709pt=\hskip 1.70709ptv\big{(}{U\left(H\right)}\big{)}/2+O\left(v\big{(}{U\left(H\right)}\big{)}^{2/3}\right). (20)

Due to Theorem 2.10 we can choose the implicit constants in the equations (15)–(20) such that P𝒜(n)P\in\mathcal{A}(n) with probability at least 1γ/21-\gamma/2, for arbitrary γ>0\gamma>0. We will apply Lemma 2.9 to the class 𝒜:=n𝒜(n)\mathcal{A}:=\bigcup_{n\in\mathbb{N}}\mathcal{A}(n). To that end, we define the function Φ\Phi such that for H𝒜H\in\mathcal{A} we have

Φ(H):=(C(H),v(QL(H)),v(QS(H))).\displaystyle\Phi(H):=\big{(}C\left(H\right),v\left({Q_{L}\left(H\right)}\right),v\left({Q_{S}\left(H\right)}\right)\big{)}.

Let 𝐚=(Cn,λn,σn)n\mathbf{a}=\left(C_{n},\lambda_{n},\sigma_{n}\right)_{n\in\mathbb{N}} be a sequence that is feasible for (𝒜,Φ)(\mathcal{A},\Phi) and let A=A(n)R𝒜(n)A=A(n)\in_{R}\mathcal{A}(n). By definition the possible realisations of A𝐚A\mid\mathbf{a} are those graphs H𝒜H\in\mathcal{A} with C(H)=CnC\left(H\right)=C_{n}, v(QL(H))=λnv\left({Q_{L}\left(H\right)}\right)=\lambda_{n}, and v(QS(H))=σnv\left({Q_{S}\left(H\right)}\right)=\sigma_{n}. Hence, A𝐚=QL(A𝐚)˙QS(A𝐚)˙U(A𝐚)A\mid\mathbf{a}=Q_{L}\left(A\mid\mathbf{a}\right)\leavevmode\nobreak\ \dot{\cup}\leavevmode\nobreak\ Q_{S}\left(A\mid\mathbf{a}\right)\leavevmode\nobreak\ \dot{\cup}\leavevmode\nobreak\ U\left(A\mid\mathbf{a}\right) can be constructed as follows. For QL(A𝐚)Q_{L}\left(A\mid\mathbf{a}\right) we choose uniformly at random a complex graph with λn\lambda_{n} vertices and core L(Cn)L\left(C_{n}\right) and for QS(A𝐚)Q_{S}\left(A\mid\mathbf{a}\right) a complex graph with σn\sigma_{n} vertices and core R(Cn)R\left(C_{n}\right). For U(A𝐚)U\left(A\mid\mathbf{a}\right) we choose a graph without complex components having un:=nλnσnu_{n}:=n-\lambda_{n}-\sigma_{n} vertices and wn:=me(Cn)+v(Cn)λnσnw_{n}:=m-e\left({C_{n}}\right)+v\left({C_{n}}\right)-\lambda_{n}-\sigma_{n} edges. Summing up, we have

QL(A𝐚)\displaystyle Q_{L}\big{(}A\mid\mathbf{a}\big{)} Q(L(Cn),λn);\displaystyle\sim Q\big{(}L\left(C_{n}\right),\lambda_{n}\big{)}; (21)
QS(A𝐚)\displaystyle Q_{S}\big{(}A\mid\mathbf{a}\big{)} Q(R(Cn),σn);\displaystyle\sim Q\big{(}R\left(C_{n}\right),\sigma_{n}\big{)}; (22)
U(A𝐚)\displaystyle U\big{(}A\mid\mathbf{a}\big{)} U(un,wn),\displaystyle\sim U\big{(}u_{n},w_{n}\big{)}, (23)

where the random complex parts and the random graph without complex components on the right hand side are as defined in Definitions 2.5 and 2.6, respectively. Due to (15) and (16) we have Δ(Cn)=Θ(1)\Delta\left(C_{n}\right)=\Theta\left(1\right) and v(L(Cn))=O(λn1β)v\big{(}{L\left(C_{n}\right)}\big{)}=O\left(\lambda_{n}^{1-\beta}\right). Hence, we can apply Theorem 6.4(b) to Q(L(Cn),λn)Q\big{(}L\left(C_{n}\right),\lambda_{n}\big{)}. Together with (21) this implies whp

ν(λn)ε/2+1Δ(QL(A𝐚))ν(λn)+ε/2+1.\displaystyle\left\lfloor\nu\left(\lambda_{n}\right)-\varepsilon/2\right\rfloor+1\hskip 1.70709pt\leq\hskip 1.70709pt\Delta\big{(}Q_{L}\left(A\mid\mathbf{a}\right)\big{)}\hskip 1.70709pt\leq\hskip 1.70709pt\left\lfloor\nu\left(\lambda_{n}\right)+\varepsilon/2\right\rfloor+1. (24)

Using Lemma 2.11(h) we have ν(λn)=ν()+o(1)\nu\left(\lambda_{n}\right)=\nu\left(\ell\right)+o\left(1\right), as λn=Θ()\lambda_{n}=\Theta\left(\ell\right) by (17). Together with (24) this shows whp

ν()ε+1Δ(QL(A𝐚))ν()+ε+1.\displaystyle\left\lfloor\nu\left(\ell\right)-\varepsilon\right\rfloor+1\hskip 1.70709pt\leq\hskip 1.70709pt\Delta\big{(}Q_{L}\left(A\mid\mathbf{a}\right)\big{)}\hskip 1.70709pt\leq\hskip 1.70709pt\left\lfloor\nu\left(\ell\right)+\varepsilon\right\rfloor+1. (25)

By Lemma 2.9 inequality (25) also holds whp if we replace A𝐚A\mid\mathbf{a} by AA. Combining this with the fact that P𝒜P\in\mathcal{A} with probability at least 1γ/21-\gamma/2 we obtain that with probability at least 1γ1-\gamma

ν()ε+1Δ(QL)ν()+ε+1\displaystyle\left\lfloor\nu\left(\ell\right)-\varepsilon\right\rfloor+1\hskip 1.70709pt\leq\hskip 1.70709pt\Delta\left(Q_{L}\right)\hskip 1.70709pt\leq\hskip 1.70709pt\left\lfloor\nu\left(\ell\right)+\varepsilon\right\rfloor+1

for all nn large enough. As γ>0\gamma>0 was arbitrary, this shows claim (i).

Next, we prove claims (ii) and (iii) in a similar fashion. Combining (22) and Theorem 6.4(a) yields Δ(QS(A𝐚))ν(σn)+O(1)\Delta\big{(}Q_{S}\left(A\mid\mathbf{a}\right)\big{)}\leq\nu\left(\sigma_{n}\right)+O\left(1\right). Due to Lemma 2.11(g) and (h) we have ν(σn)ν(r2/3)+o(1)\nu\left(\sigma_{n}\right)\leq\nu\left(r^{2/3}\right)+o(1), where we used σn=O(r2/3)\sigma_{n}=O\left(r^{2/3}\right) by (18). This yields Δ(QS(A𝐚))ν(r2/3)+O(1)\Delta\big{(}Q_{S}\left(A\mid\mathbf{a}\right)\big{)}\leq\nu\left(r^{2/3}\right)+O\left(1\right). Thus, claim (ii) follows by Lemma 2.9. Due to (20) we have wn=un/2+O(un2/3)w_{n}=u_{n}/2+O\left(u_{n}^{2/3}\right). Hence, we can combine (23) and Lemma 5.3 to obtain whp

ν(un)ε/2Δ(U(A𝐚))ν(un)+ε/2.\displaystyle\left\lfloor\nu\left(u_{n}\right)-\varepsilon/2\right\rfloor\hskip 1.70709pt\leq\hskip 1.70709pt\Delta\big{(}U\left(A\mid\mathbf{a}\right)\big{)}\hskip 1.70709pt\leq\hskip 1.70709pt\left\lfloor\nu\left(u_{n}\right)+\varepsilon/2\right\rfloor. (26)

Due to (19) we have un=Θ(r)u_{n}=\Theta\left(r\right) and therefore we obtain ν(un)=ν(r)+o(1)\nu\left(u_{n}\right)=\nu\left(r\right)+o\left(1\right) by Lemma 2.11(h). Using this in (26) we get whp

ν(r)εΔ(U(A𝐚))ν(r)+ε.\displaystyle\left\lfloor\nu\left(r\right)-\varepsilon\right\rfloor\hskip 1.70709pt\leq\hskip 1.70709pt\Delta\big{(}U\left(A\mid\mathbf{a}\right)\big{)}\hskip 1.70709pt\leq\hskip 1.70709pt\left\lfloor\nu\left(r\right)+\varepsilon\right\rfloor.

Now claim (iii) follows by Lemma 2.9. ∎

7.3. Proof of Corollary 1.7

We distinguish two cases. If mm is as in Theorem 1.4, then whp Δ(P)=ν(n,2m)+O(1)=(1+o(1))logn/loglogn\Delta\left(P\right)=\nu\left(n,2m\right)+O\left(1\right)=\left(1+o\left(1\right)\right)\log n/\log\log n, where we used Theorem 1.4, Lemma 2.11(c), and that m=Θ(n)m=\Theta\left(n\right). If mm is as in Theorem 1.6, then max{(n),r(n)}=n\max\left\{\ell(n),r(n)\right\}=n. Together with Lemma 2.11(c) and (g) this implies max{ν(),ν(r)}=ν(n)=(1+o(1))logn/loglogn\max\left\{\nu\left(\ell\right),\nu\left(r\right)\right\}=\nu\left(n\right)=\left(1+o\left(1\right)\right)\log n/\log\log n. Combining that with the ‘in particular’ statement of Theorem 1.6 we obtain whp Δ(P)=(1+o(1))logn/loglogn\Delta\left(P\right)=\left(1+o\left(1\right)\right)\log n/\log\log n, as desired. ∎

7.4. Proof of Corollary 1.8

The assertion follows directly from Theorem 1.6 and the fact from Definition 1.5(b) that =n\ell=n and r=nr=n. ∎

8. Non-concentration of the maximum degree: Proof of Theorem 1.10

We recall that we denote by 𝒫(n,m)\mathcal{P}(n,m) the class of all vertex-labelled planar graphs on vertex set [n][n] with mm edges. Furthermore, let 𝒫C(n,m)𝒫(n,m)\mathcal{P}_{C}(n,m)\subseteq\mathcal{P}(n,m) be the subclass containing all connected planar graphs on vertex set [n][n] with mm edges. Our starting point is the following result of Giménez and Noy [26].

Theorem 8.1 ([26]).

Let μ(1,3)\mu\in(1,3) and m=m(n)=μnm=m(n)=\left\lfloor\mu n\right\rfloor. Then there exist constants γ,u>0\gamma,u>0 and cc1>0c\geq c_{1}>0 such that

|𝒫(n,m)|\displaystyle\left|\mathcal{P}(n,m)\right| =(1+o(1))cn4γnumn!;\displaystyle=\left(1+o\left(1\right)\right)cn^{-4}\gamma^{n}u^{m}n!;
|𝒫C(n,m)|\displaystyle\left|\mathcal{P}_{C}(n,m)\right| =(1+o(1))c1n4γnumn!.\displaystyle=\left(1+o\left(1\right)\right)c_{1}n^{-4}\gamma^{n}u^{m}n!.

Using Theorem 8.1 we can show that the probability that the dense random planar graph P(n,m)P(n,m) has kk isolated vertices and ll isolated edges is bounded away from 0, for each fixed k,l0:={0}k,l\in\mathbb{N}_{0}:=\mathbb{N}\cup\left\{0\right\}.

Lemma 8.2.

Let P=P(n,m)R𝒫(n,m)P=P(n,m)\in_{R}\mathcal{P}(n,m) and assume m=m(n)=μnm=m(n)=\left\lfloor\mu n\right\rfloor for μ(1,3)\mu\in(1,3). Then we have for all fixed k,l0k,l\in\mathbb{N}_{0}

lim infn[P has k isolated vertices and l isolated edges]>0.\displaystyle\liminf_{n\to\infty}\leavevmode\nobreak\ \mathbb{P}\left[P\text{ has }k\text{ isolated vertices and }l\text{ isolated edges}\right]>0.
Proof.

Throughout the proof, let nn be sufficiently large. Let HH be a fixed planar graph having kk isolated vertices and ll isolated edges and satisfying e(H)=μv(H)e\left({H}\right)=\left\lfloor\mu\cdot v\left({H}\right)\right\rfloor. Then we can construct ‘many’ graphs in 𝒫(n,m)\mathcal{P}(n,m) with kk isolated vertices and ll isolated edges by adding a copy of HH to a connected graph H𝒫C(nv(H),me(H))H^{\prime}\in\mathcal{P}_{C}(n-v\left({H}\right),m-e\left({H}\right)). More precisely, we consider the following construction:

  • Choose a subset L[n]L\subseteq[n] of size v(H)v\left({H}\right) and label the vertices of HH with LL;

  • Choose a graph H𝒫C(nv(H),me(H))H^{\prime}\in\mathcal{P}_{C}(n-v\left({H}\right),m-e\left({H}\right)), label the vertices with [n]L[n]\setminus L, and add the copy of HH to HH^{\prime}.

As these constructed graphs are all pairwise distinct, we obtain

[P has k isolated vertices and l isolated edges](nv(H))|𝒫C(nv(H),me(H))||𝒫(n,m)|.\displaystyle\mathbb{P}\left[P\text{ has }k\text{ isolated vertices and }l\text{ isolated edges}\right]\geq\frac{\binom{n}{v\left({H}\right)}\left|\mathcal{P}_{C}(n-v\left({H}\right),m-e\left({H}\right))\right|}{\left|\mathcal{P}(n,m)\right|}. (27)

We note that μ(nv(H))\left\lfloor\mu\cdot\left(n-v\left({H}\right)\right)\right\rfloor is either me(H)m-e\left({H}\right) or me(H)1m-e\left({H}\right)-1. We assume μ(nv(H))=me(H)\left\lfloor\mu\cdot\left(n-v\left({H}\right)\right)\right\rfloor=m-e\left({H}\right), as the latter case can be done analogously by considering instead of HH a graph having v(H)v\left({H}\right) vertices, e(H)+1e\left({H}\right)+1 edges, kk isolated vertices, and ll isolated edges. Theorem 8.1 implies

|𝒫C(nv(H),me(H))|=Θ(1)|𝒫(nv(H),me(H))|=Θ(1)nv(H)|𝒫(n,m)|.\displaystyle\left|\mathcal{P}_{C}(n-v\left({H}\right),m-e\left({H}\right))\right|=\Theta\left(1\right)\left|\mathcal{P}(n-v\left({H}\right),m-e\left({H}\right))\right|=\Theta\left(1\right)n^{-v\left({H}\right)}\left|\mathcal{P}(n,m)\right|. (28)

Finally, the statement follows by using (28) and the fact (nv(H))=Θ(1)nv(H)\binom{n}{v\left({H}\right)}=\Theta\left(1\right)n^{v\left({H}\right)} in (27). ∎

We note that the formulas for the number of planar graphs in Theorem 8.1 do not hold in the sparse regime where mn+o(n)m\leq n+o\left(n\right). As a consequence, Lemma 8.2 also fails in that case.

Next, we will show that we can locally change a graph so that the maximum degree increases by one, the number of isolated vertices increases by three, and the number of isolated edges decreases by two. Applying this operation repeatedly, we can create graphs with many different maximum degrees. The following definition and lemma make this idea more precise.

Definition 8.3.

For n,m,dn,m,d\in\mathbb{N} and k,l0k,l\in\mathbb{N}_{0} we let 𝒫(n,m,k,l,d)𝒫(n,m)\mathcal{P}(n,m,k,l,d)\subseteq\mathcal{P}(n,m) be the subclass of all planar graphs on vertex set [n][n] having mm edges, kk isolated vertices, ll isolated edges, and maximum degree dd.

Lemma 8.4.

Let n,m,k,l,dn,m,k,l,d\in\mathbb{N} with l2l\geq 2 and d3d\geq 3. Then we have

|𝒫(n,m,k+3,l2,d+1)||𝒫(n,m,k,l,d)|18k3.\displaystyle\frac{\left|\mathcal{P}(n,m,k+3,l-2,d+1)\right|}{\left|\mathcal{P}(n,m,k,l,d)\right|}\geq\frac{1}{8k^{3}}.
Proof.

We consider the following operation that transforms a graph H𝒫(n,m,k,l,d)H\in\mathcal{P}(n,m,k,l,d) to a graph H𝒫(n,m,k+3,l2,d+1)H^{\prime}\in\mathcal{P}(n,m,k+3,l-2,d+1) (see also Figure 4). We pick in HH a vertex v1v_{1} of degree dd, a neighbour v2v_{2} of v1v_{1}, an isolated vertex v3v_{3}, and isolated edges v4v5v_{4}v_{5}, v6v7v_{6}v_{7}. Then we obtain HH^{\prime} from HH by deleting the edges v4v5v_{4}v_{5}, v6v7v_{6}v_{7} and adding v1v3v_{1}v_{3} and v2v3v_{2}v_{3}. For two graphs H𝒫(n,m,k,l,d)H\in\mathcal{P}(n,m,k,l,d) and H𝒫(n,m,k+3,l2,d+1)H^{\prime}\in\mathcal{P}(n,m,k+3,l-2,d+1) we write HHH\rightarrow H^{\prime} if HH can be transformed to HH^{\prime} via the above operation. For a fixed graph HH we have

|{HHH}|dk(l2).\displaystyle\left|\left\{H^{\prime}\mid H\rightarrow H^{\prime}\right\}\right|\geq dk\binom{l}{2}. (29)

Next, we note that if we perform our operation HHH\rightarrow H^{\prime}, then HH^{\prime} satisfies the following properties:

  • There are at most two vertices in HH^{\prime} with degree d+1d+1;

  • The vertex v1v_{1} has degree d+1d+1;

  • The vertex v3v_{3} has exactly two neighbours, which are v1v_{1} and v2v_{2};

  • The vertices v4,v5,v6v_{4},v_{5},v_{6} and v7v_{7} are isolated.

Using these observations we can bound for a fixed HH^{\prime} the number of graphs HH with HHH\rightarrow H^{\prime}. There are at most two possible vertices in HH^{\prime} which can be v1v_{1} and knowing v1v_{1} there are at most d+1d+1 options for the vertex v3v_{3}. Given v1v_{1} and v3v_{3} the vertex v2v_{2} is already determined. Finally, for the vertices v4,v5,v6v_{4},v_{5},v_{6} and v7v_{7} there are 3(k+34)3\binom{k+3}{4} possibilities. Hence, we obtain

|{HHH}|2(d+1)3(k+34).\displaystyle\left|\left\{H\mid H\rightarrow H^{\prime}\right\}\right|\leq 2(d+1)3\binom{k+3}{4}. (30)

Combining (29) and (30) yields

|𝒫(n,m,k+3,l2,d+1)||𝒫(n,m,k,l,d)|dk(l2)2(d+1)3(k+34)18k3,\displaystyle\frac{\left|\mathcal{P}(n,m,k+3,l-2,d+1)\right|}{\left|\mathcal{P}(n,m,k,l,d)\right|}\geq\frac{dk\binom{l}{2}}{2(d+1)3\binom{k+3}{4}}\geq\frac{1}{8k^{3}},

where we used k/(k+34)1/k3k/\binom{k+3}{4}\geq 1/k^{3} and d/(d+1)3/4d/(d+1)\geq 3/4, as d3d\geq 3. This shows the statement. ∎

Figure 4. The operation used in the proof of Lemma 8.4: A graph H𝒫(n,m,k,l,d)H\in\mathcal{P}(n,m,k,l,d) is transformed to a graph H𝒫(n,m,k+3,l2,d+1)H^{\prime}\in\mathcal{P}(n,m,k+3,l-2,d+1).
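
The switching operation HHH\rightarrow H^{\prime} used in the proof of Lemma 8.4 is a purely local edge replacement and can be carried out explicitly. The following minimal Python sketch (the names and the toy example are illustrative, not from the paper) performs the operation on a small planar graph.

    def switch_up(edges, v1, v2, v3, e1, e2):
        """The operation from the proof of Lemma 8.4 (illustrative sketch):
        delete the two isolated edges e1, e2 and join the isolated vertex v3 to
        a maximum-degree vertex v1 and to one of its neighbours v2. The number
        of edges is unchanged, the maximum degree increases by one, the number
        of isolated vertices increases by three, and the number of isolated
        edges decreases by two; planarity is preserved because the new vertex
        is joined only to the two endpoints of an existing edge."""
        new_edges = {frozenset(e) for e in edges}
        new_edges -= {frozenset(e1), frozenset(e2)}
        new_edges |= {frozenset((v1, v3)), frozenset((v2, v3))}
        return new_edges

    # H: star with centre 1 and leaves 2, 3, 4, an isolated vertex 5 and the
    # isolated edges 6-7 and 8-9, so (k, l, d) = (1, 2, 3) on n = 9 vertices.
    H = [(1, 2), (1, 3), (1, 4), (6, 7), (8, 9)]
    H_prime = switch_up(H, v1=1, v2=2, v3=5, e1=(6, 7), e2=(8, 9))
    # H' has the same number of edges, maximum degree d + 1 = 4, the four
    # isolated vertices 6, 7, 8, 9 (k + 3) and no isolated edges (l - 2).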

Finally, we can show Theorem 1.10 by using Lemma 8.2 and applying Lemma 8.4 repeatedly.

8.1. Proof of Theorem 1.10

We assume to the contrary that there is an AA\in\mathbb{N} such that |I|A\left|I\right|\leq A for infinitely many nn. To simplify notation, we assume that |I|A\left|I\right|\leq A holds for all nn\in\mathbb{N}; otherwise, we could restrict our considerations to the subsequence consisting of all nn satisfying |I|A\left|I\right|\leq A. By Lemma 8.2 there is a ρ>0\rho>0 such that for all nn large enough

dI|𝒫(n,m,1,2A,d)|ρ|𝒫(n,m)|.\displaystyle\sum_{d\in I}\left|\mathcal{P}(n,m,1,2A,d)\right|\geq\rho\left|\mathcal{P}(n,m)\right|.

In particular, we can choose d=d(n)d=d(n)\in\mathbb{N} such that

|𝒫(n,m,1,2A,d)|ρA|𝒫(n,m)|.\displaystyle\left|\mathcal{P}(n,m,1,2A,d)\right|\geq\frac{\rho}{A}\left|\mathcal{P}(n,m)\right|.

Combining that with Lemma 8.4 we get for all iAi\leq A

|𝒫(n,m,1+3i,2A2i,d+i)|(18(3i2)3)i|𝒫(n,m,1,2A,d)|(18(3A)3)AρA|𝒫(n,m)|.\displaystyle\left|\mathcal{P}(n,m,1+3i,2A-2i,d+i)\right|\geq\left(\frac{1}{8(3i-2)^{3}}\right)^{i}\left|\mathcal{P}(n,m,1,2A,d)\right|\geq\left(\frac{1}{8(3A)^{3}}\right)^{A}\frac{\rho}{A}\left|\mathcal{P}(n,m)\right|.

This implies that

[Δ(P)=d+i]|𝒫(n,m,1+3i,2A2i,d+i)||𝒫(n,m)|(18(3A)3)AρA.\displaystyle\mathbb{P}\left[\Delta\left(P\right)=d+i\right]\geq\frac{\left|\mathcal{P}(n,m,1+3i,2A-2i,d+i)\right|}{\left|\mathcal{P}(n,m)\right|}\geq\left(\frac{1}{8(3A)^{3}}\right)^{A}\frac{\rho}{A}.

As AA and ρ\rho are fixed constants and whp Δ(P)I\Delta\left(P\right)\in I, this shows that for all nn large enough we have d+iId+i\in I. Hence, we get {d,d+1,,d+A}I\left\{d,d+1,\ldots,d+A\right\}\subseteq I, and therefore |I|A+1|I|\geq A+1, contradicting the fact |I|A\left|I\right|\leq A. This finishes the proof. ∎

9. Properties of ν(n,k)\nu\left(n,k\right)

In this section we consider the function ν(n,k)\nu\left(n,k\right) introduced in Definition 1.3. First we will show that ν(n,k)\nu\left(n,k\right) is well defined and then we will provide a proof of Lemma 2.11.

9.1. Well-definedness of ν(n,k)\nu\left(n,k\right)

We recall that for given n,kn,k\in\mathbb{N} we defined the function ff as

f(x)=fn,k(x):=xlogk+x(x+1/2)logx(x1)logn.\displaystyle f(x)=f_{n,k}(x):=x\log k+x-\left(x+1/2\right)\log x-(x-1)\log n. (31)

By basic calculus we obtain f(x)>0f(x)>0 for all x(0,1]x\in(0,1], f′′(x)<0f^{\prime\prime}(x)<0 for all x1x\geq 1, and f(x)f(x)\to-\infty as xx\to\infty. This implies that ff has a unique zero in (0,)(0,\infty), which shows that ν(n,k)\nu\left(n,k\right) is well defined.

Moreover, we obtain the following fact, which we will use in Section 9.2:

f(x){>0if x<ν(n,k),=0if x=ν(n,k),<0if x>ν(n,k).\displaystyle f(x)\begin{cases}>0&\text{if }x<\nu\left(n,k\right),\\ =0&\text{if }x=\nu\left(n,k\right),\\ <0&\text{if }x>\nu\left(n,k\right).\end{cases} (32)
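
Since ff is explicit, the value ν(n,k) can also be computed numerically. The following minimal Python sketch (illustrative only; the tolerance parameter is an arbitrary choice) locates the unique positive zero of ff by bisection, using the sign pattern (32).

    import math

    def f(x, n, k):
        """The function f_{n,k} from (31)."""
        return x * math.log(k) + x - (x + 0.5) * math.log(x) - (x - 1) * math.log(n)

    def nu(n, k, tol=1e-9):
        """Approximate the unique positive zero nu(n,k) of f_{n,k} by bisection.
        Since f > 0 on (0,1] and f(x) -> -infinity as x -> infinity, the zero
        lies in an interval (1, hi) once f(hi) <= 0. (Illustrative sketch.)"""
        lo, hi = 1.0, 2.0
        while f(hi, n, k) > 0:      # grow the interval until the sign changes
            hi *= 2
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if f(mid, n, k) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # For k = n the zero is (1 + o(1)) log n / log log n by Lemma 2.11(c),
    # although the convergence is slow.
    print(nu(10**6, 10**6))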

9.2. Proof of Lemma 2.11

Let ff be as defined in (31) and let ν(n,k)\nu\left(n,k\right) be the unique positive zero of ff. For x(0,1]x\in(0,1] we have f(x)>0f(x)>0, which together with (32) implies (a), i.e. ν(n,k)>1\nu\left(n,k\right)>1.

In order to prove (b), we may assume that kCn1/3k\leq Cn^{1/3} for a suitable constant C>0C>0. Now we get for nn large enough

f(5/3)1/9logn+5/3logC+5/313/6log(5/3)<0.\displaystyle f(5/3)\hskip 1.70709pt\leq\hskip 1.70709pt-1/9\log n+5/3\log C+5/3-13/6\log\left(5/3\right)<0.

Together with (32) this implies ν(n,k)5/3\nu\left(n,k\right)\leq 5/3 for all nn large enough, which yields (b).

For (c) we may assume k=Θ(n)k=\Theta\left(n\right). Then we have for a>0a>0

f(alognloglogn)=(1a+o(1))logn.\displaystyle f\left(a\frac{\log n}{\log\log n}\right)=\left(1-a+o\left(1\right)\right)\log n.

Thus, (c) follows by (32).

For (d) we fix nn\in\mathbb{N} and define K(x):=(1+1/(2x))logx+(11/x)logn1K(x):=\left(1+1/(2x)\right)\log x+\left(1-1/x\right)\log n-1. It is easy to check that K(ν(n,k))=logkK(\nu\left(n,k\right))=\log k and KK is strictly increasing. This implies (d).

For (e) we observe that by definition of ν=ν(n,k)\nu=\nu\left(n,k\right)

1=eknνexp((logn1/2logν)/ν).\displaystyle 1=e\frac{k}{n\nu}\exp\left(\left(\log n-1/2\log\nu\right)/\nu\right). (33)

Due to (c) and (d) we have ν=o(logn)\nu=o\left(\log n\right), which yields (logn1/2logν)/ν=ω(1)\left(\log n-1/2\log\nu\right)/\nu=\omega\left(1\right). Combining that with (33) shows ν=ω(k/n)\nu=\omega\left(k/n\right).

For (f) we let ν=ν(n,k)\nu=\nu\left(n,k\right) and ρ\rho\in\mathbb{R}. Due to (c) we have ν=(1+o(1))logn/loglogn\nu=\left(1+o\left(1\right)\right)\log n/\log\log n and therefore

K(ν+ρ)K(ν)=(loglogn)2logn(ρ+o(1)).\displaystyle K\left(\nu+\rho\right)-K(\nu)=\frac{\left(\log\log n\right)^{2}}{\log n}\left(\rho+o\left(1\right)\right). (34)

On the other hand, we have

K(ν(n,k+d))K(ν(n,k))=log(k+d)logk=Θ(d/k)=o((loglogn)2/logn).\displaystyle K\big{(}\nu\left(n,k+d\right)\big{)}-K\big{(}\nu\left(n,k\right)\big{)}=\log(k+d)-\log k=\Theta\left(d/k\right)=o\left(\left(\log\log n\right)^{2}/\log n\right).

Together with (34) this implies (f), as KK is strictly increasing.

Similarly, we define for (g) the function g(x):=(x+1/2)logxxg(x):=\left(x+1/2\right)\log x-x. Now (g) follows by the facts that g(ν(n))=logng(\nu\left(n\right))=\log n and gg is strictly increasing.

For (h), let C(x)=Cn,k(x):=(x+1/2)logx+x(lognlogk1)lognC(x)=C_{n,k}(x):=\left(x+1/2\right)\log x+x\left(\log n-\log k-1\right)-\log n and ρ\rho\in\mathbb{R} be fixed. Then we have

C(x+ρ)C(x)=ρlog(x+ρ)+(x+1/2)log((x+ρ)/x)+ρ(lognlogk1)=ρlog(x+ρ)+O(1).\displaystyle C(x+\rho)-C(x)=\rho\log(x+\rho)+\left(x+1/2\right)\log\left((x+\rho)/x\right)+\rho\left(\log n-\log k-1\right)=\rho\log(x+\rho)+O\left(1\right).

Moreover, there is a constant A>0A>0 independent of nn such that CC is strictly increasing for x>Ax>A. Due to (c) we have ν(n,k)=ω(1)\nu\left(n,k\right)=\omega\left(1\right) and ν(cn,ck)=ω(1)\nu\left(cn,ck\right)=\omega\left(1\right) and using the definition of ν\nu we get C(ν(cn,ck))C(ν(n,k))=logc0=O(1)C\left(\nu\left(cn,ck\right)\right)-C\left(\nu\left(n,k\right)\right)=\log c-0=O\left(1\right). This implies ν(cn,ck)=ν(n,k)+o(1)\nu\left(cn,ck\right)=\nu\left(n,k\right)+o\left(1\right).

Finally for (i), let x=ν(n)x=\nu\left(n\right) and y=ν(n,cn)y=\nu\left(n,cn\right). Using the definition of ν\nu we have 0=x(x+1/2)logx+logn=ylogc+y(y+1/2)logy+logn0=x-(x+1/2)\log x+\log n=y\log c+y-(y+1/2)\log y+\log n. Hence, we obtain

yx=ylogc(y+1/2)log(y/x)logx1.\displaystyle y-x=\frac{y\log c-\left(y+1/2\right)\log\left(y/x\right)}{\log x-1}. (35)

By (c) we have x,y=(1+o(1))logn/loglognx,y=\left(1+o\left(1\right)\right)\log n/\log\log n and y/x=1+o(1)y/x=1+o\left(1\right). Together with (35) this yields (i). ∎

10. Discussion

The only properties of the random planar graph P=P(n,m)P=P(n,m) that we used in the proofs of our main results in the sparse regime (Theorems 1.4 and 1.6 and Corollaries 1.7 and 1.8) are the results on the internal structure in Theorem 2.10. Kang, Moßhammer, and Sprüssel [34] showed that Theorem 2.10 is true for much more general classes of graphs. Prominent examples of such classes are cactus graphs, series-parallel graphs, and graphs embeddable on an orientable surface of genus g{0}g\in\mathbb{N}\cup\{0\} (see [33, Section 4]). Using the generalised version of Theorem 2.10 and analogous proofs of Theorems 1.4 and 1.6 and Corollaries 1.7 and 1.8, one can show the following.

Theorem 10.1.

Theorems 1.4 and 1.6 and Corollaries 1.7 and 1.8 are true for the class of cactus graphs, the class of series-parallel graphs, and the class of graphs embeddable on an orientable surface of genus g0g\in\mathbb{N}_{0}.

Theorem 1.6 does not cover the whole regime m=n+o(n)m=n+o\left(n\right) and leaves a small gap of order no(1)n^{o\left(1\right)} to the dense case where m=m(n)=μnm=m(n)=\left\lfloor\mu n\right\rfloor for μ(1,3)\mu\in(1,3). This leads to the following natural question on the behaviour of Δ(P)\Delta\left(P\right) in the unconsidered region where m=m(n)=n+tm=m(n)=n+t for a t=t(n)>0t=t(n)>0 such that t=n1+o(1)t=n^{1+o\left(1\right)} and t=o(n)t=o\left(n\right).

Question 10.2.

Let P=P(n,m)R𝒫(n,m)P=P(n,m)\in_{R}\mathcal{P}(n,m) and m=m(n)=n+tm=m(n)=n+t where t=t(n)>0t=t(n)>0 is such that t=n1+o(1)t=n^{1+o\left(1\right)} and t=o(n)t=o\left(n\right). Is Δ(P)\Delta\left(P\right) concentrated on a subset of [n][n] with bounded size?

In Theorem 1.10 we saw that Δ(P)\Delta\left(P\right) is not concentrated on any subset of [n][n] with bounded size if m=m(n)=μnm=m(n)=\left\lfloor\mu n\right\rfloor for μ(1,3)\mu\in(1,3). This raises the question of how large a set I=I(n)[n]I=I(n)\subseteq[n] needs to be so that whp Δ(P)I\Delta\left(P\right)\in I can hold. Furthermore, it would be interesting to know the precise asymptotic order of Δ(P)\Delta\left(P\right) in that regime.

Question 10.3.

Let P=P(n,m)R𝒫(n,m)P=P(n,m)\in_{R}\mathcal{P}(n,m) and assume m=m(n)=μnm=m(n)=\left\lfloor\mu n\right\rfloor for μ(1,3)\mu\in(1,3). What is the smallest size of a set I=I(n)[n]I=I(n)\subseteq[n] satisfying whp Δ(P)I\Delta\left(P\right)\in I? Moreover, what is the asymptotic order of Δ(P)\Delta\left(P\right)?

Acknowledgement

The authors thank the anonymous referees for many helpful remarks to improve the presentation of this paper.

References

  • [1] N. Alon and J.H. Spencer. The Probabilistic Method. Wiley Series in Discrete Mathematics and Optimization. Wiley, 3rd edition, 2011.
  • [2] B. Bollobás. The distribution of the maximum degree of a random graph. Discrete Math., 32(2):201–203, 1980.
  • [3] B. Bollobás. Degree sequences of random graphs. Discrete Math., 33(1):1–19, 1981.
  • [4] B. Bollobás. Vertices of given degree in a random graph. J. Graph Theory, 6(2):147–155, 1982.
  • [5] B. Bollobás. Random Graphs. Cambridge University Press, 2nd edition, 2001.
  • [6] V. E. Britikov. The structure of a random graph near a critical point. Diskret. Mat., 1(3):121–128, 1989.
  • [7] R. Carr, W. Goh, and E. Schmutz. The maximum degree in a random tree and related problems. Random Structures Algorithms, 5(1):13–24, 1994.
  • [8] G. Chapuy, É. Fusy, O. Giménez, B. Mohar, and M. Noy. Asymptotic enumeration and limit laws for graphs of fixed genus. J. Combin. Theory Ser. A, 118(3):748–777, 2011.
  • [9] G. Chapuy, É. Fusy, O. Giménez, and M. Noy. On the diameter of random planar graphs. Combin. Probab. Comput., 24(1):145–178, 2015.
  • [10] G. Collet, M. Drmota, and L. D. Klausner. Limit laws of planar maps with prescribed vertex degrees. Combin. Probab. Comput., 28(4):519–541, 2019.
  • [11] C. Dowden, M. Kang, and P. Sprüssel. The evolution of random graphs on surfaces. SIAM J. Discrete Math., 32(1):695–727, 2018.
  • [12] M. Drmota, O. Giménez, and M. Noy. Degree distribution in random planar graphs. J. Combin. Theory Ser. A, 118(7):2102–2130, 2011.
  • [13] M. Drmota, O. Giménez, M. Noy, K. Panagiotou, and A. Steger. The maximum degree of random planar graphs. Proc. Lond. Math. Soc. (3), 109(4):892–920, 2014.
  • [14] M. Drmota, M. Noy, and B. Stufler. Cut Vertices in Random Planar Maps. In 31st International Conference on Probabilistic, Combinatorial and Asymptotic Methods for the Analysis of Algorithms (AofA 2020), volume 159 of Leibniz International Proceedings in Informatics (LIPIcs), pages 10:1–10:18, 2020.
  • [15] M. Drmota and B. Stufler. Pattern occurrences in random planar maps. Statist. Probab. Lett., 158:108666, 2020.
  • [16] P. Erdős and A. Rényi. On random graphs. I. Publ. Math. Debrecen, 6:290–297, 1959.
  • [17] P. Erdős and A. Rényi. On the evolution of random graphs. Magyar Tud. Akad. Mat. Kutató Int. Közl., 5:17–61, 1960.
  • [18] P. Erdős and R. Wilson. On the chromatic index of almost all graphs. J. Combin. Theory Ser. B, 23(2-3):255–257, 1977.
  • [19] N. Fountoulakis and K. Panagiotou. 3-connected cores in random planar graphs. Combin. Probab. Comput., 20(3):381–412, 2011.
  • [20] A. Frieze and M. Karoński. Introduction to Random Graphs. Cambridge University Press, 2015.
  • [21] É. Fusy. Uniform random sampling of planar graphs in linear time. Random Structures Algorithms, 35(4):464–522, 2009.
  • [22] S. Gerke and C. McDiarmid. On the number of edges in random planar graphs. Combin. Probab. Comput., 13(2):165–183, 2004.
  • [23] S. Gerke, C. McDiarmid, A. Steger, and A. Weißl. Random planar graphs with nn nodes and a fixed number of edges. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 999–1007. ACM, New York, 2005.
  • [24] S. Gerke, C. McDiarmid, A. Steger, and A. Weißl. Random planar graphs with given average degree. In Combinatorics, complexity, and chance, volume 34 of Oxford Lecture Ser. Math. Appl., pages 83–102. Oxford Univ. Press, Oxford, 2007.
  • [25] S. Gerke, D. Schlatter, A. Steger, and A. Taraz. The random planar graph process. Random Structures Algorithms, 32(2):236–261, 2008.
  • [26] O. Giménez and M. Noy. Asymptotic enumeration and limit laws of planar graphs. J. Amer. Math. Soc., 22(2):309–329, 2009.
  • [27] G. H. Gonnet. Expected length of the longest probe sequence in hash code searching. J. Assoc. Comput. Mach., 28(2):289–304, 1981.
  • [28] G. I. Ivčenko. The asymptotic behavior of the degrees of vertices in a random graph. Teor. Verojatnost. i Primenen., 18:195–203, 1973.
  • [29] S. Janson, D. E. Knuth, T. Łuczak, and B. Pittel. The birth of the giant component. Random Structures Algorithms, 4(3):231–358, 1993.
  • [30] S. Janson, T. Łuczak, and A. Ruciński. Random Graphs. Wiley, 2000.
  • [31] N. L. Johnson and S. Kotz. Urn models and their application: an approach to modern discrete probability theory. Wiley, 1977.
  • [32] M. Kang and T. Łuczak. Two critical periods in the evolution of random planar graphs. Trans. Amer. Math. Soc., 364(8):4239–4265, 2012.
  • [33] M. Kang and M. Missethan. Longest and shortest cycles in random planar graphs. arXiv:2006.09697.
  • [34] M. Kang, M. Moßhammer, and P. Sprüssel. Phase transitions in graphs on orientable surfaces. Random Structures Algorithms, 56(4):1117–1170, 2020.
  • [35] T. Łuczak, B. Pittel, and J. C. Wierman. The structure of a random graph at the point of the phase transition. Trans. Amer. Math. Soc., 341(2):721–748, 1994.
  • [36] J. Matoušek and J. Nešetřil. Invitation to Discrete Mathematics. Oxford University Press, 2nd edition, 2009.
  • [37] C. McDiarmid. Random graphs on surfaces. J. Combin. Theory Ser. B, 98(4):778–797, 2008.
  • [38] C. McDiarmid and B. Reed. On the maximum degree of a random planar graph. Combin. Probab. Comput., 17(4):591–601, 2008.
  • [39] C. McDiarmid, A. Steger, and D. Welsh. Random planar graphs. J. Combin. Theory Ser. B, 93(2):187–205, 2005.
  • [40] B. McKay and N. Wormald. The degree sequence of a random graph. I. The models. Random Structures Algorithms, 11(2):97–117, 1997.
  • [41] M. Mitzenmacher and E. Upfal. Probability and Computing: Randomization and Probabilistic Techniques in Algorithms and Data Analysis. Cambridge University Press, 2nd edition, 2017.
  • [42] J. W. Moon. On the maximum degree in a random tree. Michigan Math. J., 15(4):429–432, 1968.
  • [43] M. Noy, V. Ravelomanana, and J. Rué. On the probability of planarity of a random graph near the critical point. Proc. Amer. Math. Soc., 143(3):925–936, 2015.
  • [44] M. Noy, C. Requilé, and J. Rué. Further results on random cubic planar graphs. Random Structures Algorithms, 56(3):892–924, 2020.
  • [45] K. Panagiotou and A. Steger. Maximal biconnected subgraphs of random planar graphs. ACM Trans. Algorithms, 6(2):Art. 31, 21, 2010.
  • [46] K. Panagiotou and A. Steger. On the degree distribution of random planar graphs. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1198–1210. SIAM, Philadelphia, PA, 2011.
  • [47] M. Raab and A. Steger. “Balls into bins” – a simple and tight analysis. In Randomization and Approximation Techniques in Computer Science, pages 159–170. Springer Berlin, 1998.
  • [48] O. Riordan and A. Selby. The maximum degree of a random graph. Combin. Probab. Comput., 9(6):549–572, 2000.
  • [49] J. Spencer. Asymptopia, volume 71 of Student Mathematical Library. American Mathematical Society, Providence, RI, 2014. With L. Florescu.
  • [50] J.H. van Lint and R.M. Wilson. A Course in Combinatorics. Cambridge University Press, 2nd edition, 2001.

Appendix A Sketch of the proof of Proposition 1.9

Due to Theorem 5.1(b) we have whp

Δ(Gi)=ν(n,din)+O(1).\displaystyle\Delta\left(G_{i}\right)=\nu\left(n,d_{i}n\right)+O\left(1\right). (36)

Together with Lemma 2.11(i) this shows whp 0<Δ(G2)Δ(G1)=Θ(logn/(loglogn)2)0<\Delta\left(G_{2}\right)-\Delta\left(G_{1}\right)=\Theta\left(\log n/\left(\log\log n\right)^{2}\right). The so-called symmetry rule (see e.g. [30, Section 5.6]) says that R(Gi)R(G_{i}) ‘behaves’ asymptotically like G(ni,mi)G\left(n_{i}^{\prime},m_{i}^{\prime}\right) for suitable ni=ni(n)=Θ(n)n_{i}^{\prime}=n_{i}^{\prime}(n)=\Theta\left(n\right) and mi=mi(n)m_{i}^{\prime}=m_{i}^{\prime}(n) such that 2mi/ni2m_{i}^{\prime}/n_{i}^{\prime} tends to a constant ci(0,1)c_{i}^{\prime}\in(0,1) with c1>c2c_{1}^{\prime}>c_{2}^{\prime}. Together with Theorem 5.1(b) this implies that whp

Δ(R(Gi))=ν(ni,2mi)+O(1)=ν(n,cin)+o(logn/(loglogn)2),\displaystyle\Delta\left(R\left(G_{i}\right)\right)=\nu\left(n_{i}^{\prime},2m_{i}^{\prime}\right)+O\left(1\right)=\nu\left(n,c_{i}^{\prime}n\right)+o\left(\log n/\left(\log\log n\right)^{2}\right), (37)

where we used in the last equality Lemma 2.11(h) and (i). Using (37), Lemma 2.11(i), and the fact c1>c2c_{1}^{\prime}>c_{2}^{\prime}, we obtain whp 0<Δ(R(G1))Δ(R(G2))=Θ(logn/(loglogn)2)0<\Delta\left(R\left(G_{1}\right)\right)-\Delta\left(R\left(G_{2}\right)\right)=\Theta\left(\log n/\left(\log\log n\right)^{2}\right). Combining Lemma 2.11(i), (36), and (37) yields that whp Δ(L(Gi))=Δ(Gi)=ν(n,din)+O(1)\Delta\left(L\left(G_{i}\right)\right)=\Delta\left(G_{i}\right)=\nu\left(n,d_{i}n\right)+O\left(1\right). Hence, we get whp 0<Δ(L(G2))Δ(L(G1))=Θ(logn/(loglogn)2)0<\Delta\left(L\left(G_{2}\right)\right)-\Delta\left(L\left(G_{1}\right)\right)=\Theta\left(\log n/\left(\log\log n\right)^{2}\right) and 0<Δ(L(G1))Δ(R(G1))=Θ(logn/(loglogn)2)0<\Delta\left(L\left(G_{1}\right)\right)-\Delta\left(R\left(G_{1}\right)\right)=\Theta\left(\log n/\left(\log\log n\right)^{2}\right). ∎

Appendix B Proof of Theorem 6.1

We start by proving (13). To that end, let r[t]r\in[t] be a root vertex. Throughout the construction of ψ(F)\psi(F) the root rr is always the vertex with smallest label in the component of FiF_{i} containing rr. This implies ryir\neq y_{i} for all i[nt]i\in[n-t]: whenever rr is a leaf, its component contains at least two vertices and hence a second leaf with a larger label, so rr is never the leaf with the largest label. As the elements of the sequence 𝐲=(y1,,ynt)\mathbf{y}=\left(y_{1},\ldots,y_{n-t}\right) are all distinct, we obtain

#(v,𝐲)={0 if v[t]1 if v[n][t].\displaystyle\#\left(v,\mathbf{y}\right)=\begin{cases}0&\text{\leavevmode\nobreak\ \leavevmode\nobreak\ if\leavevmode\nobreak\ \leavevmode\nobreak\ }v\in[t]\\ 1&\text{\leavevmode\nobreak\ \leavevmode\nobreak\ if\leavevmode\nobreak\ \leavevmode\nobreak\ }v\in[n]\setminus[t].\end{cases} (38)

This proves (13), since dF(v)=#(v,𝐱)+#(v,𝐲)d_{F}\left(v\right)=\#\left(v,\mathbf{x}\right)+\#\left(v,\mathbf{y}\right).

Next, we provide an algorithm that builds a graph ψ1(𝐰)\psi^{-1}(\mathbf{w}) for each 𝐰𝒮(n,t)\mathbf{w}\in\mathcal{S}\left(n,t\right). Later we will see that the algorithm indeed reconstructs F(n,t)F\in\mathcal{F}(n,t) if the input is 𝐰=ψ(F)\mathbf{w}=\psi(F). Let 𝐰=(w1,,wnt)𝒮(n,t)\mathbf{w}=\left(w_{1},\ldots,w_{n-t}\right)\in\mathcal{S}\left(n,t\right) be given. We construct sequences (x~1,,x~nt)(\tilde{x}_{1},\ldots,\tilde{x}_{n-t}) and (y~1,,y~nt)(\tilde{y}_{1},\ldots,\tilde{y}_{n-t}) of vertices, a sequence (F~0,,F~nt)\left(\tilde{F}_{0},\ldots,\tilde{F}_{n-t}\right) of forests and for each v[n]v\in[n] a sequence (d~0(v),,d~nt(v))(\tilde{d}_{0}(v),\ldots,\tilde{d}_{n-t}(v)) of degrees as follows. We start with V(F~0)=[n]V\left(\tilde{F}_{0}\right)=[n], E(F~0)=E\left(\tilde{F}_{0}\right)=\emptyset, d~0(v)=#(v,𝐰)\tilde{d}_{0}(v)=\#\left(v,\mathbf{w}\right) if v[t]v\in[t], and d~0(v)=#(v,𝐰)+1\tilde{d}_{0}(v)=\#\left(v,\mathbf{w}\right)+1 if v[n][t]v\in[n]\setminus[t]. For i[nt]i\in[n-t] we set x~i=wi\tilde{x}_{i}=w_{i} and y~i=max{vd~i1(v)=1}\tilde{y}_{i}=\max\left\{v\mid\tilde{d}_{i-1}(v)=1\right\}. In addition, we let d~i(v)=d~i1(v)1\tilde{d}_{i}(v)=\tilde{d}_{i-1}(v)-1 if v{x~i,y~i}v\in\{\tilde{x}_{i},\tilde{y}_{i}\} and d~i(v)=d~i1(v)\tilde{d}_{i}(v)=\tilde{d}_{i-1}(v) otherwise. Finally, we obtain F~i\tilde{F}_{i} by adding the edge x~iy~i\tilde{x}_{i}\tilde{y}_{i} in F~i1\tilde{F}_{i-1} and we set ψ1(𝐰):=F~nt\psi^{-1}(\mathbf{w}):=\tilde{F}_{n-t}. Next, we show that this algorithm is well defined and that the output is indeed a graph. To that end, we note that for v[n][t]v\in[n]\setminus[t] and i[nt]i\in[n-t] we have

(d~i1(v)1,d~i(v)=0)(y~i=v).\displaystyle\big{(}\tilde{d}_{i-1}(v)\geq 1,\tilde{d}_{i}(v)=0\big{)}\leavevmode\nobreak\ \implies\leavevmode\nobreak\ \big{(}\tilde{y}_{i}=v\big{)}.

This yields that there are at least (nti)(n-t-i) vertices v[n][t]v\in[n]\setminus[t] with d~i(v)1\tilde{d}_{i}(v)\geq 1. Thus, if d~i1(wnt)1\tilde{d}_{i-1}(w_{n-t})\geq 1 for some i[nt]i\in[n-t], then v[n][t]d~i1(v)2(nti)+1\sum_{v\in[n]\setminus[t]}\tilde{d}_{i-1}(v)\leq 2(n-t-i)+1 and therefore y~i[n][t]\tilde{y}_{i}\in[n]\setminus[t]. This yields d~i(wnt)1\tilde{d}_{i}(w_{n-t})\geq 1 unless i=nti=n-t. As d~0(wnt)1\tilde{d}_{0}(w_{n-t})\geq 1 we obtain by induction that y~i[n][t]\tilde{y}_{i}\in[n]\setminus[t] for all i[nt]i\in[n-t]. In particular, this shows that y~i\tilde{y}_{i} is well defined and x~iy~i\tilde{x}_{i}\neq\tilde{y}_{i}. Thus, the algorithm is always executable and F~i\tilde{F}_{i} is a graph for all i[nt]i\in[n-t].
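
The reconstruction algorithm described above can be implemented directly. The following minimal Python sketch (illustrative names, quadratic running time for clarity) rebuilds the edge set of ψ^{-1}(𝐰) from a sequence 𝐰 in 𝒮(n,t) and, on a small example, recovers the forest it came from.

    def inverse_pruefer(n, t, w):
        """Reconstruct the edge set of the forest psi^{-1}(w) in F(n,t) from a
        sequence w in S(n,t), following the algorithm above: d keeps track of
        the values d~_i(v), and in every step the vertex w_i is joined to the
        largest-labelled vertex whose current value equals 1.
        (Illustrative sketch; quadratic running time for clarity.)"""
        assert len(w) == n - t and 1 <= w[-1] <= t
        d = [0] * (n + 1)                       # index 0 unused
        for v in w:
            d[v] += 1
        for v in range(t + 1, n + 1):           # non-roots get an extra 1
            d[v] += 1
        edges = []
        for i in range(n - t):
            x = w[i]
            y = max(v for v in range(1, n + 1) if d[v] == 1)
            edges.append((x, y))
            d[x] -= 1
            d[y] -= 1
        return edges

    # Example: the forest in F(5,2) with edges 1-3, 3-5, 2-4 has Prüfer
    # sequence (3, 2, 1), and the reconstruction returns exactly these edges.
    print(inverse_pruefer(5, 2, [3, 2, 1]))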

In order to prove that ψ:(n,t)𝒮(n,t)\psi:\mathcal{F}(n,t)\to\mathcal{S}\left(n,t\right) is a bijection, it suffices to show the following claims.

  1. (i)

    ψ(F)𝒮(n,t)\psi(F)\in\mathcal{S}\left(n,t\right) for all F(n,t)F\in\mathcal{F}(n,t);

  2. (ii)

    ψ1(ψ(F))=F\psi^{-1}(\psi(F))=F for all F(n,t)F\in\mathcal{F}(n,t);

  3. (iii)

    ψ1(𝐰)(n,t)\psi^{-1}(\mathbf{w})\in\mathcal{F}\left(n,t\right) for all 𝐰𝒮(n,t)\mathbf{w}\in\mathcal{S}\left(n,t\right);

  4. (iv)

    ψ(ψ1(𝐰))=𝐰\psi\left(\psi^{-1}\left(\mathbf{w}\right)\right)=\mathbf{w} for all 𝐰𝒮(n,t)\mathbf{w}\in\mathcal{S}\left(n,t\right).

We observe that xnt{y1,,ynt}x_{n-t}\notin\left\{y_{1},\ldots,y_{n-t}\right\}. Thus, using (38) yields xnt[t]x_{n-t}\in[t], which implies (i).

To show (ii), we suppose that we first apply the algorithm to obtain ψ(F)\psi(F) and then the algorithm ψ1\psi^{-1} with input 𝐰=ψ(F)\mathbf{w}=\psi(F). Due to (13) the degree sequence of F0=FF_{0}=F equals (d~0(1),,d~0(n))\left(\tilde{d}_{0}(1),\ldots,\tilde{d}_{0}(n)\right) and therefore y~1=y1\tilde{y}_{1}=y_{1}. By construction we also have x~1=x1\tilde{x}_{1}=x_{1}, which implies that (d~1(1),,d~1(n))\left(\tilde{d}_{1}(1),\ldots,\tilde{d}_{1}(n)\right) is the degree sequence of F1F_{1}. By repeating that argument we obtain by induction y~i=yi\tilde{y}_{i}=y_{i} for all i[nt]i\in[n-t]. As E(F)={xiyii[nt]}E\left(F\right)=\left\{x_{i}y_{i}\mid i\in[n-t]\right\} and E(F~nt)={x~iy~ii[nt]}E\left(\tilde{F}_{n-t}\right)=\left\{\tilde{x}_{i}\tilde{y}_{i}\mid i\in[n-t]\right\} this shows F~nt=F\tilde{F}_{n-t}=F, i.e. ψ1(ψ(F))=F\psi^{-1}(\psi(F))=F.

For (iii) we assume that we apply the algorithm ψ1\psi^{-1} with input 𝐰𝒮(n,t)\mathbf{w}\in\mathcal{S}\left(n,t\right). By induction it follows that for all i{0,,nt}i\in\left\{0,\ldots,n-t\right\} each component of F~i\tilde{F}_{i} contains at most one vertex vv with d~i(v)>0\tilde{d}_{i}(v)>0. This implies that we never close a cycle when adding the edge x~i+1y~i+1\tilde{x}_{i+1}\tilde{y}_{i+1} in F~i\tilde{F}_{i}, which shows that ψ1(𝐰)\psi^{-1}(\mathbf{w}) is a forest. We saw before that y~i[n][t]\tilde{y}_{i}\in[n]\setminus[t] for all i[nt]i\in[n-t]. Thus, if r[t]r\in[t] is a root and the component of F~i\tilde{F}_{i} containing rr has a vertex vv with d~i(v)>0\tilde{d}_{i}(v)>0, then v=rv=r. This implies that adding the edge x~i+1y~i+1\tilde{x}_{i+1}\tilde{y}_{i+1} never connects two components of F~i\tilde{F}_{i} which contain both a root. Hence, ψ1(𝐰)(n,t)\psi^{-1}(\mathbf{w})\in\mathcal{F}(n,t).

Finally for (iv), we suppose that for given 𝐰𝒮(n,t)\mathbf{w}\in\mathcal{S}\left(n,t\right) we first apply the algorithm to construct ψ1(𝐰)\psi^{-1}(\mathbf{w}) and then the algorithm to obtain the Prüfer sequence of ψ1(𝐰)\psi^{-1}(\mathbf{w}). We note that the degree sequence of F0=ψ1(𝐰)F_{0}=\psi^{-1}(\mathbf{w}) equals (d~0(1),,d~0(n))\left(\tilde{d}_{0}(1),\ldots,\tilde{d}_{0}(n)\right) and therefore y1=y~1y_{1}=\tilde{y}_{1}. By construction x~1\tilde{x}_{1} is the unique neighbour of y~1\tilde{y}_{1} in F0F_{0}, which implies x1=x~1x_{1}=\tilde{x}_{1}. This yields that the degree sequence of F1F_{1} is (d~1(1),,d~1(n))\left(\tilde{d}_{1}(1),\ldots,\tilde{d}_{1}(n)\right). Repeating that argument we obtain by induction x~i=xi\tilde{x}_{i}=x_{i} for all i[nt]i\in[n-t], which proves (iv). ∎